| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
OWASP/Nettacker | automation | 321 | This project isn't shown in https://owasp.org/projects | So its visibility is very limited.
I think you should talk with someone on the OWASP website staff.
Thanks | closed | 2020-07-19T15:12:23Z | 2020-07-19T16:04:38Z | https://github.com/OWASP/Nettacker/issues/321 | [] | q2dg | 2 |
jeffknupp/sandman2 | rest-api | 37 | sqlalchemy_utils.PasswordType makes JSONEncoder's serialization barf | Here's my stacktrace:
```
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/flask/json.py", line 83, in default
return _json.JSONEncoder.default(self, o)
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/json/encoder.py", line 173, in default
raise TypeError(repr(o) + " is not JSON serializable")
TypeError: b'$2a$12$0rN/M0JPI3ChHNlxBnhoqeNyaC95otDUbflNsjY5O9XvEAlLiUETi' is not JSON serializable
```
You just might wanna make the api machinery a little more robust, to handle bytes serialization.
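A minimal sketch of the kind of robustness being requested, a JSON encoder that falls back to decoding `bytes` instead of raising, might look like this (hypothetical, not sandman2's actual implementation):

```python
import json

class BytesFriendlyEncoder(json.JSONEncoder):
    """JSON encoder that decodes bytes values instead of raising TypeError."""

    def default(self, o):
        if isinstance(o, bytes):
            # Password hashes and similar blobs are usually ASCII-safe;
            # replace undecodable bytes rather than crash the response.
            return o.decode("utf-8", errors="replace")
        return super().default(o)

print(json.dumps({"hash": b"$2a$12$abc"}, cls=BytesFriendlyEncoder))
# → {"hash": "$2a$12$abc"}
```

Flask lets you swap in such an encoder application-wide, which would make responses containing `PasswordType` columns degrade gracefully instead of erroring.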
| closed | 2016-07-31T02:53:17Z | 2016-08-05T20:28:43Z | https://github.com/jeffknupp/sandman2/issues/37 | [
"invalid",
"wontfix"
] | Datamance | 3 |
JoeanAmier/TikTokDownloader | api | 58 | About the parameters next to UserAgent | I see that several UserAgent values are defined in the code, each with a corresponding two-dimensional array. How were the values of these arrays obtained, and is it possible to add more UserAgents? | open | 2023-09-06T08:03:39Z | 2023-09-08T10:22:46Z | https://github.com/JoeanAmier/TikTokDownloader/issues/58 | [] | BaoStorm | 3 |
allenai/allennlp | nlp | 4,862 | save git status when run commands | Sometimes after changing many versions of the code, I'm confused about how I got this result. It would be nice if allennlp could log the current git status to `serialization_dir` when running the `train` command.
Here is an example of a transformers record(`git_log.json`):
```
{
"repo_id": "<git.repo.base.Repo '/data/wts/transformers/.git'>",
"repo_sha": "b01ddc9577b87f057e163d49563ee3f74f4810cf",
"repo_branch": "master",
"hostname": "XXX-GPUSERVER-144"
}
``` | open | 2020-12-14T11:30:54Z | 2022-08-10T03:41:38Z | https://github.com/allenai/allennlp/issues/4862 | [
"Good First Issue",
"Contributions welcome",
"Feature request"
] | tshu-w | 12 |
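The git-status snapshot requested in the allennlp issue above can be sketched with plain subprocess calls (a hypothetical helper, not allennlp's API):

```python
import subprocess

def git_status_snapshot(repo_dir="."):
    """Collect git metadata for experiment logging; values are None outside a repo."""

    def run(*args):
        try:
            out = subprocess.run(
                ["git", *args], cwd=repo_dir,
                capture_output=True, text=True, check=True,
            )
            return out.stdout.strip()
        except (OSError, subprocess.CalledProcessError):
            # git missing or repo_dir is not a repository
            return None

    return {
        "repo_sha": run("rev-parse", "HEAD"),
        "repo_branch": run("rev-parse", "--abbrev-ref", "HEAD"),
        "repo_dirty": run("status", "--porcelain"),
    }
```

Dumping this dict to `serialization_dir/git_log.json` at the start of `train` would give a record like the transformers example shown above.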
nerfstudio-project/nerfstudio | computer-vision | 2,868 | How to find Nerf++ in Nerfstudio? | I am using a Lenovo computer system. How do I find Nerf++ in Nerfstudio? I hope someone can help me. | closed | 2024-02-03T12:07:17Z | 2024-02-04T09:57:05Z | https://github.com/nerfstudio-project/nerfstudio/issues/2868 | [] | shehuirenwy | 1 |
OFA-Sys/Chinese-CLIP | nlp | 95 | How does Chinese-CLIP change the context length? | I see that cn-clip is able to modify the context length of the tokenizer, but I can't find the code that implements this.
In CLIP, the tokenizer's max context length is 77, because that is how the text encoder was trained. So how does cn-clip do it, and where is the code?
| closed | 2023-04-28T11:56:21Z | 2023-05-27T14:42:55Z | https://github.com/OFA-Sys/Chinese-CLIP/issues/95 | [] | DengXianqi | 2 |
OpenInterpreter/open-interpreter | python | 1,050 | Generated code is trimmed when using "-m gemini/gemini-pro" | ### Describe the bug
Please note that I'm using the `gemini/gemini-pro` implementation (which uses Google AI Studio / free) instead of the `gemini-pro` implementation (which uses Google Vertex AI / trial).
### Command ###
docker run --rm -it --name interpreter-instance openinterpreter interpreter -m gemini/gemini-pro
### Output ###
Emulate Docker CLI using podman. Create /etc/containers/nodocker to quiet msg.
▌ Model set to gemini/gemini-pro
Open Interpreter will require approval before running code.
Use interpreter -y to bypass this.
Press CTRL-C to exit.
> How many files are on my desktop?
We were unable to determine the context window of this model. Defaulting to 3000.
If your model can handle more, run interpreter --context_window {token limit} --max_tokens {max tokens per response}.
Continuing...
```
import os
num_files = len(os.listdir('/
Would you like to run this code? (y/n)
y
```
import os
num_files = len(os.listdir('/
Cell In[2], line 3
num_files = len(os.listdir('/
^
SyntaxError: unterminated string literal (detected at line 3)
### Reproduce
Run:
`docker run --rm -it --name interpreter-instance openinterpreter interpreter -m gemini/gemini-pro`
Ask:
`> How many files are on my desktop?`
### Expected behavior
Generated code should work.
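For reference, the untrimmed program the model was presumably aiming for is trivial (the Desktop path is an assumption; inside the Docker container it may not exist at all):

```python
import os

def count_files(path):
    """Number of entries (files and folders) directly inside `path`."""
    return len(os.listdir(path))

desktop = os.path.expanduser("~/Desktop")
if os.path.isdir(desktop):
    print(count_files(desktop))
```

The bug report is about the string being cut mid-literal (`os.listdir('/`), not about the logic itself.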
### Screenshots

### Open Interpreter version
0.2.0
### Python version
3.11
### Operating System name and version
OL8
| open | 2024-03-02T04:04:59Z | 2024-10-31T08:34:20Z | https://github.com/OpenInterpreter/open-interpreter/issues/1050 | [
"Bug",
"More Information Required"
] | kripper | 8 |
microsoft/unilm | nlp | 755 | LayoutLM V2 error srcIndex < srcSelectDimSize | **Describe**
I am using the LayoutLM V2 model and trying to fine-tune it on my custom dataset. I got the error message below.
Please tell me how to resolve the error.
you can download the code and dataset along with notebook
https://drive.google.com/file/d/1VdTvn580pGgVBlN03UX5alaFqSbc8Q5_/view?usp=sharing
Downloading: 100% 765M/765M [00:10<00:00, 74.4MB/s]
Downloading: 100% 226k/226k [00:00<00:00, 24.5MB/s]
Downloading builder script: 6.33kB [00:00, 8.55MB/s]
***** Running training *****
Num examples = 80
Num Epochs = 30
Instantaneous batch size per device = 1
Total train batch size (w. parallel, distributed & accumulation) = 1
Gradient Accumulation steps = 1
Total optimization steps = 2400
0% 0/2400 [00:00<?, ?it/s]Traceback (most recent call last):
File "layoutlmV2/train.py", line 124, in <module>
trainer.train()
File "/usr/local/lib/python3.7/dist-packages/transformers/trainer.py", line 1371, in train
ignore_keys_for_eval=ignore_keys_for_eval,
File "/usr/local/lib/python3.7/dist-packages/transformers/trainer.py", line 1609, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs)
File "/usr/local/lib/python3.7/dist-packages/transformers/trainer.py", line 2300, in training_step
loss = self.compute_loss(model, inputs)
File "/usr/local/lib/python3.7/dist-packages/transformers/trainer.py", line 2332, in compute_loss
outputs = model(**inputs)
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/transformers/models/layoutlmv2/modeling_layoutlmv2.py", line 1238, in forward
return_dict=return_dict,
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/transformers/models/layoutlmv2/modeling_layoutlmv2.py", line 906, in forward
inputs_embeds=inputs_embeds,
File "/usr/local/lib/python3.7/dist-packages/transformers/models/layoutlmv2/modeling_layoutlmv2.py", line 756, in _calc_text_embeddings
embeddings = self.embeddings.LayerNorm(embeddings)
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/normalization.py", line 190, in forward
input, self.normalized_shape, self.weight, self.bias, self.eps)
File "/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py", line 2486, in layer_norm
return torch.layer_norm(input, normalized_shape, weight, bias, eps, torch.backends.cudnn.enabled)
RuntimeError: CUDA error: device-side assert triggered
CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
0% 0/2400 [00:00<?, ?it/s] | open | 2022-06-10T10:45:35Z | 2023-02-22T08:50:39Z | https://github.com/microsoft/unilm/issues/755 | [] | koyelseba | 3 |
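This class of CUDA device-side assert (`srcIndex < srcSelectDimSize`) very often comes from an out-of-range index reaching an embedding or gather op, e.g. a label id greater than or equal to the number of labels the head was built with, or a bbox coordinate outside the 0-1000 range LayoutLMv2 expects. Running with `CUDA_LAUNCH_BLOCKING=1` (or on CPU) usually reveals the real failing line. A framework-free sanity check over a custom dataset might look like this (hypothetical helper, not part of the repo):

```python
def find_out_of_range(values, upper, lower=0):
    """Return positions of values falling outside the half-open range [lower, upper)."""
    return [i for i, v in enumerate(values) if not (lower <= v < upper)]

# e.g. label ids must be < num_labels, and LayoutLMv2 bboxes must lie in 0..1000
labels = [0, 3, 7, 2]
print(find_out_of_range(labels, upper=5))  # → [2]
```

Running such a check over every example's labels, token ids, and bbox coordinates before training often pinpoints the bad sample directly.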
bendichter/brokenaxes | matplotlib | 49 | Hide the break | I want to hide the break marks between the axes and make the axis look continuous. Any solution?
Also, is it possible to show only one number at the break for this continuous axis? | closed | 2020-05-19T10:24:02Z | 2020-06-03T04:17:17Z | https://github.com/bendichter/brokenaxes/issues/49 | [] | shivanshi13 | 3 |
man-group/arctic | pandas | 500 | import _compress results in no suitable image found | #### Arctic Version
```
1.59.0
```
#### Arctic Store
```
from . import _compress as clz4
```
#### Platform and version
macOSx 10.11.6
#### Description of problem and/or code sample that reproduces the issue
When I run a simple `import arctic`, it fails at this line:
from . import _compress as clz4
The error it produces is:
```
ImportError: dlopen(python3.6/site-packages/arctic/_compress.cpython-36m-darwin.so, 2): no suitable image found. Did find: lib/python3.6/site-packages/arctic/_compress.cpython-36m-darwin.so: mach-o, but wrong architecture
```
From what I have found through Google searches for this type of error, it seems to be caused by a 32-bit install of either Python or the package in question; however, my Python seems fine in other cases. I have tried several different versions of arctic, but to no avail. I have a hunch this has something to do with my computer and is not an arctic issue, but I thought I would try asking here before I punt my computer.
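A quick way to check which architecture your interpreter actually is, to compare against what `file` reports for the compiled `_compress` extension, is plain stdlib:

```python
import platform
import struct

# Machine architecture the interpreter reports, e.g. x86_64
print(platform.machine())

# Pointer width: 64 on a 64-bit Python, 32 on a 32-bit one
print(struct.calcsize("P") * 8)
```

If this prints 32 while `file .../_compress.cpython-36m-darwin.so` shows an x86_64 binary (or vice versa), that mismatch is exactly the "wrong architecture" dlopen complains about.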
Thanks for any suggestions
| closed | 2018-02-06T06:33:37Z | 2018-08-25T15:41:31Z | https://github.com/man-group/arctic/issues/500 | [] | cavnerj | 2 |
davidteather/TikTok-Api | api | 519 | get_Video_By_Url issues | I use get_Video_By_Url to download a video, but it doesn't work:

| closed | 2021-03-04T04:49:16Z | 2021-03-20T18:24:42Z | https://github.com/davidteather/TikTok-Api/issues/519 | [] | xyjw | 1 |
matplotlib/matplotlib | matplotlib | 29,008 | [Bug]: intersphinx on meson-python is broken | ### Bug summary
Since recently, sphinx-build error with ( e.g. https://app.circleci.com/pipelines/github/matplotlib/matplotlib/33363/workflows/c7423837-956c-4d75-85de-93e55fbdb8a5/jobs/85311):
> intersphinx inventory 'https://meson-python.readthedocs.io/en/stable/objects.inv' not readable due to ValueError: unknown or unsupported inventory version: ValueError('invalid inventory header: <!doctype html>')
Actually, https://meson-python.readthedocs.io/en/stable/objects.inv does not exist and is redirected to https://mesonbuild.com/meson-python/
This issue is reported upstream: https://github.com/mesonbuild/meson-python/issues/693
| closed | 2024-10-22T08:56:12Z | 2024-10-22T09:38:40Z | https://github.com/matplotlib/matplotlib/issues/29008 | [
"Documentation: build"
] | timhoffm | 1 |
Avaiga/taipy | automation | 2,293 | Have part or dialog centered to the element clicked | ### Description
Here, I have clicked on an icon and I have a dropdown menu of labels next to where I clicked:

Here, I have clicked on an icon and I see a dialog/part showing up next to where I clicked:

I want to do that generically to put anything in this part. If I click somewhere else, this dialog should disappear.
### Acceptance Criteria
- [ ] If applicable, a new demo code is provided to show the new feature in action.
- [ ] Integration tests exhibiting how the functionality works are added.
- [ ] Any new code is covered by a unit test.
- [ ] Check that code coverage is at least 90%.
- [ ] Related issue(s) in taipy-doc are created for documentation and Release Notes are updated.
### Code of Conduct
- [X] I have checked the [existing issues](https://github.com/Avaiga/taipy/issues?q=is%3Aissue+).
- [ ] I am willing to work on this issue (optional) | closed | 2024-11-29T10:51:56Z | 2024-12-17T18:15:45Z | https://github.com/Avaiga/taipy/issues/2293 | [
"🖰 GUI",
"🟨 Priority: Medium",
"✨New feature",
"🔒 Staff only"
] | FlorianJacta | 15 |
yeongpin/cursor-free-vip | automation | 35 | Why does my account configuration show available resources, but the Q&A prompt says I can't use them? | 

request id:
f9bcb26c-a7a6-43c6-82a6-b284cc3889f3
| closed | 2025-01-17T10:01:57Z | 2025-01-17T16:13:01Z | https://github.com/yeongpin/cursor-free-vip/issues/35 | [] | geeklx | 0 |
jupyter-incubator/sparkmagic | jupyter | 516 | Release on Anaconda missing PySpark3 | The latest release of Anaconda on the Anaconda channel is missing PySpark3. The conda-forge channel appears to be fine.
https://anaconda.org/anaconda/sparkmagic/files | closed | 2019-03-12T22:47:01Z | 2019-06-27T14:38:08Z | https://github.com/jupyter-incubator/sparkmagic/issues/516 | [] | jaipreet-s | 2 |
ranaroussi/yfinance | pandas | 2,262 | Intraday data returned omits last daily datapoint | Running a demo using the code below returns the expected range of data, but for each day the closing datapoint at 16:00 is omitted.
```
tick = 'vxf'
en = datetime.now()
st = en - timedelta(days = 59)
data = yf.Ticker(tick).history(interval='30m', start=st.strftime('%Y-%m-%d'), end=en.strftime('%Y-%m-%d'))
```
```
data.head(30)
Out[98]:
Open High ... Stock Splits Capital Gains
Datetime ...
2024-12-16 09:30:00-05:00 200.839996 202.119995 ... 0.0 0.0
2024-12-16 10:00:00-05:00 202.110001 202.537506 ... 0.0 0.0
2024-12-16 10:30:00-05:00 202.270004 202.740005 ... 0.0 0.0
2024-12-16 11:00:00-05:00 202.531296 202.531296 ... 0.0 0.0
2024-12-16 11:30:00-05:00 202.220001 202.419998 ... 0.0 0.0
2024-12-16 12:00:00-05:00 202.285004 202.583496 ... 0.0 0.0
2024-12-16 12:30:00-05:00 202.516800 202.619995 ... 0.0 0.0
2024-12-16 13:00:00-05:00 202.630005 202.850006 ... 0.0 0.0
2024-12-16 13:30:00-05:00 202.759995 202.809998 ... 0.0 0.0
2024-12-16 14:00:00-05:00 202.764999 202.850006 ... 0.0 0.0
2024-12-16 14:30:00-05:00 202.570007 202.919998 ... 0.0 0.0
2024-12-16 15:00:00-05:00 202.820007 202.820007 ... 0.0 0.0
2024-12-16 15:30:00-05:00 202.361404 202.361404 ... 0.0 0.0
2024-12-17 09:30:00-05:00 201.100006 201.449997 ... 0.0 0.0
2024-12-17 10:00:00-05:00 200.559998 200.639999 ... 0.0 0.0
2024-12-17 10:30:00-05:00 199.479996 199.904999 ... 0.0 0.0
2024-12-17 11:00:00-05:00 199.939102 200.309296 ... 0.0 0.0
2024-12-17 11:30:00-05:00 199.991501 200.559998 ... 0.0 0.0
2024-12-17 12:00:00-05:00 200.259995 200.360001 ... 0.0 0.0
2024-12-17 12:30:00-05:00 200.039993 200.470001 ... 0.0 0.0
2024-12-17 13:00:00-05:00 200.506500 200.539993 ... 0.0 0.0
2024-12-17 13:30:00-05:00 200.041595 200.309006 ... 0.0 0.0
2024-12-17 14:00:00-05:00 200.270004 200.309998 ... 0.0 0.0
2024-12-17 14:30:00-05:00 199.990005 199.990005 ... 0.0 0.0
2024-12-17 15:00:00-05:00 199.820007 199.820007 ... 0.0 0.0
2024-12-17 15:30:00-05:00 199.494995 200.089996 ... 0.0 0.0
2024-12-18 09:30:00-05:00 200.539993 200.710007 ... 0.0 0.0
2024-12-18 10:00:00-05:00 199.759995 199.960007 ... 0.0 0.0
2024-12-18 10:30:00-05:00 199.639999 200.210007 ... 0.0 0.0
2024-12-18 11:00:00-05:00 200.100006 200.115005 ... 0.0 0.0
[30 rows x 8 columns]
```
I've seen [this issue](https://github.com/ranaroussi/yfinance/issues/1445) and wonder if it may be related, but can anyone shed some light on whether there's a way to return the actual closing bar?
Obviously the opening value of the next day's bar is a different value than the 16:00 close, so it can't be used as a stand-in. | closed | 2025-02-12T20:26:48Z | 2025-02-12T22:11:18Z | https://github.com/ranaroussi/yfinance/issues/2262 | [] | cppt | 1 |
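One thing worth noting: with 30-minute bars, the 15:30 row spans 15:30-16:00, so its `Close` should already be the session close; there is no separate 16:00 bar to return. If you want an explicit per-day closing value, you can take the last intraday bar of each session (pandas-only sketch, column names as in the output above):

```python
import pandas as pd

def session_close(df):
    """Close of the last intraday bar for each calendar day."""
    # The final bar of the session ends at 16:00, so its Close is the day's close.
    return df["Close"].groupby(df.index.date).last()
```

For the official settled close it may still be safer to pull daily (`interval='1d'`) data separately and join it, since intraday and daily feeds can differ slightly.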
FactoryBoy/factory_boy | sqlalchemy | 352 | Custom provider declaration example in add_provider documentation | Example https://factoryboy.readthedocs.io/en/latest/reference.html#factory.Faker.add_provider showing how to create `SmileyProvider` would be nice.
BTW, where are the sources for the docs? https://github.com/FactoryBoy/factory_boy/blob/master/docs/reference.rst doesn't have the following section. | open | 2017-03-09T12:17:37Z | 2017-11-01T00:12:32Z | https://github.com/FactoryBoy/factory_boy/issues/352 | [] | buoto | 2 |
JoeanAmier/TikTokDownloader | api | 380 | Suggestion: add a customizable naming scheme for downloaded files | Several fields could be user-configurable.
Suppose they are defined as follows:
publish date: YYYYMMDD
publish time: hhmm
publishing user: user
post title: title
post ID: id
For example, I personally name saved files like this:
YYYY.MM.DD_hhmm_user_title | open | 2025-01-16T16:55:55Z | 2025-01-17T01:05:27Z | https://github.com/JoeanAmier/TikTokDownloader/issues/380 | [] | yingfeng-i | 2 |
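The template scheme proposed in the issue above can be prototyped with a plain format string (the field names here are hypothetical, not TikTokDownloader's actual config keys):

```python
from datetime import datetime

def build_filename(template, user, title, post_id, when):
    """Render a user-supplied filename template from post metadata."""
    return template.format(
        date=when.strftime("%Y.%m.%d"),
        time=when.strftime("%H%M"),
        user=user,
        title=title,
        id=post_id,
    )

name = build_filename(
    "{date}_{time}_{user}_{title}",
    user="alice", title="demo", post_id="123",
    when=datetime(2025, 1, 16, 16, 55),
)
print(name)  # → 2025.01.16_1655_alice_demo
```

Exposing the template string in the config file would let each user pick their own field order.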
huggingface/datasets | numpy | 7,378 | Allow pushing config version to hub | ### Feature request
Currently, when datasets are created, they can be versioned by passing the `version` argument to `load_dataset(...)`. For example creating `outcomes.csv` on the command line
```
echo "id,value\n1,0\n2,0\n3,1\n4,1\n" > outcomes.csv
```
and creating it
```
import datasets
dataset = datasets.load_dataset(
"csv",
data_files ="outcomes.csv",
keep_in_memory = True,
version = '1.0.0')
```
The version info is stored in the `info` and can be accessed e.g. by `next(iter(dataset.values())).info.version`
This dataset can be uploaded to the hub with `dataset.push_to_hub(repo_id = "maomlab/example_dataset")`. This will create a dataset on the hub with the following in the `README.md`, but it doesn't upload the version information:
```
---
dataset_info:
features:
- name: id
dtype: int64
- name: value
dtype: int64
splits:
- name: train
num_bytes: 64
num_examples: 4
download_size: 1332
dataset_size: 64
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
```
However, when I download from the hub, the version information is missing:
```
dataset_from_hub_no_version = datasets.load_dataset("maomlab/example_dataset")
next(iter(dataset.values())).info.version
```
I can add the version information manually to the hub, by appending it to the end of config section:
```
...
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
version: 1.0.0
---
```
And then when I download it, the version information is correct.
### Motivation
### Why adding version information for each config makes sense
1. The version information is already recorded in the dataset config info data structure and is able to parse it correctly, so it makes sense to sync it with `push_to_hub`.
2. Keeping the version info in at the config level is different from version info at the branch level. As the former relates to the version of the specific dataset the config refers to rather than the version of the dataset curation itself.
## An explanation for the current behavior:
In [datasets/src/datasets/info.py:159](https://github.com/huggingface/datasets/blob/fb91fd3c9ea91a818681a777faf8d0c46f14c680/src/datasets/info.py#L159C1-L160C1), the `_INCLUDED_INFO_IN_YAML` variable doesn't include `"version"`.
If my reading of the code is right, adding `"version"` to `_INCLUDED_INFO_IN_YAML` would allow the version information to be uploaded to the hub.
### Your contribution
Request: add `"version"` to `_INCLUDED_INFO_IN_YAML` in [datasets/src/datasets/info.py:159](https://github.com/huggingface/datasets/blob/fb91fd3c9ea91a818681a777faf8d0c46f14c680/src/datasets/info.py#L159C1-L160C1).
| open | 2025-01-21T22:35:07Z | 2025-01-30T13:56:56Z | https://github.com/huggingface/datasets/issues/7378 | [
"enhancement"
] | momeara | 1 |
globaleaks/globaleaks-whistleblowing-software | sqlalchemy | 3,894 | Deduplicated notification emails | ### Proposal
Notification emails from errors should be deduplicated to prevent spamming. This should be according to content / stack trace, not timestamp. As in, if I get `FooException` 100 times in an hour, I want maybe the first email and then a summary ("100 cases in the last hour") to let me know it's repeating. An email per instance is impossibly many.
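A sketch of the proposed behaviour, keyed on a hash of the stack trace, with the first occurrence sent immediately and repeats rolled into a per-window summary (hypothetical, not GlobaLeaks code):

```python
import hashlib
import time

class ErrorDeduplicator:
    def __init__(self, window_seconds=3600, now=time.time):
        self.window = window_seconds
        self.now = now  # injectable clock, handy for testing
        self.seen = {}  # trace hash -> (window start, occurrence count)

    def record(self, traceback_text):
        """Return a message to email, or None to stay silent."""
        key = hashlib.sha256(traceback_text.encode()).hexdigest()[:16]
        t = self.now()
        start, count = self.seen.get(key, (t, 0))
        if count == 0:
            self.seen[key] = (t, 1)
            return traceback_text  # first sighting: send as-is
        if t - start >= self.window:
            # window expired: emit a summary and start a new window
            self.seen[key] = (t, 1)
            return f"{count} occurrences of this error in the last window"
        self.seen[key] = (start, count + 1)
        return None  # suppressed repeat
```

In real code the summary would carry the trace itself and the window bounds; the key point is that 100 identical `FooException`s produce one email plus one summary, not 100 emails.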
### Motivation and context
in a 10 minute window, the application sent a thousand emails to our admin, and our email provider assumed with this was spam and froze our account. | open | 2023-12-16T09:23:52Z | 2023-12-16T09:23:52Z | https://github.com/globaleaks/globaleaks-whistleblowing-software/issues/3894 | [
"T: Feature",
"Triage"
] | brassy-endomorph | 0 |
hbldh/bleak | asyncio | 586 | Using write without response causes exception on disconnect | * bleak version: 0.12.0
* Python version: 3.9.5
* Operating System: macOS 11.4
* BlueZ version (`bluetoothctl -v`) in case of Linux: N/A
### Description
It seems that calling `write_gatt_char` with `response` set to False results in an exception being thrown when `disconnect()` is called.
### What I Did
Here is an example program that reproduces the issue:
```python
#!env/bin/python
import asyncio
from bleak import BleakScanner, BleakClient, BleakError
uuid = '4831911B-DE54-409F-8750-0172C5A43BEF'
write_char = 'CED94322-6692-4A12-87D5-6F2764762B2A'
test_data = b'\x00' * 20
def callback(handle, value):
print("received data: " + str(value))
async def find_device(device_uuid):
target_device = None
tries = 0
while target_device == None and tries < 5:
devices = await BleakScanner.discover()
for device in devices:
if "uuids" in device.metadata and device_uuid.lower() in device.metadata["uuids"]:
target_device = device
tries = tries + 1
return BleakClient(target_device)
async def reproduce_exception():
device = await find_device(uuid)
await device.connect()
await device.start_notify(write_char.lower(), callback)
await device.write_gatt_char(write_char.lower(), test_data, False)
await device.stop_notify(write_char.lower())
await device.disconnect()
if __name__ == '__main__':
event_loop = asyncio.get_event_loop()
event_loop.run_until_complete(reproduce_exception())
```
When I run that script, I get the following output:
```
¯\_(ツ)_/¯ ble_testing:./minimal_example.py
Future exception was never retrieved
future: <Future finished exception=BleakError('disconnected') created at /usr/local/Cellar/python@3.9/3.9.5/Frameworks/Python.framework/Versions/3.9/lib/python3.9/asyncio/base_events.py:424>
source_traceback: Object created at (most recent call last):
File "/Users/adamincera/code/ble_testing/./minimal_example.py", line 38, in <module>
event_loop.run_until_complete(reproduce_exception())
File "/usr/local/Cellar/python@3.9/3.9.5/Frameworks/Python.framework/Versions/3.9/lib/python3.9/asyncio/base_events.py", line 629, in run_until_complete
self.run_forever()
File "/usr/local/Cellar/python@3.9/3.9.5/Frameworks/Python.framework/Versions/3.9/lib/python3.9/asyncio/base_events.py", line 596, in run_forever
self._run_once()
File "/usr/local/Cellar/python@3.9/3.9.5/Frameworks/Python.framework/Versions/3.9/lib/python3.9/asyncio/base_events.py", line 1882, in _run_once
handle._run()
File "/usr/local/Cellar/python@3.9/3.9.5/Frameworks/Python.framework/Versions/3.9/lib/python3.9/asyncio/events.py", line 80, in _run
self._context.run(self._callback, *self._args)
File "/Users/adamincera/code/ble_testing/./minimal_example.py", line 32, in reproduce_exception
await device.write_gatt_char(write_char.lower(), test_data, False)
File "/Users/adamincera/code/ble_testing/env/lib/python3.9/site-packages/bleak/backends/corebluetooth/client.py", line 319, in write_gatt_char
success = await self._delegate.write_characteristic(
File "/Users/adamincera/code/ble_testing/env/lib/python3.9/site-packages/bleak/backends/corebluetooth/PeripheralDelegate.py", line 166, in write_characteristic
future = self._event_loop.create_future()
File "/usr/local/Cellar/python@3.9/3.9.5/Frameworks/Python.framework/Versions/3.9/lib/python3.9/asyncio/base_events.py", line 424, in create_future
return futures.Future(loop=self)
bleak.exc.BleakError: disconnected
¯\_(ツ)_/¯ ble_testing:
```
Running the same script with `response` set to True results in no error message. | closed | 2021-07-02T04:15:15Z | 2021-07-07T17:23:49Z | https://github.com/hbldh/bleak/issues/586 | [
"Backend: Core Bluetooth"
] | adamincera | 3 |
charlesq34/pointnet | tensorflow | 258 | Why the accuracy of train is high and the result of val is poor | Hello, I am a newbie in deep learning.
I would like to ask: I use the part_seg program to segment a large-scale urban point cloud dataset (500m*500m). The training data is the labeled portion of this dataset, and the validation data is part of the same dataset.
I divide the input training data into a label (city), and then divide it into four parts (ground, wall, roof, vegetation)
training point number: 300,000
total point number: 2,000,000
val point number:150,000
During the training process, the training accuracy continuously increased to 90% and the loss decreased to 0.4. I understand this accuracy compares the categories predicted on the training data against the input categories of the training data.
However, the accuracy and loss on val show no clear trend: the accuracy is only 45% and fluctuates constantly, and the loss hovers around 2.
At the same time, no matter whether the input features are XYZ, XYZRI, or XYZRID (XYZ, return number, intensity, density of points), the final result is similar.
What could cause this? Looking only at the training process, very good results are obtained, but val is very poor. Any suggestions for improvement? Or should I use sem_seg instead of part_seg?
Thanks in advance.
| open | 2020-11-30T09:10:04Z | 2021-03-19T20:22:34Z | https://github.com/charlesq34/pointnet/issues/258 | [] | yasongguo | 1 |
keras-team/keras | tensorflow | 20,136 | Keras 3 doesn't map dictionary inputs by "key" | The code below runs in TensorFlow 2.11 (Keras 2) but not in tf-nightly (Keras 3.4.1). I think Keras 3 doesn't map inputs by dict key:
Epoch 1/10
Traceback (most recent call last):
File "/home/wangx286/rnn-base-caller/base_caller/scripts/example_metric.py", line 32, in <module>
model.fit({'before': x_train, 'after': y_train}, epochs=10, batch_size=32)
File "/home/wangx286/miniconda3/envs/tf216/lib/python3.10/site-packages/keras/src/utils/traceback_utils.py", line 122, in error_handler
raise e.with_traceback(filtered_tb) from None
File "/home/wangx286/miniconda3/envs/tf216/lib/python3.10/site-packages/keras/src/models/functional.py", line 244, in _adjust_input_rank
raise ValueError(
ValueError: Exception encountered when calling Functional.call().
**Invalid input shape for input Tensor("data_1:0", shape=(None,), dtype=float32). Expected shape (None, 20), but input has incompatible shape (None,)**
Arguments received by Functional.call():
• inputs={'before': 'tf.Tensor(shape=(None, 20), dtype=float32)', 'after': 'tf.Tensor(shape=(None,), dtype=float32)'}
• training=True
• mask={'before': 'None', 'after': 'None'}
```
import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Model
# Define the model using the Functional API
x = tf.keras.Input(shape=(20,), name="before", dtype=tf.float32)
y = tf.keras.Input(shape=(), name="after", dtype=tf.float32)
tmp = Dense(64, activation='relu')(x)
outputs = Dense(1, activation='sigmoid')(tmp)
class DummyLossLayer(tf.keras.layers.Layer):
def call(self, *x):
self.add_loss(tf.keras.losses.BinaryCrossentropy(from_logits=True)(*x))
return x
outputs, _ = DummyLossLayer()(outputs, y)
model = Model(inputs=[x, y], outputs=outputs)
# Compile the model with the custom metric
model.compile(optimizer='adam')
# Dummy data for demonstration
x_train = np.random.random((1000, 20))
y_train = np.random.randint(2, size=(1000,)).astype(np.float32)
# Train the model
model.fit({'before': x_train, 'after': y_train}, epochs=10, batch_size=32)
``` | closed | 2024-08-19T19:33:43Z | 2024-08-21T04:08:16Z | https://github.com/keras-team/keras/issues/20136 | [
"type:Bug"
] | MeowTheCat | 2 |
DistrictDataLabs/yellowbrick | scikit-learn | 1,289 | Use classification visualizers directly from predictions, targets and logits? | Hi,
I work on classification problems and really like the design of the classification visualizers and their plots.
Nevertheless, I am a pytorch user. I usually store the model's output on test set as "prediction", "target", and "logits" (probability of each class).
It looks to me that classification report, confusion matrix, ROCAUC, precision-recall curves, class prediction error, discrimination threshold can be achieved using these three inputs.
Is there an easy way to adapt my workflow to it?
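Yellowbrick visualizers want an estimator, but for classification they mostly only call `predict`/`predict_proba`/`classes_` on it, so a duck-typed wrapper that replays your stored PyTorch outputs may be enough (untested sketch; whether every visualizer accepts it depends on its internal checks):

```python
class PrecomputedClassifier:
    """Replays stored predictions/probabilities behind an sklearn-like API."""

    _estimator_type = "classifier"  # sklearn convention for classifier checks

    def __init__(self, classes, predictions, probabilities):
        self.classes_ = list(classes)
        self._predictions = predictions
        self._probabilities = probabilities

    def fit(self, X=None, y=None):
        return self  # nothing to fit: outputs are precomputed

    def predict(self, X=None):
        return self._predictions

    def predict_proba(self, X=None):
        return self._probabilities
```

You would then hand an instance of this to e.g. `ClassificationReport(PrecomputedClassifier(...))` along with the targets; the `fit` call is a no-op.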
Thanks | closed | 2022-11-29T20:57:18Z | 2022-11-29T21:48:00Z | https://github.com/DistrictDataLabs/yellowbrick/issues/1289 | [] | 2533245542 | 1 |
ultrafunkamsterdam/undetected-chromedriver | automation | 916 | detected at mastersportal.com | ### Link: `https://www.mastersportal.com/studies/294909/environmental-economics-and-management.html?ref=search_card`
### Code:

### Result:


| closed | 2022-11-24T11:41:01Z | 2022-12-03T23:11:36Z | https://github.com/ultrafunkamsterdam/undetected-chromedriver/issues/916 | [] | Alexei007 | 0 |
littlecodersh/ItChat | api | 427 | File "/usr/local/lib/python3.6/site-packages/itchat/components/login.py", line 213, in show_mobile_login self.loginInfo['url'], self.loginInfo['pass_ticket']) KeyError: 'pass_ticket' | itchat 1.3.5 版本。
这个在login.py 里面提示,`self.loginInfo['pass_ticket'])
KeyError: 'pass_ticket'`,是啥情况啊 | closed | 2017-06-26T13:51:02Z | 2018-02-28T04:13:17Z | https://github.com/littlecodersh/ItChat/issues/427 | [
"question"
] | lucasjinreal | 8 |
vitalik/django-ninja | pydantic | 1,222 | Resolve method is not being called in a nested schema | Hey everyone, I am trying to output a nested schema in a response, but the resolve method of the nested schema is not being called, and therefore the following error is raised:
The main schema for the response (shortened for brevity):
```
class DetailedAlbumOut(Schema):
id: int
artists: List[ArtistOut]
title: str
```
The ArtistOut schema used:
```
class ArtistOut(Schema):
id: int
name: str
url: str
@staticmethod
def resolve_url(obj):
artist_url = reverse("api-1.0:retrieve_artist", kwargs={"id": obj.id})
return obj.request.build_absolute_uri(artist_url)
```
The artist object only has id and name fields. The URL attribute is calculated using the resolve method. When I use the ArtistOut schema on its own, it works perfectly. However, when I try to output the DetailedAlbumOut response, I get the following error:
```
pydantic_core._pydantic_core.ValidationError: 1 validation error for NinjaResponseSchema
response.artists.0.url
Field required [type=missing, input_value=<DjangoGetter: <Artist: Jay-Z>>, input_type=DjangoGetter]
```
From playing around with it to try to figure out how to get it to work, it seems like the resolve_url method is not being called. If I remove the url field and the resolve method, then the output works in that I get the id and name of each artist, but I would like to have the the url to the artist resource included.
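One likely fix for the `obj.request` problem: per the django-ninja docs, resolvers can accept a second `context` argument, and that context carries the current request, so the nested schema does not need the request attached to the model instance. A framework-free sketch of the resolver (the URL path is hypothetical; in real code `ArtistOut` would subclass `ninja.Schema` and use `reverse`):

```python
class ArtistOut:  # in real code: class ArtistOut(ninja.Schema)
    @staticmethod
    def resolve_url(obj, context):
        # django-ninja passes a context dict containing the current request,
        # so the Artist instance itself needs no `.request` attribute
        request = context["request"]
        return request.build_absolute_uri(f"/api/artists/{obj.id}")
```

If the resolver still is not invoked when the schema is nested, declaring `url` with a default (e.g. `url: str = ""`) at least turns the hard validation error into a diagnosable empty field.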
Any help is much appreciated. Let me know if you have any further questions or need anything to be clarified, thank you! | closed | 2024-07-07T20:29:41Z | 2024-07-11T04:35:19Z | https://github.com/vitalik/django-ninja/issues/1222 | [] | millejon | 4 |
indico/indico | flask | 6,018 | Unschedule contribution icon change | Currently, the unschedule contribution icon is a trash can. It is confusing for users, who think the action will be deleting the contribution altogether.
There is no "clock with cross" icon in the icomoon collection, but maybe just a cross would do? Or a composition of icons?
| open | 2023-11-01T14:47:02Z | 2023-11-01T14:47:19Z | https://github.com/indico/indico/issues/6018 | [
"enhancement",
"new-timetable"
] | javfg | 0 |
amdegroot/ssd.pytorch | computer-vision | 343 | line 83, in __call__\n label_idx = self.class_to_ind[name]\nKeyError: | open | 2019-05-08T08:47:24Z | 2022-09-09T09:08:08Z | https://github.com/amdegroot/ssd.pytorch/issues/343 | [] | sxyxf66 | 11 | |
aidlearning/AidLearning-FrameWork | jupyter | 50 | How to reorganize keyboard layout? | I wish there were a left-arrow button; how could I add one?
Thanks a lot | closed | 2019-09-12T07:00:04Z | 2019-09-14T04:16:20Z | https://github.com/aidlearning/AidLearning-FrameWork/issues/50 | [] | dobefore | 1 |
deepinsight/insightface | pytorch | 2,714 | error installing on windows 11 |
C:\Users\jeffr\AppData\Local\Temp\pip-install-fw8krnga\insightface_ac45d088a86942fe933a54455be8f4c2\insightface\thirdparty\face3d\mesh\cython\mesh_core.h(4): fatal error C1083: Cannot open include file: 'stdio.h': No such file or directory
error: command 'C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.42.34433\bin\HostX86\x64\cl.exe' failed with exit code 2
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for insightface
Successfully built filterpy
Failed to build insightface
ERROR: Could not build wheels for insightface, which is required to install pyproject.toml-based projects
| open | 2025-01-03T11:55:56Z | 2025-03-10T03:06:35Z | https://github.com/deepinsight/insightface/issues/2714 | [] | J-Ai-57 | 1 |
donnemartin/data-science-ipython-notebooks | machine-learning | 64 | alexnet.ipynb contains incomplete architecture of alexnet(2 cnn layers missing) | The AlexNet implementation in TensorFlow has an incomplete architecture in which 2 convolutional layers are missing. This issue is in reference to the Python notebook mentioned below.
https://github.com/donnemartin/data-science-ipython-notebooks/blob/master/deep-learning/tensor-flow-examples/notebooks/3_neural_networks/alexnet.ipynb
| open | 2019-04-09T18:08:10Z | 2020-09-27T16:39:19Z | https://github.com/donnemartin/data-science-ipython-notebooks/issues/64 | [
"needs-review"
] | harshitsaini | 5 |
mwaskom/seaborn | matplotlib | 3,352 | Categorical scatter plots on symlog-scaled axis | Hi,
On the current dev version (eb2b5a2) and matplotlib 3.7.1, consider the following code that uses `stripplot` to draw two points on an unscaled and a symlog-scaled y-axis:
```python
import seaborn as sns
import matplotlib.pyplot as plt
x = [0.1,2]
y = [0.1,5]
fig, axs = plt.subplots(ncols=2)
sns.stripplot(x=x, y=y, ax=axs[0])
axs[0].set_yscale("symlog", base=10, linthresh=1)
axs[1].set_yscale("symlog", base=10, linthresh=1)
sns.stripplot(x=x, y=y, ax=axs[1])
axs[0].set_ylim(0,10**4)
axs[1].set_ylim(0,10**4)
axs[0].set_title("stripplot on unscaled axis")
axs[1].set_title("stripplot on symlog-scaled axis")
```

The plot on the already-scaled y-axis contains values that were not provided. The plot changes (but is still erroneous) if I set `linthresh` to something different (for example, when using the linthresh default of 2).
This also happens with `pointplot`. It works as expected with log-scaled axis or with pure matplotlib scatter calls. Couldn't reproduce using seaborn 0.12.2.
| closed | 2023-05-01T09:18:52Z | 2023-08-20T22:08:34Z | https://github.com/mwaskom/seaborn/issues/3352 | [
"bug",
"mod:categorical"
] | MaozGelbart | 0 |
python-security/pyt | flask | 36 | Add readthedocs | If you look at https://github.com/trailofbits/manticore/blob/master/README.md you can see a nice link at the top to the docs. I'll write the docs once the layout is there, please see
https://www.slideshare.net/mobile/JohnCosta/how-to-readthedocs
(The [easy] label marks issues that are good starting points for new people who want to begin contributing.)
"enhancement",
"easy"
] | KevinHock | 9 |
peerchemist/finta | pandas | 27 | possible error in calculation | may be you can double check..
but i think this line https://github.com/peerchemist/finta/blob/master/finta/finta.py#L798
should be
ohlc["down_move"] = -ohlc["low"].diff() | closed | 2019-04-28T17:04:30Z | 2019-05-05T11:36:51Z | https://github.com/peerchemist/finta/issues/27 | [] | livinter | 2 |
davidsandberg/facenet | computer-vision | 757 | when i train a classifier on own images,the accuracy = 0 | open | 2018-05-24T08:01:28Z | 2018-07-18T13:54:49Z | https://github.com/davidsandberg/facenet/issues/757 | [] | chankillo | 1 | |
benbusby/whoogle-search | flask | 559 | [QUESTION] How to use social media alternatives using url parameter | The official instance, by default, doesn't use social media alternatives.
Since I am using the Cookie AutoDelete extension, if I change the config, it won't persist.
So is it possible to do this via url parameter? | closed | 2021-11-28T04:51:45Z | 2021-12-17T15:48:31Z | https://github.com/benbusby/whoogle-search/issues/559 | [
"question"
] | specter78 | 5 |
Kanaries/pygwalker | pandas | 654 | Make Streamlit Bike Sharing app contained within Pygwalker universe | I had two issues when trying to convert the Streamlit Bike Sharing app to a panel app.
- The [gw_config.json](https://github.com/Kanaries/pygwalker-in-streamlit/blob/main/gw_config.json) cannot be used by `GraphicWalker` React directly; I need to unwrap it by taking the inner `configuration` when using it with `GraphicWalker` React. There is no explanation for this, and as far as I can see the file is created outside the Pygwalker universe.
- The `range` filter in the spec cannot, as far as I can see, be created via the `GraphicWalker` UI. My guess is that it comes from outside the Pygwalker universe or was added manually. This is also hard to understand and creates confusion.
- | open | 2024-11-06T04:18:36Z | 2024-11-09T04:06:50Z | https://github.com/Kanaries/pygwalker/issues/654 | [] | MarcSkovMadsen | 2 |
openapi-generators/openapi-python-client | fastapi | 545 | Delete request with body | **Is your feature request related to a problem? Please describe.**
In our API we handle a lot of items and came to a point where we want to delete a lot of this items at the same time. Our first approach was to call a DELETE on every single ID. This works, but it is very slow.
Then we added a new delete functionality where we have only one DELETE call with a JSON body containing a lot of IDs. I know that this is not the "normal" way to do it, but it works fine and is not forbidden in OpenAPI, I think.
The problem with the openapi-python-client is that it creates an httpx.delete call. And the httpx library does not allow a body for a DELETE. In the httpx GitHub I found this thread: [https://github.com/encode/httpx/discussions/1587](https://github.com/encode/httpx/discussions/1587)
So a DELETE with a body is possible if you use httpx.request instead of httpx.delete.
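For illustration, HTTP itself permits any method to carry a body — the snippet below shows this with the standard library only (the generated client would use `httpx.request("DELETE", ...)` in the same way; the URL and payload here are made up):

```python
import json
import urllib.request

payload = json.dumps({"ids": [1, 2, 3]}).encode()
request = urllib.request.Request(
    "https://api.example.com/items",  # hypothetical endpoint
    data=payload,
    method="DELETE",  # like httpx.request(), urllib accepts any verb with a body
    headers={"Content-Type": "application/json"},
)
print(request.get_method())  # → DELETE
```

Building the `Request` object does not send anything; it only demonstrates that the method/body combination is representable.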
**Describe the solution you'd like**
After a short look into the openapi-python-client code I have an easy solution for this problem. I just changed every httpx call into a httpx.request call and added the endpoint.method in the _get_kwargs method.
Here are my changes:
[endpoint_module.py.jinja.txt](https://github.com/openapi-generators/openapi-python-client/files/7726188/endpoint_module.py.jinja.txt)
For me this works pretty good and does not change any other behavior. If nothing speaks against it I would like to PR this change.
| closed | 2021-12-16T09:53:02Z | 2022-01-19T15:28:43Z | https://github.com/openapi-generators/openapi-python-client/issues/545 | [
"✨ enhancement"
] | MalteBecker | 4 |
python-restx/flask-restx | flask | 480 | How do I document the required header in and endpoint, when I'm passing a model to expect() instead of a regparse? | Regparser is not good at documented nested models, so I have switched to passing a model into the `@api.expect()` decorator
`@api.expect(models.input_model_request_generation, validate=True)`
However that does not document my headers.
For headers, the documentation says the following

But I cannot use both a parser and a model to document. Passing the parser to a subsequent `@api.doc()` decorator doesn't seem to work either.
How do I document the expected headers when using models in `expect()` ?
| closed | 2022-10-11T07:58:03Z | 2022-10-11T09:46:01Z | https://github.com/python-restx/flask-restx/issues/480 | [
"question"
] | db0 | 2 |
nl8590687/ASRT_SpeechRecognition | tensorflow | 177 | GPU训练速度 | 想知道作者训练时是用的什么规格的GPU?这边想训练自己的模型,但是200h+有些太长,所以在考虑增加多块2080Ti比较好还是提高到Titan比较好呢? | open | 2020-03-29T11:03:01Z | 2020-04-25T06:04:04Z | https://github.com/nl8590687/ASRT_SpeechRecognition/issues/177 | [] | ASlepnir | 3 |
mirumee/ariadne | api | 320 | Subscriptions "Complete Example" breaks under 0.10.0 | I'm working with Subscriptions in ariadne using the exact code from the [Complete Example](https://ariadnegraphql.org/docs/subscriptions#complete-example) in the docs. When I load the playground and issue the query `subscription { counter }`, instead of a working counter, an "unsupported operand" error is raised:
```
Traceback (most recent call last):",
File \"/Users/...REDACTED.../virtualenvs/graphql-Pfr5HTvn/lib/python3.7/site-packages/graphql/execution/execute.py\", line 625, in resolve_field_value_or_error",
result = resolve_fn(source, info, **args)",
File \"/Users/...REDACTED.../app/__init__.py\", line 26, in counter_resolver",
return count + 1",
TypeError: unsupported operand type(s) for +: 'NoneType' and 'int'"
```
What's wild is that the example in the docs works with ariadne 0.8.0 and 0.9.0, but not 0.10.0. I'm not familiar enough with the internals, though, to spot the issue in the [0.9.0...0.10.0 diff](https://github.com/mirumee/ariadne/compare/0.9.0...0.10.0).
(For completeness, I'm using Python 3.7.6, ariadne 0.10.0, under uvicorn 0.11.2 and gunicorn 20.0.4.) | closed | 2020-02-14T05:28:55Z | 2021-01-01T16:24:09Z | https://github.com/mirumee/ariadne/issues/320 | [
"bug",
"roadmap"
] | command-tab | 8 |
ageitgey/face_recognition | python | 993 | How can I add Percentage rate? | * face_recognition version: 1.2.3
* Python version: 3.6.8
* Operating System: windows 10
### Description
Hi Adam, firstly thanks for the great library. I have already encoded 21k faces (one of them mine) and inserted them into SQL, and I am testing three different photos of mine. Results are below.
First photo : 40 different person (one of them is mine)
Second photo : 20 different person (one of them is mine)
Third photo : 77 different person (one of them is mine)
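For reference, a graded score usually comes from `face_recognition.face_distance` rather than the boolean `compare_faces`. The mapping below is a crude linear heuristic of my own, not part of the library — the `0.6` is only the library's documented default tolerance:

```python
def match_percent(distance: float, threshold: float = 0.6) -> float:
    """Map a face distance to a rough similarity percentage.

    distance 0.0 -> 100%, distance == threshold -> 50%, >= 2*threshold -> 0%.
    """
    if distance >= threshold:
        return max(0.0, 50.0 * (1.0 - (distance - threshold) / threshold))
    return 100.0 - 50.0 * distance / threshold

print(round(match_percent(0.0)))  # → 100
print(round(match_percent(0.6)))  # → 50
```

Feeding each distance returned by `face_distance` through such a function gives a percentage per candidate face.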
So I want my own photo reported as a 100% match and the other photos as, say, 94%, 54%, 69%... Is there any way to do this? | closed | 2019-12-03T15:22:07Z | 2021-05-27T01:38:43Z | https://github.com/ageitgey/face_recognition/issues/993 | [] | HAKANMAZI | 4 |
JaidedAI/EasyOCR | machine-learning | 412 | Missing information while extracting text from similar images | I have similar set of images from which I am trying to extract. On some images it is working good but on certain it misses necessary information. The images have texts written on them in German.
In the first image the information "Verkauft" could not be extracted, while in the next image it was extracted. I had such images and roughly only 50% of times the text "Verkauft" is extracted.


What could be the probable cause of this? Does anyone have any input on this? | closed | 2021-04-01T11:30:21Z | 2021-04-06T17:53:16Z | https://github.com/JaidedAI/EasyOCR/issues/412 | [] | RishikMani | 2 |
modin-project/modin | data-science | 7,315 | Avoid unnecessary length checks in `df.squeeze` | It is possible that when `axis=1` in squeeze we still check `len(self.index)`, which is never necessary when `axis=1`. Link to code here: https://github.com/modin-project/modin/blob/eac3c77baf456c7bd7e1e5fde81790a4ed3ebb27/modin/pandas/dataframe.py#L2074-L2084
This is an easy fix, also see https://github.com/snowflakedb/snowpark-python/pull/1767 | closed | 2024-06-14T15:48:36Z | 2024-09-20T18:46:25Z | https://github.com/modin-project/modin/issues/7315 | [] | sfc-gh-dpetersohn | 0 |
scikit-optimize/scikit-optimize | scikit-learn | 1,130 | BayesSearchCV returns different results when n_points is changed | Hello,
I'm using the 'unofficial' version of BayesSearchCV with multimetrics. In order to improve parallel processing and speed up run times with my new machine, I increased the n_points parameter from the default 1.
However, for every different value of n_points I used, I got different sets of results all else being the same. It does consistently return the same results for the same n_points value across repeat runs.
To eliminate the 'unofficial' factor, I installed the official release 0.9 and repeated the runs to get the same outcomes as earlier i.e. different scores with different n_points, but the same scores for a specific n_points as earlier.
- Has anyone come across this issue before or point me to a way to resolve it?
- Or maybe that's how it is supposed to work, in which case, point me to some documentation on how to interpret and manage it?
Please let me know if you'd like more info.
Thanks in advance,
Narayan | open | 2022-10-14T02:16:47Z | 2022-10-14T12:17:53Z | https://github.com/scikit-optimize/scikit-optimize/issues/1130 | [] | RNarayan73 | 0 |
autokey/autokey | automation | 635 | Support for multiple languages (l10n) | ## Classification:
UI/Usability
## Reproducibility:
Always
## AutoKey version:
Not relevant
## Used GUI:
Gtk
## Installed via:
Package manager
## Linux distribution:
Not relevant
## Summary:
Currently GUI speaks only English. It would be great if support for other languages is added.
## Steps to reproduce:
Run `autokey-gtk`, use program.
## Expected result:
User has an option to switch to non-English language (if translated).
## Actual result:
Impossible atm | open | 2021-12-01T15:25:58Z | 2023-12-10T06:36:59Z | https://github.com/autokey/autokey/issues/635 | [
"enhancement",
"help-wanted",
"user interface"
] | jose1711 | 4 |
CorentinJ/Real-Time-Voice-Cloning | pytorch | 633 | Installation solutions for people with multiple python versions? | I've been doing a lot of troubleshooting trying to get this to work. though I _believe_ im on python 3.8 right now, ive been installing everything using **pip** instead of **pip3**. thus installing tensorflow version 1.15 didnt work, instead i installed the newest tensorflow. installing everything else worked fine.
I attempted to test "demo_cli.py" normally, I got "no module named 'numpy'" so I used python3 when running the command instead and got "ModuleNotFoundError: no module named 'tensorflow.contrib'
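A generic way to avoid this pip/interpreter mismatch (not specific to this repo, and the version number in the comment is hypothetical) is to run pip through the exact interpreter you intend to use, and to confirm from inside Python which interpreter a script actually sees:

```python
import sys

# Instead of a bare `pip install ...`, run pip through a specific interpreter:
#     python3.7 -m pip install -r requirements.txt
#     python3.7 demo_cli.py
# `<python> -m pip` installs into *that* interpreter's site-packages,
# instead of whichever `pip` happens to be first on PATH.
print(sys.executable)
print("Python %d.%d" % sys.version_info[:2])
```

Printing `sys.executable` at the top of a failing script quickly reveals whether it is running under the Python you installed the packages for.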
I dont know what to do now, is there a way i can choose to use an older python in the command line, because i do in fact have both 2.7 and 3.7 installed | closed | 2021-01-20T00:15:48Z | 2021-01-20T17:31:28Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/633 | [] | Woolton | 2 |
coqui-ai/TTS | python | 3,299 | [Bug] CUDA crash when running xtts inference in FastAPI for streaming endpoint. | ### Describe the bug
I am using the code at https://github.com/hengjiUSTC/xtts-streaming-server/blob/main/server/main.py to build a FastAPI server for a streaming TTS service. I got the following error:
```
Traceback (most recent call last):
File "/home/ubuntu/xtts-streaming-server/server/venv/lib/python3.10/site-packages/starlette/responses.py", line 277, in __call__
await wrap(partial(self.listen_for_disconnect, receive))
File "/home/ubuntu/xtts-streaming-server/server/venv/lib/python3.10/site-packages/starlette/responses.py", line 273, in wrap
await func()
File "/home/ubuntu/xtts-streaming-server/server/venv/lib/python3.10/site-packages/starlette/responses.py", line 250, in listen_for_disconnect
message = await receive()
File "/home/ubuntu/xtts-streaming-server/server/venv/lib/python3.10/site-packages/uvicorn/protocols/http/httptools_impl.py", line 587, in receive
await self.message_event.wait()
File "/opt/conda/lib/python3.10/asyncio/locks.py", line 214, in wait
await fut
asyncio.exceptions.CancelledError: Cancelled by cancel scope 7f1252414c40
During handling of the above exception, another exception occurred:
+ Exception Group Traceback (most recent call last):
| File "/home/ubuntu/xtts-streaming-server/server/venv/lib/python3.10/site-packages/uvicorn/protocols/http/httptools_impl.py", line 426, in run_asgi
| result = await app( # type: ignore[func-returns-value]
| File "/home/ubuntu/xtts-streaming-server/server/venv/lib/python3.10/site-packages/uvicorn/middleware/proxy_headers.py", line 84, in __call__
| return await self.app(scope, receive, send)
| File "/home/ubuntu/xtts-streaming-server/server/venv/lib/python3.10/site-packages/fastapi/applications.py", line 276, in __call__
| await super().__call__(scope, receive, send)
| File "/home/ubuntu/xtts-streaming-server/server/venv/lib/python3.10/site-packages/starlette/applications.py", line 122, in __call__
| await self.middleware_stack(scope, receive, send)
| File "/home/ubuntu/xtts-streaming-server/server/venv/lib/python3.10/site-packages/starlette/middleware/errors.py", line 184, in __call__
| raise exc
| File "/home/ubuntu/xtts-streaming-server/server/venv/lib/python3.10/site-packages/starlette/middleware/errors.py", line 162, in __call__
| await self.app(scope, receive, _send)
| File "/home/ubuntu/xtts-streaming-server/server/venv/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 79, in __call__
| raise exc
| File "/home/ubuntu/xtts-streaming-server/server/venv/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 68, in __call__
| await self.app(scope, receive, sender)
| File "/home/ubuntu/xtts-streaming-server/server/venv/lib/python3.10/site-packages/fastapi/middleware/asyncexitstack.py", line 21, in __call__
| raise e
| File "/home/ubuntu/xtts-streaming-server/server/venv/lib/python3.10/site-packages/fastapi/middleware/asyncexitstack.py", line 18, in __call__
| await self.app(scope, receive, send)
| File "/home/ubuntu/xtts-streaming-server/server/venv/lib/python3.10/site-packages/starlette/routing.py", line 718, in __call__
| await route.handle(scope, receive, send)
| File "/home/ubuntu/xtts-streaming-server/server/venv/lib/python3.10/site-packages/starlette/routing.py", line 276, in handle
| await self.app(scope, receive, send)
| File "/home/ubuntu/xtts-streaming-server/server/venv/lib/python3.10/site-packages/starlette/routing.py", line 69, in app
| await response(scope, receive, send)
| File "/home/ubuntu/xtts-streaming-server/server/venv/lib/python3.10/site-packages/starlette/responses.py", line 270, in __call__
| async with anyio.create_task_group() as task_group:
| File "/home/ubuntu/xtts-streaming-server/server/venv/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 658, in __aexit__
| raise BaseExceptionGroup(
| exceptiongroup.ExceptionGroup: unhandled errors in a TaskGroup (1 sub-exception)
+-+---------------- 1 ----------------
| Traceback (most recent call last):
| File "/home/ubuntu/xtts-streaming-server/server/venv/lib/python3.10/site-packages/starlette/responses.py", line 273, in wrap
| await func()
| File "/home/ubuntu/xtts-streaming-server/server/venv/lib/python3.10/site-packages/starlette/responses.py", line 262, in stream_response
| async for chunk in self.body_iterator:
| File "/home/ubuntu/xtts-streaming-server/server/venv/lib/python3.10/site-packages/starlette/concurrency.py", line 63, in iterate_in_threadpool
| yield await anyio.to_thread.run_sync(_next, iterator)
| File "/home/ubuntu/xtts-streaming-server/server/venv/lib/python3.10/site-packages/anyio/to_thread.py", line 49, in run_sync
| return await get_async_backend().run_sync_in_worker_thread(
| File "/home/ubuntu/xtts-streaming-server/server/venv/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 2103, in run_sync_in_worker_thread
| return await future
| File "/home/ubuntu/xtts-streaming-server/server/venv/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 823, in run
| result = context.run(func, *args)
| File "/home/ubuntu/xtts-streaming-server/server/venv/lib/python3.10/site-packages/starlette/concurrency.py", line 53, in _next
| return next(iterator)
| File "/home/ubuntu/xtts-streaming-server/server/main.py", line 147, in predict_streaming_generator
| for i, chunk in enumerate(chunks):
| File "/home/ubuntu/xtts-streaming-server/server/venv/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 35, in generator_context
| response = gen.send(None)
| File "/home/ubuntu/xtts-streaming-server/server/venv/lib/python3.10/site-packages/TTS/tts/models/xtts.py", line 633, in inference_stream
| text_tokens = torch.IntTensor(self.tokenizer.encode(sent, lang=language)).unsqueeze(0).to(self.device)
| RuntimeError: CUDA error: an illegal memory access was encountered
| CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
| For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
| Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
|
+------------------------------------
```
### To Reproduce
runnning https://github.com/hengjiUSTC/xtts-streaming-server/blob/main/server/main.py at AWS g4dn.xlarage. With 16GB Gpu and 8G cpu. Using newest 0.20.6 release.
### Expected behavior
_No response_
### Logs
_No response_
### Environment
```shell
TTS version 0.20.6
pytorch version 2.1.1 install with pip
CUDA version:
>>> print(torch.version.cuda)
12.1
CUDNN version:
>>> print(torch.backends.cudnn.version())
8905
python 3.10.9
OS Ubuntu
GPU: nvidia T4 16GB
```
### Additional context
I think the error does come from the xtts module when running for a long time. Does anyone have an idea why this is happening?
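One pattern worth trying — this is an assumption on my part, the traceback alone doesn't prove it — is that two overlapping requests drive the same CUDA model from different threads at once. A minimal guard is to serialize access to the shared model with a lock; the `inference_stream` name below mirrors the API in the report, but the wrapper itself is a sketch:

```python
import threading

class SerializedStreamer:
    """Wrap a shared streaming TTS model so only one thread runs
    GPU inference at a time (hypothetical model object)."""

    def __init__(self, model):
        self._model = model
        self._lock = threading.Lock()

    def stream(self, *args, **kwargs):
        with self._lock:  # held for the whole stream, not per chunk
            yield from self._model.inference_stream(*args, **kwargs)
```

If serializing requests makes the illegal-memory-access disappear, concurrent access to the model was the likely trigger.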
| closed | 2023-11-24T10:40:24Z | 2024-09-12T11:14:08Z | https://github.com/coqui-ai/TTS/issues/3299 | [
"bug"
] | hengjiUSTC | 8 |
jupyterlab/jupyter-ai | jupyter | 385 | Allow REQUESTS_CA_BUNDLE | Re: https://github.com/jupyterlab/jupyter-ai/issues/321#issuecomment-1714127620
### Problem
* when using a different OpenAI base URL for the chat UI, I get a connection-failed or timeout error
* my OpenAI base url starts with https
### Proposed Solution
* allow adding https connection option with certificate
### Additional context
* related post https://github.com/jupyterlab/jupyter-ai/issues/321#issuecomment-1714127620
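For reference, the environment variable in the title is honored by the `requests` library (and by clients built on it). A minimal sketch — the certificate path is hypothetical, substitute your own PEM bundle:

```python
import os

# Point HTTPS clients that honor REQUESTS_CA_BUNDLE at a custom CA bundle.
os.environ["REQUESTS_CA_BUNDLE"] = "/etc/ssl/certs/corp-ca.pem"
print(os.environ["REQUESTS_CA_BUNDLE"])
```

Setting it in the environment before the server starts (e.g. in the shell or a systemd unit) has the same effect without code changes.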
| closed | 2023-09-12T07:34:32Z | 2024-06-26T15:58:08Z | https://github.com/jupyterlab/jupyter-ai/issues/385 | [
"enhancement",
"scope:chat-ux"
] | sqlreport | 2 |
nerfstudio-project/nerfstudio | computer-vision | 3,078 | How do I evaluate and render results? | **Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.
| open | 2024-04-16T03:17:07Z | 2024-04-26T06:29:50Z | https://github.com/nerfstudio-project/nerfstudio/issues/3078 | [] | Fanjunyi55 | 2 |
unit8co/darts | data-science | 2,119 | Cannot get optuna gridSearch to work. TypeError: Unknown type of parameter:series, got:TimeSeries | I am following your guides on Optuna and Ray Tune. With Ray Tune I keep getting a timeout error and don't know why, but I will start by asking about Optuna. I want to use LightGBM (as I understand it, I should be able to use any model in darts). I will ask about Optuna since I did manage to get it to work some time ago with TensorFlow.
I am testing as simple a model as possible just to see if it works, and then I can make it more complex. That seemed like a good plan, but I just get a constant torrent of errors either way.
Here is the code (again following your guide):
```python
ts = TimeSeries.from_dataframe(edfs, 'dt', ['Interval_Sum'])
ts = ts.drop_before(pd.Timestamp("2023-08-30"))
ts_train, ts_val = ts.split_after(pd.Timestamp("2023-10-01"))

# define objective function
def objective(trial):
    max_depth = trial.suggest_categorical("max_depth", [2, 3])
    num_leaves = trial.suggest_categorical("num_leaves", [2, 3])
    lags = trial.suggest_categorical("lags", [3])

    pruner = PyTorchLightningPruningCallback(trial, monitor="val_loss")
    early_stopper = EarlyStopping("val_loss", min_delta=0.001, patience=3, verbose=True)
    callbacks = [pruner, early_stopper]
    pl_trainer_kwargs = {
        "accelerator": "auto",
        "callbacks": callbacks,
    }

    torch.manual_seed(42)

    # build the TCN model
    model = LightGBMModel(
        series=ts_train,
        # metric = rmse,
        forecast_horizon=3,
        max_depth=max_depth,
        num_leaves=num_leaves,
        lags=lags,
    )

    # train the model
    model.fit(
        series=ts_train,
        val_series=ts_val,
        # num_loader_workers=num_workers,
    )

    # reload best model over course of training
    model = TCNModel.load_from_checkpoint("tcn_model")

    # Evaluate how good it is on the validation set, using sMAPE
    preds = model.predict(series=train, n=ts_val)
    smapes = smape(ts_val, preds, n_jobs=-1, verbose=True)
    smape_val = np.mean(smapes)

    return smape_val if smape_val != np.nan else float("inf")


# for convenience, print some optimization trials information
def print_callback(study, trial):
    print(f"Current value: {trial.value}, Current params: {trial.params}")
    print(f"Best value: {study.best_value}, Best params: {study.best_trial.params}")


# optimize hyperparameters by minimizing the sMAPE on the validation set
if __name__ == "__main__":
    study = optuna.create_study(direction="minimize")
    study.optimize(objective, n_trials=100, callbacks=[print_callback])
```
When I run type() on my data it reports the same type as in your example, so I don't know what is going on.
More of the error message:
[W 2023-12-13 11:17:43,700] Trial 0 failed with parameters: {'max_depth': 2, 'num_leaves': 2, 'lags': 3} because of the following error: TypeError('Unknown type of parameter:series, got:TimeSeries').
Traceback (most recent call last):
File "c:\Users\Magnus\Desktop\code\timeSeries\venvTS\lib\site-packages\optuna\study\_optimize.py", line 200, in _run_trial
value_or_values = func(trial)
File "C:\Users\Magnus\AppData\Local\Temp\ipykernel_19460\55056152.py", line 40, in objective
model.fit(
File "c:\Users\Magnus\Desktop\code\timeSeries\venvTS\lib\site-packages\darts\models\forecasting\lgbm.py", line 267, in fit
super().fit(
File "c:\Users\Magnus\Desktop\code\timeSeries\venvTS\lib\site-packages\darts\models\forecasting\regression_model.py", line 1617, in fit
super().fit(
File "c:\Users\Magnus\Desktop\code\timeSeries\venvTS\lib\site-packages\darts\models\forecasting\regression_model.py", line 722, in fit
self._fit_model(
File "c:\Users\Magnus\Desktop\code\timeSeries\venvTS\lib\site-packages\darts\models\forecasting\regression_model.py", line 1795, in _fit_model
super()._fit_model(
File "c:\Users\Magnus\Desktop\code\timeSeries\venvTS\lib\site-packages\darts\models\forecasting\regression_model.py", line 544, in _fit_model
self.model.fit(training_samples, training_labels, **kwargs)
File "c:\Users\Magnus\Desktop\code\timeSeries\venvTS\lib\site-packages\lightgbm\sklearn.py", line 895, in fit
super().fit(X, y, sample_weight=sample_weight, init_score=init_score,
File "c:\Users\Magnus\Desktop\code\timeSeries\venvTS\lib\site-packages\lightgbm\sklearn.py", line 748, in fit
self._Booster = train(
File "c:\Users\Magnus\Desktop\code\timeSeries\venvTS\lib\site-packages\lightgbm\engine.py", line 271, in train
booster = Booster(params=params, train_set=train_set)
File "c:\Users\Magnus\Desktop\code\timeSeries\venvTS\lib\site-packages\lightgbm\basic.py", line 2605, in __init__
train_set.construct()
File "c:\Users\Magnus\Desktop\code\timeSeries\venvTS\lib\site-packages\lightgbm\basic.py", line 1815, in construct
self._lazy_init(self.data, label=self.label,
File "c:\Users\Magnus\Desktop\code\timeSeries\venvTS\lib\site-packages\lightgbm\basic.py", line 1517, in _lazy_init
params_str = param_dict_to_str(params)
File "c:\Users\Magnus\Desktop\code\timeSeries\venvTS\lib\site-packages\lightgbm\basic.py", line 294, in param_dict_to_str
raise TypeError(f'Unknown type of parameter:{key}, got:{type(val).__name__}')
TypeError: Unknown type of parameter:series, got:TimeSeries
[W 2023-12-13 11:17:43,702] Trial 0 failed with value None. | closed | 2023-12-13T10:23:49Z | 2023-12-14T08:50:49Z | https://github.com/unit8co/darts/issues/2119 | [
"triage"
] | Allena101 | 1 |
home-assistant/core | asyncio | 140,373 | VMB7IN state is 0 | ### The problem
The measurement state for a VMB7IN entity stays at 0.0
<img width="1428" alt="Image" src="https://github.com/user-attachments/assets/dda46a7b-b729-4460-be21-413c7b01f7b2" />
(the counter is still working)
<img width="1422" alt="Image" src="https://github.com/user-attachments/assets/4eba6215-d2cf-4755-8f41-f25ae4e42cbc" />
### What version of Home Assistant Core has the issue?
core-2025.3.1
### What was the last working version of Home Assistant Core?
_No response_
### What type of installation are you running?
Home Assistant OS
### Integration causing the issue
velbus
### Link to integration documentation on our website
https://www.home-assistant.io/integrations/velbus
### Diagnostics information
_No response_
### Example YAML snippet
```yaml
```
### Anything in the logs that might be useful for us?
```txt
```
### Additional information
_No response_ | closed | 2025-03-11T13:05:05Z | 2025-03-17T07:31:56Z | https://github.com/home-assistant/core/issues/140373 | [
"integration: velbus"
] | CasperBE | 8 |
coqui-ai/TTS | deep-learning | 2,997 | [Bug]Training using multiple GPU's | ### Describe the bug
RuntimeError: [!] 2 active GPUs. Define the target GPU by `CUDA_VISIBLE_DEVICES`. For multi-gpu training use `TTS/bin/distribute.py`.
But I cannot find distribute.py in that location; distribute.py is actually at TTS/utils/distribute.py.
I am trying to use multiple GPUs for training on custom data, but I get the above error when I start the training.
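As a stopgap, the single-GPU path that the error message mentions can be forced by restricting device visibility before anything CUDA-related is imported. This is a generic CUDA sketch, not Coqui-specific:

```python
import os

# Must run before torch/TTS are imported, otherwise CUDA has already
# enumerated all devices in this process.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
print(os.environ["CUDA_VISIBLE_DEVICES"])
```

The shell equivalent is `CUDA_VISIBLE_DEVICES=0 python train.py`.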
### To Reproduce
python train.py
### Expected behavior
Training should start
### Logs
_No response_
### Environment
```shell
{
"CUDA": {
"GPU": [
"NVIDIA A100-PCIE-40GB",
"NVIDIA A100-PCIE-40GB",
"NVIDIA A100-PCIE-40GB"
],
"available": true,
"version": "11.7"
},
"Packages": {
"PyTorch_debug": false,
"PyTorch_version": "2.0.0+cu117",
"TTS": "0.16.6",
"numpy": "1.22.0"
},
"System": {
"OS": "Linux",
"architecture": [
"64bit",
"ELF"
],
"processor": "x86_64",
"python": "3.10.0",
"version": "#1 SMP Thu Aug 31 10:29:22 EDT 2023"
}
}
```
### Additional context
_No response_ | closed | 2023-09-25T21:56:46Z | 2024-09-03T01:13:48Z | https://github.com/coqui-ai/TTS/issues/2997 | [
"bug"
] | 18Raksha | 5 |
strawberry-graphql/strawberry | asyncio | 3,154 | Make HTTP request data available when logging errors | <!--- Provide a general summary of the changes you want in the title above. -->
When logging errors, I am not aware of a method to add the IP address and similar info to the logged data. Specifically, I'm looking to set the base properties of [GCP HTTP request log entries](https://cloud.google.com/logging/docs/reference/v2/rest/v2/LogEntry#HttpRequest).
<!--- This template is entirely optional and can be removed, but is here to help both you and us. -->
<!--- Anything on lines wrapped in comments like these will not show up in the final text. -->
## Feature Request Type
- [ ] Core functionality
- [x] Alteration (enhancement/optimization) of existing feature(s)
- [ ] New behavior
## Description
The question is, is there a straightforward method to do this? Or does it require diving into the tracing extensions?
<!-- A few sentences describing what it is. -->
Variables available when throwing the exception:

Variables available when handling the exception in the log handler:

How the project's loggers are set up:
```py
log_gcp_handler = {
"class": "google.cloud.logging.handlers.StructuredLogHandler",
"labels": {"process": django_process},
"project_id": gcs_project_id,
}
...
"loggers": {
"": {
"handlers": ["log_gcp_handler"],
"level": log_level,
"propagate": False,
},
"django.channels.server": {
"handlers": ["log_gcp_handler"],
"level": "WARNING",
"propagate": False,
},
"django.request": {
"handlers": ["log_gcp_handler"],
"level": "ERROR",
"propagate": False,
},
"strawberry.execution": {
"handlers": ["log_gcp_handler"],
"level": log_level,
"propagate": False,
},
},
``` | open | 2023-10-17T09:01:41Z | 2025-03-20T15:56:26Z | https://github.com/strawberry-graphql/strawberry/issues/3154 | [] | moritz89 | 0 |
django-import-export/django-import-export | django | 1,026 | How can we import a csv file with ANSI encoding | I'm trying to import a file that has ANSI encoding, Accented characters like "ç" etc.
The import shows an error as mentioned below.
> Imported file has a wrong encoding: 'utf-8' codec can't decode byte 0xf3 in position 18710: invalid continuation byte
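For reference, decoding the file as cp1252 and re-encoding it to UTF-8 keeps the accented characters intact; a minimal sketch (the file names, sample content, and the cp1252 codec are assumptions about my data, not library API):

```python
# Create a sample "ANSI" (cp1252) file like the one I'm importing (content is made up).
with open("clients.csv", "wb") as f:
    f.write("id;nome\n1;João\n".encode("cp1252"))

# Decode as cp1252 and re-encode as UTF-8 before handing the file to the importer.
with open("clients.csv", "r", encoding="cp1252") as src:
    text = src.read()
with open("clients_utf8.csv", "w", encoding="utf-8") as dst:
    dst.write(text)

print(open("clients_utf8.csv", encoding="utf-8").read())
```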
Changing it to utf-8 breaks the characters into replacement blocks ("Blockers") | closed | 2019-11-06T08:50:47Z | 2020-05-28T07:25:50Z | https://github.com/django-import-export/django-import-export/issues/1026 | [
"stale"
] | farhankn | 3 |
holoviz/panel | jupyter | 6,946 | Tabulator selectable is broken | panel==1.4.4
I believe Tabulator.js has changed how `selectable` works and Panel needs to adapt. It will change again even going from 5.5 (the Tabulator version Panel currently ships) to 6.2 (the latest js version).
```python
import panel as pn
import pandas as pd
import numpy as np
pn.extension("tabulator")
sel_df = pd.DataFrame(np.random.randn(3, 5), columns=list('ABCDE'))
select_table = pn.widgets.Tabulator(sel_df, selectable='toggle', selection=[0], disabled=True)
pn.Column(select_table, select_table.param.selection).servable()
```
I expect to be able to select one row and, when I select another, have the selection change to that row. Instead, both rows end up selected. `checkbox-single` is not working either.

## Workaround
Set `selectable=1` instead of `toggle`.
## Additional Context
- Tabulator 5.5 docs on row selection https://tabulator.info/docs/5.5/select
| closed | 2024-06-28T10:14:17Z | 2024-07-28T20:17:42Z | https://github.com/holoviz/panel/issues/6946 | [
"component: tabulator"
] | MarcSkovMadsen | 2 |
aiortc/aiortc | asyncio | 331 | Error using MediaRecorder creating HLS segments | Hi, this is a great library.
I am attempting to use the MediaRecorder to create hls segments, but ffmpeg encounteres the following error during transmuxing:
`Application provided invalid, non monotonically increasing dts to muxer in stream 1: 2217000 >= 2217000
`
I am using the MediaRecorder as in the server example, except adding both audio and video tracks from a peer connection. The error only occurs on the video tracks, audio works perfectly.
I create the MediaRecorder objects as follows:
```python
HLS_MANIFEST = "live/playlist.m3u8"
HLS_SEGMENTS = "live/%s.ts"
HLS_OPTS = {
    'hls_list_size': '3',
    'hls_time': '4',
    'hls_segment_type': 'mpegts',
    'hls_flags': 'delete_segments+discont_start',
    'hls_start_number_source': 'datetime',
    'strftime': '1',
    'use_localtime': '1',
    'hls_segment_filename': HLS_SEGMENTS,
}

recorder = HLSRecorder(HLS_MANIFEST, format='hls', options=HLS_OPTS)
```
Any insight into this would be appreciated, thanks! | closed | 2020-04-06T19:51:06Z | 2022-03-11T17:56:00Z | https://github.com/aiortc/aiortc/issues/331 | [] | tlaz4 | 16 |
pallets/quart | asyncio | 99 | LifespanFailure Quart 11.3 | I am getting the following error in my app in the latest version:
```python
File "app.py", line 118, in <module>
app.run(host='0.0.0.0', port=port)
File "C:\xxxxxx\Anaconda3\envs\api\lib\site-packages\quart\app.py", line 1615, in run
loop.run_until_complete(task)
File "C:\xxxxxx\Anaconda3\envs\api\lib\asyncio\base_events.py", line 583, in run_until_complete
return future.result()
File "C:\xxxxxx\Anaconda3\envs\api\lib\asyncio\futures.py", line 181, in result
raise self._exception
File "C:\xxxxxx\Anaconda3\envs\api\lib\asyncio\tasks.py", line 249, in __step
result = coro.send(None)
File "C:\xxxxxx\Anaconda3\envs\api\lib\site-packages\hypercorn\asyncio\__init__.py", line 39, in serve
await worker_serve(app, config, shutdown_trigger=shutdown_trigger)
File "C:\xxxxxx\Anaconda3\envs\api\lib\site-packages\hypercorn\asyncio\run.py", line 66, in worker_serve
raise exception
File "C:\xxxxxx\Anaconda3\envs\api\lib\asyncio\tasks.py", line 251, in __step
result = coro.throw(exc)
File "C:\xxxxxx\Anaconda3\envs\api\lib\site-packages\hypercorn\asyncio\lifespan.py", line 30, in handle_lifespan
await invoke_asgi(self.app, scope, self.asgi_receive, self.asgi_send)
File "C:\xxxxxx\Anaconda3\envs\api\lib\site-packages\hypercorn\utils.py", line 203, in invoke_asgi
await app(scope, receive, send)
await self.asgi_app(scope, receive, send)
File "C:\xxxxxx\Anaconda3\envs\api\lib\site-packages\quart\app.py", line 2076, in asgi_app
await asgi_handler(receive, send)
File "C:\xxxxxx\Anaconda3\envs\api\lib\site-packages\quart\asgi.py", line 205, in __call__
await send({"type": "lifespan.startup.failed", "message": str(error)})
File "C:\xxxxxx\Anaconda3\envs\api\lib\site-packages\hypercorn\asyncio\lifespan.py", line 77, in asgi_send
raise LifespanFailure("startup", message["message"])
ThreadPoolExecutor-0_0'.'
```
```python
if __name__ == '__main__':
from argparse import ArgumentParser
parser = ArgumentParser()
parser.add_argument('-p', '--port', type=int, default=5000)
args = parser.parse_args()
port = args.port
app.run(host='0.0.0.0', port=port)
```
Using version 10.0 gets rid of the error. | closed | 2020-02-26T21:12:00Z | 2022-07-05T01:59:06Z | https://github.com/pallets/quart/issues/99 | [] | slyduda | 6 |
unionai-oss/pandera | pandas | 932 | @pa.check_types won't validate in MyPy using type hints that trigger the method | #### Question about pandera
The actual code I am using and testing:
```python
import typing
import pandera as pa
from pandera.typing import DataFrame, Index, Series
from pandera.typing.common import DataFrameBase
class EntitySchema(pa.SchemaModel):
"""EntitySchema - base class for nodes and edges.
I contain three simple things:
* An index
* A UUID entity_id
* A string entity_type with valida values of node or edge.
"""
index: Index[int]
entity_id: Series[str] = pa.Field(
nullable=False,
str_matches=r"^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$",
)
entity_type: Series[str] = pa.Field(isin=["node", "edge"], nullable=False)
class EdgeSchema(EntitySchema):
"""EdgeSchema - schema for edges with src and dst UUIDs."""
entity_type: Series[str] = pa.Field(isin=["edge"], nullable=False)
src: Series[str] = pa.Field(
nullable=False,
str_matches=r"^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$",
)
dst: Series[str] = pa.Field(
nullable=False,
str_matches=r"^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$",
)
```
The unit test that confuses me... mypy barfs on the type hints required to use `@pa.check_types`. Is this a bug or am I dumb? Just assuming the latter based on long experience. Recommend you assume same unless otherwise indicated :)
```python
import pytest
def test_transformed_edge_schema(get_good_edge_df) -> None:
"""Test the entity schema using a pd.DataFrame with all good records."""
class WeightedEdgeSchema(EdgeSchema):
weight: pa.typing.Series[float] = pa.Field(gt=0)
@pa.check_types(lazy=True)
def transform(df: pa.typing.DataFrame[EdgeSchema]) -> pa.typing.DataFrame[WeightedEdgeSchema]:
df["weight"] = df["entity_id"].apply(lambda x: random.uniform(0, 1))
# If I don't explicitly validate here, the returned schema is EdgeSchema, and not WeightedEdgeSchema
# mypy barfs. This should not happen.
# return df
return WeightedEdgeSchema.validate(df)
transform(get_good_edge_df)
```
Why won't this code pass mypy checks unless I validate the DataFrame myself, negating the reason to use `pa.check_types`?
| open | 2022-09-01T08:32:38Z | 2022-09-01T08:32:38Z | https://github.com/unionai-oss/pandera/issues/932 | [
"question"
] | rjurney | 0 |
OFA-Sys/Chinese-CLIP | nlp | 371 | Format error reported when calling cn_clip for feature extraction | The text feature extraction code is as follows:
```python
import torch
import torch.nn as nn

import cn_clip.clip as clip
from cn_clip.clip import load_from_name, available_models


class TextCLIPModel(nn.Module):
    def __init__(self, config, device):
        super().__init__()
        self.device = device
        self.model, self.preprocess = self._load_model(config)

    def _load_model(self, config):
        model, preprocess = load_from_name(config.clip_model_name, download_root=config.download_root)
        model.to(self.device)  # move the model to the specified device
        model.eval()
        return model, preprocess

    def forward(self, texts):
        tokens = clip.tokenize(texts).to(self.device)
        with torch.no_grad():
            text_features = self.model.encode_text(tokens)
            text_features /= text_features.norm(dim=-1, keepdim=True)  # normalize the features
        return text_features
```

The format of the input text txt file:

The format of the input data.json is as follows:

The input img is the data folder:

I'd like to ask whether the problem is with how I'm passing in the data, or with the original data format. | open | 2024-12-02T09:01:24Z | 2024-12-02T09:01:24Z | https://github.com/OFA-Sys/Chinese-CLIP/issues/371 | [] | Seing-yu | 0
httpie/cli | python | 1,252 | Choco installed packages conflict with the user's own site-packages | See [this](https://discord.com/channels/725351238698270761/799982808122523648/924860935074635777) thread on our discord server for details. We should try to be more isolated for package installations on windows. | open | 2021-12-27T09:26:24Z | 2021-12-28T10:39:07Z | https://github.com/httpie/cli/issues/1252 | [
"windows",
"packaging",
"low-priority"
] | isidentical | 0 |
AirtestProject/Airtest | automation | 949 | Phone becomes laggy after airtest runs cases many times | (Please fill in the sections below as far as possible; it helps us locate and solve the problem quickly. Thanks for your cooperation. Otherwise the issue will be closed directly.)
**(Important! Issue category)**
* AirtestIDE test/development environment usage issues -> https://github.com/AirtestProject/AirtestIDE/issues
* Widget recognition, UI tree structure, poco library errors -> https://github.com/AirtestProject/Poco/issues
* Image recognition and device control issues -> follow the steps below
**Describe the bug**
I start runs with `python3 -m airtest run {case} --device Android:///`. After running cases many times, the phone becomes laggy (this happens on both a MI9SE with Android 10 / MIUI 12.0.3 and a Huawei nova7 SE with Android 10 / EMUI 10.1.1), and poco even crashes (already reported to the poco team).
(Paste the traceback or other error messages here)
[11:16:49][INFO]<airtest.core.api> Try finding:
Template(D:\code_py3\pandsta\scripts\cases\run_time\pic_android\\ConfActivityNormal\btnLeaveBO.png)
[11:16:50][DEBUG]<airtest.core.api> try match with SURFMatching
Traceback (most recent call last):
File "D:\environment\Python37-32\lib\site-packages\airtest\aircv\keypoint_matching_contrib.py", line 118, in init_detector
self.detector = cv2.xfeatures2d.SURF_create(self.HESSIAN_THRESHOLD, upright=self.UPRIGHT)
cv2.error: OpenCV(4.5.2) C:\Users\runneradmin\AppData\Local\Temp\pip-req-build-14oozfdh\opencv_contrib\modules\xfeatures2d\src\surf.cpp:1029: error: (-213:The function/feature is not implemented) This algorithm is patented and is excluded in this configuration; Set OPENCV_ENABLE_NONFREE CMake option and rebuild the library in function 'cv::xfeatures2d::SURF::create'
[11:16:50][DEBUG]<airtest.core.api> 'surf'/'sift'/'brief' is in opencv-contrib module. You can use 'tpl'/'kaze'/'brisk'/'akaze'/'orb' in CVSTRATEGY, or reinstall opencv with the contrib module.
[11:16:50][DEBUG]<airtest.core.api> try match with TemplateMatching
[11:16:50][DEBUG]<airtest.aircv.template_matching> [Template] threshold=0.7, result={'result': (547, 456), 'rectangle': ((103, 384), (103, 528), (991, 528), (991, 384)), 'confidence': 0.9999993443489075}
[11:16:50][DEBUG]<airtest.aircv.template_matching> find_best_result() run time is 0.09 s.
[11:16:50][DEBUG]<airtest.core.api> match result: {'result': (547, 456), 'rectangle': ((103, 384), (103, 528), (991, 528), (991, 384)), 'confidence': 0.9999993443489075}
airtest: run case exception: com.netease.open.libpoco.sdk.exceptions.NodeHasBeenRemovedException: Node was no longer alive when query attribute "name". Please re-select.
|-- Remote Traceback --|
com.netease.open.libpoco.sdk.exceptions.NodeHasBeenRemovedException: Node was no longer alive when query attribute "name". Please re-select.
at com.netease.open.libpoco.Node.getAttr(Node.java:81)
at com.netease.open.libpoco.sdk.AbstractNode.enumerateAttrs(AbstractNode.java:71)
at com.netease.open.libpoco.sdk.AbstractDumper.dumpHierarchyImpl(AbstractDumper.java:34)
at com.netease.open.libpoco.sdk.AbstractDumper.dumpHierarchy(AbstractDumper.java:24)
at com.netease.open.libpoco.sdk.AbstractDumper.dumpHierarchy(AbstractDumper.java:20)
at java.lang.reflect.Method.invoke(Native Method)
at com.netease.open.hrpc.backend.RpcServer.onRequest(RpcServer.java:171)
at com.netease.open.hrpc.backend.RpcServer.serve(RpcServer.java:57)
at fi.iki.elonen.NanoHTTPD$HTTPSession.execute(NanoHTTPD.java:840)
at fi.iki.elonen.NanoHTTPD$ClientHandler.run(NanoHTTPD.java:189)
at java.lang.Thread.run(Thread.java:929)
|-- Remote Traceback end --|
server-mode: case result :{"case_name": "leave_bo", "case_result": "False", "info": {}, "ostype": "android"}
executor: receive case result: {"case_name": "leave_bo", "case_result": "False", "info": {}, "ostype": "android", "ip": "10.100.162.238"}
executor: send case result success: {"case_name": "leave_bo", "case_result": "False", "info": {}, "ostype": "android", "ip": "10.100.162.238"}
executor: received case: {"case_name": "uninstall", "uninstall_param": {}}
server-mode: received stop run
server-mode: received case: uninstall
server-mode: get run case :>>{"case_name": "uninstall", "stop": "True"}
**Related screenshots**
(Paste screenshots of the problem here, if any)
(For image- and device-related problems occurring in AirtestIDE, please paste the related error output from the AirtestIDE console window)
I'm not running the project with AirtestIDE. Instead, I first use poco to get the ui_tree, crop screenshots of the relevant widgets, and then perform touch and other operations. Screenshot below:

**Expected behavior**
No lag
**Python version:** `python3.7`
**airtest version:** `1.1.3`
**pocoui version:** `1.0.82`
**Devices:**
- Model: [MI9SE, Android 10, MIUI 12.0.3]
- System: [Huawei nova7 SE, Android 10, EMUI 10.1.1]
**Other relevant environment info**
(PC running Windows 10)
| closed | 2021-08-04T01:42:36Z | 2021-10-14T02:29:12Z | https://github.com/AirtestProject/Airtest/issues/949 | [] | ZhangOscar | 3 |
marshmallow-code/flask-smorest | rest-api | 99 | Deserialization at point of request handling | Hi!
First off, let me say that this library is the closest thing to what I've been looking for as an API framework in flask. Awesome job pulling in the best practices of API framework tools! I will try to put a few hours a week to helping in any way I can.
My question: as shown in your documentation, even though we validate data with marshmallow schemas, our handlers still end up receiving a dictionary with the data. I was curious what the rationale for that is. Why not pass the formed object to the handler?
Today it looks like:
```
@blp.route('/')
class Pets(MethodView):
@blp.arguments(PetSchema)
@blp.response(PetSchema, code=201)
def post(self, new_data):
"""Add a new pet"""
item = Pet.create(**new_data)
return item
```
instead of
```
@blp.route('/')
class Pets(MethodView):
@blp.arguments(PetSchema)
@blp.response(PetSchema, code=201)
def post(self, pet):
"""Add a new pet"""
pet = Pet.create(
name=pet.name,
age=pet.age)
return pet.to_dict()
```
In my mind it clearly separates concerns and creates an explicit object to handle. Having a dictionary may actually influence me (and others) to tie the schema's parameter names directly to my database model's parameter names so that I can quickly move through them. That starts getting into the magical land of https://marshmallow-sqlalchemy.readthedocs.io/en/latest/
Cheers!
George
| closed | 2019-09-18T15:01:13Z | 2019-09-20T15:13:47Z | https://github.com/marshmallow-code/flask-smorest/issues/99 | [
"question"
] | georgesequeira | 2 |
Lightning-AI/pytorch-lightning | data-science | 20,045 | Skip certain step during training | ### Bug description
I want to skip certain batch steps during training. How can I write the code for that? Any suggestions would be appreciated. Thanks in advance.
The chatGPT answer below:
```
def on_train_batch_start(self, batch, batch_idx, dataloader_idx):
# Define steps to skip
steps_to_skip = {199, 302, 493, 1283}
if self.trainer.global_step in steps_to_skip:
return -1 # Skip training this step
```
Is this correct?
Thanks!
### What version are you seeing the problem on?
master
### How to reproduce the bug
_No response_
### Error messages and logs
```
# Error messages and logs here please
```
### Environment
<details>
<summary>Current environment</summary>
```
#- Lightning Component (e.g. Trainer, LightningModule, LightningApp, LightningWork, LightningFlow):
#- PyTorch Lightning Version (e.g., 1.5.0):
#- Lightning App Version (e.g., 0.5.2):
#- PyTorch Version (e.g., 2.0):
#- Python version (e.g., 3.9):
#- OS (e.g., Linux):
#- CUDA/cuDNN version:
#- GPU models and configuration:
#- How you installed Lightning(`conda`, `pip`, source):
#- Running environment of LightningApp (e.g. local, cloud):
```
</details>
### More info
_No response_ | closed | 2024-07-04T13:13:00Z | 2024-07-14T10:32:56Z | https://github.com/Lightning-AI/pytorch-lightning/issues/20045 | [
"question",
"ver: 2.2.x"
] | real-junjiezhang | 3 |
home-assistant/core | python | 141,110 | Can't connect after reboot | ### The problem
After updating HA Core to 2025.3.4, my Tado can no longer connect!

### What version of Home Assistant Core has the issue?
2025.3.4
### What was the last working version of Home Assistant Core?
2025.3.3
### What type of installation are you running?
Home Assistant OS
### Integration causing the issue
_No response_
### Link to integration documentation on our website
_No response_
### Diagnostics information
Logger: homeassistant.components.tado.config_flow
Source: components/tado/config_flow.py:131
Integration: Tado (documentation, issues)
First occurred: 13:27:04 (2 occurrences)
Last logged: 13:27:31
Unexpected exception
Traceback (most recent call last):
File "/usr/src/homeassistant/homeassistant/components/tado/config_flow.py", line 131, in async_step_reconfigure
await validate_input(self.hass, user_input)
File "/usr/src/homeassistant/homeassistant/components/tado/config_flow.py", line 52, in validate_input
tado = await hass.async_add_executor_job(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Tado, data[CONF_USERNAME], data[CONF_PASSWORD]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/usr/local/lib/python3.13/concurrent/futures/thread.py", line 59, in run
result = self.fn(*self.args, **self.kwargs)
File "/usr/local/lib/python3.13/site-packages/PyTado/interface/interface.py", line 46, in __init__
self._http = Http(
~~~~^
username=username,
^^^^^^^^^^^^^^^^^^
...<2 lines>...
debug=debug,
^^^^^^^^^^^^
)
^
File "/usr/local/lib/python3.13/site-packages/PyTado/http.py", line 153, in __init__
self._id, self._token_refresh = self._login()
~~~~~~~~~~~^^
File "/usr/local/lib/python3.13/site-packages/PyTado/http.py", line 333, in _login
raise TadoException(
f"Login failed for unknown reason with status code {response.status_code}"
)
PyTado.exceptions.TadoException: Login failed for unknown reason with status code 403
### Example YAML snippet
```yaml
```
### Anything in the logs that might be useful for us?
```txt
```
### Additional information
_No response_ | closed | 2025-03-22T12:29:08Z | 2025-03-23T01:03:09Z | https://github.com/home-assistant/core/issues/141110 | [] | beckynet | 14 |
newpanjing/simpleui | django | 3 | Another suggestion | Hello author:
After using this plugin, your project's git address appears in the bottom-left corner of the homepage. How can I remove it? Although I very much support you, if this address cannot be removed I really can't use the plugin in a project, and it doesn't help promote your project either.
As follows:
Simpleui
Project homepage: https://www.88cto.com/project/simpleui/
Github:https://github.com/newpanjing/simpleui | closed | 2018-12-13T14:47:35Z | 2018-12-21T03:46:20Z | https://github.com/newpanjing/simpleui/issues/3 | [] | wthahaha | 4 |
cupy/cupy | numpy | 8,779 | `cupy.ravel` behaves differently from `numpy.ravel` | ### Description
As claimed in [NumPy doc](https://numpy.org/doc/stable/reference/generated/numpy.ravel.html):
> When order is ‘K’, it will preserve orderings that are neither ‘C’ nor ‘F’, but won’t reverse axes:
```py
>>> a = np.arange(12).reshape(2,3,2).swapaxes(1,2); a
array([[[ 0, 2, 4],
[ 1, 3, 5]],
[[ 6, 8, 10],
[ 7, 9, 11]]])
>>> a.ravel(order='C')
array([ 0, 2, 4, 1, 3, 5, 6, 8, 10, 7, 9, 11])
>>> a.ravel(order='K')
array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11])
```
I found that `cupy.ravel` does not follow this spec when the output of `swapaxes` is manipulated by other functions and then used as the input to `ravel`.
### To Reproduce
```py
import numpy as np
import cupy as cp
a1 = np.arange(np.pi/6, np.pi*13/6, np.pi/6).reshape(2,3,2).swapaxes(1,2)
print("np a:", a1)
b1 = np.ravel(a1, order='K')
print("np a after ravel:", b1)
a2 = cp.arange(cp.pi/6, cp.pi*13/6, cp.pi/6).reshape(2,3,2).swapaxes(1,2)
print("cp a:", a2)
b2 = cp.ravel(a2, order='K')
print("cp a after ravel:", b2)
cp.testing.assert_array_almost_equal(b1, b2) # pass
print()
c1 = np.rad2deg(a1)
d1 = np.ravel(c1, order='K')
print("np rad2deg(a) after ravel:", d1)
c2 = cp.rad2deg(a2)
d2 = np.ravel(c2, order='K')
print("cp rad2deg(a) after ravel:", d2)
cp.testing.assert_array_almost_equal(d1, d2) # fail
```
Output shows that `b1` and `b2` (`a1` and `a2` after `ravel`) are equal, but `d1` and `d2` (`rad2deg(a1)` and `rad2deg(a2)` after `ravel`) are not:
```py
np a: [[[0.52359878 1.57079633 2.61799388]
[1.04719755 2.0943951 3.14159265]]
[[3.66519143 4.71238898 5.75958653]
[4.1887902 5.23598776 6.28318531]]]
np a after ravel: [0.52359878 1.04719755 1.57079633 2.0943951 2.61799388 3.14159265
3.66519143 4.1887902 4.71238898 5.23598776 5.75958653 6.28318531]
cp a: [[[0.52359878 1.57079633 2.61799388]
[1.04719755 2.0943951 3.14159265]]
[[3.66519143 4.71238898 5.75958653]
[4.1887902 5.23598776 6.28318531]]]
cp a after ravel: [0.52359878 1.04719755 1.57079633 2.0943951 2.61799388 3.14159265
3.66519143 4.1887902 4.71238898 5.23598776 5.75958653 6.28318531]
np rad2deg(a) after ravel: [ 30. 60. 90. 120. 150. 180. 210. 240. 270. 300. 330. 360.]
cp rad2deg(a) after ravel: [ 30. 90. 150. 60. 120. 180. 210. 270. 330. 240. 300. 360.]
Traceback (most recent call last):
File "/code/test1.py", line 22, in <module>
cp.testing.assert_array_almost_equal(d1, d2) # fail
File "/usr/local/lib/python3.10/dist-packages/cupy/testing/_array.py", line 42, in assert_array_almost_equal
numpy.testing.assert_array_almost_equal(
File "/usr/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/usr/local/lib/python3.10/dist-packages/numpy/_utils/__init__.py", line 85, in wrapper
return fun(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/numpy/testing/_private/utils.py", line 1141, in assert_array_almost_equal
assert_array_compare(compare, actual, desired, err_msg=err_msg,
File "/usr/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/usr/local/lib/python3.10/dist-packages/numpy/testing/_private/utils.py", line 889, in assert_array_compare
raise AssertionError(msg)
AssertionError:
Arrays are not almost equal to 6 decimals
Mismatched elements: 8 / 12 (66.7%)
Max absolute difference among violations: 60.
Max relative difference among violations: 1.
ACTUAL: array([ 30., 60., 90., 120., 150., 180., 210., 240., 270., 300., 330.,
360.])
DESIRED: array([ 30., 90., 150., 60., 120., 180., 210., 270., 330., 240., 300.,
360.])
```
### Installation
Wheel (`pip install cupy-***`)
### Environment
```
OS : Linux-6.8.0-49-generic-x86_64-with-glibc2.35
Python Version : 3.10.12
CuPy Version : 13.3.0
CuPy Platform : NVIDIA CUDA
NumPy Version : 2.1.0
SciPy Version : 1.13.1
Cython Build Version : 0.29.36
Cython Runtime Version : 0.29.37
CUDA Root : /usr/local/cuda
nvcc PATH : /usr/local/cuda/bin/nvcc
CUDA Build Version : 12060
CUDA Driver Version : 12040
CUDA Runtime Version : 12060 (linked to CuPy) / 12020 (locally installed)
CUDA Extra Include Dirs : []
cuBLAS Version : 120201
cuFFT Version : 11008
cuRAND Version : 10303
cuSOLVER Version : (11, 5, 2)
cuSPARSE Version : 12101
NVRTC Version : (12, 2)
Thrust Version : 200600
CUB Build Version : 200600
Jitify Build Version : <unknown>
cuDNN Build Version : None
cuDNN Version : None
NCCL Build Version : None
NCCL Runtime Version : None
cuTENSOR Version : None
cuSPARSELt Build Version : None
Device 0 Name : NVIDIA GeForce RTX 4070 Laptop GPU
Device 0 Compute Capability : 89
Device 0 PCI Bus ID : 0000:01:00.0
```
### Additional Information
_No response_ | closed | 2024-12-02T00:52:15Z | 2025-02-07T00:14:39Z | https://github.com/cupy/cupy/issues/8779 | [
"issue-checked"
] | AnonymousPlayer2000 | 2 |
Kludex/mangum | fastapi | 119 | Store the 'requestContext' in WebSocket message events | Currently we just store the initial connection event data; we should add a key to the scope for updating the message request context. | closed | 2020-05-21T08:27:16Z | 2020-06-28T01:52:35Z | https://github.com/Kludex/mangum/issues/119 | [
"improvement",
"websockets"
] | jordaneremieff | 0 |
Sanster/IOPaint | pytorch | 356 | [BUG] | **Model**
Which model are you using?
**Describe the bug**
A clear and concise description of what the bug is.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**System Info**
Software version used
- Platform: Windows-10-10.0.22000-SP0
- Python version: 3.11.3
- torch: 2.0.1
- torchvision: 0.15.2
- Pillow: 9.4.0
- diffusers: 0.16.1
- transformers: 4.27.4
- opencv-python: 4.8.0.74
- xformers: N/A
- accelerate: N/A
- lama-cleaner: 1.2.3
- rembg: N/A
- realesrgan: N/A
- gfpgan: N/A
- lama-cleaner:
- pytorch:
- CUDA:
| closed | 2023-08-04T05:19:58Z | 2023-08-30T03:27:50Z | https://github.com/Sanster/IOPaint/issues/356 | [] | szcelp | 0 |
fastapi-users/fastapi-users | fastapi | 630 | Use relative `tokenUrl` parameter for JWTAuthentication (and docs) | Currently the [JWTAuthentication docs page](https://frankie567.github.io/fastapi-users/configuration/authentication/jwt/) doesn't document the `tokenUrl` parameter, although it does document all its other parameters.
When/if this gets added, it would be worth mentioning that the `tokenUrl` must be relative (i.e. no leading '/') if a custom `root_path` is being used within the FastAPI app. This is because, if an absolute `tokenUrl` is used instead, the URL of the token route will be relative to the `root_path`, but the URL in the OpenAPI spec (which is derived from `tokenUrl`) will instead be relative to the base of the URL. Then any front-end that parses the OpenAPI spec for authentication (e.g. Swagger) won't be able to find the correct URL.
This is currently documented [in the FastAPI docs](https://fastapi.tiangolo.com/tutorial/security/first-steps/#fastapis-oauth2passwordbearer) (see first tip), but would be worth mentioning in the FastAPI-Users docs as well.
Also, I think it would be worth changing the default value of `tokenUrl` from `"/token"` to `"token"`, because the relative URL works both without and with a custom `root_path`. In addition, the [full examples](https://frankie567.github.io/fastapi-users/configuration/full_example/) could be changed to reflect this too.
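To illustrate why the relative form is safer, here is how a client resolves `tokenUrl` against the URL of the OpenAPI document (the URLs are made-up examples):

```python
from urllib.parse import urljoin

# Under root_path="/api/v1", the OpenAPI spec is served from here (example URL):
spec_url = "https://example.com/api/v1/openapi.json"

print(urljoin(spec_url, "token"))   # relative -> https://example.com/api/v1/token
print(urljoin(spec_url, "/token"))  # absolute -> https://example.com/token
```

With the relative `"token"`, the resolved route keeps the `/api/v1` prefix; with the absolute `"/token"`, the `root_path` is lost.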
| closed | 2021-05-12T19:38:00Z | 2021-05-20T09:47:24Z | https://github.com/fastapi-users/fastapi-users/issues/630 | [
"documentation",
"enhancement"
] | eddsalkield | 4 |
Anjok07/ultimatevocalremovergui | pytorch | 1,749 | Hello | Last Error Received:
Process: VR Architecture
If this error persists, please contact the developers with the error details.
Raw Error Details:
RuntimeError: "Error(s) in loading state_dict for CascadedNet:
Missing key(s) in state_dict: "stg1_low_band_net.0.enc1.conv.0.weight", "stg1_low_band_net.0.enc1.conv.1.weight", "stg1_low_band_net.0.enc1.conv.1.bias", "stg1_low_band_net.0.enc1.conv.1.running_mean", "stg1_low_band_net.0.enc1.conv.1.running_var", "stg1_low_band_net.0.enc2.conv1.conv.0.weight", "stg1_low_band_net.0.enc2.conv1.conv.1.weight", "stg1_low_band_net.0.enc2.conv1.conv.1.bias", "stg1_low_band_net.0.enc2.conv1.conv.1.running_mean", "stg1_low_band_net.0.enc2.conv1.conv.1.running_var", "stg1_low_band_net.0.enc2.conv2.conv.0.weight", "stg1_low_band_net.0.enc2.conv2.conv.1.weight", "stg1_low_band_net.0.enc2.conv2.conv.1.bias", "stg1_low_band_net.0.enc2.conv2.conv.1.running_mean", "stg1_low_band_net.0.enc2.conv2.conv.1.running_var", "stg1_low_band_net.0.enc3.conv1.conv.0.weight", "stg1_low_band_net.0.enc3.conv1.conv.1.weight", "stg1_low_band_net.0.enc3.conv1.conv.1.bias", "stg1_low_band_net.0.enc3.conv1.conv.1.running_mean", "stg1_low_band_net.0.enc3.conv1.conv.1.running_var", "stg1_low_band_net.0.enc3.conv2.conv.0.weight", "stg1_low_band_net.0.enc3.conv2.conv.1.weight", "stg1_low_band_net.0.enc3.conv2.conv.1.bias", "stg1_low_band_net.0.enc3.conv2.conv.1.running_mean", "stg1_low_band_net.0.enc3.conv2.conv.1.running_var", "stg1_low_band_net.0.enc4.conv1.conv.0.weight", "stg1_low_band_net.0.enc4.conv1.conv.1.weight", "stg1_low_band_net.0.enc4.conv1.conv.1.bias", "stg1_low_band_net.0.enc4.conv1.conv.1.running_mean", "stg1_low_band_net.0.enc4.conv1.conv.1.running_var", "stg1_low_band_net.0.enc4.conv2.conv.0.weight", "stg1_low_band_net.0.enc4.conv2.conv.1.weight", "stg1_low_band_net.0.enc4.conv2.conv.1.bias", "stg1_low_band_net.0.enc4.conv2.conv.1.running_mean", "stg1_low_band_net.0.enc4.conv2.conv.1.running_var", "stg1_low_band_net.0.enc5.conv1.conv.0.weight", "stg1_low_band_net.0.enc5.conv1.conv.1.weight", "stg1_low_band_net.0.enc5.conv1.conv.1.bias", "stg1_low_band_net.0.enc5.conv1.conv.1.running_mean", "stg1_low_band_net.0.enc5.conv1.conv.1.running_var", 
"stg1_low_band_net.0.enc5.conv2.conv.0.weight", "stg1_low_band_net.0.enc5.conv2.conv.1.weight", "stg1_low_band_net.0.enc5.conv2.conv.1.bias", "stg1_low_band_net.0.enc5.conv2.conv.1.running_mean", "stg1_low_band_net.0.enc5.conv2.conv.1.running_var", "stg1_low_band_net.0.aspp.conv1.1.conv.0.weight", "stg1_low_band_net.0.aspp.conv1.1.conv.1.weight", "stg1_low_band_net.0.aspp.conv1.1.conv.1.bias", "stg1_low_band_net.0.aspp.conv1.1.conv.1.running_mean", "stg1_low_band_net.0.aspp.conv1.1.conv.1.running_var", "stg1_low_band_net.0.aspp.conv2.conv.0.weight", "stg1_low_band_net.0.aspp.conv2.conv.1.weight", "stg1_low_band_net.0.aspp.conv2.conv.1.bias", "stg1_low_band_net.0.aspp.conv2.conv.1.running_mean", "stg1_low_band_net.0.aspp.conv2.conv.1.running_var", "stg1_low_band_net.0.aspp.conv3.conv.0.weight", "stg1_low_band_net.0.aspp.conv3.conv.1.weight", "stg1_low_band_net.0.aspp.conv3.conv.1.bias", "stg1_low_band_net.0.aspp.conv3.conv.1.running_mean", "stg1_low_band_net.0.aspp.conv3.conv.1.running_var", "stg1_low_band_net.0.aspp.conv4.conv.0.weight", "stg1_low_band_net.0.aspp.conv4.conv.1.weight", "stg1_low_band_net.0.aspp.conv4.conv.1.bias", "stg1_low_band_net.0.aspp.conv4.conv.1.running_mean", "stg1_low_band_net.0.aspp.conv4.conv.1.running_var", "stg1_low_band_net.0.aspp.conv5.conv.0.weight", "stg1_low_band_net.0.aspp.conv5.conv.1.weight", "stg1_low_band_net.0.aspp.conv5.conv.1.bias", "stg1_low_band_net.0.aspp.conv5.conv.1.running_mean", "stg1_low_band_net.0.aspp.conv5.conv.1.running_var", "stg1_low_band_net.0.aspp.bottleneck.conv.0.weight", "stg1_low_band_net.0.aspp.bottleneck.conv.1.weight", "stg1_low_band_net.0.aspp.bottleneck.conv.1.bias", "stg1_low_band_net.0.aspp.bottleneck.conv.1.running_mean", "stg1_low_band_net.0.aspp.bottleneck.conv.1.running_var", "stg1_low_band_net.0.dec4.conv1.conv.0.weight", "stg1_low_band_net.0.dec4.conv1.conv.1.weight", "stg1_low_band_net.0.dec4.conv1.conv.1.bias", "stg1_low_band_net.0.dec4.conv1.conv.1.running_mean", 
"stg1_low_band_net.0.dec4.conv1.conv.1.running_var", "stg1_low_band_net.0.dec3.conv1.conv.0.weight", "stg1_low_band_net.0.dec3.conv1.conv.1.weight", "stg1_low_band_net.0.dec3.conv1.conv.1.bias", "stg1_low_band_net.0.dec3.conv1.conv.1.running_mean", "stg1_low_band_net.0.dec3.conv1.conv.1.running_var", "stg1_low_band_net.0.dec2.conv1.conv.0.weight", "stg1_low_band_net.0.dec2.conv1.conv.1.weight", "stg1_low_band_net.0.dec2.conv1.conv.1.bias", "stg1_low_band_net.0.dec2.conv1.conv.1.running_mean", "stg1_low_band_net.0.dec2.conv1.conv.1.running_var", "stg1_low_band_net.0.lstm_dec2.conv.conv.0.weight", "stg1_low_band_net.0.lstm_dec2.conv.conv.1.weight", "stg1_low_band_net.0.lstm_dec2.conv.conv.1.bias", "stg1_low_band_net.0.lstm_dec2.conv.conv.1.running_mean", "stg1_low_band_net.0.lstm_dec2.conv.conv.1.running_var", "stg1_low_band_net.0.lstm_dec2.lstm.weight_ih_l0", "stg1_low_band_net.0.lstm_dec2.lstm.weight_hh_l0", "stg1_low_band_net.0.lstm_dec2.lstm.bias_ih_l0", "stg1_low_band_net.0.lstm_dec2.lstm.bias_hh_l0", "stg1_low_band_net.0.lstm_dec2.lstm.weight_ih_l0_reverse", "stg1_low_band_net.0.lstm_dec2.lstm.weight_hh_l0_reverse", "stg1_low_band_net.0.lstm_dec2.lstm.bias_ih_l0_reverse", "stg1_low_band_net.0.lstm_dec2.lstm.bias_hh_l0_reverse", "stg1_low_band_net.0.lstm_dec2.dense.0.weight", "stg1_low_band_net.0.lstm_dec2.dense.0.bias", "stg1_low_band_net.0.lstm_dec2.dense.1.weight", "stg1_low_band_net.0.lstm_dec2.dense.1.bias", "stg1_low_band_net.0.lstm_dec2.dense.1.running_mean", "stg1_low_band_net.0.lstm_dec2.dense.1.running_var", "stg1_low_band_net.0.dec1.conv1.conv.0.weight", "stg1_low_band_net.0.dec1.conv1.conv.1.weight", "stg1_low_band_net.0.dec1.conv1.conv.1.bias", "stg1_low_band_net.0.dec1.conv1.conv.1.running_mean", "stg1_low_band_net.0.dec1.conv1.conv.1.running_var", "stg1_low_band_net.1.conv.0.weight", "stg1_low_band_net.1.conv.1.weight", "stg1_low_band_net.1.conv.1.bias", "stg1_low_band_net.1.conv.1.running_mean", "stg1_low_band_net.1.conv.1.running_var", 
"stg1_high_band_net.enc1.conv.0.weight", "stg1_high_band_net.enc1.conv.1.weight", "stg1_high_band_net.enc1.conv.1.bias", "stg1_high_band_net.enc1.conv.1.running_mean", "stg1_high_band_net.enc1.conv.1.running_var", "stg1_high_band_net.enc5.conv1.conv.0.weight", "stg1_high_band_net.enc5.conv1.conv.1.weight", "stg1_high_band_net.enc5.conv1.conv.1.bias", "stg1_high_band_net.enc5.conv1.conv.1.running_mean", "stg1_high_band_net.enc5.conv1.conv.1.running_var", "stg1_high_band_net.enc5.conv2.conv.0.weight", "stg1_high_band_net.enc5.conv2.conv.1.weight", "stg1_high_band_net.enc5.conv2.conv.1.bias", "stg1_high_band_net.enc5.conv2.conv.1.running_mean", "stg1_high_band_net.enc5.conv2.conv.1.running_var", "stg1_high_band_net.aspp.conv3.conv.1.bias", "stg1_high_band_net.aspp.conv3.conv.1.running_mean", "stg1_high_band_net.aspp.conv3.conv.1.running_var", "stg1_high_band_net.aspp.conv4.conv.1.bias", "stg1_high_band_net.aspp.conv4.conv.1.running_mean", "stg1_high_band_net.aspp.conv4.conv.1.running_var", "stg1_high_band_net.aspp.conv5.conv.1.bias", "stg1_high_band_net.aspp.conv5.conv.1.running_mean", "stg1_high_band_net.aspp.conv5.conv.1.running_var", "stg1_high_band_net.aspp.bottleneck.conv.0.weight", "stg1_high_band_net.aspp.bottleneck.conv.1.weight", "stg1_high_band_net.aspp.bottleneck.conv.1.bias", "stg1_high_band_net.aspp.bottleneck.conv.1.running_mean", "stg1_high_band_net.aspp.bottleneck.conv.1.running_var", "stg1_high_band_net.dec4.conv1.conv.0.weight", "stg1_high_band_net.dec4.conv1.conv.1.weight", "stg1_high_band_net.dec4.conv1.conv.1.bias", "stg1_high_band_net.dec4.conv1.conv.1.running_mean", "stg1_high_band_net.dec4.conv1.conv.1.running_var", "stg1_high_band_net.dec3.conv1.conv.0.weight", "stg1_high_band_net.dec3.conv1.conv.1.weight", "stg1_high_band_net.dec3.conv1.conv.1.bias", "stg1_high_band_net.dec3.conv1.conv.1.running_mean", "stg1_high_band_net.dec3.conv1.conv.1.running_var", "stg1_high_band_net.dec2.conv1.conv.0.weight", 
"stg1_high_band_net.dec2.conv1.conv.1.weight", "stg1_high_band_net.dec2.conv1.conv.1.bias", "stg1_high_band_net.dec2.conv1.conv.1.running_mean", "stg1_high_band_net.dec2.conv1.conv.1.running_var", "stg1_high_band_net.lstm_dec2.conv.conv.0.weight", "stg1_high_band_net.lstm_dec2.conv.conv.1.weight", "stg1_high_band_net.lstm_dec2.conv.conv.1.bias", "stg1_high_band_net.lstm_dec2.conv.conv.1.running_mean", "stg1_high_band_net.lstm_dec2.conv.conv.1.running_var", "stg1_high_band_net.lstm_dec2.lstm.weight_ih_l0", "stg1_high_band_net.lstm_dec2.lstm.weight_hh_l0", "stg1_high_band_net.lstm_dec2.lstm.bias_ih_l0", "stg1_high_band_net.lstm_dec2.lstm.bias_hh_l0", "stg1_high_band_net.lstm_dec2.lstm.weight_ih_l0_reverse", "stg1_high_band_net.lstm_dec2.lstm.weight_hh_l0_reverse", "stg1_high_band_net.lstm_dec2.lstm.bias_ih_l0_reverse", "stg1_high_band_net.lstm_dec2.lstm.bias_hh_l0_reverse", "stg1_high_band_net.lstm_dec2.dense.0.weight", "stg1_high_band_net.lstm_dec2.dense.0.bias", "stg1_high_band_net.lstm_dec2.dense.1.weight", "stg1_high_band_net.lstm_dec2.dense.1.bias", "stg1_high_band_net.lstm_dec2.dense.1.running_mean", "stg1_high_band_net.lstm_dec2.dense.1.running_var", "stg1_high_band_net.dec1.conv1.conv.0.weight", "stg1_high_band_net.dec1.conv1.conv.1.weight", "stg1_high_band_net.dec1.conv1.conv.1.bias", "stg1_high_band_net.dec1.conv1.conv.1.running_mean", "stg1_high_band_net.dec1.conv1.conv.1.running_var", "stg2_low_band_net.0.enc1.conv.0.weight", "stg2_low_band_net.0.enc1.conv.1.weight", "stg2_low_band_net.0.enc1.conv.1.bias", "stg2_low_band_net.0.enc1.conv.1.running_mean", "stg2_low_band_net.0.enc1.conv.1.running_var", "stg2_low_band_net.0.enc2.conv1.conv.0.weight", "stg2_low_band_net.0.enc2.conv1.conv.1.weight", "stg2_low_band_net.0.enc2.conv1.conv.1.bias", "stg2_low_band_net.0.enc2.conv1.conv.1.running_mean", "stg2_low_band_net.0.enc2.conv1.conv.1.running_var", "stg2_low_band_net.0.enc2.conv2.conv.0.weight", "stg2_low_band_net.0.enc2.conv2.conv.1.weight", 
"stg2_low_band_net.0.enc2.conv2.conv.1.bias", "stg2_low_band_net.0.enc2.conv2.conv.1.running_mean", "stg2_low_band_net.0.enc2.conv2.conv.1.running_var", "stg2_low_band_net.0.enc3.conv1.conv.0.weight", "stg2_low_band_net.0.enc3.conv1.conv.1.weight", "stg2_low_band_net.0.enc3.conv1.conv.1.bias", "stg2_low_band_net.0.enc3.conv1.conv.1.running_mean", "stg2_low_band_net.0.enc3.conv1.conv.1.running_var", "stg2_low_band_net.0.enc3.conv2.conv.0.weight", "stg2_low_band_net.0.enc3.conv2.conv.1.weight", "stg2_low_band_net.0.enc3.conv2.conv.1.bias", "stg2_low_band_net.0.enc3.conv2.conv.1.running_mean", "stg2_low_band_net.0.enc3.conv2.conv.1.running_var", "stg2_low_band_net.0.enc4.conv1.conv.0.weight", "stg2_low_band_net.0.enc4.conv1.conv.1.weight", "stg2_low_band_net.0.enc4.conv1.conv.1.bias", "stg2_low_band_net.0.enc4.conv1.conv.1.running_mean", "stg2_low_band_net.0.enc4.conv1.conv.1.running_var", "stg2_low_band_net.0.enc4.conv2.conv.0.weight", "stg2_low_band_net.0.enc4.conv2.conv.1.weight", "stg2_low_band_net.0.enc4.conv2.conv.1.bias", "stg2_low_band_net.0.enc4.conv2.conv.1.running_mean", "stg2_low_band_net.0.enc4.conv2.conv.1.running_var", "stg2_low_band_net.0.enc5.conv1.conv.0.weight", "stg2_low_band_net.0.enc5.conv1.conv.1.weight", "stg2_low_band_net.0.enc5.conv1.conv.1.bias", "stg2_low_band_net.0.enc5.conv1.conv.1.running_mean", "stg2_low_band_net.0.enc5.conv1.conv.1.running_var", "stg2_low_band_net.0.enc5.conv2.conv.0.weight", "stg2_low_band_net.0.enc5.conv2.conv.1.weight", "stg2_low_band_net.0.enc5.conv2.conv.1.bias", "stg2_low_band_net.0.enc5.conv2.conv.1.running_mean", "stg2_low_band_net.0.enc5.conv2.conv.1.running_var", "stg2_low_band_net.0.aspp.conv1.1.conv.0.weight", "stg2_low_band_net.0.aspp.conv1.1.conv.1.weight", "stg2_low_band_net.0.aspp.conv1.1.conv.1.bias", "stg2_low_band_net.0.aspp.conv1.1.conv.1.running_mean", "stg2_low_band_net.0.aspp.conv1.1.conv.1.running_var", "stg2_low_band_net.0.aspp.conv2.conv.0.weight", 
"stg2_low_band_net.0.aspp.conv2.conv.1.weight", "stg2_low_band_net.0.aspp.conv2.conv.1.bias", "stg2_low_band_net.0.aspp.conv2.conv.1.running_mean", "stg2_low_band_net.0.aspp.conv2.conv.1.running_var", "stg2_low_band_net.0.aspp.conv3.conv.0.weight", "stg2_low_band_net.0.aspp.conv3.conv.1.weight", "stg2_low_band_net.0.aspp.conv3.conv.1.bias", "stg2_low_band_net.0.aspp.conv3.conv.1.running_mean", "stg2_low_band_net.0.aspp.conv3.conv.1.running_var", "stg2_low_band_net.0.aspp.conv4.conv.0.weight", "stg2_low_band_net.0.aspp.conv4.conv.1.weight", "stg2_low_band_net.0.aspp.conv4.conv.1.bias", "stg2_low_band_net.0.aspp.conv4.conv.1.running_mean", "stg2_low_band_net.0.aspp.conv4.conv.1.running_var", "stg2_low_band_net.0.aspp.conv5.conv.0.weight", "stg2_low_band_net.0.aspp.conv5.conv.1.weight", "stg2_low_band_net.0.aspp.conv5.conv.1.bias", "stg2_low_band_net.0.aspp.conv5.conv.1.running_mean", "stg2_low_band_net.0.aspp.conv5.conv.1.running_var", "stg2_low_band_net.0.aspp.bottleneck.conv.0.weight", "stg2_low_band_net.0.aspp.bottleneck.conv.1.weight", "stg2_low_band_net.0.aspp.bottleneck.conv.1.bias", "stg2_low_band_net.0.aspp.bottleneck.conv.1.running_mean", "stg2_low_band_net.0.aspp.bottleneck.conv.1.running_var", "stg2_low_band_net.0.dec4.conv1.conv.0.weight", "stg2_low_band_net.0.dec4.conv1.conv.1.weight", "stg2_low_band_net.0.dec4.conv1.conv.1.bias", "stg2_low_band_net.0.dec4.conv1.conv.1.running_mean", "stg2_low_band_net.0.dec4.conv1.conv.1.running_var", "stg2_low_band_net.0.dec3.conv1.conv.0.weight", "stg2_low_band_net.0.dec3.conv1.conv.1.weight", "stg2_low_band_net.0.dec3.conv1.conv.1.bias", "stg2_low_band_net.0.dec3.conv1.conv.1.running_mean", "stg2_low_band_net.0.dec3.conv1.conv.1.running_var", "stg2_low_band_net.0.dec2.conv1.conv.0.weight", "stg2_low_band_net.0.dec2.conv1.conv.1.weight", "stg2_low_band_net.0.dec2.conv1.conv.1.bias", "stg2_low_band_net.0.dec2.conv1.conv.1.running_mean", "stg2_low_band_net.0.dec2.conv1.conv.1.running_var", 
"stg2_low_band_net.0.lstm_dec2.conv.conv.0.weight", "stg2_low_band_net.0.lstm_dec2.conv.conv.1.weight", "stg2_low_band_net.0.lstm_dec2.conv.conv.1.bias", "stg2_low_band_net.0.lstm_dec2.conv.conv.1.running_mean", "stg2_low_band_net.0.lstm_dec2.conv.conv.1.running_var", "stg2_low_band_net.0.lstm_dec2.lstm.weight_ih_l0", "stg2_low_band_net.0.lstm_dec2.lstm.weight_hh_l0", "stg2_low_band_net.0.lstm_dec2.lstm.bias_ih_l0", "stg2_low_band_net.0.lstm_dec2.lstm.bias_hh_l0", "stg2_low_band_net.0.lstm_dec2.lstm.weight_ih_l0_reverse", "stg2_low_band_net.0.lstm_dec2.lstm.weight_hh_l0_reverse", "stg2_low_band_net.0.lstm_dec2.lstm.bias_ih_l0_reverse", "stg2_low_band_net.0.lstm_dec2.lstm.bias_hh_l0_reverse", "stg2_low_band_net.0.lstm_dec2.dense.0.weight", "stg2_low_band_net.0.lstm_dec2.dense.0.bias", "stg2_low_band_net.0.lstm_dec2.dense.1.weight", "stg2_low_band_net.0.lstm_dec2.dense.1.bias", "stg2_low_band_net.0.lstm_dec2.dense.1.running_mean", "stg2_low_band_net.0.lstm_dec2.dense.1.running_var", "stg2_low_band_net.0.dec1.conv1.conv.0.weight", "stg2_low_band_net.0.dec1.conv1.conv.1.weight", "stg2_low_band_net.0.dec1.conv1.conv.1.bias", "stg2_low_band_net.0.dec1.conv1.conv.1.running_mean", "stg2_low_band_net.0.dec1.conv1.conv.1.running_var", "stg2_low_band_net.1.conv.0.weight", "stg2_low_band_net.1.conv.1.weight", "stg2_low_band_net.1.conv.1.bias", "stg2_low_band_net.1.conv.1.running_mean", "stg2_low_band_net.1.conv.1.running_var", "stg2_high_band_net.enc1.conv.0.weight", "stg2_high_band_net.enc1.conv.1.weight", "stg2_high_band_net.enc1.conv.1.bias", "stg2_high_band_net.enc1.conv.1.running_mean", "stg2_high_band_net.enc1.conv.1.running_var", "stg2_high_band_net.enc2.conv1.conv.0.weight", "stg2_high_band_net.enc2.conv1.conv.1.weight", "stg2_high_band_net.enc2.conv1.conv.1.bias", "stg2_high_band_net.enc2.conv1.conv.1.running_mean", "stg2_high_band_net.enc2.conv1.conv.1.running_var", "stg2_high_band_net.enc2.conv2.conv.0.weight", "stg2_high_band_net.enc2.conv2.conv.1.weight", 
"stg2_high_band_net.enc2.conv2.conv.1.bias", "stg2_high_band_net.enc2.conv2.conv.1.running_mean", "stg2_high_band_net.enc2.conv2.conv.1.running_var", "stg2_high_band_net.enc3.conv1.conv.0.weight", "stg2_high_band_net.enc3.conv1.conv.1.weight", "stg2_high_band_net.enc3.conv1.conv.1.bias", "stg2_high_band_net.enc3.conv1.conv.1.running_mean", "stg2_high_band_net.enc3.conv1.conv.1.running_var", "stg2_high_band_net.enc3.conv2.conv.0.weight", "stg2_high_band_net.enc3.conv2.conv.1.weight", "stg2_high_band_net.enc3.conv2.conv.1.bias", "stg2_high_band_net.enc3.conv2.conv.1.running_mean", "stg2_high_band_net.enc3.conv2.conv.1.running_var", "stg2_high_band_net.enc4.conv1.conv.0.weight", "stg2_high_band_net.enc4.conv1.conv.1.weight", "stg2_high_band_net.enc4.conv1.conv.1.bias", "stg2_high_band_net.enc4.conv1.conv.1.running_mean", "stg2_high_band_net.enc4.conv1.conv.1.running_var", "stg2_high_band_net.enc4.conv2.conv.0.weight", "stg2_high_band_net.enc4.conv2.conv.1.weight", "stg2_high_band_net.enc4.conv2.conv.1.bias", "stg2_high_band_net.enc4.conv2.conv.1.running_mean", "stg2_high_band_net.enc4.conv2.conv.1.running_var", "stg2_high_band_net.enc5.conv1.conv.0.weight", "stg2_high_band_net.enc5.conv1.conv.1.weight", "stg2_high_band_net.enc5.conv1.conv.1.bias", "stg2_high_band_net.enc5.conv1.conv.1.running_mean", "stg2_high_band_net.enc5.conv1.conv.1.running_var", "stg2_high_band_net.enc5.conv2.conv.0.weight", "stg2_high_band_net.enc5.conv2.conv.1.weight", "stg2_high_band_net.enc5.conv2.conv.1.bias", "stg2_high_band_net.enc5.conv2.conv.1.running_mean", "stg2_high_band_net.enc5.conv2.conv.1.running_var", "stg2_high_band_net.aspp.conv1.1.conv.0.weight", "stg2_high_band_net.aspp.conv1.1.conv.1.weight", "stg2_high_band_net.aspp.conv1.1.conv.1.bias", "stg2_high_band_net.aspp.conv1.1.conv.1.running_mean", "stg2_high_band_net.aspp.conv1.1.conv.1.running_var", "stg2_high_band_net.aspp.conv2.conv.0.weight", "stg2_high_band_net.aspp.conv2.conv.1.weight", 
"stg2_high_band_net.aspp.conv2.conv.1.bias", "stg2_high_band_net.aspp.conv2.conv.1.running_mean", "stg2_high_band_net.aspp.conv2.conv.1.running_var", "stg2_high_band_net.aspp.conv3.conv.0.weight", "stg2_high_band_net.aspp.conv3.conv.1.weight", "stg2_high_band_net.aspp.conv3.conv.1.bias", "stg2_high_band_net.aspp.conv3.conv.1.running_mean", "stg2_high_band_net.aspp.conv3.conv.1.running_var", "stg2_high_band_net.aspp.conv4.conv.0.weight", "stg2_high_band_net.aspp.conv4.conv.1.weight", "stg2_high_band_net.aspp.conv4.conv.1.bias", "stg2_high_band_net.aspp.conv4.conv.1.running_mean", "stg2_high_band_net.aspp.conv4.conv.1.running_var", "stg2_high_band_net.aspp.conv5.conv.0.weight", "stg2_high_band_net.aspp.conv5.conv.1.weight", "stg2_high_band_net.aspp.conv5.conv.1.bias", "stg2_high_band_net.aspp.conv5.conv.1.running_mean", "stg2_high_band_net.aspp.conv5.conv.1.running_var", "stg2_high_band_net.aspp.bottleneck.conv.0.weight", "stg2_high_band_net.aspp.bottleneck.conv.1.weight", "stg2_high_band_net.aspp.bottleneck.conv.1.bias", "stg2_high_band_net.aspp.bottleneck.conv.1.running_mean", "stg2_high_band_net.aspp.bottleneck.conv.1.running_var", "stg2_high_band_net.dec4.conv1.conv.0.weight", "stg2_high_band_net.dec4.conv1.conv.1.weight", "stg2_high_band_net.dec4.conv1.conv.1.bias", "stg2_high_band_net.dec4.conv1.conv.1.running_mean", "stg2_high_band_net.dec4.conv1.conv.1.running_var", "stg2_high_band_net.dec3.conv1.conv.0.weight", "stg2_high_band_net.dec3.conv1.conv.1.weight", "stg2_high_band_net.dec3.conv1.conv.1.bias", "stg2_high_band_net.dec3.conv1.conv.1.running_mean", "stg2_high_band_net.dec3.conv1.conv.1.running_var", "stg2_high_band_net.dec2.conv1.conv.0.weight", "stg2_high_band_net.dec2.conv1.conv.1.weight", "stg2_high_band_net.dec2.conv1.conv.1.bias", "stg2_high_band_net.dec2.conv1.conv.1.running_mean", "stg2_high_band_net.dec2.conv1.conv.1.running_var", "stg2_high_band_net.lstm_dec2.conv.conv.0.weight", "stg2_high_band_net.lstm_dec2.conv.conv.1.weight", 
"stg2_high_band_net.lstm_dec2.conv.conv.1.bias", "stg2_high_band_net.lstm_dec2.conv.conv.1.running_mean", "stg2_high_band_net.lstm_dec2.conv.conv.1.running_var", "stg2_high_band_net.lstm_dec2.lstm.weight_ih_l0", "stg2_high_band_net.lstm_dec2.lstm.weight_hh_l0", "stg2_high_band_net.lstm_dec2.lstm.bias_ih_l0", "stg2_high_band_net.lstm_dec2.lstm.bias_hh_l0", "stg2_high_band_net.lstm_dec2.lstm.weight_ih_l0_reverse", "stg2_high_band_net.lstm_dec2.lstm.weight_hh_l0_reverse", "stg2_high_band_net.lstm_dec2.lstm.bias_ih_l0_reverse", "stg2_high_band_net.lstm_dec2.lstm.bias_hh_l0_reverse", "stg2_high_band_net.lstm_dec2.dense.0.weight", "stg2_high_band_net.lstm_dec2.dense.0.bias", "stg2_high_band_net.lstm_dec2.dense.1.weight", "stg2_high_band_net.lstm_dec2.dense.1.bias", "stg2_high_band_net.lstm_dec2.dense.1.running_mean", "stg2_high_band_net.lstm_dec2.dense.1.running_var", "stg2_high_band_net.dec1.conv1.conv.0.weight", "stg2_high_band_net.dec1.conv1.conv.1.weight", "stg2_high_band_net.dec1.conv1.conv.1.bias", "stg2_high_band_net.dec1.conv1.conv.1.running_mean", "stg2_high_band_net.dec1.conv1.conv.1.running_var", "stg3_full_band_net.enc1.conv.0.weight", "stg3_full_band_net.enc1.conv.1.weight", "stg3_full_band_net.enc1.conv.1.bias", "stg3_full_band_net.enc1.conv.1.running_mean", "stg3_full_band_net.enc1.conv.1.running_var", "stg3_full_band_net.enc5.conv1.conv.0.weight", "stg3_full_band_net.enc5.conv1.conv.1.weight", "stg3_full_band_net.enc5.conv1.conv.1.bias", "stg3_full_band_net.enc5.conv1.conv.1.running_mean", "stg3_full_band_net.enc5.conv1.conv.1.running_var", "stg3_full_band_net.enc5.conv2.conv.0.weight", "stg3_full_band_net.enc5.conv2.conv.1.weight", "stg3_full_band_net.enc5.conv2.conv.1.bias", "stg3_full_band_net.enc5.conv2.conv.1.running_mean", "stg3_full_band_net.enc5.conv2.conv.1.running_var", "stg3_full_band_net.aspp.conv3.conv.1.bias", "stg3_full_band_net.aspp.conv3.conv.1.running_mean", "stg3_full_band_net.aspp.conv3.conv.1.running_var", 
"stg3_full_band_net.aspp.conv4.conv.1.bias", "stg3_full_band_net.aspp.conv4.conv.1.running_mean", "stg3_full_band_net.aspp.conv4.conv.1.running_var", "stg3_full_band_net.aspp.conv5.conv.1.bias", "stg3_full_band_net.aspp.conv5.conv.1.running_mean", "stg3_full_band_net.aspp.conv5.conv.1.running_var", "stg3_full_band_net.aspp.bottleneck.conv.0.weight", "stg3_full_band_net.aspp.bottleneck.conv.1.weight", "stg3_full_band_net.aspp.bottleneck.conv.1.bias", "stg3_full_band_net.aspp.bottleneck.conv.1.running_mean", "stg3_full_band_net.aspp.bottleneck.conv.1.running_var", "stg3_full_band_net.dec4.conv1.conv.0.weight", "stg3_full_band_net.dec4.conv1.conv.1.weight", "stg3_full_band_net.dec4.conv1.conv.1.bias", "stg3_full_band_net.dec4.conv1.conv.1.running_mean", "stg3_full_band_net.dec4.conv1.conv.1.running_var", "stg3_full_band_net.dec3.conv1.conv.0.weight", "stg3_full_band_net.dec3.conv1.conv.1.weight", "stg3_full_band_net.dec3.conv1.conv.1.bias", "stg3_full_band_net.dec3.conv1.conv.1.running_mean", "stg3_full_band_net.dec3.conv1.conv.1.running_var", "stg3_full_band_net.dec2.conv1.conv.0.weight", "stg3_full_band_net.dec2.conv1.conv.1.weight", "stg3_full_band_net.dec2.conv1.conv.1.bias", "stg3_full_band_net.dec2.conv1.conv.1.running_mean", "stg3_full_band_net.dec2.conv1.conv.1.running_var", "stg3_full_band_net.lstm_dec2.conv.conv.0.weight", "stg3_full_band_net.lstm_dec2.conv.conv.1.weight", "stg3_full_band_net.lstm_dec2.conv.conv.1.bias", "stg3_full_band_net.lstm_dec2.conv.conv.1.running_mean", "stg3_full_band_net.lstm_dec2.conv.conv.1.running_var", "stg3_full_band_net.lstm_dec2.lstm.weight_ih_l0", "stg3_full_band_net.lstm_dec2.lstm.weight_hh_l0", "stg3_full_band_net.lstm_dec2.lstm.bias_ih_l0", "stg3_full_band_net.lstm_dec2.lstm.bias_hh_l0", "stg3_full_band_net.lstm_dec2.lstm.weight_ih_l0_reverse", "stg3_full_band_net.lstm_dec2.lstm.weight_hh_l0_reverse", "stg3_full_band_net.lstm_dec2.lstm.bias_ih_l0_reverse", "stg3_full_band_net.lstm_dec2.lstm.bias_hh_l0_reverse", 
"stg3_full_band_net.lstm_dec2.dense.0.weight", "stg3_full_band_net.lstm_dec2.dense.0.bias", "stg3_full_band_net.lstm_dec2.dense.1.weight", "stg3_full_band_net.lstm_dec2.dense.1.bias", "stg3_full_band_net.lstm_dec2.dense.1.running_mean", "stg3_full_band_net.lstm_dec2.dense.1.running_var", "stg3_full_band_net.dec1.conv1.conv.0.weight", "stg3_full_band_net.dec1.conv1.conv.1.weight", "stg3_full_band_net.dec1.conv1.conv.1.bias", "stg3_full_band_net.dec1.conv1.conv.1.running_mean", "stg3_full_band_net.dec1.conv1.conv.1.running_var", "aux_out.weight".
Unexpected key(s) in state_dict: "stg2_bridge.conv.0.weight", "stg2_bridge.conv.1.weight", "stg2_bridge.conv.1.bias", "stg2_bridge.conv.1.running_mean", "stg2_bridge.conv.1.running_var", "stg2_bridge.conv.1.num_batches_tracked", "stg2_full_band_net.enc1.conv1.conv.0.weight", "stg2_full_band_net.enc1.conv1.conv.1.weight", "stg2_full_band_net.enc1.conv1.conv.1.bias", "stg2_full_band_net.enc1.conv1.conv.1.running_mean", "stg2_full_band_net.enc1.conv1.conv.1.running_var", "stg2_full_band_net.enc1.conv1.conv.1.num_batches_tracked", "stg2_full_band_net.enc1.conv2.conv.0.weight", "stg2_full_band_net.enc1.conv2.conv.1.weight", "stg2_full_band_net.enc1.conv2.conv.1.bias", "stg2_full_band_net.enc1.conv2.conv.1.running_mean", "stg2_full_band_net.enc1.conv2.conv.1.running_var", "stg2_full_band_net.enc1.conv2.conv.1.num_batches_tracked", "stg2_full_band_net.enc2.conv1.conv.0.weight", "stg2_full_band_net.enc2.conv1.conv.1.weight", "stg2_full_band_net.enc2.conv1.conv.1.bias", "stg2_full_band_net.enc2.conv1.conv.1.running_mean", "stg2_full_band_net.enc2.conv1.conv.1.running_var", "stg2_full_band_net.enc2.conv1.conv.1.num_batches_tracked", "stg2_full_band_net.enc2.conv2.conv.0.weight", "stg2_full_band_net.enc2.conv2.conv.1.weight", "stg2_full_band_net.enc2.conv2.conv.1.bias", "stg2_full_band_net.enc2.conv2.conv.1.running_mean", "stg2_full_band_net.enc2.conv2.conv.1.running_var", "stg2_full_band_net.enc2.conv2.conv.1.num_batches_tracked", "stg2_full_band_net.enc3.conv1.conv.0.weight", "stg2_full_band_net.enc3.conv1.conv.1.weight", "stg2_full_band_net.enc3.conv1.conv.1.bias", "stg2_full_band_net.enc3.conv1.conv.1.running_mean", "stg2_full_band_net.enc3.conv1.conv.1.running_var", "stg2_full_band_net.enc3.conv1.conv.1.num_batches_tracked", "stg2_full_band_net.enc3.conv2.conv.0.weight", "stg2_full_band_net.enc3.conv2.conv.1.weight", "stg2_full_band_net.enc3.conv2.conv.1.bias", "stg2_full_band_net.enc3.conv2.conv.1.running_mean", "stg2_full_band_net.enc3.conv2.conv.1.running_var", 
"stg2_full_band_net.enc3.conv2.conv.1.num_batches_tracked", "stg2_full_band_net.enc4.conv1.conv.0.weight", "stg2_full_band_net.enc4.conv1.conv.1.weight", "stg2_full_band_net.enc4.conv1.conv.1.bias", "stg2_full_band_net.enc4.conv1.conv.1.running_mean", "stg2_full_band_net.enc4.conv1.conv.1.running_var", "stg2_full_band_net.enc4.conv1.conv.1.num_batches_tracked", "stg2_full_band_net.enc4.conv2.conv.0.weight", "stg2_full_band_net.enc4.conv2.conv.1.weight", "stg2_full_band_net.enc4.conv2.conv.1.bias", "stg2_full_band_net.enc4.conv2.conv.1.running_mean", "stg2_full_band_net.enc4.conv2.conv.1.running_var", "stg2_full_band_net.enc4.conv2.conv.1.num_batches_tracked", "stg2_full_band_net.aspp.conv1.1.conv.0.weight", "stg2_full_band_net.aspp.conv1.1.conv.1.weight", "stg2_full_band_net.aspp.conv1.1.conv.1.bias", "stg2_full_band_net.aspp.conv1.1.conv.1.running_mean", "stg2_full_band_net.aspp.conv1.1.conv.1.running_var", "stg2_full_band_net.aspp.conv1.1.conv.1.num_batches_tracked", "stg2_full_band_net.aspp.conv2.conv.0.weight", "stg2_full_band_net.aspp.conv2.conv.1.weight", "stg2_full_band_net.aspp.conv2.conv.1.bias", "stg2_full_band_net.aspp.conv2.conv.1.running_mean", "stg2_full_band_net.aspp.conv2.conv.1.running_var", "stg2_full_band_net.aspp.conv2.conv.1.num_batches_tracked", "stg2_full_band_net.aspp.conv3.conv.0.weight", "stg2_full_band_net.aspp.conv3.conv.1.weight", "stg2_full_band_net.aspp.conv3.conv.2.weight", "stg2_full_band_net.aspp.conv3.conv.2.bias", "stg2_full_band_net.aspp.conv3.conv.2.running_mean", "stg2_full_band_net.aspp.conv3.conv.2.running_var", "stg2_full_band_net.aspp.conv3.conv.2.num_batches_tracked", "stg2_full_band_net.aspp.conv4.conv.0.weight", "stg2_full_band_net.aspp.conv4.conv.1.weight", "stg2_full_band_net.aspp.conv4.conv.2.weight", "stg2_full_band_net.aspp.conv4.conv.2.bias", "stg2_full_band_net.aspp.conv4.conv.2.running_mean", "stg2_full_band_net.aspp.conv4.conv.2.running_var", "stg2_full_band_net.aspp.conv4.conv.2.num_batches_tracked", 
"stg2_full_band_net.aspp.conv5.conv.0.weight", "stg2_full_band_net.aspp.conv5.conv.1.weight", "stg2_full_band_net.aspp.conv5.conv.2.weight", "stg2_full_band_net.aspp.conv5.conv.2.bias", "stg2_full_band_net.aspp.conv5.conv.2.running_mean", "stg2_full_band_net.aspp.conv5.conv.2.running_var", "stg2_full_band_net.aspp.conv5.conv.2.num_batches_tracked", "stg2_full_band_net.aspp.bottleneck.0.conv.0.weight", "stg2_full_band_net.aspp.bottleneck.0.conv.1.weight", "stg2_full_band_net.aspp.bottleneck.0.conv.1.bias", "stg2_full_band_net.aspp.bottleneck.0.conv.1.running_mean", "stg2_full_band_net.aspp.bottleneck.0.conv.1.running_var", "stg2_full_band_net.aspp.bottleneck.0.conv.1.num_batches_tracked", "stg2_full_band_net.dec4.conv.conv.0.weight", "stg2_full_band_net.dec4.conv.conv.1.weight", "stg2_full_band_net.dec4.conv.conv.1.bias", "stg2_full_band_net.dec4.conv.conv.1.running_mean", "stg2_full_band_net.dec4.conv.conv.1.running_var", "stg2_full_band_net.dec4.conv.conv.1.num_batches_tracked", "stg2_full_band_net.dec3.conv.conv.0.weight", "stg2_full_band_net.dec3.conv.conv.1.weight", "stg2_full_band_net.dec3.conv.conv.1.bias", "stg2_full_band_net.dec3.conv.conv.1.running_mean", "stg2_full_band_net.dec3.conv.conv.1.running_var", "stg2_full_band_net.dec3.conv.conv.1.num_batches_tracked", "stg2_full_band_net.dec2.conv.conv.0.weight", "stg2_full_band_net.dec2.conv.conv.1.weight", "stg2_full_band_net.dec2.conv.conv.1.bias", "stg2_full_band_net.dec2.conv.conv.1.running_mean", "stg2_full_band_net.dec2.conv.conv.1.running_var", "stg2_full_band_net.dec2.conv.conv.1.num_batches_tracked", "stg2_full_band_net.dec1.conv.conv.0.weight", "stg2_full_band_net.dec1.conv.conv.1.weight", "stg2_full_band_net.dec1.conv.conv.1.bias", "stg2_full_band_net.dec1.conv.conv.1.running_mean", "stg2_full_band_net.dec1.conv.conv.1.running_var", "stg2_full_band_net.dec1.conv.conv.1.num_batches_tracked", "stg3_bridge.conv.0.weight", "stg3_bridge.conv.1.weight", "stg3_bridge.conv.1.bias", 
"stg3_bridge.conv.1.running_mean", "stg3_bridge.conv.1.running_var", "stg3_bridge.conv.1.num_batches_tracked", "aux1_out.weight", "aux2_out.weight", "stg1_low_band_net.enc1.conv1.conv.0.weight", "stg1_low_band_net.enc1.conv1.conv.1.weight", "stg1_low_band_net.enc1.conv1.conv.1.bias", "stg1_low_band_net.enc1.conv1.conv.1.running_mean", "stg1_low_band_net.enc1.conv1.conv.1.running_var", "stg1_low_band_net.enc1.conv1.conv.1.num_batches_tracked", "stg1_low_band_net.enc1.conv2.conv.0.weight", "stg1_low_band_net.enc1.conv2.conv.1.weight", "stg1_low_band_net.enc1.conv2.conv.1.bias", "stg1_low_band_net.enc1.conv2.conv.1.running_mean", "stg1_low_band_net.enc1.conv2.conv.1.running_var", "stg1_low_band_net.enc1.conv2.conv.1.num_batches_tracked", "stg1_low_band_net.enc2.conv1.conv.0.weight", "stg1_low_band_net.enc2.conv1.conv.1.weight", "stg1_low_band_net.enc2.conv1.conv.1.bias", "stg1_low_band_net.enc2.conv1.conv.1.running_mean", "stg1_low_band_net.enc2.conv1.conv.1.running_var", "stg1_low_band_net.enc2.conv1.conv.1.num_batches_tracked", "stg1_low_band_net.enc2.conv2.conv.0.weight", "stg1_low_band_net.enc2.conv2.conv.1.weight", "stg1_low_band_net.enc2.conv2.conv.1.bias", "stg1_low_band_net.enc2.conv2.conv.1.running_mean", "stg1_low_band_net.enc2.conv2.conv.1.running_var", "stg1_low_band_net.enc2.conv2.conv.1.num_batches_tracked", "stg1_low_band_net.enc3.conv1.conv.0.weight", "stg1_low_band_net.enc3.conv1.conv.1.weight", "stg1_low_band_net.enc3.conv1.conv.1.bias", "stg1_low_band_net.enc3.conv1.conv.1.running_mean", "stg1_low_band_net.enc3.conv1.conv.1.running_var", "stg1_low_band_net.enc3.conv1.conv.1.num_batches_tracked", "stg1_low_band_net.enc3.conv2.conv.0.weight", "stg1_low_band_net.enc3.conv2.conv.1.weight", "stg1_low_band_net.enc3.conv2.conv.1.bias", "stg1_low_band_net.enc3.conv2.conv.1.running_mean", "stg1_low_band_net.enc3.conv2.conv.1.running_var", "stg1_low_band_net.enc3.conv2.conv.1.num_batches_tracked", "stg1_low_band_net.enc4.conv1.conv.0.weight", 
"stg1_low_band_net.enc4.conv1.conv.1.weight", "stg1_low_band_net.enc4.conv1.conv.1.bias", "stg1_low_band_net.enc4.conv1.conv.1.running_mean", "stg1_low_band_net.enc4.conv1.conv.1.running_var", "stg1_low_band_net.enc4.conv1.conv.1.num_batches_tracked", "stg1_low_band_net.enc4.conv2.conv.0.weight", "stg1_low_band_net.enc4.conv2.conv.1.weight", "stg1_low_band_net.enc4.conv2.conv.1.bias", "stg1_low_band_net.enc4.conv2.conv.1.running_mean", "stg1_low_band_net.enc4.conv2.conv.1.running_var", "stg1_low_band_net.enc4.conv2.conv.1.num_batches_tracked", "stg1_low_band_net.aspp.conv1.1.conv.0.weight", "stg1_low_band_net.aspp.conv1.1.conv.1.weight", "stg1_low_band_net.aspp.conv1.1.conv.1.bias", "stg1_low_band_net.aspp.conv1.1.conv.1.running_mean", "stg1_low_band_net.aspp.conv1.1.conv.1.running_var", "stg1_low_band_net.aspp.conv1.1.conv.1.num_batches_tracked", "stg1_low_band_net.aspp.conv2.conv.0.weight", "stg1_low_band_net.aspp.conv2.conv.1.weight", "stg1_low_band_net.aspp.conv2.conv.1.bias", "stg1_low_band_net.aspp.conv2.conv.1.running_mean", "stg1_low_band_net.aspp.conv2.conv.1.running_var", "stg1_low_band_net.aspp.conv2.conv.1.num_batches_tracked", "stg1_low_band_net.aspp.conv3.conv.0.weight", "stg1_low_band_net.aspp.conv3.conv.1.weight", "stg1_low_band_net.aspp.conv3.conv.2.weight", "stg1_low_band_net.aspp.conv3.conv.2.bias", "stg1_low_band_net.aspp.conv3.conv.2.running_mean", "stg1_low_band_net.aspp.conv3.conv.2.running_var", "stg1_low_band_net.aspp.conv3.conv.2.num_batches_tracked", "stg1_low_band_net.aspp.conv4.conv.0.weight", "stg1_low_band_net.aspp.conv4.conv.1.weight", "stg1_low_band_net.aspp.conv4.conv.2.weight", "stg1_low_band_net.aspp.conv4.conv.2.bias", "stg1_low_band_net.aspp.conv4.conv.2.running_mean", "stg1_low_band_net.aspp.conv4.conv.2.running_var", "stg1_low_band_net.aspp.conv4.conv.2.num_batches_tracked", "stg1_low_band_net.aspp.conv5.conv.0.weight", "stg1_low_band_net.aspp.conv5.conv.1.weight", "stg1_low_band_net.aspp.conv5.conv.2.weight", 
"stg1_low_band_net.aspp.conv5.conv.2.bias", "stg1_low_band_net.aspp.conv5.conv.2.running_mean", "stg1_low_band_net.aspp.conv5.conv.2.running_var", "stg1_low_band_net.aspp.conv5.conv.2.num_batches_tracked", "stg1_low_band_net.aspp.bottleneck.0.conv.0.weight", "stg1_low_band_net.aspp.bottleneck.0.conv.1.weight", "stg1_low_band_net.aspp.bottleneck.0.conv.1.bias", "stg1_low_band_net.aspp.bottleneck.0.conv.1.running_mean", "stg1_low_band_net.aspp.bottleneck.0.conv.1.running_var", "stg1_low_band_net.aspp.bottleneck.0.conv.1.num_batches_tracked", "stg1_low_band_net.dec4.conv.conv.0.weight", "stg1_low_band_net.dec4.conv.conv.1.weight", "stg1_low_band_net.dec4.conv.conv.1.bias", "stg1_low_band_net.dec4.conv.conv.1.running_mean", "stg1_low_band_net.dec4.conv.conv.1.running_var", "stg1_low_band_net.dec4.conv.conv.1.num_batches_tracked", "stg1_low_band_net.dec3.conv.conv.0.weight", "stg1_low_band_net.dec3.conv.conv.1.weight", "stg1_low_band_net.dec3.conv.conv.1.bias", "stg1_low_band_net.dec3.conv.conv.1.running_mean", "stg1_low_band_net.dec3.conv.conv.1.running_var", "stg1_low_band_net.dec3.conv.conv.1.num_batches_tracked", "stg1_low_band_net.dec2.conv.conv.0.weight", "stg1_low_band_net.dec2.conv.conv.1.weight", "stg1_low_band_net.dec2.conv.conv.1.bias", "stg1_low_band_net.dec2.conv.conv.1.running_mean", "stg1_low_band_net.dec2.conv.conv.1.running_var", "stg1_low_band_net.dec2.conv.conv.1.num_batches_tracked", "stg1_low_band_net.dec1.conv.conv.0.weight", "stg1_low_band_net.dec1.conv.conv.1.weight", "stg1_low_band_net.dec1.conv.conv.1.bias", "stg1_low_band_net.dec1.conv.conv.1.running_mean", "stg1_low_band_net.dec1.conv.conv.1.running_var", "stg1_low_band_net.dec1.conv.conv.1.num_batches_tracked", "stg1_high_band_net.enc1.conv1.conv.0.weight", "stg1_high_band_net.enc1.conv1.conv.1.weight", "stg1_high_band_net.enc1.conv1.conv.1.bias", "stg1_high_band_net.enc1.conv1.conv.1.running_mean", "stg1_high_band_net.enc1.conv1.conv.1.running_var", 
"stg1_high_band_net.enc1.conv1.conv.1.num_batches_tracked", "stg1_high_band_net.enc1.conv2.conv.0.weight", "stg1_high_band_net.enc1.conv2.conv.1.weight", "stg1_high_band_net.enc1.conv2.conv.1.bias", "stg1_high_band_net.enc1.conv2.conv.1.running_mean", "stg1_high_band_net.enc1.conv2.conv.1.running_var", "stg1_high_band_net.enc1.conv2.conv.1.num_batches_tracked", "stg1_high_band_net.aspp.conv3.conv.2.weight", "stg1_high_band_net.aspp.conv3.conv.2.bias", "stg1_high_band_net.aspp.conv3.conv.2.running_mean", "stg1_high_band_net.aspp.conv3.conv.2.running_var", "stg1_high_band_net.aspp.conv3.conv.2.num_batches_tracked", "stg1_high_band_net.aspp.conv4.conv.2.weight", "stg1_high_band_net.aspp.conv4.conv.2.bias", "stg1_high_band_net.aspp.conv4.conv.2.running_mean", "stg1_high_band_net.aspp.conv4.conv.2.running_var", "stg1_high_band_net.aspp.conv4.conv.2.num_batches_tracked", "stg1_high_band_net.aspp.conv5.conv.2.weight", "stg1_high_band_net.aspp.conv5.conv.2.bias", "stg1_high_band_net.aspp.conv5.conv.2.running_mean", "stg1_high_band_net.aspp.conv5.conv.2.running_var", "stg1_high_band_net.aspp.conv5.conv.2.num_batches_tracked", "stg1_high_band_net.aspp.bottleneck.0.conv.0.weight", "stg1_high_band_net.aspp.bottleneck.0.conv.1.weight", "stg1_high_band_net.aspp.bottleneck.0.conv.1.bias", "stg1_high_band_net.aspp.bottleneck.0.conv.1.running_mean", "stg1_high_band_net.aspp.bottleneck.0.conv.1.running_var", "stg1_high_band_net.aspp.bottleneck.0.conv.1.num_batches_tracked", "stg1_high_band_net.dec4.conv.conv.0.weight", "stg1_high_band_net.dec4.conv.conv.1.weight", "stg1_high_band_net.dec4.conv.conv.1.bias", "stg1_high_band_net.dec4.conv.conv.1.running_mean", "stg1_high_band_net.dec4.conv.conv.1.running_var", "stg1_high_band_net.dec4.conv.conv.1.num_batches_tracked", "stg1_high_band_net.dec3.conv.conv.0.weight", "stg1_high_band_net.dec3.conv.conv.1.weight", "stg1_high_band_net.dec3.conv.conv.1.bias", "stg1_high_band_net.dec3.conv.conv.1.running_mean", 
"stg1_high_band_net.dec3.conv.conv.1.running_var", "stg1_high_band_net.dec3.conv.conv.1.num_batches_tracked", "stg1_high_band_net.dec2.conv.conv.0.weight", "stg1_high_band_net.dec2.conv.conv.1.weight", "stg1_high_band_net.dec2.conv.conv.1.bias", "stg1_high_band_net.dec2.conv.conv.1.running_mean", "stg1_high_band_net.dec2.conv.conv.1.running_var", "stg1_high_band_net.dec2.conv.conv.1.num_batches_tracked", "stg1_high_band_net.dec1.conv.conv.0.weight", "stg1_high_band_net.dec1.conv.conv.1.weight", "stg1_high_band_net.dec1.conv.conv.1.bias", "stg1_high_band_net.dec1.conv.conv.1.running_mean", "stg1_high_band_net.dec1.conv.conv.1.running_var", "stg1_high_band_net.dec1.conv.conv.1.num_batches_tracked", "stg3_full_band_net.enc1.conv1.conv.0.weight", "stg3_full_band_net.enc1.conv1.conv.1.weight", "stg3_full_band_net.enc1.conv1.conv.1.bias", "stg3_full_band_net.enc1.conv1.conv.1.running_mean", "stg3_full_band_net.enc1.conv1.conv.1.running_var", "stg3_full_band_net.enc1.conv1.conv.1.num_batches_tracked", "stg3_full_band_net.enc1.conv2.conv.0.weight", "stg3_full_band_net.enc1.conv2.conv.1.weight", "stg3_full_band_net.enc1.conv2.conv.1.bias", "stg3_full_band_net.enc1.conv2.conv.1.running_mean", "stg3_full_band_net.enc1.conv2.conv.1.running_var", "stg3_full_band_net.enc1.conv2.conv.1.num_batches_tracked", "stg3_full_band_net.aspp.conv3.conv.2.weight", "stg3_full_band_net.aspp.conv3.conv.2.bias", "stg3_full_band_net.aspp.conv3.conv.2.running_mean", "stg3_full_band_net.aspp.conv3.conv.2.running_var", "stg3_full_band_net.aspp.conv3.conv.2.num_batches_tracked", "stg3_full_band_net.aspp.conv4.conv.2.weight", "stg3_full_band_net.aspp.conv4.conv.2.bias", "stg3_full_band_net.aspp.conv4.conv.2.running_mean", "stg3_full_band_net.aspp.conv4.conv.2.running_var", "stg3_full_band_net.aspp.conv4.conv.2.num_batches_tracked", "stg3_full_band_net.aspp.conv5.conv.2.weight", "stg3_full_band_net.aspp.conv5.conv.2.bias", "stg3_full_band_net.aspp.conv5.conv.2.running_mean", 
"stg3_full_band_net.aspp.conv5.conv.2.running_var", "stg3_full_band_net.aspp.conv5.conv.2.num_batches_tracked", "stg3_full_band_net.aspp.bottleneck.0.conv.0.weight", "stg3_full_band_net.aspp.bottleneck.0.conv.1.weight", "stg3_full_band_net.aspp.bottleneck.0.conv.1.bias", "stg3_full_band_net.aspp.bottleneck.0.conv.1.running_mean", "stg3_full_band_net.aspp.bottleneck.0.conv.1.running_var", "stg3_full_band_net.aspp.bottleneck.0.conv.1.num_batches_tracked", "stg3_full_band_net.dec4.conv.conv.0.weight", "stg3_full_band_net.dec4.conv.conv.1.weight", "stg3_full_band_net.dec4.conv.conv.1.bias", "stg3_full_band_net.dec4.conv.conv.1.running_mean", "stg3_full_band_net.dec4.conv.conv.1.running_var", "stg3_full_band_net.dec4.conv.conv.1.num_batches_tracked", "stg3_full_band_net.dec3.conv.conv.0.weight", "stg3_full_band_net.dec3.conv.conv.1.weight", "stg3_full_band_net.dec3.conv.conv.1.bias", "stg3_full_band_net.dec3.conv.conv.1.running_mean", "stg3_full_band_net.dec3.conv.conv.1.running_var", "stg3_full_band_net.dec3.conv.conv.1.num_batches_tracked", "stg3_full_band_net.dec2.conv.conv.0.weight", "stg3_full_band_net.dec2.conv.conv.1.weight", "stg3_full_band_net.dec2.conv.conv.1.bias", "stg3_full_band_net.dec2.conv.conv.1.running_mean", "stg3_full_band_net.dec2.conv.conv.1.running_var", "stg3_full_band_net.dec2.conv.conv.1.num_batches_tracked", "stg3_full_band_net.dec1.conv.conv.0.weight", "stg3_full_band_net.dec1.conv.conv.1.weight", "stg3_full_band_net.dec1.conv.conv.1.bias", "stg3_full_band_net.dec1.conv.conv.1.running_mean", "stg3_full_band_net.dec1.conv.conv.1.running_var", "stg3_full_band_net.dec1.conv.conv.1.num_batches_tracked".
size mismatch for stg1_high_band_net.enc2.conv1.conv.0.weight: copying a param with shape torch.Size([64, 32, 3, 3]) from checkpoint, the shape in current model is torch.Size([16, 8, 3, 3]).
size mismatch for stg1_high_band_net.enc2.conv1.conv.1.weight: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([16]).
size mismatch for stg1_high_band_net.enc2.conv1.conv.1.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([16]).
size mismatch for stg1_high_band_net.enc2.conv1.conv.1.running_mean: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([16]).
size mismatch for stg1_high_band_net.enc2.conv1.conv.1.running_var: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([16]).
size mismatch for stg1_high_band_net.enc2.conv2.conv.0.weight: copying a param with shape torch.Size([64, 64, 3, 3]) from checkpoint, the shape in current model is torch.Size([16, 16, 3, 3]).
size mismatch for stg1_high_band_net.enc2.conv2.conv.1.weight: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([16]).
size mismatch for stg1_high_band_net.enc2.conv2.conv.1.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([16]).
size mismatch for stg1_high_band_net.enc2.conv2.conv.1.running_mean: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([16]).
size mismatch for stg1_high_band_net.enc2.conv2.conv.1.running_var: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([16]).
size mismatch for stg1_high_band_net.enc3.conv1.conv.0.weight: copying a param with shape torch.Size([128, 64, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 16, 3, 3]).
size mismatch for stg1_high_band_net.enc3.conv1.conv.1.weight: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for stg1_high_band_net.enc3.conv1.conv.1.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for stg1_high_band_net.enc3.conv1.conv.1.running_mean: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for stg1_high_band_net.enc3.conv1.conv.1.running_var: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for stg1_high_band_net.enc3.conv2.conv.0.weight: copying a param with shape torch.Size([128, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 32, 3, 3]).
size mismatch for stg1_high_band_net.enc3.conv2.conv.1.weight: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for stg1_high_band_net.enc3.conv2.conv.1.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for stg1_high_band_net.enc3.conv2.conv.1.running_mean: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for stg1_high_band_net.enc3.conv2.conv.1.running_var: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for stg1_high_band_net.enc4.conv1.conv.0.weight: copying a param with shape torch.Size([256, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([48, 32, 3, 3]).
size mismatch for stg1_high_band_net.enc4.conv1.conv.1.weight: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([48]).
size mismatch for stg1_high_band_net.enc4.conv1.conv.1.bias: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([48]).
size mismatch for stg1_high_band_net.enc4.conv1.conv.1.running_mean: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([48]).
size mismatch for stg1_high_band_net.enc4.conv1.conv.1.running_var: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([48]).
size mismatch for stg1_high_band_net.enc4.conv2.conv.0.weight: copying a param with shape torch.Size([256, 256, 3, 3]) from checkpoint, the shape in current model is torch.Size([48, 48, 3, 3]).
size mismatch for stg1_high_band_net.enc4.conv2.conv.1.weight: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([48]).
size mismatch for stg1_high_band_net.enc4.conv2.conv.1.bias: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([48]).
size mismatch for stg1_high_band_net.enc4.conv2.conv.1.running_mean: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([48]).
size mismatch for stg1_high_band_net.enc4.conv2.conv.1.running_var: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([48]).
size mismatch for stg1_high_band_net.aspp.conv1.1.conv.0.weight: copying a param with shape torch.Size([256, 256, 1, 1]) from checkpoint, the shape in current model is torch.Size([64, 64, 1, 1]).
size mismatch for stg1_high_band_net.aspp.conv1.1.conv.1.weight: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for stg1_high_band_net.aspp.conv1.1.conv.1.bias: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for stg1_high_band_net.aspp.conv1.1.conv.1.running_mean: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for stg1_high_band_net.aspp.conv1.1.conv.1.running_var: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for stg1_high_band_net.aspp.conv2.conv.0.weight: copying a param with shape torch.Size([256, 256, 1, 1]) from checkpoint, the shape in current model is torch.Size([64, 64, 1, 1]).
size mismatch for stg1_high_band_net.aspp.conv2.conv.1.weight: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for stg1_high_band_net.aspp.conv2.conv.1.bias: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for stg1_high_band_net.aspp.conv2.conv.1.running_mean: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for stg1_high_band_net.aspp.conv2.conv.1.running_var: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for stg1_high_band_net.aspp.conv3.conv.0.weight: copying a param with shape torch.Size([256, 1, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 64, 3, 3]).
size mismatch for stg1_high_band_net.aspp.conv3.conv.1.weight: copying a param with shape torch.Size([256, 256, 1, 1]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for stg1_high_band_net.aspp.conv4.conv.0.weight: copying a param with shape torch.Size([256, 1, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 64, 3, 3]).
size mismatch for stg1_high_band_net.aspp.conv4.conv.1.weight: copying a param with shape torch.Size([256, 256, 1, 1]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for stg1_high_band_net.aspp.conv5.conv.0.weight: copying a param with shape torch.Size([256, 1, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 64, 3, 3]).
size mismatch for stg1_high_band_net.aspp.conv5.conv.1.weight: copying a param with shape torch.Size([256, 256, 1, 1]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for stg3_full_band_net.enc2.conv1.conv.0.weight: copying a param with shape torch.Size([128, 64, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 32, 3, 3]).
size mismatch for stg3_full_band_net.enc2.conv1.conv.1.weight: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for stg3_full_band_net.enc2.conv1.conv.1.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for stg3_full_band_net.enc2.conv1.conv.1.running_mean: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for stg3_full_band_net.enc2.conv1.conv.1.running_var: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for stg3_full_band_net.enc2.conv2.conv.0.weight: copying a param with shape torch.Size([128, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 64, 3, 3]).
size mismatch for stg3_full_band_net.enc2.conv2.conv.1.weight: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for stg3_full_band_net.enc2.conv2.conv.1.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for stg3_full_band_net.enc2.conv2.conv.1.running_mean: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for stg3_full_band_net.enc2.conv2.conv.1.running_var: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for stg3_full_band_net.enc3.conv1.conv.0.weight: copying a param with shape torch.Size([256, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 64, 3, 3]).
size mismatch for stg3_full_band_net.enc3.conv1.conv.1.weight: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for stg3_full_band_net.enc3.conv1.conv.1.bias: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for stg3_full_band_net.enc3.conv1.conv.1.running_mean: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for stg3_full_band_net.enc3.conv1.conv.1.running_var: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for stg3_full_band_net.enc3.conv2.conv.0.weight: copying a param with shape torch.Size([256, 256, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 128, 3, 3]).
size mismatch for stg3_full_band_net.enc3.conv2.conv.1.weight: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for stg3_full_band_net.enc3.conv2.conv.1.bias: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for stg3_full_band_net.enc3.conv2.conv.1.running_mean: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for stg3_full_band_net.enc3.conv2.conv.1.running_var: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for stg3_full_band_net.enc4.conv1.conv.0.weight: copying a param with shape torch.Size([512, 256, 3, 3]) from checkpoint, the shape in current model is torch.Size([192, 128, 3, 3]).
size mismatch for stg3_full_band_net.enc4.conv1.conv.1.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([192]).
size mismatch for stg3_full_band_net.enc4.conv1.conv.1.bias: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([192]).
size mismatch for stg3_full_band_net.enc4.conv1.conv.1.running_mean: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([192]).
size mismatch for stg3_full_band_net.enc4.conv1.conv.1.running_var: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([192]).
size mismatch for stg3_full_band_net.enc4.conv2.conv.0.weight: copying a param with shape torch.Size([512, 512, 3, 3]) from checkpoint, the shape in current model is torch.Size([192, 192, 3, 3]).
size mismatch for stg3_full_band_net.enc4.conv2.conv.1.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([192]).
size mismatch for stg3_full_band_net.enc4.conv2.conv.1.bias: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([192]).
size mismatch for stg3_full_band_net.enc4.conv2.conv.1.running_mean: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([192]).
size mismatch for stg3_full_band_net.enc4.conv2.conv.1.running_var: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([192]).
size mismatch for stg3_full_band_net.aspp.conv1.1.conv.0.weight: copying a param with shape torch.Size([512, 512, 1, 1]) from checkpoint, the shape in current model is torch.Size([256, 256, 1, 1]).
size mismatch for stg3_full_band_net.aspp.conv1.1.conv.1.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for stg3_full_band_net.aspp.conv1.1.conv.1.bias: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for stg3_full_band_net.aspp.conv1.1.conv.1.running_mean: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for stg3_full_band_net.aspp.conv1.1.conv.1.running_var: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for stg3_full_band_net.aspp.conv2.conv.0.weight: copying a param with shape torch.Size([512, 512, 1, 1]) from checkpoint, the shape in current model is torch.Size([256, 256, 1, 1]).
size mismatch for stg3_full_band_net.aspp.conv2.conv.1.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for stg3_full_band_net.aspp.conv2.conv.1.bias: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for stg3_full_band_net.aspp.conv2.conv.1.running_mean: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for stg3_full_band_net.aspp.conv2.conv.1.running_var: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for stg3_full_band_net.aspp.conv3.conv.0.weight: copying a param with shape torch.Size([512, 1, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 256, 3, 3]).
size mismatch for stg3_full_band_net.aspp.conv3.conv.1.weight: copying a param with shape torch.Size([512, 512, 1, 1]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for stg3_full_band_net.aspp.conv4.conv.0.weight: copying a param with shape torch.Size([512, 1, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 256, 3, 3]).
size mismatch for stg3_full_band_net.aspp.conv4.conv.1.weight: copying a param with shape torch.Size([512, 512, 1, 1]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for stg3_full_band_net.aspp.conv5.conv.0.weight: copying a param with shape torch.Size([512, 1, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 256, 3, 3]).
size mismatch for stg3_full_band_net.aspp.conv5.conv.1.weight: copying a param with shape torch.Size([512, 512, 1, 1]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for out.weight: copying a param with shape torch.Size([2, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([2, 32, 1, 1])."
Traceback Error: "
File "UVR.py", line 6638, in process_start
File "separate.py", line 1050, in seperate
File "torch\nn\modules\module.py", line 1667, in load_state_dict
"
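The errors above are PyTorch's strict `load_state_dict` check failing: the checkpoint's layer shapes (e.g. 64 channels) are wider than the instantiated network (e.g. 16 channels). A pure-Python sketch of that check, simplified and not UVR's or PyTorch's actual code:

```python
# Simplified stand-in for the shape comparison torch's load_state_dict performs.
# Real PyTorch compares tensor shapes and also reports missing/unexpected keys.
def check_state_dict(model_shapes, checkpoint_shapes):
    errors = []
    for key, ckpt_shape in checkpoint_shapes.items():
        model_shape = model_shapes.get(key)
        if model_shape is None:
            errors.append(f"Unexpected key(s) in state_dict: {key}")
        elif model_shape != ckpt_shape:
            errors.append(
                f"size mismatch for {key}: copying a param with shape "
                f"{ckpt_shape} from checkpoint, the shape in current model "
                f"is {model_shape}."
            )
    if errors:
        raise RuntimeError("\n".join(errors))

# Shapes taken from the log above (tuples stand in for torch.Size):
model = {"enc2.conv1.conv.0.weight": (16, 8, 3, 3)}
checkpoint = {"enc2.conv1.conv.0.weight": (64, 32, 3, 3)}
try:
    check_state_dict(model, checkpoint)
except RuntimeError as e:
    error_message = str(e)
print("size mismatch" in error_message)  # True
```

In practice this kind of mismatch often indicates that the model parameters (architecture size) selected in the app do not correspond to the checkpoint being loaded.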
Error Time Stamp [2025-02-24 10:29:07]
Full Application Settings:
vr_model: HP5_only_main_vocal
aggression_setting: 5
window_size: 320
mdx_segment_size: 256
batch_size: Default
crop_size: 256
is_tta: False
is_output_image: False
is_post_process: False
is_high_end_process: False
post_process_threshold: 0.2
vr_voc_inst_secondary_model: No Model Selected
vr_other_secondary_model: No Model Selected
vr_bass_secondary_model: No Model Selected
vr_drums_secondary_model: No Model Selected
vr_is_secondary_model_activate: False
vr_voc_inst_secondary_model_scale: 0.9
vr_other_secondary_model_scale: 0.7
vr_bass_secondary_model_scale: 0.5
vr_drums_secondary_model_scale: 0.5
demucs_model: v4 | htdemucs_ft
segment: Default
overlap: 0.25
overlap_mdx: Default
overlap_mdx23: 8
shifts: 2
chunks_demucs: Auto
margin_demucs: 44100
is_chunk_demucs: False
is_chunk_mdxnet: False
is_primary_stem_only_Demucs: True
is_secondary_stem_only_Demucs: False
is_split_mode: True
is_demucs_combine_stems: True
is_mdx23_combine_stems: True
demucs_voc_inst_secondary_model: No Model Selected
demucs_other_secondary_model: No Model Selected
demucs_bass_secondary_model: No Model Selected
demucs_drums_secondary_model: No Model Selected
demucs_is_secondary_model_activate: False
demucs_voc_inst_secondary_model_scale: 0.9
demucs_other_secondary_model_scale: 0.7
demucs_bass_secondary_model_scale: 0.5
demucs_drums_secondary_model_scale: 0.5
demucs_pre_proc_model: No Model Selected
is_demucs_pre_proc_model_activate: False
is_demucs_pre_proc_model_inst_mix: False
mdx_net_model: UVR-MDX-NET Main
chunks: Auto
margin: 44100
compensate: Auto
denoise_option: None
is_match_frequency_pitch: True
phase_option: Automatic
phase_shifts: None
is_save_align: False
is_match_silence: True
is_spec_match: False
is_mdx_c_seg_def: False
is_invert_spec: False
is_deverb_vocals: False
deverb_vocal_opt: Main Vocals Only
voc_split_save_opt: Lead Only
is_mixer_mode: False
mdx_batch_size: Default
mdx_voc_inst_secondary_model: No Model Selected
mdx_other_secondary_model: No Model Selected
mdx_bass_secondary_model: No Model Selected
mdx_drums_secondary_model: No Model Selected
mdx_is_secondary_model_activate: False
mdx_voc_inst_secondary_model_scale: 0.9
mdx_other_secondary_model_scale: 0.7
mdx_bass_secondary_model_scale: 0.5
mdx_drums_secondary_model_scale: 0.5
is_save_all_outputs_ensemble: True
is_append_ensemble_name: False
chosen_audio_tool: Align Inputs
choose_algorithm: Min Spec
time_stretch_rate: 2.0
pitch_rate: 2.0
is_time_correction: True
is_gpu_conversion: False
is_primary_stem_only: False
is_secondary_stem_only: False
is_testing_audio: False
is_auto_update_model_params: True
is_add_model_name: False
is_accept_any_input: False
is_task_complete: False
is_normalization: False
is_use_opencl: False
is_wav_ensemble: False
is_create_model_folder: False
mp3_bit_set: 320k
semitone_shift: 0
save_format: WAV
wav_type_set: PCM_16
device_set: Default
help_hints_var: True
set_vocal_splitter: No Model Selected
is_set_vocal_splitter: False
is_save_inst_set_vocal_splitter: False
model_sample_mode: True
model_sample_mode_duration: 30
demucs_stems: All Stems
mdx_stems: All Stems | open | 2025-02-24T05:32:17Z | 2025-02-25T21:49:05Z | https://github.com/Anjok07/ultimatevocalremovergui/issues/1749 | [] | Jasurbek1987 | 1 |
widgetti/solara | jupyter | 831 | Can't use tooltips for children of `ToggleButtonsSingle` | This is somewhat related to #683
If I try to use tooltips for each Button inside a ToggleButtonsSingle component, the selected value is set to the tooltip value, not the one from the button.
## Correct behavior (but no tooltips)
```python
import solara
map_type = solara.Reactive("stack")
@solara.component
def Page():
with solara.ToggleButtonsSingle(value=map_type):
solara.Button("Stack", icon_name="mdi-layers-triple", value="stack", text=True)
solara.Button("Split", icon_name="mdi-arrow-split-vertical", value="split", text=True)
solara.Text(map_type.value)
Page()
```

## Issue when trying to use tooltips
```python
@solara.component
def Page():
with solara.ToggleButtonsSingle(value=map_type):
with solara.Tooltip("Stacks each layer on top of each other."):
solara.Button("Stack", icon_name="mdi-layers-triple", value="stack", text=True)
with solara.Tooltip("Creates a split in the map that you can move."):
solara.Button("Split", icon_name="mdi-arrow-split-vertical", value="split", text=True)
    solara.Text(map_type.value)
Page()
```

| open | 2024-10-24T11:42:17Z | 2024-11-22T10:28:03Z | https://github.com/widgetti/solara/issues/831 | [
"bug"
] | lopezvoliver | 0 |
jupyter-book/jupyter-book | jupyter | 2,153 | Fix analytics config remapping | ### Describe the bug
The upstream `pydata-sphinx-theme` understands configuration sections for analytics information, namely `html.analytics`. These changes post-date the Jupyter Book config & documentation, so we need to update it to match.
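A sketch of the needed remapping (the key names here are illustrative assumptions, not the exact Jupyter Book or theme schema):

```python
# Old-style analytics config (illustrative) remapped onto the `analytics`
# section that newer pydata-sphinx-theme expects in html_theme_options.
legacy_config = {"google_analytics_id": "G-XXXXXXX"}

html_theme_options = {}
if "google_analytics_id" in legacy_config:
    html_theme_options["analytics"] = {
        "google_analytics_id": legacy_config["google_analytics_id"]
    }

print(html_theme_options)  # {'analytics': {'google_analytics_id': 'G-XXXXXXX'}}
```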
### Reproduce the bug
NA
### List your environment
_No response_ | closed | 2024-05-28T10:37:00Z | 2024-05-28T12:24:32Z | https://github.com/jupyter-book/jupyter-book/issues/2153 | [
"bug"
] | agoose77 | 0 |
replicate/cog | tensorflow | 1,801 | Don't hold event lock while processing iterator models | https://github.com/replicate/cog/pull/1773/files#r1676200859 | closed | 2024-07-12T17:06:35Z | 2024-07-18T13:02:41Z | https://github.com/replicate/cog/issues/1801 | [] | nickstenning | 2 |
roboflow/supervision | machine-learning | 1,670 | Problem with minimum matching threshold parameter of ByteTracker | ### Search before asking
- [X] I have searched the Supervision [issues](https://github.com/roboflow/supervision/issues) and found no similar bug report.
### Bug
Hi folks. Amazing project, but I'm getting a peculiar behaviour in ByteTracker.
My assumption for the `minimum_matching_threshold` parameter of ByteTracker is that it acts similarly to an IoU threshold. A smaller threshold should make boxes match more easily, and a larger threshold should make boxes match only if they have a really good match score (e.g. really high IoU). However, I observe the inverse behaviour. Not sure if this is expected, but I thought I'd highlight it here.
### Environment
- Supervision: 0.25.0
- Ubuntu: 22.04
- Python: 3.10
### Minimal Reproducible Example
Code block to reproduce:
```python
import supervision as sv
import numpy as np
detections = [sv.Detections(xyxy=np.array([[10, 10, 20, 20]]),class_id=np.array([1]),confidence=np.array([1]))]*2
detections+= [sv.Detections(xyxy=np.array([[11, 11, 21, 21]]), class_id=np.array([1]), confidence=np.array([1]))]*2 # 90% overlap
byte_tracker_low_threshold = sv.ByteTrack(minimum_matching_threshold=0.1)
tracked_detections = [byte_tracker_low_threshold.update_with_detections(d) for d in detections]
print("Track IDs associated with detections in 10% overlap: ", list(t_det.tracker_id for t_det in tracked_detections))
print("Internally tracked states in 10% overlap: ", byte_tracker_low_threshold.tracked_tracks)
print()
print()
byte_tracker_high_threshold = sv.ByteTrack(minimum_matching_threshold=0.9)
tracked_detections = [byte_tracker_high_threshold.update_with_detections(d) for d in detections]
print("Track IDs associated with detections in 90% overlap: ", list(t_det.tracker_id for t_det in tracked_detections))
print("Internally tracked states in 90% overlap: ", byte_tracker_high_threshold.tracked_tracks)
```
Gives the output:
```
Track IDs associated with detections in 10% overlap:  [array([1]), array([1]), array([], dtype=int64), array([2])]
Internally tracked states in 10% overlap:  [OT_1_(3-4)]
Track IDs associated with detections in 90% overlap:  [array([1]), array([1]), array([1]), array([1])]
Internally tracked states in 90% overlap:  [OT_0_(1-4)]
```
I would expect the opposite to be true, i.e. when we set a low `minimum_matching_threshold`, it should assign the same track ID to detections more easily (with less IoU overlap). However, that doesn't seem to be the case.
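One possible explanation (an assumption about the internals, not verified against the source): if matching runs on a cost matrix of `1 - IoU` and pairs whose cost exceeds the threshold are rejected, then a *larger* `minimum_matching_threshold` is the more lenient setting, which would reproduce the output above. A minimal sketch:

```python
def iou(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union

def matches(box_a, box_b, threshold):
    # assumption: cost = 1 - IoU, and pairs with cost > threshold are rejected
    return (1.0 - iou(box_a, box_b)) <= threshold

box_a = (10, 10, 20, 20)
box_b = (11, 11, 21, 21)  # IoU of the example's shifted boxes, roughly 0.68

print(matches(box_a, box_b, threshold=0.1))  # False -> new track, as observed
print(matches(box_a, box_b, threshold=0.9))  # True  -> same track, as observed
```

Under this reading the parameter behaves as a maximum allowed matching *cost* rather than a minimum required IoU.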
### Additional
_No response_
### Are you willing to submit a PR?
- [x] Yes I'd like to help by submitting a PR! | open | 2024-11-15T01:54:36Z | 2025-01-09T15:57:27Z | https://github.com/roboflow/supervision/issues/1670 | [
"bug"
] | rsnk96 | 1 |
OFA-Sys/Chinese-CLIP | computer-vision | 236 | loss is 0 | Hello, I am training on my own data and the loss is 0. What could be the cause? The log is as follows:
2023-12-14,08:40:01 | INFO | Rank 0 | Global Steps: 240/270 | Train Epoch: 3 [60/90 (67%)] | Loss: 0.000000 | Image2Text Acc: 100.00 | Text2Image Acc: 100.00 | Data Time: 0.042s | Batch Time: 0.170s | LR: 0.000004 | logit_scale: 2.659 | Global Batch Size: 1 | open | 2023-12-14T08:47:31Z | 2023-12-20T03:19:45Z | https://github.com/OFA-Sys/Chinese-CLIP/issues/236 | [] | wwangxinhao | 1 |
zappa/Zappa | django | 812 | [Migrated] Streaming data with Flask's stream_with_context function does not behave as expected | Originally from: https://github.com/Miserlou/Zappa/issues/1980 by [ArmanMaesumi](https://github.com/ArmanMaesumi)
## Context
I am trying to use Flask's stream_with_context function to stream a large file (100mb-500mb) while it is being created.
Here is a simplified version of what I have in Flask:
```
from flask import Flask, Response, stream_with_context

app = Flask(__name__)

@app.route('/stream')
def streamed_response():
    def generate():
        for i in range(100000):
            yield str(i)
    return Response(stream_with_context(generate()))
```
## Expected Behavior
Upon hitting the above endpoint, we would expect the Flask server to stream the data to the client _as_ it's being created. The data should immediately appear, and continue to be streamed.
## Actual Behavior
When testing on Lambda, the above endpoint will try to complete the generate() function entirely, THEN return it as a response to the client.
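The difference can be sketched without Lambda at all (an illustration only; that API Gateway buffers the whole body is the assumption here). A streaming server forwards chunks as the generator yields them, while a buffering gateway materialises the entire body before returning anything:

```python
events = []

def generate():
    for i in range(3):
        events.append(f"produced {i}")
        yield str(i)

# Streaming: the consumer sees each chunk right after it is produced.
for chunk in generate():
    events.append(f"sent {chunk}")

# Buffering gateway: everything is produced before anything is "sent".
body = "".join(generate())
events.append(f"sent whole body {body}")

print(events)
```

The first half of the event list interleaves "produced" and "sent"; the second half produces everything first, which is the behaviour described above.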
## Possible Fix
Does Lambda support this? Perhaps there's a solution in a different language (node.js/Java/C#).
Does any other serverless platform support this?
## Steps to Reproduce
1. Set up an endpoint that uses a stream_with_context response
2. Hit the endpoint locally, then on Lambda
3. Observe how the local version will stream the response, while the Lambda version attempts to complete the generator function.
## Your Environment
* Zappa version used: 0.48.2
* Operating System and Python version: Windows 10, Python 3.7.3
* Your `zappa_settings.py`:
```
{
"dev": {
"app_function": "app.app",
"profile_name": null,
"project_name": "...",
"runtime": "python3.7",
"s3_bucket": "...l",
"aws_region": "us-east-1"
}
}
```
| closed | 2021-02-20T12:51:57Z | 2022-08-18T02:01:26Z | https://github.com/zappa/Zappa/issues/812 | [] | jneves | 1 |
pyg-team/pytorch_geometric | pytorch | 9,222 | pyproject.toml doesn't list as dependencies modules imported at runtime: dgl, torch_sparse | ### 🐛 Describe the bug
`dgl` and `torch_sparse` are imported, but not mentioned in `pyproject.toml`.
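A quick way to see the gap (a sketch; the `declared` set is an assumption standing in for what `pyproject.toml` actually lists):

```python
import ast

# imports as they appear in the offending modules (simplified)
source = "import dgl\nimport torch_sparse\nimport torch\n"
declared = {"torch", "numpy"}  # stand-in for pyproject.toml dependencies

imported = {
    alias.name.split(".")[0]
    for node in ast.walk(ast.parse(source))
    if isinstance(node, ast.Import)
    for alias in node.names
}
missing = sorted(imported - declared)
print(missing)  # ['dgl', 'torch_sparse']
```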
### Versions
HEAD | closed | 2024-04-21T03:06:15Z | 2024-04-22T15:43:44Z | https://github.com/pyg-team/pytorch_geometric/issues/9222 | [
"bug"
] | yurivict | 2 |
airtai/faststream | asyncio | 1297 | Docs: dealing with different schema registries | It would be good to add documentation with examples of how to deal with different schema registries. Again, there are many registries, and coupling the router to a particular registry isn't a good idea unless there is an abstract class first, so the community can later add implementations. | open | 2024-03-11T10:23:41Z | 2024-08-21T19:09:52Z | https://github.com/airtai/faststream/issues/1297 | [
"documentation",
"Confluent"
] | davorrunje | 0 |
aimhubio/aim | tensorflow | 2,501 | Flag / option to auto-commit or store diff patch | ## 🚀 Feature
Flag or option on run instantiation (or maybe some config file somewhere) to auto-commit when a new run is started so that commits stored on Aim are synced with the git repo.
### Motivation
Often, commits on Aim are not in sync with the git repo state because uncommitted changes are not incorporated.
### Pitch
Let's auto-commit or store a diff patch on Aim so that these changes are reflected on Aim.
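A rough sketch of what capturing the pending diff could look like (assuming `git` is on PATH; the function and field names here are made up):

```python
import subprocess

def capture_git_state(repo_dir: str = ".") -> dict:
    """Record the current commit plus any uncommitted changes as a patch."""
    def git(*args: str) -> str:
        return subprocess.check_output(("git", *args), cwd=repo_dir, text=True)

    commit = git("rev-parse", "HEAD").strip()
    patch = git("diff", "HEAD")  # staged + unstaged changes to tracked files
    return {"commit": commit, "dirty": bool(patch), "patch": patch}
```

The returned patch could then be stored alongside the run and re-applied later with `git apply` to reproduce the exact code state.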
### Alternatives
N/A
### Additional context
N/A
| open | 2023-01-25T19:13:09Z | 2023-02-01T18:47:04Z | https://github.com/aimhubio/aim/issues/2501 | [
"type / enhancement",
"area / SDK-storage"
] | rodrigo-castellon | 1 |
litestar-org/litestar | pydantic | 3,893 | Enhancement: CLI - Better error message for invalid `--app` string | ### Description
A condition is missing for the case that `app_path` does not contain a colon.
```
Using Litestar app from env: 'invalid'
Traceback (most recent call last):
File "/home/henry/miniconda3/envs/facefusion/bin/litestar", line 8, in <module>
sys.exit(run_cli())
File "/home/henry/miniconda3/envs/facefusion/lib/python3.10/site-packages/litestar/__main__.py", line 6, in run_cli
litestar_group()
File "/home/henry/miniconda3/envs/facefusion/lib/python3.10/site-packages/rich_click/rich_command.py", line 367, in __call__
return super().__call__(*args, **kwargs)
File "/home/henry/miniconda3/envs/facefusion/lib/python3.10/site-packages/click/core.py", line 1157, in __call__
return self.main(*args, **kwargs)
File "/home/henry/miniconda3/envs/facefusion/lib/python3.10/site-packages/rich_click/rich_command.py", line 151, in main
with self.make_context(prog_name, args, **extra) as ctx:
File "/home/henry/miniconda3/envs/facefusion/lib/python3.10/site-packages/litestar/cli/_utils.py", line 224, in make_context
self._prepare(ctx)
File "/home/henry/miniconda3/envs/facefusion/lib/python3.10/site-packages/litestar/cli/_utils.py", line 206, in _prepare
env = ctx.obj = LitestarEnv.from_env(ctx.params.get("app_path"), ctx.params.get("app_dir"))
File "/home/henry/miniconda3/envs/facefusion/lib/python3.10/site-packages/litestar/cli/_utils.py", line 112, in from_env
loaded_app = _load_app_from_path(app_path)
File "/home/henry/miniconda3/envs/facefusion/lib/python3.10/site-packages/litestar/cli/_utils.py", line 276, in _load_app_from_path
module_path, app_name = app_path.split(":")
ValueError: not enough values to unpack (expected 2, got 1)
```
Either add a condition to `_load_app_from_path` or introduce a `safe_split` utility/helper.
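Something along the lines of the following (names hypothetical) would turn the crash into a readable message:

```python
def split_app_path(app_path: str) -> tuple[str, str]:
    """Split '<module>:<attribute>' and fail loudly when the colon is missing."""
    module_path, sep, app_name = app_path.partition(":")
    if not sep or not module_path or not app_name:
        raise ValueError(
            f"Invalid app path {app_path!r}: expected the format '<module>:<attribute>'"
        )
    return module_path, app_name
```

`str.partition` keeps the `split(":", 1)` behavior for valid inputs while making the invalid case explicit.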
### URL to code causing the issue
_No response_
### MCVE
```shell
litestar --app invalid
```
### Steps to reproduce
_No response_
### Screenshots
```bash
""
```
### Logs
_No response_
### Litestar Version
2.13.0final0
### Platform
- [X] Linux
- [ ] Mac
- [ ] Windows
- [ ] Other (Please specify in the description above) | closed | 2024-12-07T13:27:44Z | 2025-03-20T15:55:03Z | https://github.com/litestar-org/litestar/issues/3893 | [
"Enhancement"
] | henryruhs | 3 |
facebookresearch/fairseq | pytorch | 5,510 | I have tried your Hokkien demo before and it worked well, but recently I found it no longer works. What's wrong? | ## ❓ Questions and Help
### Before asking:
1. search the issues.
2. search the docs.
<!-- If you still can't find what you need: -->
#### What is your question?
#### Code
<!-- Please paste a code snippet if your question requires it! -->
#### What have you tried?
#### What's your environment?
- fairseq Version (e.g., 1.0 or main):
- PyTorch Version (e.g., 1.0)
- OS (e.g., Linux):
- How you installed fairseq (`pip`, source):
- Build command you used (if compiling from source):
- Python version:
- CUDA/cuDNN version:
- GPU models and configuration:
- Any other relevant information:
| open | 2024-06-21T09:42:28Z | 2024-06-21T09:42:28Z | https://github.com/facebookresearch/fairseq/issues/5510 | [
"question",
"needs triage"
] | Jackylee2032 | 0 |
gradio-app/gradio | data-science | 10,335 | How to present mathematical formulas? | First, **I tried gr.Markdown**. It doesn't work.
Then, **I tried gr.HTML with a MathJax script** like this:
`<script type="text/javascript" async
src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.7/MathJax.js?config=TeX-MML-AM_CHTML">
</script>`
`out = gr.HTML(label="Answer", value=mathjax_script + "<div>This is a formula: $y = mx + b$</div>")`
But it doesn't work either.
**So I would like to ask how to present the mathematical formula?**
**Thanks!!!!!** | closed | 2025-01-11T11:17:32Z | 2025-01-12T15:40:11Z | https://github.com/gradio-app/gradio/issues/10335 | [] | MrJs133 | 1 |
pyqtgraph/pyqtgraph | numpy | 2,417 | Precision issues in opengl renderer when zoomed in with high values | ### Short description
The GPU's ability to deal with interpolating numbers can be somewhat limited; it's not just the values you are sending down, because the GPU needs to be able to interpolate between the points to draw the line, which can require a bunch of additional precision.
The paintGL code winds up with 2 issues if your range is like 5000.01 to 5000.02.
* The stencil buffer code winds up subject to precision problems that make the vertices of the triangles hop around, which leads to the graph not drawing inside the expected plot area.
* The graph winds up quantized.
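The magnitude of the quantization is consistent with single-precision spacing at those values; a quick check with NumPy:

```python
import numpy as np

# Distance between adjacent float32 values near 5000:
ulp = np.spacing(np.float32(5000.0))
print(ulp)  # ~0.00048828125, i.e. roughly 5% of a 0.01-wide view range
```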
The first issue can be fixed by simplifying the transforms used to draw the stencil buffer to avoid ever leaving screen coordinates.
Currently the mapRectToItem call transforms the screen coords to model coordinates. Those initial coords can just be used raw (with an offset for the left axis) if the model view matrix is reset:
```
rect = view.boundingRect()
gl.glPushMatrix()
gl.glLoadIdentity()
gl.glEnable(gl.GL_STENCIL_TEST)
gl.glColorMask(gl.GL_FALSE, gl.GL_FALSE, gl.GL_FALSE,
gl.GL_FALSE) # disable drawing to frame buffer
gl.glDepthMask(gl.GL_FALSE) # disable drawing to depth buffer
gl.glStencilFunc(gl.GL_NEVER, 1, 0xFF)
gl.glStencilOp(gl.GL_REPLACE, gl.GL_KEEP, gl.GL_KEEP)
## draw stencil pattern
gl.glStencilMask(0xFF)
gl.glClear(gl.GL_STENCIL_BUFFER_BIT)
margin = widget.width() - rect.width()
gl.glBegin(gl.GL_TRIANGLES)
gl.glVertex2f(rect.x() + margin, rect.y())
gl.glVertex2f(rect.x() + rect.width() + margin, rect.y())
gl.glVertex2f(rect.x() + margin, rect.y() + rect.height())
gl.glVertex2f(rect.x() + rect.width() + margin, rect.y() + rect.height())
gl.glVertex2f(rect.x() + rect.width() + margin, rect.y())
gl.glVertex2f(rect.x() + margin, rect.y() + rect.height())
gl.glEnd()
gl.glColorMask(gl.GL_TRUE, gl.GL_TRUE, gl.GL_TRUE, gl.GL_TRUE)
gl.glDepthMask(gl.GL_TRUE)
gl.glStencilMask(0x00)
gl.glStencilFunc(gl.GL_EQUAL, 1, 0xFF)
gl.glPopMatrix()
```
Similarly, if the lower-left coordinates of the data are subtracted from the data and pushed into glTranslate(), then there's a quantization error in the initial point, but the rest of the curve is much less quantized:
Blue is the Qt draw code, red is the current OpenGL code, green includes the following transform:

```
gl.glPushMatrix()
gl.glTranslate(x[0], y[0], 0)
pos[:, 0] = x - x[0]
pos[:, 1] = y - y[0]
... draw
gl.glPopMatrix()
```
### Tested environment(s)
* PyQtGraph version: 0.11.1
* Qt Python binding: 'PyQt5 5.14.2 Qt 5.14.2'
* Python version: 3.7
* NumPy version: 1.16.6
* Operating system: ubuntu
* Installation method: custom repo
| open | 2022-09-15T18:42:27Z | 2024-06-16T05:44:25Z | https://github.com/pyqtgraph/pyqtgraph/issues/2417 | [
"openGL"
] | gedalia | 8 |
globaleaks/globaleaks-whistleblowing-software | sqlalchemy | 3,550 | Select specific file types | ### Proposal
Hello,
Is there a possibility to allow only certain file types to be sent with a report?
### Motivation and context
Certain file types may contain malicious code | closed | 2023-07-24T06:48:41Z | 2023-07-28T05:26:30Z | https://github.com/globaleaks/globaleaks-whistleblowing-software/issues/3550 | [
"T: Feature"
] | JimpoTEDY | 1 |
pydantic/pydantic-settings | pydantic | 494 | `BaseSettings.__init_subclass__()` takes no keyword arguments | When creating a new BaseSettings class, I get an error that states the `__init_subclass__` function takes no keyword arguments. I've taken the following example directly from the [Pydantic Settings Docs](https://docs.pydantic.dev/latest/concepts/pydantic_settings/#cli-kebab-case-for-arguments)
```python
import sys
from pydantic import Field
from pydantic_settings import BaseSettings
class Settings(BaseSettings, cli_parse_args=True, cli_kebab_case=True):
my_option: str = Field(description='will show as kebab case on CLI')
try:
sys.argv = ['example.py', '--help']
Settings()
except SystemExit as e:
print(e)
```
The output is as follows
```
Traceback (most recent call last):
File "/Users/joshl/Library/Application Support/JetBrains/PyCharm2024.3/scratches/scratch_13.py", line 8, in <module>
class Settings(BaseSettings, cli_parse_args=True, cli_kebab_case=True):
File "/Users/joshl/Projects/COMO/.venv/lib/python3.10/site-packages/pydantic/_internal/_model_construction.py", line 137, in __new__
cls = cast('type[BaseModel]', super().__new__(mcs, cls_name, bases, namespace, **kwargs))
File "/Users/joshl/.local/share/uv/python/cpython-3.10.15-macos-aarch64-none/lib/python3.10/abc.py", line 106, in __new__
cls = super().__new__(mcls, name, bases, namespace, **kwargs)
TypeError: Settings.__init_subclass__() takes no keyword arguments
```
```
> pip list
pydantic 2.10.3
pydantic-core 2.27.1
pydantic-settings 2.6.1
```
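For context, keyword arguments in a `class` statement are forwarded to the nearest `__init_subclass__` in the MRO; if nothing consumes them, Python raises exactly this `TypeError`. A minimal demonstration of the mechanism, independent of pydantic:

```python
class Consumes:
    def __init_subclass__(cls, **kwargs):
        cls.extra = kwargs          # class keywords land here
        super().__init_subclass__()

class Ok(Consumes, cli_parse_args=True):
    pass

print(Ok.extra)  # {'cli_parse_args': True}

try:
    class Broken(cli_parse_args=True):  # nothing consumes the keyword
        pass
except TypeError as e:
    print(e)  # e.g. "Broken.__init_subclass__() takes no keyword arguments"
```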
I'm seeing a few closed/fixed issues surrounding this, but I'm still having problems
[Pydantic #2522](https://github.com/pydantic/pydantic/issues/2522)
[Pydantic #6499](https://github.com/pydantic/pydantic/issues/6499)
(Not sure if these details are relevant, but I thought I would include them anyway)
Python: 3.10.15
Hardware: Macbook Pro M3
OS: MacOS 15.1.1 | closed | 2024-12-06T18:17:43Z | 2024-12-09T14:24:18Z | https://github.com/pydantic/pydantic-settings/issues/494 | [
"unconfirmed"
] | JoshLoecker | 2 |
paperless-ngx/paperless-ngx | django | 8,513 | [BUG] File descriptors gone wild | ### Description
I have a weird issue where Paperless is using hundreds of thousands of file handles and exhausting the server. On a completely idle startup, it's using ~150,000 handles. When using it, the count rapidly goes up and things break when the system runs out of file handles.
We are not using inotify, but polling on the consume directory, so I wouldn't expect the system to use so many handles while idle.
Celery is using ~7182 Unix sockets and ~170 TCP sockets despite being configured with one worker and one thread per worker, and it has been crashing when attempting to open the `celerybeat-schedule.db` file because of too many open files.
I have deleted the `celerybeat-schedule.db` file and it made no difference.
Is there a way to reduce this resource usage at all?
Thank you for any help.
### Steps to reproduce
Run `lsof 2>/dev/null | wc -l` to get a baseline count of the system's open file handles
start paperless
Run `lsof 2>/dev/null | wc -l` to see how many are used now
stop paperless
Run `lsof 2>/dev/null | wc -l` and see it go down.
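To narrow down which process is holding the handles, a per-process count via /proc also helps (Linux only; a small Python sketch):

```python
import os

def count_fds(pid: int) -> int:
    """Count open file descriptors for a PID via /proc (Linux only)."""
    return len(os.listdir(f"/proc/{pid}/fd"))

# Example: FD count for the current process
print(count_fds(os.getpid()))
```

Running this against the celery and consumer PIDs shows which one accumulates handles over time.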
### Webserver logs
```bash
[2024-12-18 01:06:57,694] [INFO] [paperless.management.consumer] Polling directory for changes: /usr/src/paperless/consume
[2024-12-18 01:25:26,612] [DEBUG] [paperless.classifier] Document classification model does not exist (yet), not performing automatic matching.
[2024-12-18 01:26:35,606] [DEBUG] [paperless.management.consumer] Consumer exiting.
[2024-12-18 01:27:11,807] [INFO] [paperless.management.consumer] Polling directory for changes: /usr/src/paperless/consume
```
```
[2024-12-18 01:14:00,283] [INFO] [celery.beat] beat: Starting...
[2024-12-18 01:14:00,317] [ERROR] [celery.beat] Removing corrupted schedule file '/usr/src/paperless/data/celerybeat-schedule.db': error(11, 'Resource temporarily unavailable')
Traceback (most recent call last):
File "/usr/local/lib/python3.12/site-packages/celery/beat.py", line 531, in setup_schedule
self._store = self._open_schedule()
^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/celery/beat.py", line 521, in _open_schedule
return self.persistence.open(self.schedule_filename, writeback=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/shelve.py", line 243, in open
return DbfilenameShelf(filename, flag, protocol, writeback)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/shelve.py", line 227, in __init__
Shelf.__init__(self, dbm.open(filename, flag), protocol, writeback)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dbm/__init__.py", line 95, in open
return mod.open(file, flag, mode)
^^^^^^^^^^^^^^^^^^^^^^^^^^
_gdbm.error: [Errno 11] Resource temporarily unavailable: '/usr/src/paperless/data/celerybeat-schedule.db'
[2024-12-18 01:14:00,337] [DEBUG] [celery.beat] Current schedule:
<ScheduleEntry: Empty trash documents.tasks.empty_trash() <crontab: 0 1 * * * (m/h/dM/MY/d)>
<ScheduleEntry: Optimize the index documents.tasks.index_optimize() <crontab: 0 0 * * * (m/h/dM/MY/d)>
<ScheduleEntry: Perform sanity check documents.tasks.sanity_check() <crontab: 30 0 * * sun (m/h/dM/MY/d)>
<ScheduleEntry: celery.backend_cleanup celery.backend_cleanup() <crontab: 0 4 * * * (m/h/dM/MY/d)>
[2024-12-18 01:14:00,338] [DEBUG] [celery.beat] beat: Ticking with max interval->5.00 minutes
[2024-12-18 01:14:00,339] [DEBUG] [celery.beat] beat: Waking up in 5.00 minutes.
```
### Browser logs
_No response_
### Paperless-ngx version
2.13.5
### Host OS
Ubuntu 20.04
### Installation method
Docker - official image
### System status
```json
{
"pngx_version": "2.13.5",
"server_os": "Linux-6.8.0-49-generic-x86_64-with-glibc2.36",
"install_type": "docker",
"storage": {
"total": 3724519538688,
"available": 3721563865088
},
"database": {
"type": "sqlite",
"url": "/usr/src/paperless/data/db.sqlite3",
"status": "OK",
"error": null,
"migration_status": {
"latest_migration": "paperless_mail.0028_alter_mailaccount_password_and_more",
"unapplied_migrations": []
}
},
"tasks": {
"redis_url": "redis://172.16.1.4:6379",
"redis_status": "OK",
"redis_error": null,
"celery_status": "OK",
"index_status": "OK",
"index_last_modified": "2024-12-18T00:00:15.131769-08:00",
"index_error": null,
"classifier_status": "OK",
"classifier_last_trained": null,
"classifier_error": null
}
}
```
### Browser
_No response_
### Configuration changes
```
PAPERLESS_EMAIL_TASK_CRON: disable
PAPERLESS_TRAIN_TASK_CRON: disable
PAPERLESS_INDEX_TASK_CRON: "0 0 * * *"
PAPERLESS_SANITY_TASK_CRON: "30 0 * * sun"
PAPERLESS_TASK_WORKERS: 1
PAPERLESS_THREADS_PER_WORKER: 1
PAPERLESS_CONSUMER_POLLING: 300
PAPERLESS_CONSUMER_POLLING_RETRY_COUNT: 60
PAPERLESS_CONSUMER_POLLING_DELAY: 60
```
### Please confirm the following
- [X] I believe this issue is a bug that affects all users of Paperless-ngx, not something specific to my installation.
- [X] This issue is not about the OCR or archive creation of a specific file(s). Otherwise, please see above regarding OCR tools.
- [X] I have already searched for relevant existing issues and discussions before opening this report.
- [X] I have updated the title field above with a concise description. | closed | 2024-12-18T09:43:53Z | 2024-12-18T14:50:19Z | https://github.com/paperless-ngx/paperless-ngx/issues/8513 | [
"dependencies",
"not a bug"
] | PhantomPhoton | 1 |
nolar/kopf | asyncio | 1,070 | on.create() handler keeps getting fired every time object is modified | ### Long story short
I've implemented a firewall operator that assigns externalIPs to LoadBalancer services. The problem is that the on.create() handler keeps getting fired not only upon service creation, but also upon every modification of the service.
I've tested this by creating a simple Watch stream in Python to see whether the problem is Kubernetes incorrectly reporting service modifications, or whether Kopf treats modifications as creations.
### python watcher
```
import kopf
import kubernetes
import yaml
import pprint
kubernetes.config.load_kube_config()
api = kubernetes.client.CoreV1Api()
w = kubernetes.watch.Watch()
# Start watching for service creation across all namespaces
for event in w.stream(api.list_service_for_all_namespaces):
svc = event['object']
print(f"Service {svc.metadata.name} {event['type']} in namespace {svc.metadata.namespace}")
```
### watcher stdout
```
Service siem-kafka-zookeeper-client MODIFIED in namespace siem-strimzi
Service siem-kafka-zookeeper-nodes MODIFIED in namespace siem-strimzi
Service siem-kafka-zookeeper-client MODIFIED in namespace siem-strimzi
Service siem-kafka-zookeeper-nodes MODIFIED in namespace siem-strimzi
Service siem-kafka-zookeeper-client MODIFIED in namespace siem-strimzi
Service siem-kafka-zookeeper-nodes MODIFIED in namespace siem-strimzi
Service siem-kafka-kafka-brokers MODIFIED in namespace siem-strimzi
```
The intervals of the modification events from my script match the logs I see in the Kopf operator.
### Expected behaviour
Kopf should only invoke the on.create() handler when an object is created, i.e. when the event['type'] returned from Watch.stream() is ADDED rather than MODIFIED.
Am I misunderstanding how kopf works?
### Kopf version
1.36.2
### Kubernetes version
1.26.1
### Python version
python3.9.17
### Code
```python
import logging
import kopf
import kubernetes
api = kubernetes.client.CoreV1Api()  # client config assumed loaded elsewhere
@kopf.on.startup()
def configure(settings: kopf.OperatorSettings, **_):
settings.execution.max_workers = 5
@kopf.on.create('v1', 'services', retries=5, backoff=10)
def create_svc(body, spec, **kwargs):
# Get info from object
svc_name = body['metadata']['name']
svc_namespace = body['metadata']['namespace']
obj_type = spec['type']
# If not LB, do nothing
if obj_type == None or obj_type.lower() != 'loadbalancer':
return
    # Verify object has not been previously processed
annotations = body['metadata'].get('annotations', None)
if annotations != None:
server_pool = annotations.get('operator.io/server_pool_link', None)
if server_pool != None:
return
# ...
# Do its thing
# ...
# Assign externalIP, annotations
service_patch = {
'metadata': {
'annotations' : {
'operator.io/server_pool_link': pool_link,
'operator.io/fw_policy_link': new_policy_link}
},
'spec':{
'externalIPs':[available_pool[0]]
}
}
# Validate changes
try:
# Patch LoadBalancer object
api_response = api.patch_namespaced_service(
name=svc_name,
namespace=svc_namespace,
body=service_patch,
field_validation='Strict')
except Exception as e:
logging.info(f'HANDLER on.create: Object patch failed, received error: {e}')
@kopf.on.delete('v1', 'services', retries=5, backoff=10)
def delete_svc(body, spec, **kwargs):
    # undoes the changes on the firewall
```
### Logs
```none
[2023-10-19 14:41:05,054] asyncio [DEBUG ] Using selector: EpollSelector
[2023-10-19 14:41:05,058] kopf._core.reactor.r [DEBUG ] Starting Kopf 1.36.2.
[2023-10-19 14:41:05,059] kopf.activities.star [DEBUG ] Activity 'configure' is invoked.
[2023-10-19 14:41:05,061] kopf.activities.star [INFO ] Activity 'configure' succeeded.
[2023-10-19 14:41:05,063] kopf._core.engines.a [INFO ] Initial authentication has been initiated.
[2023-10-19 14:41:05,063] kopf.activities.auth [DEBUG ] Activity 'login_via_client' is invoked.
[2023-10-19 14:41:05,067] kopf._core.engines.p [DEBUG ] Serving health status at http://0.0.0.0:8080/healthz
[2023-10-19 14:41:05,068] kopf.activities.auth [DEBUG ] Client is configured in cluster with service account.
[2023-10-19 14:41:05,070] kopf.activities.auth [INFO ] Activity 'login_via_client' succeeded.
[2023-10-19 14:41:05,071] kopf._core.engines.a [INFO ] Initial authentication has finished.
[2023-10-19 14:41:05,128] kopf._cogs.clients.w [DEBUG ] Starting the watch-stream for customresourcedefinitions.v1.apiextensions.k8s.io cluster-wide.
[2023-10-19 14:41:05,130] kopf._cogs.clients.w [DEBUG ] Starting the watch-stream for services.v1 cluster-wide.
[2023-10-19 14:41:05,367] kopf.objects [DEBUG ] [siem-strimzi/siem-kafka-zookeeper-nodes] Resuming is in progress: {'metadata': {'name': 'siem-kafka-zooke
eper-nodes', 'namespace': 'siem-strimzi', 'uid': '46b9c17a-9189-49be-ab89-148451b1fdaf', 'resourceVersion': '2262892', 'creationTimestamp': '2023-10-13T12:32:40Z',
'labels': {'app.kubernetes.io/instance': 'siem-kafka', 'app.kubernetes.io/managed-by': 'strimzi-cluster-operator', 'app.kubernetes.io/name': 'zookeeper', 'app.kuber
netes.io/part-of': 'strimzi-siem-kafka', 'strimzi.io/cluster': 'siem-kafka', 'strimzi.io/component-type': 'zookeeper', 'strimzi.io/kind': 'Kafka', 'strimzi.io/name'
: 'siem-kafka-zookeeper'}, 'annotations': {'kopf.zalando.org/last-handled-configuration': '{"spec":{"ports":[{"name":"tcp-clients","protocol":"TCP","port":2181,"tar
getPort":2181},{"name":"tcp-clustering","protocol":"TCP","port":2888,"targetPort":2888},{"name":"tcp-election","protocol":"TCP","port":3888,"targetPort":3888}],"sel
ector":{"strimzi.io/cluster":"siem-kafka","strimzi.io/kind":"Kafka","strimzi.io/name":"siem-kafka-zookeeper"},"clusterIP":"None","clusterIPs":["None"],"type":"Clust
erIP","sessionAffinity":"None","publishNotReadyAddresses":true,"ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","internalTrafficPolicy":"Cluster"},"metadata":{"
labels":{"app.kubernetes.io/instance":"siem-kafka","app.kubernetes.io/managed-by":"strimzi-cluster-operator","app.kubernetes.io/name":"zookeeper","app.kubernetes.io
/part-of":"strimzi-siem-kafka","strimzi.io/cluster":"siem-kafka","strimzi.io/component-type":"zookeeper","strimzi.io/kind":"Kafka","strimzi.io/name":"siem-kafka-zoo
keeper"}}}\n'}, 'ownerReferences': [{'apiVersion': 'kafka.strimzi.io/v1beta2', 'kind': 'Kafka', 'name': 'siem-kafka', 'uid': 'c4e5239a-021a-436f-b4c9-281948bdb963',
'controller': False, 'blockOwnerDeletion': False}], 'finalizers': ['kopf.zalando.org/KopfFinalizerMarker'], 'managedFields': [{'manager': 'strimzi-cluster-operator
', 'operation': 'Update', 'apiVersion': 'v1', 'time': '2023-10-13T12:32:40Z', 'fieldsType': 'FieldsV1', 'fieldsV1': {'f:metadata': {'f:labels': {'.': {}, 'f:app.kub
ernetes.io/instance': {}, 'f:app.kubernetes.io/managed-by': {}, 'f:app.kubernetes.io/name': {}, 'f:app.kubernetes.io/part-of': {}, 'f:strimzi.io/cluster': {}, 'f:st
rimzi.io/component-type': {}, 'f:strimzi.io/kind': {}, 'f:strimzi.io/name': {}}, 'f:ownerReferences': {'.': {}, 'k:{"uid":"c4e5239a-021a-436f-b4c9-281948bdb963"}':
{}}}, 'f:spec': {'f:clusterIP': {}, 'f:internalTrafficPolicy': {}, 'f:ports': {'.': {}, 'k:{"port":2181,"protocol":"TCP"}': {'.': {}, 'f:name': {}, 'f:port': {}, 'f
:protocol': {}, 'f:targetPort': {}}, 'k:{"port":2888,"protocol":"TCP"}': {'.': {}, 'f:name': {}, 'f:port': {}, 'f:protocol': {}, 'f:targetPort': {}}, 'k:{"port":388
8,"protocol":"TCP"}': {'.': {}, 'f:name': {}, 'f:port': {}, 'f:protocol': {}, 'f:targetPort': {}}}, 'f:publishNotReadyAddresses': {}, 'f:selector': {}, 'f:sessionAf
finity': {}, 'f:type': {}}}}, {'manager': 'kopf', 'operation': 'Update', 'apiVersion': 'v1', 'time': '2023-10-19T14:39:05Z', 'fieldsType': 'FieldsV1', 'fieldsV1': {
'f:metadata': {'f:annotations': {'.': {}, 'f:kopf.zalando.org/last-handled-configuration': {}}, 'f:finalizers': {'.': {}, 'v:"kopf.zalando.org/KopfFinalizerMarker"'
: {}}}}}]}, 'spec': {'ports': [{'name': 'tcp-clients', 'protocol': 'TCP', 'port': 2181, 'targetPort': 2181}, {'name': 'tcp-clustering', 'protocol': 'TCP', 'port': 2
888, 'targetPort': 2888}, {'name': 'tcp-election', 'protocol': 'TCP', 'port': 3888, 'targetPort': 3888}], 'selector': {'strimzi.io/cluster': 'siem-kafka', 'strimzi.io/kind': 'Kafka', 'strimzi.io/name': 'siem-kafka-zookeeper'}, 'clusterIP': 'None', 'clusterIPs': ['None'], 'type': 'ClusterIP', 'sessionAffinity': 'None', 'publish
NotReadyAddresses': True, 'ipFamilies': ['IPv4'], 'ipFamilyPolicy': 'SingleStack', 'internalTrafficPolicy': 'Cluster'}, 'status': {'loadBalancer': {}}, 'kind': 'Ser
vice', 'apiVersion': 'v1'}
[2023-10-19 14:41:05,367] kopf.objects [DEBUG ] [siem-strimzi/siem-kafka-zookeeper-nodes] Handling cycle is finished, waiting for new changes.
# Resumes for all services in cluster
# Then gets stuck with a few services
[2023-10-19 14:43:05,236] kopf.objects [DEBUG ] [siem-strimzi/siem-kafka-zookeeper-client] Adding the finalizer, thus preventing the actual deletion.
[2023-10-19 14:43:05,237] kopf.objects [DEBUG ] [siem-strimzi/siem-kafka-zookeeper-client] Patching with: {'metadata': {'finalizers': ['kopf.zalando.org/K
opfFinalizerMarker']}}
[2023-10-19 14:43:05,377] kopf.objects [DEBUG ] [siem-strimzi/siem-kafka-zookeeper-client] Creation is in progress: {'kind': 'Service', 'apiVersion': 'v1'
, 'metadata': {'name': 'siem-kafka-zookeeper-client', 'namespace': 'siem-strimzi', 'uid': '2232dd65-a841-4e06-8cb0-92a24f0fcc87', 'resourceVersion': '2263899', 'cre
ationTimestamp': '2023-10-13T12:32:39Z', 'labels': {'app.kubernetes.io/instance': 'siem-kafka', 'app.kubernetes.io/managed-by': 'strimzi-cluster-operator', 'app.kub
ernetes.io/name': 'zookeeper', 'app.kubernetes.io/part-of': 'strimzi-siem-kafka', 'strimzi.io/cluster': 'siem-kafka', 'strimzi.io/component-type': 'zookeeper', 'str
imzi.io/kind': 'Kafka', 'strimzi.io/name': 'siem-kafka-zookeeper'}, 'ownerReferences': [{'apiVersion': 'kafka.strimzi.io/v1beta2', 'kind': 'Kafka', 'name': 'siem-ka
fka', 'uid': 'c4e5239a-021a-436f-b4c9-281948bdb963', 'controller': False, 'blockOwnerDeletion': False}], 'finalizers': ['kopf.zalando.org/KopfFinalizerMarker'], 'ma
nagedFields': [{'manager': 'strimzi-cluster-operator', 'operation': 'Update', 'apiVersion': 'v1', 'time': '2023-10-13T12:32:39Z', 'fieldsType': 'FieldsV1', 'fieldsV
1': {'f:metadata': {'f:labels': {'.': {}, 'f:app.kubernetes.io/instance': {}, 'f:app.kubernetes.io/managed-by': {}, 'f:app.kubernetes.io/name': {}, 'f:app.kubernete
s.io/part-of': {}, 'f:strimzi.io/cluster': {}, 'f:strimzi.io/component-type': {}, 'f:strimzi.io/kind': {}, 'f:strimzi.io/name': {}}, 'f:ownerReferences': {'.': {},
'k:{"uid":"c4e5239a-021a-436f-b4c9-281948bdb963"}': {}}}, 'f:spec': {'f:internalTrafficPolicy': {}, 'f:ports': {'.': {}, 'k:{"port":2181,"protocol":"TCP"}': {'.': {
}, 'f:name': {}, 'f:port': {}, 'f:protocol': {}, 'f:targetPort': {}}}, 'f:selector': {}, 'f:sessionAffinity': {}, 'f:type': {}}}}, {'manager': 'kopf', 'operation':
'Update', 'apiVersion': 'v1', 'time': '2023-10-19T14:43:05Z', 'fieldsType': 'FieldsV1', 'fieldsV1': {'f:metadata': {'f:finalizers': {'.': {}, 'v:"kopf.zalando.org/K
opfFinalizerMarker"': {}}}}}]}, 'spec': {'ports': [{'name': 'tcp-clients', 'protocol': 'TCP', 'port': 2181, 'targetPort': 2181}], 'selector': {'strimzi.io/cluster':
'siem-kafka', 'strimzi.io/kind': 'Kafka', 'strimzi.io/name': 'siem-kafka-zookeeper'}, 'clusterIP': '10.10.18.215', 'clusterIPs': ['10.10.18.215'], 'type': 'ClusterIP
', 'sessionAffinity': 'None', 'ipFamilies': ['IPv4'], 'ipFamilyPolicy': 'SingleStack', 'internalTrafficPolicy': 'Cluster'}, 'status': {'loadBalancer': {}}}
[2023-10-19 14:43:05,377] kopf.objects [DEBUG ] [siem-strimzi/siem-kafka-zookeeper-client] Handler 'create_svc' is invoked.
[2023-10-19 14:43:05,379] kopf.objects [INFO ] [siem-strimzi/siem-kafka-zookeeper-client] Handler 'create_svc' succeeded.
[2023-10-19 14:43:05,380] kopf.objects [INFO ] [siem-strimzi/siem-kafka-zookeeper-client] Creation is processed: 1 succeeded; 0 failed.
```
### Additional information
kubectl describe siem-kafka-zookeeper-client
```
apiVersion: v1
kind: Service
metadata:
annotations:
kopf.zalando.org/last-handled-configuration: |
{"spec":{"ports":[{"name":"tcp-clients","protocol":"TCP","port":2181,"targetPort":2181}],"selector":{"strimzi.io/cluster":"siem-kafka","strimzi.io/kind":"Kafk
a","strimzi.io/name":"siem-kafka-zookeeper"},"clusterIP":"10.10.18.215","clusterIPs":["10.10.18.215"],"type":"ClusterIP","sessionAffinity":"None","ipFamilies":["IPv4"
],"ipFamilyPolicy":"SingleStack","internalTrafficPolicy":"Cluster"},"metadata":{"labels":{"app.kubernetes.io/instance":"siem-kafka","app.kubernetes.io/managed-by":"
strimzi-cluster-operator","app.kubernetes.io/name":"zookeeper","app.kubernetes.io/part-of":"strimzi-siem-kafka","strimzi.io/cluster":"siem-kafka","strimzi.io/compon
ent-type":"zookeeper","strimzi.io/kind":"Kafka","strimzi.io/name":"siem-kafka-zookeeper"}}}
creationTimestamp: "2023-10-13T12:32:39Z"
finalizers:
- kopf.zalando.org/KopfFinalizerMarker
labels:
app.kubernetes.io/instance: siem-kafka
app.kubernetes.io/managed-by: strimzi-cluster-operator
app.kubernetes.io/name: zookeeper
app.kubernetes.io/part-of: strimzi-siem-kafka
strimzi.io/cluster: siem-kafka
strimzi.io/component-type: zookeeper
strimzi.io/kind: Kafka
strimzi.io/name: siem-kafka-zookeeper
name: siem-kafka-zookeeper-client
namespace: siem-strimzi
ownerReferences:
- apiVersion: kafka.strimzi.io/v1beta2
blockOwnerDeletion: false
controller: false
kind: Kafka
name: siem-kafka
uid: c4e5239a-021a-436f-b4c9-281948bdb963
resourceVersion: "2264390"
uid: 2232dd65-a841-4e06-8cb0-92a24f0fcc87
spec:
clusterIP: 10.10.18.215
clusterIPs:
- 10.10.18.215
internalTrafficPolicy: Cluster
ipFamilies:
- IPv4
ipFamilyPolicy: SingleStack
ports:
- name: tcp-clients
port: 2181
protocol: TCP
targetPort: 2181
selector:
strimzi.io/cluster: siem-kafka
strimzi.io/kind: Kafka
strimzi.io/name: siem-kafka-zookeeper
sessionAffinity: None
type: ClusterIP
status:
loadBalancer: {}
``` | open | 2023-10-19T14:48:50Z | 2023-10-26T07:21:34Z | https://github.com/nolar/kopf/issues/1070 | [
"bug"
] | michal0000000 | 1 |
pyg-team/pytorch_geometric | pytorch | 8,994 | Possible overwriting scenario with Jinja | ### 🐛 Describe the bug
I am getting the following error, not always but from time to time:
```
File "/usr/local/lib/python3.8/dist-packages/torch_geometric/nn/conv/cg_conv.py", line 57, in __init__
super().__init__(aggr=aggr, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/torch_geometric/nn/conv/message_passing.py", line 193, in __init__
self.__class__._jinja_propagate = module.propagate
AttributeError: module 'torch_geometric.nn.conv.cg_conv_CGConv_propagate' has no attribute 'propagate'
```
I am using PyG in a parallel setting with MPI. I think there is a possibility of overwriting when PyG renders the Jinja template here:
https://github.com/pyg-team/pytorch_geometric/blob/9b660ac6ca882604d1ae521912d20ded1d180ecf/torch_geometric/nn/conv/message_passing.py#L170
I added the following debug messages around line 186:
```
print ("module:", module)
print ("dir(module):", dir(module))
```
Here is what I got from one process:
```
module: <module 'torch_geometric.nn.conv.pna_conv_PNAConv_propagate' from '/root/.cache/pyg/message_passing/torch_geometric.nn.conv.pna_conv_PNAConv_propagate.py'>
dir(module): ['__builtins__', '__cached__', '__doc__', '__file__', '__loader__', '__name__', '__package__', '__spec__']
```
And this is the output from the other process:
```
module: <module 'torch_geometric.nn.conv.pna_conv_PNAConv_propagate' from '/root/.cache/pyg/message_passing/torch_geometric.nn.conv.pna_conv_PNAConv_propagate.py'>
dir(module): ['Adj', 'Any', 'Callable', 'CollectArgs', 'DataLoader', 'DegreeScalerAggregation', 'Dict', 'Linear', 'List', 'MessagePassing', 'ModuleList', 'NamedTuple', 'OptTensor', 'Optional', 'PNAConv', 'Sequential', 'Size', 'SparseTensor', 'Tensor', 'Union', '__builtins__', '__cached__', '__doc__', '__file__', '__loader__', '__name__', '__package__', '__spec__', 'activation_resolver', 'collect', 'degree', 'is_compiling', 'is_sparse', 'is_torch_sparse_tensor', 'propagate', 'ptr2index', 'reset', 'torch', 'torch_geometric', 'typing']
```
It looks to me like this can happen when two processes on the same node generate the same template file: one process reads the Python script while the other process is still overwriting it.
This is just my guess. In any case, I am getting this error when using MPI. Any help will be appreciated.
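For reference, the usual way to avoid this kind of read-during-write race is an atomic write: write to a temp file in the same directory, then rename over the target. A sketch of the pattern (not PyG's actual code):

```python
import os
import tempfile

def atomic_write(path: str, text: str) -> None:
    """Write `text` to `path` so that readers never see a half-written file."""
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=directory)
    try:
        with os.fdopen(fd, "w") as f:
            f.write(text)
        os.replace(tmp, path)  # atomic on POSIX within one filesystem
    except BaseException:
        if os.path.exists(tmp):
            os.remove(tmp)
        raise
```

With this, a concurrent reader sees either the old file or the new one, never a partial module.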
### Versions
```
PyTorch version: 2.0.1+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.31
Python version: 3.8.10 (default, Nov 22 2023, 10:22:35) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-6.6.12-linuxkit-x86_64-with-glibc2.29
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.4
[pip3] torch==2.0.1+cpu
[pip3] torch-cluster==1.6.3+pt20cpu
[pip3] torch_geometric==2.5.0
[pip3] torch-scatter==2.1.2+pt20cpu
[pip3] torch-sparse==0.6.18+pt20cpu
[pip3] torch-spline-conv==1.2.2+pt20cpu
[pip3] torchaudio==2.0.2+cpu
[pip3] torchvision==0.15.2+cpu
[conda] Could not collect
``` | closed | 2024-02-29T15:31:31Z | 2024-03-01T18:24:03Z | https://github.com/pyg-team/pytorch_geometric/issues/8994 | ["bug"] | jychoi-hpc | 3
browser-use/browser-use | python | 92 | How to stop a Python script after the agent is done? | It requires pressing "Enter" to stop, but in a Docker environment that's not very handy, and calling `exit()` seems to be only a workaround.
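If the "Enter" pause comes from an explicit `input()` call at the end of the example script (a common pattern for keeping the browser window open for inspection), the agent itself does not need it: `asyncio.run()` tears the event loop down as soon as the coroutine returns, and the process exits on its own. A minimal sketch, with a stub coroutine standing in for the real async `agent.run()`:

```python
import asyncio


async def run_agent() -> str:
    # Stand-in for the real (async) agent.run(); swap in your agent here.
    await asyncio.sleep(0)
    return "done"


def main() -> str:
    # asyncio.run() closes the event loop once the coroutine finishes,
    # so the script exits by itself; no input() call or Enter press needed.
    return asyncio.run(run_agent())


if __name__ == "__main__":
    print(main())
```

In Docker, making this script the container's `CMD` means the container stops as soon as the agent finishes.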
Is this happening at the Playwright level or inside browser-use? Does anyone know how to stop it? | open | 2024-12-06T08:54:10Z | 2024-12-06T14:01:55Z | https://github.com/browser-use/browser-use/issues/92 | [] | n-sviridenko | 1
fastapi/sqlmodel | pydantic | 37 | FastAPI and Pydantic - Relationships Not Working | ### First Check
- [X] I added a very descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the SQLModel documentation, with the integrated search.
- [X] I already searched in Google "How to X in SQLModel" and didn't find any information.
- [X] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to SQLModel but to [Pydantic](https://github.com/samuelcolvin/pydantic).
- [X] I already checked if it is not related to SQLModel but to [SQLAlchemy](https://github.com/sqlalchemy/sqlalchemy).
### Commit to Help
- [X] I commit to help with one of those options 👆
### Example Code
```python
from typing import List, Optional
from fastapi import Depends, FastAPI, HTTPException, Query
from sqlmodel import Field, Relationship, Session, SQLModel, create_engine, select
class TeamBase(SQLModel):
name: str
headquarters: str
class Team(TeamBase, table=True):
id: Optional[int] = Field(default=None, primary_key=True)
heroes: List["Hero"] = Relationship(back_populates="team")
class TeamCreate(TeamBase):
pass
class TeamRead(TeamBase):
id: int
class TeamUpdate(SQLModel):
id: Optional[int] = None
name: Optional[str] = None
headquarters: Optional[str] = None
class HeroBase(SQLModel):
name: str
secret_name: str
age: Optional[int] = None
team_id: Optional[int] = Field(default=None, foreign_key="team.id")
class Hero(HeroBase, table=True):
id: Optional[int] = Field(default=None, primary_key=True)
team: Optional[Team] = Relationship(back_populates="heroes")
class HeroRead(HeroBase):
id: int
class HeroCreate(HeroBase):
pass
class HeroUpdate(SQLModel):
name: Optional[str] = None
secret_name: Optional[str] = None
age: Optional[int] = None
team_id: Optional[int] = None
class HeroReadWithTeam(HeroRead):
team: Optional[TeamRead] = None
class TeamReadWithHeroes(TeamRead):
heroes: List[HeroRead] = []
sqlite_file_name = "database.db"
sqlite_url = f"sqlite:///{sqlite_file_name}"
connect_args = {"check_same_thread": False}
engine = create_engine(sqlite_url, echo=True, connect_args=connect_args)
def create_db_and_tables():
SQLModel.metadata.create_all(engine)
def get_session():
with Session(engine) as session:
yield session
app = FastAPI()
@app.on_event("startup")
def on_startup():
create_db_and_tables()
@app.post("/heroes/", response_model=HeroRead)
def create_hero(*, session: Session = Depends(get_session), hero: HeroCreate):
db_hero = Hero.from_orm(hero)
session.add(db_hero)
session.commit()
session.refresh(db_hero)
return db_hero
@app.get("/heroes/", response_model=List[HeroRead])
def read_heroes(
*,
session: Session = Depends(get_session),
offset: int = 0,
limit: int = Query(default=100, lte=100),
):
heroes = session.exec(select(Hero).offset(offset).limit(limit)).all()
return heroes
@app.get("/heroes/{hero_id}", response_model=HeroReadWithTeam)
def read_hero(*, session: Session = Depends(get_session), hero_id: int):
hero = session.get(Hero, hero_id)
if not hero:
raise HTTPException(status_code=404, detail="Hero not found")
return hero
@app.patch("/heroes/{hero_id}", response_model=HeroRead)
def update_hero(
*, session: Session = Depends(get_session), hero_id: int, hero: HeroUpdate
):
db_hero = session.get(Hero, hero_id)
if not db_hero:
raise HTTPException(status_code=404, detail="Hero not found")
hero_data = hero.dict(exclude_unset=True)
for key, value in hero_data.items():
setattr(db_hero, key, value)
session.add(db_hero)
session.commit()
session.refresh(db_hero)
return db_hero
@app.delete("/heroes/{hero_id}")
def delete_hero(*, session: Session = Depends(get_session), hero_id: int):
hero = session.get(Hero, hero_id)
if not hero:
raise HTTPException(status_code=404, detail="Hero not found")
session.delete(hero)
session.commit()
return {"ok": True}
@app.post("/teams/", response_model=TeamRead)
def create_team(*, session: Session = Depends(get_session), team: TeamCreate):
db_team = Team.from_orm(team)
session.add(db_team)
session.commit()
session.refresh(db_team)
return db_team
@app.get("/teams/", response_model=List[TeamRead])
def read_teams(
*,
session: Session = Depends(get_session),
offset: int = 0,
limit: int = Query(default=100, lte=100),
):
teams = session.exec(select(Team).offset(offset).limit(limit)).all()
return teams
@app.get("/teams/{team_id}", response_model=TeamReadWithHeroes)
def read_team(*, team_id: int, session: Session = Depends(get_session)):
team = session.get(Team, team_id)
if not team:
raise HTTPException(status_code=404, detail="Team not found")
return team
@app.patch("/teams/{team_id}", response_model=TeamRead)
def update_team(
*,
session: Session = Depends(get_session),
team_id: int,
team: TeamUpdate,
):
db_team = session.get(Team, team_id)
if not db_team:
raise HTTPException(status_code=404, detail="Team not found")
team_data = team.dict(exclude_unset=True)
for key, value in team_data.items():
setattr(db_team, key, value)
session.add(db_team)
session.commit()
session.refresh(db_team)
return db_team
@app.delete("/teams/{team_id}")
def delete_team(*, session: Session = Depends(get_session), team_id: int):
team = session.get(Team, team_id)
if not team:
raise HTTPException(status_code=404, detail="Team not found")
session.delete(team)
session.commit()
return {"ok": True}
```
### Description
Are relationships working for anyone?
I either get null or an empty list.
OK, so I've copied the last full-file preview at https://sqlmodel.tiangolo.com/tutorial/fastapi/relationships/
Running it creates the DB and the foreign key.
Then I inserted the data into the DB.
Checking the docs UI, everything looks great:
<img width="1368" alt="Screenshot 2021-08-26 at 23 33 55" src="https://user-images.githubusercontent.com/11464425/131044799-26f45765-95bf-4528-8353-4277dcfceb3e.png">
But when I do a request for a hero, `team` is `null`
<img width="1400" alt="Screenshot 2021-08-26 at 23 36 39" src="https://user-images.githubusercontent.com/11464425/131044990-e773fe1f-3b3a-48e4-9204-74ce0b14718c.png">
I'm really not sure what's going on, especially since all I've done is copy the code example with no changes.
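One thing worth ruling out first, independent of SQLModel's relationship machinery: inspect the raw rows in `database.db`. If the hero's `team_id` is NULL, the serialized `team` will be `null` no matter which response model is used. A runnable sketch using the stdlib `sqlite3` module (an in-memory database stands in for the real file here):

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # swap in "database.db" for the real check
conn.executescript(
    """
    CREATE TABLE team (id INTEGER PRIMARY KEY, name TEXT, headquarters TEXT);
    CREATE TABLE hero (
        id INTEGER PRIMARY KEY, name TEXT, secret_name TEXT,
        age INTEGER, team_id INTEGER REFERENCES team(id)
    );
    INSERT INTO team (name, headquarters) VALUES ('Z-Force', 'Sister Margaret''s Bar');
    INSERT INTO hero (name, secret_name, team_id) VALUES ('Deadpond', 'Dive Wilson', 1);
    """
)

row = conn.execute("SELECT name, team_id FROM hero WHERE id = 1").fetchone()
print(row)  # a NULL team_id here would explain team: null in the response
```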
### Operating System
Linux
### Operating System Details
_No response_
### SQLModel Version
0.0.4
### Python Version
3.8.2
### Additional Context
_No response_ | closed | 2021-08-26T22:40:52Z | 2024-08-22T16:54:39Z | https://github.com/fastapi/sqlmodel/issues/37 | ["question"] | Chunkford | 24
litestar-org/polyfactory | pydantic | 27 | `OrmarModelFactory.get_field_value` TypeError("object of type 'bool' has no len()") | Hi @Goldziher!
First of all, thanks for this superb library, I just started integrating it into my project and it seems very promising. I stumbled upon a problem, though, and I think it might be a problem with the library itself.
The `OrmarModelFactory` overrides the `get_field_value` method to handle choices field. However, in my model I have a `ormar.ForeignKey` field:
```python
user: Optional[Union[User, Dict]] = ormar.ForeignKey(User)
```
When trying to create an instance of this model using `pydantic_factories`, the aforementioned method raises an error:
```python
@classmethod
def get_field_value(cls, model_field: ModelField) -> Any:
"""
We need to handle here both choices and the fact that ormar sets values to be optional
"""
model_field.required = True
> if hasattr(model_field.field_info, "choices") and len(model_field.field_info.choices) > 0: # type: ignore
E TypeError: object of type 'bool' has no len()
```
The problem is that this model_field actually does have the `choices` attribute on `field_info`, but it is set to `False`. The `hasattr(model_field.field_info, "choices")` check does not account for that and returns `True`, and then `len(False)` obviously fails.
I am not sure if I am thinking correctly, but if so, then simply replacing `hasattr(model_field.field_info, "choices")` with:
```python
getattr(model_field.field_info, "choices", False)
```
will resolve the issue (it did it for me).
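A tiny reproduction of why the `hasattr` check is not enough when ormar sets `choices=False`, using `SimpleNamespace` as a stand-in for the real `field_info` object:

```python
from types import SimpleNamespace

field_info = SimpleNamespace(choices=False)  # stand-in for ormar's field_info

# hasattr() is True even though `choices` is only the flag False ...
assert hasattr(field_info, "choices")
try:
    len(field_info.choices)  # what the original condition ends up doing
except TypeError as exc:
    print(exc)  # object of type 'bool' has no len()

# ... while the getattr-based condition short-circuits before calling len():
choices = getattr(field_info, "choices", False)
has_real_choices = bool(choices) and len(choices) > 0
print(has_real_choices)
```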
If I am not missing anything and my solution is right, I can make a PR tomorrow as well :) It's just a one-line change anyway.
Thanks! | closed | 2022-02-15T21:59:28Z | 2022-02-18T08:20:33Z | https://github.com/litestar-org/polyfactory/issues/27 | [] | mciszczon | 1 |
oegedijk/explainerdashboard | dash | 140 | Tabs are freezing | I have a dataset with 100k rows and 50 columns, and I'm working on a classification use case.
--
Feature Importances
Classification Stats
--
These tabs open in ~1 sec.
---
Individual Predictions
What if...
Feature Dependence
Decision Trees
---
These tabs take ~30 seconds to open... What is the problem? I thought these values were precomputed; aren't they? | closed | 2021-08-10T11:41:13Z | 2021-12-23T15:27:31Z | https://github.com/oegedijk/explainerdashboard/issues/140 | [] | nailcankara | 2
Kanaries/pygwalker | matplotlib | 240 | Running PyGWalker in a Hugging Face space | It would be amazing to test PyGWalker in a [Hugging Face space](https://huggingface.co/spaces), particularly on datasets hosted on the Hub.
| open | 2023-09-25T12:47:33Z | 2023-10-25T14:18:57Z | https://github.com/Kanaries/pygwalker/issues/240 | ["enhancement"] | severo | 15
christabor/flask_jsondash | flask | 38 | Only load js files if the chart for it is enabled in that view. | closed | 2016-08-26T21:45:02Z | 2016-09-09T22:23:53Z | https://github.com/christabor/flask_jsondash/issues/38 | ["enhancement", "performance"] | christabor | 0
HIT-SCIR/ltp | nlp | 428 | Is there a special reason the transformers version in requirement.txt must be 3.2? | transformers==3.2.0 conflicts with the latest transformers release, and installation automatically downgrades transformers. Is there a special reason it must be pinned to version 3.2? Because of the version pin, some of the examples in transformers cannot run. | closed | 2020-10-29T00:03:25Z | 2020-11-02T06:23:11Z | https://github.com/HIT-SCIR/ltp/issues/428 | [] | johnsonice | 1
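A common way to loosen such a pin while still guarding against the transformers 4.x API break is a version range rather than an exact pin; whether LTP actually works with every release in that range is an untested assumption here, not the project's stated policy:

```
# requirements.txt sketch (hypothetical; each version would need testing with LTP)
transformers>=3.2,<4.0
```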
apachecn/ailearning | python | 499 | The "LSTM深入浅出的好文" blog link is dead | The 4th link under Part 2 - Deep Learning Basics - Deep Learning Must-Learn, _LSTM深入浅出的好文: https://blog.csdn.net/roslei/article/details/61912618_, is no longer available. | closed | 2019-04-22T02:09:03Z | 2019-04-26T02:06:29Z | https://github.com/apachecn/ailearning/issues/499 | [] | Sunjk21 | 1
albumentations-team/albumentations | deep-learning | 2439 | [Feature request] Add apply_to_images to ToGray | open | 2025-03-11T01:19:11Z | 2025-03-11T01:19:17Z | https://github.com/albumentations-team/albumentations/issues/2439 | ["enhancement", "good first issue"] | ternaus | 0
junyanz/pytorch-CycleGAN-and-pix2pix | computer-vision | 1646 | Training Parameters or Architecture Settings Recommendations | closed | 2024-04-22T12:21:35Z | 2024-05-03T07:36:13Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1646 | [] | selimceylan | 0