| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
mirumee/ariadne | graphql | 1,051 | Support passing Python `Enum` directly to `make_executable_schema` | `make_executable_schema` could have two lines of magic that would repack `Enum`s to `ariadne.EnumType(enum.__name__, enum).bind_to_schema(schema)` | closed | 2023-03-17T17:02:54Z | 2023-03-20T12:27:06Z | https://github.com/mirumee/ariadne/issues/1051 | [
"enhancement",
"docs"
] | rafalp | 0 |
aminalaee/sqladmin | sqlalchemy | 181 | Show the form fields in the order of `only` | ### Checklist
- [X] There are no similar issues or pull requests for this yet.
### Is your feature related to a problem? Please describe.
`get_model_form()` receives `only` parameter in `Sequence` but ignores its order to build form fields.
### Describe the solution you would like.
`attributes` should align to the order of `only` if specified like:
```python
attributes = []
names = only or mapper.attrs.keys()
for name in names:
if exclude and name in exclude:
continue
attributes.append((name, mapper.attrs[name]))
```
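As a plain-Python illustration of the requested behavior (the `mapper_attrs` dict below is a stand-in for `mapper.attrs`, not sqladmin code):

```python
mapper_attrs = {"id": "col_id", "name": "col_name", "email": "col_email"}

def build_attributes(only=None, exclude=None):
    # Iterate `only` itself (when given) so the resulting field order
    # matches it; fall back to the mapper's natural order otherwise.
    names = only or list(mapper_attrs.keys())
    return [(n, mapper_attrs[n]) for n in names if not (exclude and n in exclude)]

print(build_attributes(only=["email", "id"]))
# -> [('email', 'col_email'), ('id', 'col_id')]
```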
### Describe alternatives you considered
_No response_
### Additional context
_No response_ | closed | 2022-06-16T10:56:08Z | 2022-06-19T08:59:44Z | https://github.com/aminalaee/sqladmin/issues/181 | [] | okapies | 0 |
sebastianruder/NLP-progress | machine-learning | 209 | Semantic parsing / natural language query | It would be really interesting to see a state-of-the-art summary for semantic parsing tasks, like natural-language query into databases. Is that task category too broad to really accommodate comparable metrics? I see [a bunch of papers at stateoftheart.ai](https://www.stateoftheart.ai/?area=Natural%20Language%20Processing&task=Semantic%20Parsing) but the metrics columns there are quite sparse. | closed | 2019-01-16T02:11:23Z | 2019-01-16T16:27:58Z | https://github.com/sebastianruder/NLP-progress/issues/209 | [] | gthb | 2 |
fastapi/sqlmodel | sqlalchemy | 249 | Create Relationships with Unique Fields (UniqueViolationError) | ### First Check
- [X] I added a very descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the SQLModel documentation, with the integrated search.
- [X] I already searched in Google "How to X in SQLModel" and didn't find any information.
- [X] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to SQLModel but to [Pydantic](https://github.com/samuelcolvin/pydantic).
- [X] I already checked if it is not related to SQLModel but to [SQLAlchemy](https://github.com/sqlalchemy/sqlalchemy).
### Commit to Help
- [X] I commit to help with one of those options 👆
### Example Code
```python
# From the SQLModel Tutorial (https://sqlmodel.tiangolo.com/tutorial/relationship-attributes/create-and-update-relationships/)
from typing import List, Optional
from sqlmodel import Field, Relationship, Session, SQLModel, create_engine
class Team(SQLModel, table=True):
# id: Optional[int] = Field(default=None, primary_key=True)
id: int = Field(default=None, primary_key=True) # NEW
name: str = Field(index=True)
heroes: List["Hero"] = Relationship(back_populates="team")
class Hero(SQLModel, table=True):
# id: Optional[int] = Field(default=None, primary_key=True)
id: int = Field(default=None, primary_key=True) # NEW
name: str = Field(index=True)
# team_id: Optional[int] = Field(default=None, foreign_key="team.id")
# team: Optional[Team] = Relationship(back_populates="heroes")
team_id: int = Field(default=None, foreign_key="team.id") # NEW
team: Team = Relationship(back_populates="heroes") # NEW
from sqlalchemy.ext.asyncio import AsyncSession # ADDED: 2022_02_24
# def create_heroes():
async def create_heroes(session: AsyncSession, request: Hero): # NEW, EDITED: 2022_02_24
# with Session(engine) as session: # EDITED
# team_preventers = Team(name="Preventers", headquarters="Sharp Tower") # REMOVE
# team_z_force = Team(name="Z-Force", headquarters="Sister Margaret’s Bar") # REMOVE
assigned_team = Team(name=request.team_to_assign)
new_hero = Hero(
name=request.hero_name,
team=assigned_team
)
session.add(new_hero)
await session.commit() # EDITED: 2022_02_24
await session.refresh(new_hero) # EDITED: 2022_02_24
return new_hero # ADDED: 2022_02_24
# Code below omitted 👇
```
### Description
I'm following the SQLModel tutorial as I implement my own version. I have a model very similar to the above example (derived from the Hero/Team example given in the tutorial on how to implement one-to-many relationships with SQLModel).
When I use this approach, it does write the required Team and Hero objects to my database. However, it does not check the Team table to ensure that the "team_to_assign" from the request object does not already exist. So, if I use the "create_heroes" function (in two separate commits) to create two Heroes who are on the same team, I get two entries for the same team in the Team table. This is not desirable. If the team already exists, the Hero being created should use the id that already exists for that team.
When I implement "sa_column_kwargs={"unique": True}" within the "name" Field of the Team table, I can no longer create a new Hero if they are to be connected to a Team that already exists. I get the error:
> `sqlalchemy.exc.IntegrityError: (sqlalchemy.dialects.postgresql.asyncpg.IntegrityError) <class 'asyncpg.exceptions.UniqueViolationError'>: duplicate key value violates unique constraint "ix_team_name"
> DETAIL: Key (name)=(team_name) already exists.
> [SQL: INSERT INTO "team" (name) VALUES (%s) RETURNING "team".id]`
I was hoping that would somehow tell SQLModel to skip the insertion of a Team that already exists and get the appropriate Team id instead. Clearly it just stops it from happening. SQLModel doesn't appear to check that a Team already exists before inserting it into the Team table.
Am I missing something about how to handle this with SQLModel, or am I meant to employ my own logic to check the Team table prior to generating the Hero object to insert?
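For reference, that "own logic" can be sketched in plain sqlite3 terms (a hypothetical get-or-create helper, not a SQLModel API; with SQLModel the same SELECT-before-INSERT would be done with `select(Team)` before constructing the Hero):

```python
# Hypothetical get-or-create sketch: check the team table by name first
# and reuse the existing id, inserting only when the team is missing.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE team (id INTEGER PRIMARY KEY, name TEXT UNIQUE)")

def get_or_create_team(conn, name):
    row = conn.execute("SELECT id FROM team WHERE name = ?", (name,)).fetchone()
    if row is not None:
        return row[0]  # team exists: reuse its id instead of inserting
    cur = conn.execute("INSERT INTO team (name) VALUES (?)", (name,))
    return cur.lastrowid

first = get_or_create_team(conn, "Preventers")
second = get_or_create_team(conn, "Preventers")
print(first == second)  # True: the second hero reuses the existing team id
```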
Thanks for your time!
### Operating System
Linux
### Operating System Details
_No response_
### SQLModel Version
0.0.6
### Python Version
3.10
### Additional Context
Using async libraries:
SQLAlchemy = {extras = ["asyncio"], version = "^1.4.31"}
asyncpg = "^0.25.0"
| closed | 2022-02-23T17:02:05Z | 2022-03-02T14:08:42Z | https://github.com/fastapi/sqlmodel/issues/249 | [
"question"
] | njdowdy | 11 |
microsoft/MMdnn | tensorflow | 694 | Transfer from Keras to Caffe (error for SeparableConv) | Platform (like ubuntu 16.04/win10): ubuntu 16.04
Python version:python2.7.12
Source framework with version (like Tensorflow 1.4.1 with GPU):keras 2.2.4
Destination framework with version (like CNTK 2.3 with GPU):caffe
Pre-trained model path (webpath or webdisk path):
Running scripts:
mmconvert -sf keras -iw age.h5 --dstNodeName age -df caffe -om mycaffemodel
```
Using TensorFlow backend.
2019-07-12 17:28:20.255156: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 AVX512F FMA
2019-07-12 17:28:20.361369: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1212] Found device 0 with properties:
name: Quadro P2000 major: 6 minor: 1 memoryClockRate(GHz): 1.4805
pciBusID: 0000:65:00.0
totalMemory: 4.94GiB freeMemory: 4.28GiB
2019-07-12 17:28:20.361401: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1312] Adding visible gpu devices: 0
2019-07-12 17:28:20.573748: I tensorflow/core/common_runtime/gpu/gpu_device.cc:993] Creating TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 4032 MB memory) -> physical GPU (device: 0, name: Quadro P2000, pci bus id: 0000:65:00.0, compute capability: 6.1)
IR network structure is saved as [cf3c22ff6bc040c4b67fefb242b96e02.json].
IR network structure is saved as [cf3c22ff6bc040c4b67fefb242b96e02.pb].
IR weights are saved as [cf3c22ff6bc040c4b67fefb242b96e02.npy].
Parse file [cf3c22ff6bc040c4b67fefb242b96e02.pb] with binary format successfully.
CaffeEmitter has not supported operator [SeparableConv].
separable_conv2d_1
CaffeEmitter has not supported operator [SeparableConv].
separable_conv2d_2
CaffeEmitter has not supported operator [SeparableConv].
separable_conv2d_3
CaffeEmitter has not supported operator [SeparableConv].
separable_conv2d_4
CaffeEmitter has not supported operator [SeparableConv].
separable_conv2d_5
CaffeEmitter has not supported operator [SeparableConv].
separable_conv2d_6
CaffeEmitter has not supported operator [SeparableConv].
separable_conv2d_7
CaffeEmitter has not supported operator [SeparableConv].
separable_conv2d_8
Target network code snippet is saved as [cf3c22ff6bc040c4b67fefb242b96e02.py].
Target weights are saved as [cf3c22ff6bc040c4b67fefb242b96e02.npy].
Traceback (most recent call last):
File "/home/bill/.local/bin/mmconvert", line 10, in <module>
sys.exit(_main())
File "/home/bill/.local/lib/python2.7/site-packages/mmdnn/conversion/_script/convert.py", line 112, in _main
dump_code(args.dstFramework, network_filename + '.py', temp_filename + '.npy', args.outputModel, args.dump_tag)
File "/home/bill/.local/lib/python2.7/site-packages/mmdnn/conversion/_script/dump_code.py", line 32, in dump_code
save_model(MainModel, network_filepath, weight_filepath, dump_filepath)
File "/home/bill/.local/lib/python2.7/site-packages/mmdnn/conversion/caffe/saver.py", line 9, in save_model
MainModel.make_net(dump_net)
File "cf3c22ff6bc040c4b67fefb242b96e02.py", line 100, in make_net
n = KitModel()
File "cf3c22ff6bc040c4b67fefb242b96e02.py", line 38, in KitModel
n.batch_normalization_4 = L.BatchNorm(n.separable_conv2d_1, eps=0.0010000000475, use_global_stats=True, ntop=1)
File "/home/bill/E/dl_base/caffe_program/caffe_mobilenet/python/caffe/net_spec.py", line 180, in __getattr__
return self.tops[name]
KeyError: 'separable_conv2d_1'
```
Previously I was able to successfully convert mxnet's depthwise conv to caffe's group conv. How should I handle keras's SeparableConv and convert it to caffe's group conv? Thanks!
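For what it's worth, the weight-layout mapping can be sketched with numpy (shapes are illustrative and the layout names are assumptions, not MMdnn internals). A Keras SeparableConv2D is a depthwise conv — i.e. a group conv with `groups == in_channels` — followed by a 1x1 pointwise conv:

```python
import numpy as np

in_ch, out_ch, k = 32, 64, 3
depthwise = np.zeros((k, k, in_ch, 1))       # Keras depthwise kernel (H, W, in, 1)
pointwise = np.zeros((1, 1, in_ch, out_ch))  # Keras 1x1 pointwise kernel

# Caffe convolution weights are (out, in/groups, kH, kW). With
# groups == in_ch, the depthwise part becomes a group conv weight:
caffe_dw = depthwise.transpose(2, 3, 0, 1)   # -> (32, 1, 3, 3)
caffe_pw = pointwise.transpose(3, 2, 0, 1)   # -> (64, 32, 1, 1)
print(caffe_dw.shape, caffe_pw.shape)
```

So in principle one SeparableConv layer could be emitted as two Caffe layers: a Convolution with `group: in_ch` plus a 1x1 Convolution.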
| open | 2019-07-12T09:50:49Z | 2020-06-24T01:09:25Z | https://github.com/microsoft/MMdnn/issues/694 | [] | zys1994 | 1 |
InstaPy/InstaPy | automation | 6,630 | No way to create posts? | <!-- Did you know that we have a Discord channel ? Join us: https://discord.gg/FDETsht -->
<!-- Is this a Feature Request ? Please, check out our Wiki first https://github.com/timgrossmann/InstaPy/wiki -->
## Expected Behavior
I would think that an Instagram automation library as comprehensive as this one would have a way for one to create posts.
## Current Behavior
However, I am unable to find one.
## Possible Solution (optional)
Perhaps I am mistaken. If not, however, this seems like something that this library should be able to do.
## InstaPy configuration
| closed | 2022-07-07T02:27:52Z | 2022-08-16T21:50:37Z | https://github.com/InstaPy/InstaPy/issues/6630 | [] | kaijif | 1 |
zappa/Zappa | django | 440 | [Migrated] Using Extra Cloundfront Distro to Restrict TLS Protocol and Ciphers? | Originally from: https://github.com/Miserlou/Zappa/issues/1154 by [Erstwild](https://github.com/Erstwild)
I have been working on trying to create a "regulated industry" deployment of Zappa. I have one issue open on putting the DynamoDB instance in a VPC. The other thing I have been working on with AWS is finding the best currently available way to restrict TLS protocols and ciphers. The idea I had, which they also recommended trying, is adding an additional CloudFront distribution with the following settings: https://aws.amazon.com/blogs/aws/cloudfront-update-https-tls-v1-1v1-2-to-the-origin-addmodify-headers/
So essentially this would change the standard:
Client -> Route53 -> CloudFront (Created by API GW) -> API Gateway -> Lambda backend
to this:
Client -> Route53 -> CloudFront (custom distro with TLS settings) -> CloudFront (Created by API GW) -> API Gateway -> Lambda backend
I know this will add some latency, but I am hoping it wont be a showstopper if I can get it working. My question is if there are other issues that will likely arise with this approach/other changes I will need to make? | closed | 2021-02-20T08:34:52Z | 2024-04-13T16:17:58Z | https://github.com/zappa/Zappa/issues/440 | [
"no-activity",
"auto-closed"
] | jneves | 2 |
plotly/dash-bio | dash | 711 | Browser scrolls and zooms simultaneously in NglMoleculeViewer | I am talking about dash_bio.NglMoleculeViewer here:
The browser scrolls down when the mouse is inside the viewer window, so the browser scrolls and zooms simultaneously when zooming into or out of the viewer.
I was told by the original developer that this could be an easy fix by overriding the `stage.mouseControls` of the core NGL viewer.
Can you guys help me with this?
Python version is: 3.10.4
These are the used dependencies:
aiohttp 3.8.3
aiosignal 1.2.0
ansi2html 1.8.0
appnope 0.1.3
argon2-cffi 21.3.0
argon2-cffi-bindings 21.2.0
asttokens 2.0.5
async-timeout 4.0.2
attrs 21.4.0
backcall 0.2.0
beautifulsoup4 4.11.1
biopython 1.79
black 22.8.0
bleach 5.0.0
bokeh 2.4.3
Brotli 1.0.9
certifi 2022.5.18.1
cffi 1.15.0
charset-normalizer 2.0.12
click 8.1.3
cloudpickle 2.1.0
colorcet 3.0.0
colour 0.1.5
cycler 0.11.0
dash 2.6.1
dash-bio 1.0.2
dash-bootstrap-components 1.2.1
dash-core-components 2.0.0
dash-html-components 2.0.0
dash-table 5.0.0
dask 2022.6.0
datashader 0.14.0
datashape 0.5.2
debugpy 1.6.0
decorator 5.1.1
defusedxml 0.7.1
distributed 2022.6.0
docker-pycreds 0.4.0
entrypoints 0.4
executing 0.8.3
fastjsonschema 2.15.3
Flask 2.2.2
Flask-Compress 1.12
fonttools 4.33.3
frozenlist 1.3.1
fsspec 2022.5.0
GEOparse 2.0.3
gitdb 4.0.9
GitPython 3.1.27
h5py 3.7.0
HeapDict 1.0.1
holoviews 1.14.9
idna 3.3
imageio 2.19.3
ipykernel 6.15.0
ipython 8.4.0
ipython-genutils 0.2.0
ipywidgets 7.7.0
itsdangerous 2.1.2
jedi 0.18.1
Jinja2 3.1.2
joblib 1.1.0
jsonschema 4.6.0
jupyter 1.0.0
jupyter-client 7.3.4
jupyter-console 6.4.3
jupyter-core 4.10.0
jupyter-dash 0.4.2
jupyterlab-pygments 0.2.2
jupyterlab-widgets 1.1.0
kaleido 0.2.1
kiwisolver 1.4.3
llvmlite 0.38.1
locket 1.0.0
Markdown 3.3.7
MarkupSafe 2.1.1
matplotlib 3.5.2
matplotlib-inline 0.1.3
mistune 0.8.4
msgpack 1.0.4
multidict 6.0.2
multipledispatch 0.6.0
mypy-extensions 0.4.3
nbclient 0.6.4
nbconvert 6.5.0
nbformat 5.4.0
nest-asyncio 1.5.5
networkx 2.8.4
notebook 6.4.12
numba 0.55.2
numpy 1.21.6
packaging 21.3
pandas 1.4.3
pandocfilters 1.5.0
panel 0.13.1
param 1.12.1
ParmEd 3.4.3
parso 0.8.3
partd 1.2.0
pathspec 0.10.1
pathtools 0.1.2
periodictable 1.6.1
pexpect 4.8.0
pickleshare 0.7.5
Pillow 9.1.1
pip 22.3
platformdirs 2.5.2
plotly 5.8.2
prometheus-client 0.14.1
promise 2.3
prompt-toolkit 3.0.29
protobuf 3.20.1
psutil 5.9.1
ptyprocess 0.7.0
pure-eval 0.2.2
pycparser 2.21
pyct 0.4.8
pyfaidx 0.7.1
Pygments 2.12.0
pynndescent 0.5.7
pyparsing 3.0.9
pyrsistent 0.18.1
python-dateutil 2.8.2
pytz 2022.1
pyviz-comms 2.2.0
PyWavelets 1.3.0
PyYAML 6.0
pyzmq 23.1.0
qtconsole 5.3.1
QtPy 2.1.0
requests 2.28.0
retrying 1.3.3
scikit-image 0.19.3
scikit-learn 1.1.1
scipy 1.8.1
seaborn 0.11.2
Send2Trash 1.8.0
sentry-sdk 1.6.0
setproctitle 1.2.3
setuptools 60.2.0
shortuuid 1.0.9
six 1.16.0
sklearn 0.0
smmap 5.0.0
sortedcontainers 2.4.0
soupsieve 2.3.2.post1
stack-data 0.3.0
tblib 1.7.0
tenacity 8.0.1
terminado 0.15.0
threadpoolctl 3.1.0
tifffile 2022.5.4
tinycss2 1.1.1
tomli 2.0.1
toolz 0.11.2
torch 1.11.0
torchsummary 1.5.1
tornado 6.1
tqdm 4.64.0
traitlets 5.3.0
typing_extensions 4.2.0
umap-learn 0.5.3
urllib3 1.26.9
wandb 0.12.20
wcwidth 0.2.5
webencodings 0.5.1
Werkzeug 2.2.2
wheel 0.37.1
widgetsnbextension 3.6.0
xarray 2022.3.0
yarl 1.8.1
zict 2.2.0
| open | 2022-11-16T08:36:05Z | 2022-11-16T10:20:35Z | https://github.com/plotly/dash-bio/issues/711 | [] | Ento0n | 1 |
nerfstudio-project/nerfstudio | computer-vision | 2,862 | TypeError: __init__() got an unexpected keyword argument 'dataparser' | **Describe the bug**
After installing and setting up the environment, I run the example command `ns-train nerfacto --data data/nerfstudio/poster` and then got the following error.
It worked without error two weeks ago, so I am not sure whether it is due to some inconsistency between versions.
**To Reproduce**
Steps to reproduce the behavior:
1. Download and install the `nerfstudio` repo
2. Run the sample command `ns-train nerfacto --data data/nerfstudio/poster`
3. See error
```
(ns) xxx@xxx:/data/xxx/projects/nerfstudio$ ns-train nerfacto --data data/nerfstudio/poster
Traceback (most recent call last):
File "/home/ruihan/anaconda3/envs/ns/bin/ns-train", line 8, in <module>
sys.exit(entrypoint())
File "/data/ruihan/projects/nerfstudio/nerfstudio/scripts/train.py", line 263, in entrypoint
tyro.cli(
File "/home/ruihan/.local/lib/python3.8/site-packages/tyro/_cli.py", line 187, in cli
output = _cli_impl(
File "/home/ruihan/.local/lib/python3.8/site-packages/tyro/_cli.py", line 454, in _cli_impl
out, consumed_keywords = _calling.call_from_args(
File "/home/ruihan/.local/lib/python3.8/site-packages/tyro/_calling.py", line 157, in call_from_args
value, consumed_keywords_child = call_from_args(
File "/home/ruihan/.local/lib/python3.8/site-packages/tyro/_calling.py", line 122, in call_from_args
value, consumed_keywords_child = call_from_args(
File "/home/ruihan/.local/lib/python3.8/site-packages/tyro/_calling.py", line 122, in call_from_args
value, consumed_keywords_child = call_from_args(
File "/home/ruihan/.local/lib/python3.8/site-packages/tyro/_calling.py", line 247, in call_from_args
return unwrapped_f(*positional_args, **kwargs), consumed_keywords # type: ignore
TypeError: __init__() got an unexpected keyword argument 'dataparser'
```
**Expected behavior**
Expected it to run the nerfacto training.
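Following the reporter's own guess about a version inconsistency, one quick diagnostic (an assumption about the cause, not a confirmed fix) is to check whether Python is importing a stale installed copy of the package instead of the checked-out repo, which would make the CLI pass new config kwargs like `dataparser` to an older class:

```python
import importlib.util

# Print where `nerfstudio` is actually imported from; if this points at
# site-packages rather than the repo checkout, reinstalling with
# `pip install -e .` from the repo should realign config and code.
spec = importlib.util.find_spec("nerfstudio")
print(spec.origin if spec else "nerfstudio not importable")
```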
| closed | 2024-02-01T15:58:27Z | 2024-02-01T21:45:19Z | https://github.com/nerfstudio-project/nerfstudio/issues/2862 | [] | RuihanGao | 6 |
python-restx/flask-restx | flask | 193 | Why was the changelog removed from the repository and the documentation? | **Ask a question**
Why was the changelog removed from the repository and the documentation?
**Additional context**
Hi, I was looking at trying to move a project from flask_restplus==0.10.1 to restx but had a hard time figuring out what had changed. The docs say that it is mostly compatible with restplus, but that is specific to a version. I think relabeling the tags was a bad idea and can be reverted. Also, having the changelog in the docs is not only very common but also a great way to figure out the risks when trying to move from restplus to restx.
| open | 2020-08-07T16:17:54Z | 2020-09-02T18:48:40Z | https://github.com/python-restx/flask-restx/issues/193 | [
"question"
] | avilaton | 3 |
babysor/MockingBird | deep-learning | 204 | Toolbox inference error when using a new corpus | I recently used a new corpus to train the synthesizer (not the corpus provided by the author), without changing the training format. But when I took the model to the toolbox for inference, I ran into the following error.
`Feel free to add your own. You can still use the toolbox by recording samples yourself.
Loaded encoder "pretrained.pt" trained to step 1564501
Synthesizer using device: cpu
Trainable Parameters: 30.875M
Traceback (most recent call last):
File "C:\Users\VC\MockingBird\toolbox\__init__.py", line 123, in <lambda>
func = lambda: self.synthesize() or self.vocode()
File "C:\Users\VC\MockingBird\toolbox\__init__.py", line 237, in synthesize
specs = self.synthesizer.synthesize_spectrograms(texts, embeds)
File "C:\Users\VC\MockingBird\synthesizer\inference.py", line 87, in synthesize_spectrograms
self.load()
File "C:\Users\VC\MockingBird\synthesizer\inference.py", line 65, in load
self._model.load(self.model_fpath)
File "C:\Users\VC\MockingBird\synthesizer\models\tacotron.py", line 497, in load
self.load_state_dict(checkpoint["model_state"])
File "C:\Users\anaconda3\envs\VC-test\lib\site-packages\torch\nn\modules\module.py", line 1407, in load_state_dict
self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for Tacotron:
Unexpected key(s) in state_dict: "gst.encoder.convs.0.weight", "gst.encoder.convs.0.bias", "gst.encoder.convs.1.weight", "gst.encoder.convs.1.bias", "gst.encoder.convs.2.weight", "gst.encoder.convs.2.bias", "gst.encoder.convs.3.weight", "gst.encoder.convs.3.bias", "gst.encoder.convs.4.weight", "gst.encoder.convs.4.bias", "gst.encoder.convs.5.weight", "gst.encoder.convs.5.bias", "gst.encoder.bns.0.weight", "gst.encoder.bns.0.bias", "gst.encoder.bns.0.running_mean", "gst.encoder.bns.0.running_var", "gst.encoder.bns.0.num_batches_tracked", "gst.encoder.bns.1.weight", "gst.encoder.bns.1.bias", "gst.encoder.bns.1.running_mean", "gst.encoder.bns.1.running_var", "gst.encoder.bns.1.num_batches_tracked", "gst.encoder.bns.2.weight", "gst.encoder.bns.2.bias", "gst.encoder.bns.2.running_mean", "gst.encoder.bns.2.running_var", "gst.encoder.bns.2.num_batches_tracked", "gst.encoder.bns.3.weight", "gst.encoder.bns.3.bias", "gst.encoder.bns.3.running_mean", "gst.encoder.bns.3.running_var", "gst.encoder.bns.3.num_batches_tracked", "gst.encoder.bns.4.weight", "gst.encoder.bns.4.bias", "gst.encoder.bns.4.running_mean", "gst.encoder.bns.4.running_var", "gst.encoder.bns.4.num_batches_tracked", "gst.encoder.bns.5.weight", "gst.encoder.bns.5.bias", "gst.encoder.bns.5.running_mean", "gst.encoder.bns.5.running_var", "gst.encoder.bns.5.num_batches_tracked", "gst.encoder.gru.weight_ih_l0", "gst.encoder.gru.weight_hh_l0", "gst.encoder.gru.bias_ih_l0", "gst.encoder.gru.bias_hh_l0", "gst.stl.embed", "gst.stl.attention.W_query.weight", "gst.stl.attention.W_key.weight", "gst.stl.attention.W_value.weight".` | open | 2021-11-09T07:27:15Z | 2021-11-09T13:10:05Z | https://github.com/babysor/MockingBird/issues/204 | [] | gaga820402 | 1 |
pydata/xarray | pandas | 9,815 | Test failure on RISC-V platform | ### What happened?
I am a Fedora packager.
When I run pytest on the RISC-V platform, I get four failures.
### What did you expect to happen?
_No response_
### Minimal Complete Verifiable Example
```Python
pytest xarray/tests/test_backends.py::TestNetCDF4ViaDaskData::test_roundtrip_mask_and_scale
pytest xarray/tests/test_backends.py::TestNetCDF4Data::test_roundtrip_mask_and_scale
```
### MVCE confirmation
- [x] Minimal example — the example is as focused as reasonably possible to demonstrate the underlying issue in xarray.
- [ ] Complete example — the example is self-contained, including all data and the text of any traceback.
- [x] Verifiable example — the example copy & pastes into an IPython prompt or [Binder notebook](https://mybinder.org/v2/gh/pydata/xarray/main?urlpath=lab/tree/doc/examples/blank_template.ipynb), returning the result.
- [ ] New issue — a search of GitHub Issues suggests this is not a duplicate.
- [X] Recent environment — the issue occurs with the latest version of xarray and its dependencies.
### Relevant log output
```Python
XFAIL tests/test_units.py::TestDataArray::test_searchsorted[int64-function_searchsorted-identical_unit] - xarray does not implement __array_function__
XFAIL tests/test_units.py::TestDataArray::test_missing_value_filling[int64-method_bfill] - ffill and bfill lose units in data
XFAIL tests/test_units.py::TestPlots::test_units_in_slice_line_plot_labels_isel[coord_unit1-coord_attrs1] - pint.errors.UnitStrippedWarning
XFAIL tests/test_units.py::TestPlots::test_units_in_line_plot_labels[coord_unit1-coord_attrs1] - indexes don't support units
XFAIL tests/test_variable.py::TestVariableWithDask::test_pad[xr_arg2-np_arg2-median] - median is not implemented by Dask
XFAIL tests/test_units.py::TestDataset::test_missing_value_filling[int64-method_bfill] - ffill and bfill lose the unit
XFAIL tests/test_variable.py::TestVariableWithDask::test_pad[xr_arg0-np_arg0-median] - median is not implemented by Dask
XFAIL tests/test_units.py::TestDataArray::test_interp_reindex_like[int64-method_interp_like-data] - uses scipy
XFAIL tests/test_variable.py::TestVariableWithDask::test_pad[xr_arg2-np_arg2-reflect] - dask.array.pad bug
XFAIL tests/test_units.py::TestDataset::test_interp_reindex_like[float64-method_interp_like-coords] - uses scipy
XFAIL tests/test_variable.py::TestVariableWithDask::test_pad[xr_arg3-np_arg3-median] - median is not implemented by Dask
XFAIL tests/test_variable.py::TestVariableWithDask::test_pad[xr_arg4-np_arg4-median] - median is not implemented by Dask
XFAIL tests/test_units.py::TestDataArray::test_numpy_methods_with_args[float64-no_unit-function_clip] - xarray does not implement __array_function__
XFAIL tests/test_units.py::TestDataset::test_interp_reindex[int64-method_interp-data] - uses scipy
XFAIL tests/test_variable.py::TestVariableWithDask::test_pad[xr_arg3-np_arg3-reflect] - dask.array.pad bug
XFAIL tests/test_units.py::TestDataArray::test_interp_reindex_like[int64-method_interp_like-coords] - uses scipy
XFAIL tests/test_units.py::TestDataset::test_interp_reindex_like[int64-method_interp_like-data] - uses scipy
XFAIL tests/test_units.py::TestDataset::test_interp_reindex[int64-method_interp-coords] - uses scipy
XFAIL tests/test_units.py::TestDataset::test_interp_reindex_like[int64-method_interp_like-coords] - uses scipy
XPASS tests/test_computation.py::test_cross[a6-b6-ae6-be6-cartesian--1-False]
XPASS tests/test_dataarray.py::TestDataArray::test_copy_coords[True-expected_orig0]
XPASS tests/test_dataarray.py::TestDataArray::test_to_dask_dataframe - dask-expr is broken
XPASS tests/test_dataset.py::TestDataset::test_copy_coords[True-expected_orig0]
XPASS tests/test_computation.py::test_cross[a5-b5-ae5-be5-cartesian--1-False]
XPASS tests/test_plot.py::TestImshow::test_dates_are_concise - Failing inside matplotlib. Should probably be fixed upstream because other plot functions can handle it. Remove this test when it works, already in Common2dMixin
XPASS tests/test_units.py::TestVariable::test_computation[int64-method_rolling_window-numbagg] - converts to ndarray
XPASS tests/test_units.py::TestVariable::test_computation[int64-method_rolling_window-None] - converts to ndarray
XPASS tests/test_units.py::TestVariable::test_computation[float64-method_rolling_window-numbagg] - converts to ndarray
XPASS tests/test_units.py::TestVariable::test_computation[float64-method_rolling_window-None] - converts to ndarray
XPASS tests/test_units.py::TestDataset::test_computation_objects[float64-coords-method_rolling] - strips units
XPASS tests/test_units.py::TestDataset::test_computation_objects[int64-coords-method_rolling] - strips units
XPASS tests/test_variable.py::TestVariableWithDask::test_pad[xr_arg0-np_arg0-reflect] - dask.array.pad bug
XPASS tests/test_variable.py::TestVariableWithDask::test_pad[xr_arg1-np_arg1-reflect] - dask.array.pad bug
FAILED tests/test_backends.py::TestNetCDF4ViaDaskData::test_roundtrip_mask_and_scale[dtype0-create_unsigned_false_masked_scaled_data-create_encoded_unsigned_false_masked_scaled_data]
FAILED tests/test_backends.py::TestNetCDF4Data::test_roundtrip_mask_and_scale[dtype0-create_unsigned_false_masked_scaled_data-create_encoded_unsigned_false_masked_scaled_data]
FAILED tests/test_backends.py::TestNetCDF4ViaDaskData::test_roundtrip_mask_and_scale[dtype1-create_unsigned_false_masked_scaled_data-create_encoded_unsigned_false_masked_scaled_data]
FAILED tests/test_backends.py::TestNetCDF4Data::test_roundtrip_mask_and_scale[dtype1-create_unsigned_false_masked_scaled_data-create_encoded_unsigned_false_masked_scaled_data]
```
### Anything else we need to know?
TestNetCDF4ViaDaskData::test_roundtrip_mask_and_scale

TestNetCDF4Data::test_roundtrip_mask_and_scale

### Environment
<details>
commit: 1bb867d573390509dbc0379f0fd318a6985dab45
python: 3.13.0 (main, Oct 8 2024, 00:00:00) [GCC 14.2.1 20240912 (Red Hat 14.2.1-3)]
python-bits: 64
OS: Linux
OS-release: 6.1.55
machine: riscv64
processor:
byteorder: little
LC_ALL: None
LANG: C.UTF-8
LOCALE: ('C', 'UTF-8')
libhdf5: 1.12.1
libnetcdf: 4.9.2
xarray: 2024.11.0
pandas: 2.2.1
numpy: 1.26.4
scipy: 1.11.3
netCDF4: 1.7.1
pydap: None
h5netcdf: None
h5py: None
zarr: 2.18.3
cftime: 1.6.4
nc_time_axis: None
iris: None
bottleneck: 1.3.7
dask: 2024.9.1
distributed: None
matplotlib: 3.9.1
cartopy: None
seaborn: 0.13.2
numbagg: None
fsspec: 2024.3.1
cupy: None
pint: 0.24.4
sparse: None
flox: None
numpy_groupies: None
setuptools: 69.2.0
pip: 24.2
conda: None
pytest: 8.3.3
mypy: None
IPython: None
sphinx: 7.3.7
</details>
| open | 2024-11-24T10:21:51Z | 2025-02-16T18:37:58Z | https://github.com/pydata/xarray/issues/9815 | [
"bug",
"needs triage"
] | U2FsdGVkX1 | 14 |
InstaPy/InstaPy | automation | 6,628 | Not working view-source:https://freegeoip.app/json | <!-- Did you know that we have a Discord channel ? Join us: https://discord.gg/FDETsht -->
<!-- Is this a Feature Request ? Please, check out our Wiki first https://github.com/timgrossmann/InstaPy/wiki -->
## Expected Behavior
Log in
## Current Behavior
I'm trying to log in, but the request to get the IP doesn't work anymore. The request to the ipbase endpoint returns a 404.


## Possible Solution (optional)
Switch to another free IP geolocation API
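A hedged sketch of that suggestion: try a small list of alternative free geo-IP JSON endpoints instead of the defunct freegeoip.app. The exact endpoints and their response fields are assumptions for illustration, not InstaPy code.

```python
import json
import urllib.request

FALLBACK_ENDPOINTS = [
    "https://ipapi.co/json/",   # assumed alternatives; any free geo-IP
    "https://ipinfo.io/json",   # JSON endpoint would fit the same slot
]

def get_location(opener=urllib.request.urlopen):
    for url in FALLBACK_ENDPOINTS:
        try:
            with opener(url, timeout=5) as resp:
                return json.load(resp)
        except Exception:
            continue  # endpoint down or rate-limited: try the next one
    return None
```

The injectable `opener` makes the helper testable without network access.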
## InstaPy configuration
```
session = InstaPy(
username=insta_username,
password=insta_password,
headless_browser=False,
)
```
| open | 2022-07-02T19:19:10Z | 2022-11-23T05:38:17Z | https://github.com/InstaPy/InstaPy/issues/6628 | [] | Arwiim | 2 |
flairNLP/flair | pytorch | 2,887 | issue loading MultiTagger (unpickling error) | I am using HunFlair and getting the error below when running it. Can anyone suggest the next step?
Traceback (most recent call last):
File "hunflair.py", line 35, in <module>
hunflair = MultiTagger.load("hunflair")
File "/home/sgu260/.local/lib/python3.8/site-packages/flair/models/sequence_tagger_model.py", line 1079, in load
model = SequenceTagger.load(model_name)
File "/home/sgu260/.local/lib/python3.8/site-packages/flair/nn/model.py", line 142, in load
state = torch.load(f, map_location="cpu")
File "/home/sgu260/.local/lib/python3.8/site-packages/torch/serialization.py", line 713, in load
return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
File "/home/sgu260/.local/lib/python3.8/site-packages/torch/serialization.py", line 930, in _legacy_load
result = unpickler.load()
_pickle.UnpicklingError: pickle data was truncated
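One thing worth trying: "pickle data was truncated" usually indicates an interrupted model download, so deleting the cached checkpoint and letting flair re-download it often resolves it. The cache path below is an assumption (flair's default cache root), not something stated in this issue.

```python
from pathlib import Path

# Assumed default flair cache location; inspect the files, then delete the
# suspiciously small/truncated checkpoint and re-run the model load.
cache_dir = Path.home() / ".flair" / "models"
suspects = sorted(cache_dir.glob("**/*.pt")) if cache_dir.exists() else []
for path in suspects:
    print(path)
```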
| closed | 2022-08-05T18:49:56Z | 2023-01-07T13:48:14Z | https://github.com/flairNLP/flair/issues/2887 | [
"question",
"wontfix"
] | shashank140195 | 1 |
huggingface/datasets | computer-vision | 7,357 | Python process aborded with GIL issue when using image dataset | ### Describe the bug
The issue is visible only with the latest `datasets==3.2.0`.
When using image dataset the Python process gets aborted right before the exit with the following error:
```
Fatal Python error: PyGILState_Release: thread state 0x7fa1f409ade0 must be current when releasing
Python runtime state: finalizing (tstate=0x0000000000ad2958)
Thread 0x00007fa33d157740 (most recent call first):
<no Python frame>
Extension modules: numpy.core._multiarray_umath, numpy.core._multiarray_tests, numpy.linalg._umath_linalg, numpy.fft._pocketfft_internal, numpy.random._common, numpy.random.bit_generator, numpy.random._boun
ded_integers, numpy.random._mt19937, numpy.random.mtrand, numpy.random._philox, numpy.random._pcg64, numpy.random._sfc64, numpy.random._generator, pyarrow.lib, pandas._libs.tslibs.ccalendar, pandas._libs.ts
libs.np_datetime, pandas._libs.tslibs.dtypes, pandas._libs.tslibs.base, pandas._libs.tslibs.nattype, pandas._libs.tslibs.timezones, pandas._libs.tslibs.fields, pandas._libs.tslibs.timedeltas, pandas._libs.t
slibs.tzconversion, pandas._libs.tslibs.timestamps, pandas._libs.properties, pandas._libs.tslibs.offsets, pandas._libs.tslibs.strptime, pandas._libs.tslibs.parsing, pandas._libs.tslibs.conversion, pandas._l
ibs.tslibs.period, pandas._libs.tslibs.vectorized, pandas._libs.ops_dispatch, pandas._libs.missing, pandas._libs.hashtable, pandas._libs.algos, pandas._libs.interval, pandas._libs.lib, pyarrow._compute, pan
das._libs.ops, pandas._libs.hashing, pandas._libs.arrays, pandas._libs.tslib, pandas._libs.sparse, pandas._libs.internals, pandas._libs.indexing, pandas._libs.index, pandas._libs.writers, pandas._libs.join,
pandas._libs.window.aggregations, pandas._libs.window.indexers, pandas._libs.reshape, pandas._libs.groupby, pandas._libs.json, pandas._libs.parsers, pandas._libs.testing, charset_normalizer.md, requests.pa
ckages.charset_normalizer.md, requests.packages.chardet.md, yaml._yaml, markupsafe._speedups, PIL._imaging, torch._C, torch._C._dynamo.autograd_compiler, torch._C._dynamo.eval_frame, torch._C._dynamo.guards
, torch._C._dynamo.utils, torch._C._fft, torch._C._linalg, torch._C._nested, torch._C._nn, torch._C._sparse, torch._C._special, sentencepiece._sentencepiece, sklearn.__check_build._check_build, psutil._psut
il_linux, psutil._psutil_posix, scipy._lib._ccallback_c, scipy.sparse._sparsetools, _csparsetools, scipy.sparse._csparsetools, scipy.linalg._fblas, scipy.linalg._flapack, scipy.linalg.cython_lapack, scipy.l
inalg._cythonized_array_utils, scipy.linalg._solve_toeplitz, scipy.linalg._decomp_lu_cython, scipy.linalg._matfuncs_sqrtm_triu, scipy.linalg.cython_blas, scipy.linalg._matfuncs_expm, scipy.linalg._decomp_up
date, scipy.sparse.linalg._dsolve._superlu, scipy.sparse.linalg._eigen.arpack._arpack, scipy.sparse.linalg._propack._spropack, scipy.sparse.linalg._propack._dpropack, scipy.sparse.linalg._propack._cpropack,
scipy.sparse.linalg._propack._zpropack, scipy.sparse.csgraph._tools, scipy.sparse.csgraph._shortest_path, scipy.sparse.csgraph._traversal, scipy.sparse.csgraph._min_spanning_tree, scipy.sparse.csgraph._flo
w, scipy.sparse.csgraph._matching, scipy.sparse.csgraph._reordering, scipy.special._ufuncs_cxx, scipy.special._ufuncs, scipy.special._specfun, scipy.special._comb, scipy.special._ellip_harm_2, scipy.spatial
._ckdtree, scipy._lib.messagestream, scipy.spatial._qhull, scipy.spatial._voronoi, scipy.spatial._distance_wrap, scipy.spatial._hausdorff, scipy.spatial.transform._rotation, scipy.optimize._group_columns, s
cipy.optimize._trlib._trlib, scipy.optimize._lbfgsb, _moduleTNC, scipy.optimize._moduleTNC, scipy.optimize._cobyla, scipy.optimize._slsqp, scipy.optimize._minpack, scipy.optimize._lsq.givens_elimination, sc
ipy.optimize._zeros, scipy.optimize._highs.cython.src._highs_wrapper, scipy.optimize._highs._highs_wrapper, scipy.optimize._highs.cython.src._highs_constants, scipy.optimize._highs._highs_constants, scipy.l
inalg._interpolative, scipy.optimize._bglu_dense, scipy.optimize._lsap, scipy.optimize._direct, scipy.integrate._odepack, scipy.integrate._quadpack, scipy.integrate._vode, scipy.integrate._dop, scipy.integr
ate._lsoda, scipy.interpolate._fitpack, scipy.interpolate._dfitpack, scipy.interpolate._bspl, scipy.interpolate._ppoly, scipy.interpolate.interpnd, scipy.interpolate._rbfinterp_pythran, scipy.interpolate._r
gi_cython, scipy.special.cython_special, scipy.stats._stats, scipy.stats._biasedurn, scipy.stats._levy_stable.levyst, scipy.stats._stats_pythran, scipy._lib._uarray._uarray, scipy.stats._ansari_swilk_statis
tics, scipy.stats._sobol, scipy.stats._qmc_cy, scipy.stats._mvn, scipy.stats._rcont.rcont, scipy.stats._unuran.unuran_wrapper, scipy.ndimage._nd_image, _ni_label, scipy.ndimage._ni_label, sklearn.utils._isf
inite, sklearn.utils.sparsefuncs_fast, sklearn.utils.murmurhash, sklearn.utils._openmp_helpers, sklearn.metrics.cluster._expected_mutual_info_fast, sklearn.preprocessing._csr_polynomial_expansion, sklearn.p
reprocessing._target_encoder_fast, sklearn.metrics._dist_metrics, sklearn.metrics._pairwise_distances_reduction._datasets_pair, sklearn.utils._cython_blas, sklearn.metrics._pairwise_distances_reduction._bas
e, sklearn.metrics._pairwise_distances_reduction._middle_term_computer, sklearn.utils._heap, sklearn.utils._sorting, sklearn.metrics._pairwise_distances_reduction._argkmin, sklearn.metrics._pairwise_distanc
es_reduction._argkmin_classmode, sklearn.utils._vector_sentinel, sklearn.metrics._pairwise_distances_reduction._radius_neighbors, sklearn.metrics._pairwise_distances_reduction._radius_neighbors_classmode, s
klearn.metrics._pairwise_fast, PIL._imagingft, google._upb._message, h5py._errors, h5py.defs, h5py._objects, h5py.h5, h5py.utils, h5py.h5t, h5py.h5s, h5py.h5ac, h5py.h5p, h5py.h5r, h5py._proxy, h5py._conv,
h5py.h5z, h5py.h5a, h5py.h5d, h5py.h5ds, h5py.h5g, h5py.h5i, h5py.h5o, h5py.h5f, h5py.h5fd, h5py.h5pl, h5py.h5l, h5py._selector, _cffi_backend, pyarrow._parquet, pyarrow._fs, pyarrow._azurefs, pyarrow._hdfs
, pyarrow._gcsfs, pyarrow._s3fs, multidict._multidict, propcache._helpers_c, yarl._quoting_c, aiohttp._helpers, aiohttp._http_writer, aiohttp._http_parser, aiohttp._websocket, frozenlist._frozenlist, xxhash
._xxhash, pyarrow._json, pyarrow._acero, pyarrow._csv, pyarrow._dataset, pyarrow._dataset_orc, pyarrow._parquet_encryption, pyarrow._dataset_parquet_encryption, pyarrow._dataset_parquet, regex._regex, scipy
.io.matlab._mio_utils, scipy.io.matlab._streams, scipy.io.matlab._mio5_utils, PIL._imagingmath, PIL._webp (total: 236)
Aborted (core dumped)
```
### Steps to reproduce the bug
Install `datasets==3.2.0`
Run the following script:
```python
import datasets
DATASET_NAME = "phiyodr/InpaintCOCO"
NUM_SAMPLES = 10
def preprocess_fn(example):
return {
"prompts": example["inpaint_caption"],
"images": example["coco_image"],
"masks": example["mask"],
}
default_dataset = datasets.load_dataset(
DATASET_NAME, split="test", streaming=True
).filter(lambda example: example["inpaint_caption"] != "").take(NUM_SAMPLES)
test_data = default_dataset.map(
lambda x: preprocess_fn(x), remove_columns=default_dataset.column_names
)
for data in test_data:
print(data["prompts"])
```
### Expected behavior
The script should not hang or crash.
### Environment info
- `datasets` version: 3.2.0
- Platform: Linux-5.15.0-50-generic-x86_64-with-glibc2.31
- Python version: 3.11.0
- `huggingface_hub` version: 0.25.1
- PyArrow version: 17.0.0
- Pandas version: 2.2.3
- `fsspec` version: 2024.2.0 | open | 2025-01-06T11:29:30Z | 2025-03-08T15:59:36Z | https://github.com/huggingface/datasets/issues/7357 | [] | AlexKoff88 | 1 |
mlfoundations/open_clip | computer-vision | 2 | Avenue for exploration - augmenting training set with colour palettes / texture names /more meta data | so part of the fun with clip is using it in conjunction with VQGAN.
This allows the prompts to generate images.
There's something lost in this translation. though .
They say a picture is worth a 1000 words - but what if some extra data was injected into the training ?
could be say textures / maybe even geometric descptions / meta data
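One way to sketch this metadata-injection idea is to derive colour-palette names for an image and append them to its caption before training. The snippet below is a toy illustration: the colour table, the hard-coded RGB palettes, and the caption format are all made up for demonstration, and real pixel statistics would come from the image itself.

```python
# Toy caption augmentation: append nearest named colours from an image palette.
NAMED_COLOURS = {
    "red": (255, 0, 0), "green": (0, 128, 0), "blue": (0, 0, 255),
    "white": (255, 255, 255), "black": (0, 0, 0),
}

def nearest_colour_name(rgb):
    # Squared Euclidean distance in RGB space picks the closest named colour.
    return min(NAMED_COLOURS,
               key=lambda n: sum((a - b) ** 2 for a, b in zip(NAMED_COLOURS[n], rgb)))

def augment_caption(caption, palette):
    # De-duplicate and sort the names so augmented captions are deterministic.
    names = sorted({nearest_colour_name(c) for c in palette})
    return f"{caption} | palette: {', '.join(names)}"

print(augment_caption("a boat on a lake", [(250, 5, 5), (10, 10, 240)]))
# a boat on a lake | palette: blue, red
```

The same pattern extends to texture names or geometric descriptors: compute them offline, serialise them into the caption string, and let the text encoder see them during training.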
| closed | 2021-07-30T11:03:50Z | 2021-07-30T19:52:20Z | https://github.com/mlfoundations/open_clip/issues/2 | [] | johndpope | 1 |
SciTools/cartopy | matplotlib | 2,088 | cartopy crashes with 0.21.0 | Hi,
The simplest cartopy script crashes when cartopy is installed at the 0.21.0 release
with `pip install cartopy==0.21.0`:
```
import cartopy.crs as ccrs
import matplotlib.pyplot as plt
ax = plt.axes(projection=ccrs.PlateCarree())
ax.coastlines()
plt.show()
```
produces a core dump.
Concerned installed modules are:
```
python 3.9.13
matplotlib 3.6.0
Cartopy 0.21.0
``` | closed | 2022-09-26T17:57:56Z | 2022-09-27T07:35:54Z | https://github.com/SciTools/cartopy/issues/2088 | [] | PBrockmann | 2 |
Lightning-AI/pytorch-lightning | pytorch | 20,276 | Lightning place model inputs and model to different devices | ### Bug description
In the following code snippet, `lmm` is a class inheriting from `nn.Module` that wraps a Hugging Face model and processor.
```python
class ICVModel(pl.LightningModule):
def __init__(self, lmm, icv_encoder: torch.nn.Module) -> None:
super().__init__()
self.lmm = lmm
self.lmm.requires_grad_(False)
self.icv_encoder = icv_encoder
self.eos_token = self.lmm.processor.tokenizer.eos_token
def forward(self, ice_texts, query_texts, answers, images):
query_answer = [
query + answer + self.eos_token
for query, answer in zip(query_texts, answers)
]
query_images = [img[-setting.num_image_in_query :] for img in images]
query_inputs = self.lmm.process_input(query_answer, query_images)
query_outputs = self.lmm.model(
**query_inputs,
labels=query_inputs["input_ids"],
)
```
However, a device mismatch error raised at
```python
query_outputs = self.lmm.model(
**query_inputs,
labels=query_inputs["input_ids"],
)
```
I printed `inputs.pixel_values.device`, `self.device`, and `self.lmm.device` outside of `lmm.model.forward`, and got
```
rank[0]: cpu cuda:0 cuda:0
rank[1]: cpu cuda:1 cuda:1
```
In the Idefics (`self.lmm.model`) forward pass, when I printed `inputs.pixel_values.device` and `self.device`, I got
```
rank[0]: cuda:0 cuda:0
rank[1]: cuda:0 cuda:1
```
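A common mitigation for this class of mismatch, assuming the inputs really are built on the wrong device, is to move every tensor in `query_inputs` onto the module's own device right before the forward call (e.g. `{k: v.to(self.device) for k, v in query_inputs.items()}`). The sketch below shows that recursive move in a framework-free way; `FakeTensor` is purely illustrative, and in real code the leaves would be `torch.Tensor`s and `device` would be `self.device`.

```python
def to_device(obj, device):
    """Recursively move anything with a .to(device) method; descend into dicts/lists."""
    if hasattr(obj, "to"):
        return obj.to(device)
    if isinstance(obj, dict):
        return {k: to_device(v, device) for k, v in obj.items()}
    if isinstance(obj, (list, tuple)):
        return type(obj)(to_device(v, device) for v in obj)
    return obj

class FakeTensor:
    """Stand-in for torch.Tensor: just records which device it lives on."""
    def __init__(self, device="cpu"):
        self.device = device
    def to(self, device):
        return FakeTensor(device)

batch = {"input_ids": FakeTensor(), "pixel_values": [FakeTensor(), FakeTensor()]}
moved = to_device(batch, "cuda:1")
print(moved["input_ids"].device, moved["pixel_values"][0].device)  # cuda:1 cuda:1
```

This is a workaround sketch rather than a root-cause fix: it does not explain why `process_input` lands on the wrong device on rank 1 in the first place.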
### What version are you seeing the problem on?
v2.4
### How to reproduce the bug
_No response_
### Error messages and logs
Full trace stack:
```
[rank1]: File "/home/jyc/ICLTestbed/dev/train.py", line 103, in <module>
[rank1]: main()
[rank1]: File "/home/jyc/ICLTestbed/dev/train.py", line 72, in main
[rank1]: trainer.fit(
[rank1]: File "/home/jyc/miniconda3/envs/icl/lib/python3.12/site-packages/pytorch_lightning/trainer/trainer.py", line 538, in fit
[rank1]: call._call_and_handle_interrupt(
[rank1]: File "/home/jyc/miniconda3/envs/icl/lib/python3.12/site-packages/pytorch_lightning/trainer/call.py", line 46, in _call_and_handle_interrupt
[rank1]: return trainer.strategy.launcher.launch(trainer_fn, *args, trainer=trainer, **kwargs)
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/home/jyc/miniconda3/envs/icl/lib/python3.12/site-packages/pytorch_lightning/strategies/launchers/subprocess_script.py", line 105, in launch
[rank1]: return function(*args, **kwargs)
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/home/jyc/miniconda3/envs/icl/lib/python3.12/site-packages/pytorch_lightning/trainer/trainer.py", line 574, in _fit_impl
[rank1]: self._run(model, ckpt_path=ckpt_path)
[rank1]: File "/home/jyc/miniconda3/envs/icl/lib/python3.12/site-packages/pytorch_lightning/trainer/trainer.py", line 981, in _run
[rank1]: results = self._run_stage()
[rank1]: ^^^^^^^^^^^^^^^^^
[rank1]: File "/home/jyc/miniconda3/envs/icl/lib/python3.12/site-packages/pytorch_lightning/trainer/trainer.py", line 1025, in _run_stage
[rank1]: self.fit_loop.run()
[rank1]: File "/home/jyc/miniconda3/envs/icl/lib/python3.12/site-packages/pytorch_lightning/loops/fit_loop.py", line 205, in run
[rank1]: self.advance()
[rank1]: File "/home/jyc/miniconda3/envs/icl/lib/python3.12/site-packages/pytorch_lightning/loops/fit_loop.py", line 363, in advance
[rank1]: self.epoch_loop.run(self._data_fetcher)
[rank1]: File "/home/jyc/miniconda3/envs/icl/lib/python3.12/site-packages/pytorch_lightning/loops/training_epoch_loop.py", line 140, in run
[rank1]: self.advance(data_fetcher)
[rank1]: File "/home/jyc/miniconda3/envs/icl/lib/python3.12/site-packages/pytorch_lightning/loops/training_epoch_loop.py", line 250, in advance
[rank1]: batch_output = self.automatic_optimization.run(trainer.optimizers[0], batch_idx, kwargs)
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/home/jyc/miniconda3/envs/icl/lib/python3.12/site-packages/pytorch_lightning/loops/optimization/automatic.py", line 190, in run
[rank1]: self._optimizer_step(batch_idx, closure)
[rank1]: File "/home/jyc/miniconda3/envs/icl/lib/python3.12/site-packages/pytorch_lightning/loops/optimization/automatic.py", line 268, in _optimizer_step
[rank1]: call._call_lightning_module_hook(
[rank1]: File "/home/jyc/miniconda3/envs/icl/lib/python3.12/site-packages/pytorch_lightning/trainer/call.py", line 167, in _call_lightning_module_hook
[rank1]: output = fn(*args, **kwargs)
[rank1]: ^^^^^^^^^^^^^^^^^^^
[rank1]: File "/home/jyc/miniconda3/envs/icl/lib/python3.12/site-packages/pytorch_lightning/core/module.py", line 1306, in optimizer_step
[rank1]: optimizer.step(closure=optimizer_closure)
[rank1]: File "/home/jyc/miniconda3/envs/icl/lib/python3.12/site-packages/pytorch_lightning/core/optimizer.py", line 153, in step
[rank1]: step_output = self._strategy.optimizer_step(self._optimizer, closure, **kwargs)
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/home/jyc/miniconda3/envs/icl/lib/python3.12/site-packages/pytorch_lightning/strategies/ddp.py", line 270, in optimizer_step
[rank1]: optimizer_output = super().optimizer_step(optimizer, closure, model, **kwargs)
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/home/jyc/miniconda3/envs/icl/lib/python3.12/site-packages/pytorch_lightning/strategies/strategy.py", line 238, in optimizer_step
[rank1]: return self.precision_plugin.optimizer_step(optimizer, model=model, closure=closure, **kwargs)
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/home/jyc/miniconda3/envs/icl/lib/python3.12/site-packages/pytorch_lightning/plugins/precision/deepspeed.py", line 129, in optimizer_step
[rank1]: closure_result = closure()
[rank1]: ^^^^^^^^^
[rank1]: File "/home/jyc/miniconda3/envs/icl/lib/python3.12/site-packages/pytorch_lightning/loops/optimization/automatic.py", line 144, in __call__
[rank1]: self._result = self.closure(*args, **kwargs)
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/home/jyc/miniconda3/envs/icl/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
[rank1]: return func(*args, **kwargs)
[rank1]: ^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/home/jyc/miniconda3/envs/icl/lib/python3.12/site-packages/pytorch_lightning/loops/optimization/automatic.py", line 129, in closure
[rank1]: step_output = self._step_fn()
[rank1]: ^^^^^^^^^^^^^^^
[rank1]: File "/home/jyc/miniconda3/envs/icl/lib/python3.12/site-packages/pytorch_lightning/loops/optimization/automatic.py", line 317, in _training_step
[rank1]: training_step_output = call._call_strategy_hook(trainer, "training_step", *kwargs.values())
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/home/jyc/miniconda3/envs/icl/lib/python3.12/site-packages/pytorch_lightning/trainer/call.py", line 319, in _call_strategy_hook
[rank1]: output = fn(*args, **kwargs)
[rank1]: ^^^^^^^^^^^^^^^^^^^
[rank1]: File "/home/jyc/miniconda3/envs/icl/lib/python3.12/site-packages/pytorch_lightning/strategies/strategy.py", line 389, in training_step
[rank1]: return self._forward_redirection(self.model, self.lightning_module, "training_step", *args, **kwargs)
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/home/jyc/miniconda3/envs/icl/lib/python3.12/site-packages/pytorch_lightning/strategies/strategy.py", line 640, in __call__
[rank1]: wrapper_output = wrapper_module(*args, **kwargs)
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/home/jyc/miniconda3/envs/icl/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[rank1]: return self._call_impl(*args, **kwargs)
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/home/jyc/miniconda3/envs/icl/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[rank1]: return forward_call(*args, **kwargs)
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/home/jyc/miniconda3/envs/icl/lib/python3.12/site-packages/deepspeed/utils/nvtx.py", line 18, in wrapped_fn
[rank1]: ret_val = func(*args, **kwargs)
[rank1]: ^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/home/jyc/miniconda3/envs/icl/lib/python3.12/site-packages/deepspeed/runtime/engine.py", line 1899, in forward
[rank1]: loss = self.module(*inputs, **kwargs)
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/home/jyc/miniconda3/envs/icl/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[rank1]: return self._call_impl(*args, **kwargs)
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/home/jyc/miniconda3/envs/icl/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[rank1]: return forward_call(*args, **kwargs)
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/home/jyc/miniconda3/envs/icl/lib/python3.12/site-packages/pytorch_lightning/strategies/strategy.py", line 633, in wrapped_forward
[rank1]: out = method(*_args, **_kwargs)
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/home/jyc/ICLTestbed/dev/icv_model.py", line 89, in training_step
[rank1]: loss_dict = self(**batch)
[rank1]: ^^^^^^^^^^^^^
[rank1]: File "/home/jyc/miniconda3/envs/icl/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[rank1]: return self._call_impl(*args, **kwargs)
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/home/jyc/miniconda3/envs/icl/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[rank1]: return forward_call(*args, **kwargs)
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/home/jyc/ICLTestbed/dev/icv_model.py", line 42, in forward
[rank1]: query_outputs = self.lmm.model(
[rank1]: ^^^^^^^^^^^^^^^
[rank1]: File "/home/jyc/miniconda3/envs/icl/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[rank1]: return self._call_impl(*args, **kwargs)
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/home/jyc/miniconda3/envs/icl/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[rank1]: return forward_call(*args, **kwargs)
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/home/jyc/miniconda3/envs/icl/lib/python3.12/site-packages/accelerate/hooks.py", line 170, in new_forward
[rank1]: output = module._old_forward(*args, **kwargs)
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/home/jyc/miniconda3/envs/icl/lib/python3.12/site-packages/transformers/models/idefics/modeling_idefics.py", line 1493, in forward
[rank1]: outputs = self.model(
[rank1]: ^^^^^^^^^^^
[rank1]: File "/home/jyc/miniconda3/envs/icl/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[rank1]: return self._call_impl(*args, **kwargs)
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/home/jyc/miniconda3/envs/icl/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[rank1]: return forward_call(*args, **kwargs)
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/home/jyc/miniconda3/envs/icl/lib/python3.12/site-packages/accelerate/hooks.py", line 170, in new_forward
[rank1]: output = module._old_forward(*args, **kwargs)
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/home/jyc/miniconda3/envs/icl/lib/python3.12/site-packages/transformers/models/idefics/modeling_idefics.py", line 1181, in forward
[rank1]: image_hidden_states = self.vision_model(
[rank1]: ^^^^^^^^^^^^^^^^^^
[rank1]: File "/home/jyc/miniconda3/envs/icl/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[rank1]: return self._call_impl(*args, **kwargs)
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/home/jyc/miniconda3/envs/icl/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[rank1]: return forward_call(*args, **kwargs)
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/home/jyc/miniconda3/envs/icl/lib/python3.12/site-packages/accelerate/hooks.py", line 170, in new_forward
[rank1]: output = module._old_forward(*args, **kwargs)
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/home/jyc/miniconda3/envs/icl/lib/python3.12/site-packages/transformers/models/idefics/vision.py", line 467, in forward
[rank1]: hidden_states = self.embeddings(pixel_values, interpolate_pos_encoding=interpolate_pos_encoding)
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/home/jyc/miniconda3/envs/icl/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[rank1]: return self._call_impl(*args, **kwargs)
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/home/jyc/miniconda3/envs/icl/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[rank1]: return forward_call(*args, **kwargs)
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/home/jyc/miniconda3/envs/icl/lib/python3.12/site-packages/accelerate/hooks.py", line 170, in new_forward
[rank1]: output = module._old_forward(*args, **kwargs)
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/home/jyc/miniconda3/envs/icl/lib/python3.12/site-packages/transformers/models/idefics/vision.py", line 147, in forward
[rank1]: patch_embeds = self.patch_embedding(pixel_values.to(dtype=target_dtype)) # shape = [*, width, grid, grid]
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/home/jyc/miniconda3/envs/icl/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[rank1]: return self._call_impl(*args, **kwargs)
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/home/jyc/miniconda3/envs/icl/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[rank1]: return forward_call(*args, **kwargs)
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/home/jyc/miniconda3/envs/icl/lib/python3.12/site-packages/accelerate/hooks.py", line 170, in new_forward
[rank1]: output = module._old_forward(*args, **kwargs)
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/home/jyc/miniconda3/envs/icl/lib/python3.12/site-packages/torch/nn/modules/conv.py", line 460, in forward
[rank1]: return self._conv_forward(input, self.weight, self.bias)
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/home/jyc/miniconda3/envs/icl/lib/python3.12/site-packages/torch/nn/modules/conv.py", line 456, in _conv_forward
[rank1]: return F.conv2d(input, weight, bias, self.stride,
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cuda:1! (when checking argument for argument weight in method wrapper_CUDA__cudnn_convolution)
```
### Environment
_No response_
### More info
_No response_ | closed | 2024-09-12T13:27:39Z | 2025-02-13T06:46:52Z | https://github.com/Lightning-AI/pytorch-lightning/issues/20276 | [
"bug",
"needs triage",
"ver: 2.4.x"
] | Kamichanw | 6 |
pallets/flask | flask | 4,553 | asserts with `pytest.raises` should be outside the `with` block | I've been using Flask's test suite to evaluate our [Slipcover](https://github.com/plasma-umass/slipcover) coverage tool and noticed likely bugs in the Flask tests. For example, in `tests/test_basic.py`, you have
```python
def test_response_type_errors():
[...]
with pytest.raises(TypeError) as e:
c.get("/none")
assert "returned None" in str(e.value)
assert "from_none" in str(e.value)
with pytest.raises(TypeError) as e:
c.get("/small_tuple")
assert "tuple must have the form" in str(e.value)
[...]
with pytest.raises(TypeError) as e:
c.get("/bad_type")
assert "it was a bool" in str(e.value)
```
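Why the indented asserts are dead code can be demonstrated with the plain stdlib `contextlib.suppress`, which, like `pytest.raises`, swallows the exception at the raise site, so nothing after the raising statement inside the `with` body ever runs:

```python
from contextlib import suppress

executed = []
with suppress(TypeError):
    raise TypeError("boom")
    executed.append("inside")   # dead code: the block exits at the raise
executed.append("outside")      # runs: the exception was suppressed

print(executed)  # ['outside']
```

Dedenting the `assert`s on `e.value` so they sit after each `with` block makes them actually execute.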
You probably don't want the `assert` statements checking `e` indented, as they won't execute when the exception is thrown. | closed | 2022-04-27T15:30:57Z | 2022-05-14T00:07:22Z | https://github.com/pallets/flask/issues/4553 | [
"testing"
] | jaltmayerpizzorno | 2 |
microsoft/unilm | nlp | 1,224 | Doubts about the MARIO-LAION dataset | **Describe**
Model I am using: TextDiffuser.
I found that there are some index numbers starting with "50001" in the MARIO-LAION dataset, but I did not find the corresponding subfolder in the meta information (40G) file.


| open | 2023-07-26T06:55:16Z | 2024-02-03T16:02:57Z | https://github.com/microsoft/unilm/issues/1224 | [] | scutyuanzhi | 3 |
ultralytics/yolov5 | machine-learning | 13,144 | How to increase FPS camera capture inside the Raspberry Pi 4B 8GB with best.onnx model | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and found no similar bug report.
### YOLOv5 Component
Detection
### Bug
Hi, I am currently trying to build traffic sign detection and recognition using YOLOv5 (PyTorch) with the YOLOv5s model. I am using the detect.py file to run the model, and the FPS I get is only 1. The dataset contains around 2K images, trained for 200 epochs. I run the code with:
python detect.py --weights best.onnx --img 640 --conf 0.7 --source 0
Is there any modification I can make to the code so that I can get more than 4 FPS?
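Before changing flags or the model, it can help to measure where the time actually goes (camera capture vs. ONNX inference). Below is a minimal, dependency-free timing harness for any per-frame callable; the `time.sleep` call is only a stand-in for a real workload, and all names here are illustrative:

```python
import time

def measure_fps(process_frame, frames, warmup=1):
    """Run a per-frame callable over `frames` and return frames per second."""
    for f in frames[:warmup]:          # warm-up iterations are excluded
        process_frame(f)
    start = time.perf_counter()
    for f in frames[warmup:]:
        process_frame(f)
    elapsed = time.perf_counter() - start
    return (len(frames) - warmup) / elapsed

# Fake 10 ms-per-frame workload over 10 timed frames.
fps = measure_fps(lambda f: time.sleep(0.01), list(range(11)))
print(f"{fps:.1f} frames/s")
```

Timing the camera read and the inference call separately shows which one dominates; common levers on a Pi are a smaller `--img` size or a lighter model, but those are tuning suggestions, not guaranteed fixes.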
### Environment
-Raspberry Pi 4B with 8GB Ram
-Webcam
-Model best.onnx
-Train using Yolov5 Pytorch
### Minimal Reproducible Example
_No response_
### Additional
_No response_
### Are you willing to submit a PR?
- [X] Yes I'd like to help by submitting a PR! | open | 2024-06-27T21:16:08Z | 2024-10-20T19:49:02Z | https://github.com/ultralytics/yolov5/issues/13144 | [
"bug",
"Stale"
] | Killuagg | 13 |
ageitgey/face_recognition | python | 1,312 | Details of face_encodings() function | * face_recognition version:1.3.0
* Python version:3.x
* Operating System:linux
### Description
Actually this is not an issue, but rather an effort to understand the details of the `face_encodings()` function.
### What I Did
So, in the examples in the Markdown file, you always take the very first of the multiple returned lists. What does this function actually return, and why do you always take the first list for comparison?
| closed | 2021-05-12T17:47:29Z | 2021-05-22T19:25:09Z | https://github.com/ageitgey/face_recognition/issues/1312 | [] | AI-07 | 1 |
allenai/allennlp | pytorch | 4,971 | Failed to import WordTokenizer | Hello, I'm using Allennlp 1.4.0 and Allen-model 1.4.0 on my proiect. I run the command as `allennlp serve --archive-path <path> --predictor <predictor> --include-package <pacakge_name>`. And then I got an error of `Failed to import WordTokenizer from allennlp.data.tokenizers.word_tokenizer`.
I've looked at the WordTokenizer from version 0.9.0 to the latest one. Only version 0.9.0 contains WordTokenizer. Is there a specific reason for removing WordTokenizer? Or is it moved/changed to another method?
I also tried to run v0.9.0 but when I run `python -m allennlp.service.server_simple`. It pops another error: `/usr/bin/python: No module named allennlp.service`
It would be appreciated if someone can give an advice.
| closed | 2021-02-11T13:30:16Z | 2021-02-26T20:17:15Z | https://github.com/allenai/allennlp/issues/4971 | [
"question"
] | yanchao-yu | 3 |
piccolo-orm/piccolo | fastapi | 221 | Naming of columns | ### Discussed in https://github.com/piccolo-orm/piccolo/discussions/206
Allow ``Table`` columns to map to database columns with different names.
For example:
```python
class MyTable(Table):
name = Varchar(name="person_name")
```
In the example above, when doing queries, you will use `MyTable.name`, but the actual column in the database is called `person_name`.
This is useful when using Piccolo on legacy databases with non-ideal column names.
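The mechanics of such a mapping can be sketched without Piccolo at all: a descriptor remembers both the Python attribute name and the database column name, and query generation aliases one to the other. Everything below (the class names, the `db_name` keyword) is illustrative only and does not reflect Piccolo's actual API:

```python
class Column:
    """Descriptor that answers to one name in Python and another in SQL."""
    def __init__(self, db_name=None):
        self.db_name = db_name

    def __set_name__(self, owner, attr):
        # Called at class creation: record the Python attribute name,
        # defaulting the DB column name to it when none was given.
        self.attr = attr
        self.db_name = self.db_name or attr

class Table:
    @classmethod
    def select_sql(cls):
        cols = [v for v in vars(cls).values() if isinstance(v, Column)]
        parts = [
            f"{c.db_name} AS {c.attr}" if c.db_name != c.attr else c.attr
            for c in cols
        ]
        return f"SELECT {', '.join(parts)} FROM {cls.__name__.lower()}"

class MyTable(Table):
    name = Column(db_name="person_name")

print(MyTable.select_sql())  # SELECT person_name AS name FROM mytable
```

Queries keep using the Python-side name (`MyTable.name`) while the generated SQL targets the legacy column.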
| closed | 2021-09-08T09:31:14Z | 2021-10-05T20:10:21Z | https://github.com/piccolo-orm/piccolo/issues/221 | [
"enhancement"
] | dantownsend | 7 |
matterport/Mask_RCNN | tensorflow | 2,906 | Training balloon.py ValueError: Expected a symbolic Tensors or a callable for the loss value. Please wrap your loss computation in a zero argument `lambda`. | python==3.7.15
tensorflow==2.10.0
keras==2.10.0
".\Mask_RCNN\mrcnn\model.py"
```
# Add L2 Regularization
# Skip gamma and beta weights of batch normalization layers.
reg_losses = [keras.regularizers.l2(self.config.WEIGHT_DECAY)(w) / tf.cast(tf.size(w), tf.float32)
for w in self.keras_model.trainable_weights
if 'gamma' not in w.name and 'beta' not in w.name]
print(tf.add_n(reg_losses))
self.keras_model.add_loss(tf.add_n(reg_losses))
```
Error occurred as below
```
File ".\Mask_RCNN\mrcnn\model.py", line 2107, in co
mpile
self.keras_model.add_loss(tf.add_n(reg_losses))
File ".\anaconda3\envs\MaskRCNN\lib\site-packages\keras\engine\base_layer.py
", line 1488, in add_loss
"Expected a symbolic Tensors or a callable for the loss value. "
ValueError: Expected a symbolic Tensors or a callable for the loss value. Please wrap your
loss computation in a zero argument `lambda`.
```
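The error message itself suggests the fix: this `add_loss` wants a zero-argument callable so the regularisation term is recomputed on each step. A plausible (untested) change in `model.py` would be `self.keras_model.add_loss(lambda: tf.add_n(reg_losses))` for this TF/Keras combination. The deferred-evaluation pattern the error asks for can be shown in plain Python:

```python
losses = []

def add_loss(loss):
    # Mimics the requested API shape: store a zero-arg callable,
    # wrapping plain values in a lambda so everything is deferred.
    losses.append(loss if callable(loss) else (lambda: loss))

reg_losses = [0.1, 0.2, 0.3]
add_loss(lambda: sum(reg_losses))   # deferred: summed later, per training step

reg_losses.append(0.4)              # the callable sees updated values
print(f"{losses[0]():.1f}")         # 1.0
```

Because the loss is wrapped in a lambda, it is evaluated when the framework invokes it rather than once at graph construction, which is exactly what the `ValueError` demands.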
Please, can anyone help me? | open | 2022-11-14T23:42:10Z | 2023-02-14T04:09:32Z | https://github.com/matterport/Mask_RCNN/issues/2906 | [] | sanso62 | 2
AUTOMATIC1111/stable-diffusion-webui | pytorch | 16,130 | [Bug]: weird>launch.py: error: unrecognized arguments: --lora-dir | ### Checklist
- [ ] The issue exists after disabling all extensions
- [X] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [X] The issue exists in the current version of the webui
- [x] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
when running webui.bat it throws:
launch.py: error: unrecognized arguments: --lora-dir
### Steps to reproduce the problem
1. install a fresh webui with git
2. run webui-user and wait...
3. run webui.bat
### What should have happened?
not throw `launch.py: error: unrecognized arguments: --lora-dir`
### What browsers do you use to access the UI ?
Mozilla Firefox
### Sysinfo
win11,py3.11
### Console logs
```Shell
no module 'xformers'. Processing without...
usage: launch.py [-h] [--update-all-extensions] [--skip-python-version-check] [--skip-torch-cuda-test] [--reinstall-xformers] [--reinstall-torch] [--update-check] [--test-server]
[--log-startup] [--skip-prepare-environment] [--skip-install] [--dump-sysinfo] [--loglevel LOGLEVEL] [--do-not-download-clip] [--data-dir DATA_DIR] [--config CONFIG]
[--ckpt CKPT] [--ckpt-dir CKPT_DIR] [--vae-dir VAE_DIR] [--gfpgan-dir GFPGAN_DIR] [--gfpgan-model GFPGAN_MODEL] [--no-half] [--no-half-vae] [--no-progressbar-hiding]
[--max-batch-count MAX_BATCH_COUNT] [--embeddings-dir EMBEDDINGS_DIR] [--textual-inversion-templates-dir TEXTUAL_INVERSION_TEMPLATES_DIR]
[--hypernetwork-dir HYPERNETWORK_DIR] [--localizations-dir LOCALIZATIONS_DIR] [--allow-code] [--medvram] [--medvram-sdxl] [--lowvram] [--lowram]
[--always-batch-cond-uncond] [--unload-gfpgan] [--precision {full,autocast}] [--upcast-sampling] [--share] [--ngrok NGROK] [--ngrok-region NGROK_REGION]
[--ngrok-options NGROK_OPTIONS] [--enable-insecure-extension-access] [--codeformer-models-path CODEFORMER_MODELS_PATH] [--gfpgan-models-path GFPGAN_MODELS_PATH]
[--esrgan-models-path ESRGAN_MODELS_PATH] [--bsrgan-models-path BSRGAN_MODELS_PATH] [--realesrgan-models-path REALESRGAN_MODELS_PATH] [--dat-models-path DAT_MODELS_PATH]
[--clip-models-path CLIP_MODELS_PATH] [--xformers] [--force-enable-xformers] [--xformers-flash-attention] [--deepdanbooru] [--opt-split-attention]
[--opt-sub-quad-attention] [--sub-quad-q-chunk-size SUB_QUAD_Q_CHUNK_SIZE] [--sub-quad-kv-chunk-size SUB_QUAD_KV_CHUNK_SIZE]
[--sub-quad-chunk-threshold SUB_QUAD_CHUNK_THRESHOLD] [--opt-split-attention-invokeai] [--opt-split-attention-v1] [--opt-sdp-attention] [--opt-sdp-no-mem-attention]
[--disable-opt-split-attention] [--disable-nan-check] [--use-cpu USE_CPU [USE_CPU ...]] [--use-ipex] [--disable-model-loading-ram-optimization] [--listen] [--port PORT]
[--show-negative-prompt] [--ui-config-file UI_CONFIG_FILE] [--hide-ui-dir-config] [--freeze-settings] [--freeze-settings-in-sections FREEZE_SETTINGS_IN_SECTIONS]
[--freeze-specific-settings FREEZE_SPECIFIC_SETTINGS] [--ui-settings-file UI_SETTINGS_FILE] [--gradio-debug] [--gradio-auth GRADIO_AUTH]
[--gradio-auth-path GRADIO_AUTH_PATH] [--gradio-img2img-tool GRADIO_IMG2IMG_TOOL] [--gradio-inpaint-tool GRADIO_INPAINT_TOOL] [--gradio-allowed-path GRADIO_ALLOWED_PATH]
[--opt-channelslast] [--styles-file STYLES_FILE] [--autolaunch] [--theme THEME] [--use-textbox-seed] [--disable-console-progressbars] [--enable-console-prompts]
[--vae-path VAE_PATH] [--disable-safe-unpickle] [--api] [--api-auth API_AUTH] [--api-log] [--nowebui] [--ui-debug-mode] [--device-id DEVICE_ID] [--administrator]
[--cors-allow-origins CORS_ALLOW_ORIGINS] [--cors-allow-origins-regex CORS_ALLOW_ORIGINS_REGEX] [--tls-keyfile TLS_KEYFILE] [--tls-certfile TLS_CERTFILE]
[--disable-tls-verify] [--server-name SERVER_NAME] [--gradio-queue] [--no-gradio-queue] [--skip-version-check] [--no-hashing] [--no-download-sd-model] [--subpath SUBPATH]
[--add-stop-route] [--api-server-stop] [--timeout-keep-alive TIMEOUT_KEEP_ALIVE] [--disable-all-extensions] [--disable-extra-extensions] [--skip-load-model-at-start]
[--unix-filenames-sanitization] [--filenames-max-length FILENAMES_MAX_LENGTH] [--no-prompt-history] [--ldsr-models-path LDSR_MODELS_PATH] [--lora-dir LORA_DIR]
[--lyco-dir-backcompat LYCO_DIR_BACKCOMPAT] [--scunet-models-path SCUNET_MODELS_PATH] [--swinir-models-path SWINIR_MODELS_PATH]
launch.py: error: unrecognized arguments: --lora-dir D
```
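The `unrecognized arguments: --lora-dir D` failure at the end of the log is often (though not confirmed in this report) a quoting problem: if the directory passed to `--lora-dir` contains a space and is not quoted, the shell splits it into separate tokens before argparse ever sees it. A generic sketch with a hypothetical path:

```python
import argparse
import shlex

# Hypothetical path: an unquoted directory argument containing a space is
# split into separate tokens, so the parser sees extra arguments.
parser = argparse.ArgumentParser()
parser.add_argument("--lora-dir")

unquoted = shlex.split("--lora-dir /data/my models/lora")
args, unknown = parser.parse_known_args(unquoted)
# args.lora_dir is only "/data/my"; "models/lora" is left over as unknown.

quoted = shlex.split('--lora-dir "/data/my models/lora"')
args, unknown = parser.parse_known_args(quoted)
# Now the whole path is a single value and nothing is left over.
```

Quoting the path in `COMMANDLINE_ARGS` (or avoiding spaces in it) keeps the value as a single token.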
### Additional information
Not sure, but I downloaded from source and used the git method. | closed | 2024-07-02T15:44:24Z | 2024-07-06T13:49:17Z | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/16130 | [
"bug-report"
] | hgftrdw45ud67is8o89 | 2 |
Nike-Inc/koheesio | pydantic | 166 | [BUG] Snowflake sync task does not apply Change Data Feed 'insert' rows when the row already exists in Snowflake | ## Describe the bug
The issue is in the line https://github.com/Nike-Inc/koheesio/blob/720da7a12d4b8b49de4367bf051007175c2af5cb/src/koheesio/integrations/spark/snowflake.py#L973, which causes updates to be skipped when the `change_type` is 'insert' but the row already exists in Snowflake (e.g. after a delete followed by an insert on the Delta side).
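A toy simulation of the described merge semantics (plain Python, not koheesio code; the change-type names come from this report, the helper function is hypothetical):

```python
# Toy model of the MERGE decision for one Change Data Feed row (not koheesio
# code). `update_on` is the set of change types allowed to update a matched row.
def merge_action(target_exists: bool, change_type: str, update_on: set) -> str:
    if target_exists:
        if change_type in update_on:
            return "update"
        if change_type == "delete":
            return "delete"
        return "skip"  # the reported bug path: a matched 'insert' is dropped
    return "insert" if change_type != "delete" else "skip"

# Current condition (per the report): only 'update_postimage' updates matched
# rows, so a row deleted and re-inserted on the Delta side is skipped in Snowflake.
assert merge_action(True, "insert", {"update_postimage"}) == "skip"
# Widening the condition to also accept 'insert' applies the latest data.
assert merge_action(True, "insert", {"update_postimage", "insert"}) == "update"
```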
## Steps to Reproduce
1. Perform a delete operation on a row in the delta table.
2. Perform an insert operation on the same row in the delta table.
3. Run the synchronization process to merge the delta table with the Snowflake table.
4. Observe that the insert operation is skipped if the row already exists in Snowflake.
## Expected behavior
The insert operation should be performed even if the row already exists in Snowflake, ensuring that the latest data is correctly synchronized.
## Additional context
The issue occurs because the merge query only updates rows when `temp._change_type` is 'update_postimage'. This condition does not account for rows that have been deleted and then inserted again in the delta table. | open | 2025-02-06T23:20:12Z | 2025-02-26T13:34:57Z | https://github.com/Nike-Inc/koheesio/issues/166 | [
"bug"
] | mikita-sakalouski | 0 |
streamlit/streamlit | machine-learning | 10,648 | Adding the help parameter to a (Button?) widget pads it unexpectedly instead of keeping it to the left.
- [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar issues.
- [x] I added a very descriptive title to this issue.
- [x] I have provided sufficient information below to help reproduce this issue.
### Summary
```python
st.page_link('sub_pages/resources/lesson_plans.py', label='Lesson Plans', icon=':material/docs:', help='View and download lesson plans')
```
## Unexpected Outcome

---
## Expected

### Reproducible Code Example
```Python
import streamlit as st
st.page_link('sub_pages/resources/lesson_plans.py', label='Lesson Plans', icon=':material/docs:', help='View and download lesson plans')
```
### Steps To Reproduce
1. Add help to a button widget
### Expected Behavior
Button keeps help text and is on the left

### Current Behavior
Button keeps help text but is on the right

### Is this a regression?
- [x] Yes, this used to work in a previous version.
### Debug info
- Streamlit version: 1.43.0
- Python version: 3.12.2
- Operating System: Windows 11
- Browser: Chrome
### Additional Information
> Yes, this used to work in a previous version.
1.42.0 works | closed | 2025-03-05T09:56:06Z | 2025-03-07T21:21:05Z | https://github.com/streamlit/streamlit/issues/10648 | [
"type:bug",
"status:confirmed",
"priority:P1",
"feature:st.download_button",
"feature:st.button",
"feature:st.link_button",
"feature:st.page_link"
] | thehamish555 | 2 |
PaddlePaddle/models | nlp | 4,803 | What format should a custom dataset for DCGAN be in? | open | 2020-08-16T11:52:38Z | 2024-02-26T05:10:28Z | https://github.com/PaddlePaddle/models/issues/4803 | [] | xiaolifeimianbao | 3 |
ultralytics/ultralytics | pytorch | 19,553 | Build a dataloader without training | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
Hi. I would like to examine the data in a training dataloader. How can I build one without starting training? I plan to train a detection model. Here is my attempt:
``` python
from ultralytics.models.yolo.detect import DetectionTrainer
import os
# Define paths
DATA_YAML = f"{os.environ['DATASETS']}/drone_tiny/data.yaml" # Path to dataset YAML file
WEIGHTS_PATH = f"{os.environ['WEIGHTS']}/yolo11n.pt" # Path to local weights file
SAVE_IMAGES_DIR = f"{os.environ['PROJECT_ROOT']}/saved_images"
# Ensure save directory exists
os.makedirs(SAVE_IMAGES_DIR, exist_ok=True)
# Load the model
trainer = DetectionTrainer(
overrides = dict(
data = DATA_YAML
)
)
train_data, test_data = trainer.get_dataset()
dataloader = trainer.get_dataloader(train_data)
```
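As an aside on the dataset itself: the scan warnings further down reject labels whose normalized coordinates fall outside [0, 1]. A standalone sketch of that validation rule (plain Python, not an Ultralytics API):

```python
# Minimal check mirroring the "non-normalized or out of bounds coordinates"
# warning: every coordinate in a YOLO "class x y w h" label row must lie in [0, 1].
def label_in_bounds(row: str) -> bool:
    _cls, *coords = row.split()
    return all(0.0 <= float(c) <= 1.0 for c in coords)

assert label_in_bounds("0 0.5 0.5 0.2 0.3")         # valid label, kept
assert not label_in_bounds("0 0.9 0.5 1.0372 0.3")  # rejected as corrupt
```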
But this fails with the following error:
```bash
Ultralytics 8.3.77 🚀 Python-3.10.12 torch-2.3.1+cu121 CUDA:0 (NVIDIA GeForce GTX 1080 Ti, 11169MiB)
engine/trainer: task=detect, mode=train, model=None, data=/home/daniel/drone_detection/datasets/drone_tiny/data.yaml, epochs=100, time=None, patience=100, batch=16, imgsz=640, save=True, save_period=-1, cache=False, device=None, workers=8, project=None, name=train9, exist_ok=False, pretrained=True, optimizer=auto, verbose=True, seed=0, deterministic=True, single_cls=False, rect=False, cos_lr=False, close_mosaic=10, resume=False, amp=True, fraction=1.0, profile=False, freeze=None, multi_scale=False, overlap_mask=True, mask_ratio=4, dropout=0.0, val=True, split=val, save_json=False, save_hybrid=False, conf=None, iou=0.7, max_det=300, half=False, dnn=False, plots=True, source=None, vid_stride=1, stream_buffer=False, visualize=False, augment=False, agnostic_nms=False, classes=None, retina_masks=False, embed=None, show=False, save_frames=False, save_txt=False, save_conf=False, save_crop=False, show_labels=True, show_conf=True, show_boxes=True, line_width=None, format=torchscript, keras=False, optimize=False, int8=False, dynamic=False, simplify=True, opset=None, workspace=None, nms=False, lr0=0.01, lrf=0.01, momentum=0.937, weight_decay=0.0005, warmup_epochs=3.0, warmup_momentum=0.8, warmup_bias_lr=0.1, box=7.5, cls=0.5, dfl=1.5, pose=12.0, kobj=1.0, nbs=64, hsv_h=0.015, hsv_s=0.7, hsv_v=0.4, degrees=0.0, translate=0.1, scale=0.5, shear=0.0, perspective=0.0, flipud=0.0, fliplr=0.5, bgr=0.0, mosaic=1.0, mixup=0.0, copy_paste=0.0, copy_paste_mode=flip, auto_augment=randaugment, erasing=0.4, crop_fraction=1.0, cfg=None, tracker=botsort.yaml, save_dir=/home/daniel/drone_detection/runs/detect/train9
train: Scanning /home/daniel/drone_detection/datasets/drone_tiny/train/labels.cache... 14180 images, 5 backgrounds, 304 corrupt: 100%|██████████| 14180/14180 [00:00<?, ?it/s]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_008_0080.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0372]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_008_0085.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0828 1.0406]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_008_0090.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1289 1.1016]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_008_0160.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0086]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_008_0165.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0617]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_008_0170.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1147]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_008_0175.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1697]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_008_0180.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2278]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_008_0185.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2805]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_008_0190.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.3251]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_008_0195.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.373]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_008_0200.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.4189]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_008_0210.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.46]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_008_0215.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1533]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_008_0245.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0034]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_008_0250.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0331]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_008_0255.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0627]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_008_0260.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0924]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_008_0265.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1218]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_008_0270.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.15]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_008_0275.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1782]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_008_0280.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2064]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_008_0285.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2345]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_008_0290.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2627]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_008_0295.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2885]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_008_0300.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0089 1.3143]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_009_0085.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1051]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_009_0095.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0496]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_009_0100.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0596]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_009_0105.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0697]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_009_0110.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0792]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_009_0115.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0879]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_009_0120.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0966]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_009_0125.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1074]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_009_0130.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.3334]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_009_0135.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.327]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_009_0140.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.3212]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_009_0145.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.3178]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_009_0150.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.3144]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_009_0155.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.3085]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_009_0160.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.3027]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_009_0165.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2968]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_009_0170.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2906]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_009_0175.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2838]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_009_0180.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2763]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_009_0185.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2688]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_009_0190.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2613 1.0031]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_009_0195.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2536 1.0071]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_009_0200.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2454 1.0103]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_009_0205.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2239 1.0308]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_009_0210.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2292 1.0312]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_009_0215.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2344 1.0314]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_009_0220.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2396 1.0316]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_009_0225.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2448 1.0317]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_009_0230.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.25 1.0319]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_009_0235.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2552 1.0321]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_009_0240.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2613 1.0322]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_009_0245.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2675 1.0324]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_009_0250.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2738 1.0326]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_009_0255.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.28 1.0327]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_009_0260.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2863 1.0329]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_009_0265.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2925 1.0331]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_009_0270.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0394 1.4043]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_009_0275.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0459 1.4003]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_009_0280.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0519 1.3962]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_009_0285.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0578 1.392]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_009_0290.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0638 1.3876]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_009_0295.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0696 1.3826]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_009_0300.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0755 1.3777]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_009_0305.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0981 1.1988]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_010_0195.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.4141]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_010_0200.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.4111]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_010_0205.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.3877]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_010_0210.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1133]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_011_0135.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1162]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_011_0140.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2451]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_011_0145.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2451]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_011_0150.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2197]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_011_0155.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1699]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_011_0160.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1377]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_011_0165.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0446]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_012_0140.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0225]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_012_0145.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.041]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_012_0150.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0664]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_012_0155.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0732]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_014_0080.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0008]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_014_0085.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0109]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_014_0090.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0109]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_014_0095.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0039]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_014_0100.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0234]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_014_0105.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0102]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_014_0110.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0187]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_014_0120.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.291]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_014_0125.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.3008]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_014_0130.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.3008]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_014_0135.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2998]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_014_0140.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.3066]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_014_0145.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.3008]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_014_0150.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2988]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_014_0155.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.3008]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_014_0160.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.3057]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_014_0165.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.3125]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_014_0195.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.501]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_014_0200.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.4619]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_014_0205.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.4033]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_014_0210.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.3584]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_014_0215.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.3203]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_014_0220.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2812]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_014_0225.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2119]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_014_0230.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2979]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_014_0300.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1805]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_014_0305.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1513]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_014_0310.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1203]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_016_0240.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0264]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_017_0040.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0022]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_017_0130.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0332]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_017_0135.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1279]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_017_0140.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2119]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_017_0145.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.3174]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_017_0150.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.46 1.1621]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_017_0155.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0641 1.1924]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_017_0160.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1625 1.2627]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_019_0100.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0953]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_019_0105.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0253]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_019_0130.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.074]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_019_0135.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1273]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_019_0140.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2088]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_019_0145.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2828]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_019_0150.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.3647]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_019_0155.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.4316]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_020_0005.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1328]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_020_0010.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.074]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_020_0015.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1471]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_020_0020.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2915]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_020_0025.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.4306]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_020_0075.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0109]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_020_0080.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0616]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_020_0085.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1024]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_020_0090.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1336]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_020_0095.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1503]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_020_0100.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0135]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_020_0105.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0301]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_020_0110.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0064]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_020_0190.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2914]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_020_0195.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0749 1.6481]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_020_0200.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1635]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_020_0205.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1814]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_020_0210.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1368]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_020_0215.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0764]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_020_0220.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0011]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_020_0230.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0615]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_020_0260.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.3175]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_020_0265.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1808]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_020_0270.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1024]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_020_0275.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1924]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_020_0280.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.212]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_020_0285.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2217]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_020_0290.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2113]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_020_0295.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2198]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_020_0300.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2273]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_020_0305.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2393]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_020_0310.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2576]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_020_0315.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2777]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_020_0320.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.3006]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_020_0325.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.3257]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_021_0030.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1551]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_021_0035.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2778]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_021_0050.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0009]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_021_0055.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0416]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_021_0060.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0731]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_021_0065.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1222]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_021_0070.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1693]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_021_0130.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1211]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_021_0135.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0438]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_021_0140.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1514]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_021_0195.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.4294]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_021_0200.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.4373]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_022_0005.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.3317]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_022_0010.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.3545]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_022_0015.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.3887]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_022_0020.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.4316]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_022_0070.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.5488]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_022_0075.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.4951]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_022_0080.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.4268]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_022_0085.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.3955]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_022_0090.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.332]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_022_0095.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.293]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_022_0100.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2324]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_022_0105.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.209]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_022_0110.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1582]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_022_0115.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.126]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_022_0120.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0879]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_022_0125.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0615]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_022_0130.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0186]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_022_0135.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0029]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_022_0225.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0159]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_022_0230.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0475]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_022_0240.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.219]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_022_0245.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.3054]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_023_0090.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1074]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_023_0095.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1396]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_023_0100.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0781]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_023_0120.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.001]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_023_0125.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0254]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_023_0130.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0459]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_023_0135.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.083]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_023_0140.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1084]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_023_0145.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1288]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_023_0150.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1608]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_023_0155.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1904]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_023_0160.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2158]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_023_0165.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.248]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_023_0170.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2832]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_023_0175.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.3164]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_023_0230.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0358]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_023_0235.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.058]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_023_0290.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0137]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_040_0205.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.5144]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_040_0210.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.499]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_040_0215.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.4838]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_040_0220.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.4711]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_040_0225.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.4583]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_040_0230.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.4449]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_040_0235.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.4316]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_040_0240.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.4185]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_040_0245.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.4062]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_040_0250.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.394]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_040_0255.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.3825]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_040_0260.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.3711]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_040_0265.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.3597]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_040_0270.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.3492]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_040_0275.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.339]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_040_0280.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.3288]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_040_0285.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.3185]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_040_0290.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.3091]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_040_0295.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.3002]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_040_0300.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2913]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_047_0180.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0093]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_047_0185.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.02]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_047_0190.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0762]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_047_0195.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0915]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_047_0200.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0918]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_047_0205.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0795]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_047_0210.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0635]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_047_0215.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0572]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_047_0220.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.058]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_047_0225.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0629]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_047_0230.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0651]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_047_0235.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0686]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_047_0240.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0741]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_047_0245.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.086]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_047_0250.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0996]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_047_0255.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1134]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_047_0260.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1227]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_047_0265.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1309]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_047_0270.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1419]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_047_0275.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1574]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_047_0280.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1729]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_047_0285.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1965]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_047_0290.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2201]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_047_0295.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.248]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_047_0300.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.279]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_047_0305.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.3171]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_048_0045.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.013]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_048_0050.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0361]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_048_0055.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0654]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_048_0060.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0947]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_048_0065.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1305]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_048_0070.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.168]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_048_0075.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2059]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_048_0080.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2466]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_048_0085.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0159 1.2979]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_048_0090.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0567 1.3491]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_DRONE_106_0210.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0331]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_DRONE_106_0215.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0688]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_DRONE_106_0220.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1057]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_DRONE_106_0225.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1506]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_DRONE_106_0230.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1969]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_DRONE_106_0235.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2488]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_DRONE_106_0240.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2977]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_DRONE_106_0245.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.3359]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_DRONE_106_0250.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.375]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_DRONE_106_0255.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.4082]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_DRONE_106_0260.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.4092]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_DRONE_106_0265.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.3945]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_DRONE_106_0270.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.3677]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_DRONE_106_0275.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.3319]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_DRONE_106_0280.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2922]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_DRONE_106_0285.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.254]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_DRONE_106_0290.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2168]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_DRONE_106_0295.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1918]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_DRONE_106_0300.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1668]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_DRONE_107_0005.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2162]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_DRONE_107_0010.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1862]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_DRONE_107_0015.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1634]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_DRONE_107_0020.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.139]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_DRONE_107_0025.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1135]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_DRONE_107_0030.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0799]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_DRONE_107_0035.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0442]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_DRONE_107_0040.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0207]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_HELICOPTER_040_0220.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0618]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_HELICOPTER_040_0225.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0493]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_HELICOPTER_040_0230.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0368]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_HELICOPTER_040_0235.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0206]
Traceback (most recent call last):
File "/home/daniel/drone_detection/visualizations/dataloader.py", line 23, in <module>
dataloader = trainer.get_dataloader(train_data)
File "/home/daniel/drone_detection/ultralytics_src/ultralytics/models/yolo/detect/train.py", line 55, in get_dataloader
return build_dataloader(dataset, batch_size, workers, shuffle, rank) # return dataloader
File "/home/daniel/drone_detection/ultralytics_src/ultralytics/data/build.py", line 144, in build_dataloader
sampler = None if rank == -1 else distributed.DistributedSampler(dataset, shuffle=shuffle)
File "/home/daniel/.local/lib/python3.10/site-packages/torch/utils/data/distributed.py", line 68, in __init__
num_replicas = dist.get_world_size()
File "/home/daniel/.local/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 1769, in get_world_size
return _get_group_size(group)
File "/home/daniel/.local/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 841, in _get_group_size
default_pg = _get_default_group()
File "/home/daniel/.local/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 1008, in _get_default_group
raise ValueError(
ValueError: Default process group has not been initialized, please make sure to call init_process_group.
```
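The warnings above mean a box coordinate exceeds the normalized [0, 1] range (e.g. 1.1634); Ultralytics discards such labels rather than repairing them. A minimal clamp sketch, in case pre-cleaning the labels is preferred (hypothetical helper, not part of Ultralytics):

```python
def clamp_yolo_box(box):
    # box = (cx, cy, w, h), all values expected in the normalized [0, 1] range
    return tuple(min(max(v, 0.0), 1.0) for v in box)

print(clamp_yolo_box((1.1634, 0.5, 0.2, 0.2)))  # (1.0, 0.5, 0.2, 0.2)
```

The traceback itself is a separate problem: per the `build.py` line shown, `build_dataloader` only builds a `DistributedSampler` when `rank != -1`, so a single-process call should pass `rank=-1` or initialize the process group first.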
### Additional
_No response_ | closed | 2025-03-06T12:17:34Z | 2025-03-06T14:01:49Z | https://github.com/ultralytics/ultralytics/issues/19553 | [
"question",
"dependencies",
"detect"
] | daniellehot | 3 |
babysor/MockingBird | deep-learning | 90 | UnicodeEncodeError: 'charmap' codec can't encode characters in position 7-13: character maps to <undefined> | 
Hello, how can this problem be solved? It seems to get stuck at the Synthesizer model step, but I tried both models and hit the same problem. | closed | 2021-09-20T08:07:56Z | 2021-09-21T02:17:53Z | https://github.com/babysor/MockingBird/issues/90 | [] | Udaroth | 4 |
PaddlePaddle/models | computer-vision | 4,890 | When using the TSM model for video classification, how can a folder path be passed in? | closed | 2020-09-28T08:32:16Z | 2020-09-28T09:06:17Z | https://github.com/PaddlePaddle/models/issues/4890 | [] | yy2yy | 0 | |
simple-login/app | flask | 1,560 | Use minimal permissions | ## Prerequisites
- [x] I have searched open and closed issues to make sure that the bug has not yet been reported.
## Bug report
**Describe the bug**
This extension, intended to preserve privacy, requests "Read and change all your data on all sites".
**Expected behavior**
A privacy-focused extension should be requesting only the minimal set of permissions needed to function.
**Screenshots**

https://crxcavator.io/report/dphilobhebphkdjbpfohgikllaljmgbn?platform=Chrome&new_scan=true
**Environment (If applicable):**
N/A
**Additional context**
N/A
| closed | 2023-02-02T01:32:07Z | 2023-02-03T15:04:29Z | https://github.com/simple-login/app/issues/1560 | [] | wrycu | 2 |
deepspeedai/DeepSpeed | deep-learning | 6,605 | [REQUEST] Inquiry about code for Domino | I saw in [Domino](https://arxiv.org/pdf/2409.15241) that the code would be released here. Could you let me know when the code will be released to the public?
| closed | 2024-10-07T23:04:46Z | 2025-02-05T23:48:33Z | https://github.com/deepspeedai/DeepSpeed/issues/6605 | [
"enhancement"
] | s1ghhh | 5 |
PaddlePaddle/models | nlp | 4,833 | Resolved | closed | 2020-09-03T03:12:17Z | 2020-09-03T07:32:16Z | https://github.com/PaddlePaddle/models/issues/4833 | [] | kyuer | 0 | |
Anjok07/ultimatevocalremovergui | pytorch | 1,232 | ValueError when using MDX-Net Process Method | Last Error Received:
Process: MDX-Net
If this error persists, please contact the developers with the error details.
Raw Error Details:
ValueError: "all the input array dimensions for the concatenation axis must match exactly, but along dimension 0, the array at index 0 has size 2 and the array at index 1 has size 6"
Traceback Error: "
File "UVR.py", line 6584, in process_start
File "separate.py", line 470, in seperate
File "separate.py", line 532, in demix
File "<__array_function__ internals>", line 180, in concatenate
"
Error Time Stamp [2024-03-08 21:51:38]
Full Application Settings:
vr_model: Choose Model
aggression_setting: 5
window_size: 512
mdx_segment_size: 256
batch_size: Default
crop_size: 256
is_tta: False
is_output_image: False
is_post_process: False
is_high_end_process: False
post_process_threshold: 0.2
vr_voc_inst_secondary_model: No Model Selected
vr_other_secondary_model: No Model Selected
vr_bass_secondary_model: No Model Selected
vr_drums_secondary_model: No Model Selected
vr_is_secondary_model_activate: False
vr_voc_inst_secondary_model_scale: 0.9
vr_other_secondary_model_scale: 0.7
vr_bass_secondary_model_scale: 0.5
vr_drums_secondary_model_scale: 0.5
demucs_model: Choose Model
segment: Default
overlap: 0.25
overlap_mdx: Default
overlap_mdx23: 8
shifts: 2
chunks_demucs: Auto
margin_demucs: 44100
is_chunk_demucs: False
is_chunk_mdxnet: False
is_primary_stem_only_Demucs: False
is_secondary_stem_only_Demucs: False
is_split_mode: True
is_demucs_combine_stems: True
is_mdx23_combine_stems: True
demucs_voc_inst_secondary_model: No Model Selected
demucs_other_secondary_model: No Model Selected
demucs_bass_secondary_model: No Model Selected
demucs_drums_secondary_model: No Model Selected
demucs_is_secondary_model_activate: False
demucs_voc_inst_secondary_model_scale: 0.9
demucs_other_secondary_model_scale: 0.7
demucs_bass_secondary_model_scale: 0.5
demucs_drums_secondary_model_scale: 0.5
demucs_pre_proc_model: No Model Selected
is_demucs_pre_proc_model_activate: False
is_demucs_pre_proc_model_inst_mix: False
mdx_net_model: UVR-MDX-NET Inst HQ 3
chunks: Auto
margin: 44100
compensate: Auto
denoise_option: None
is_match_frequency_pitch: True
phase_option: Automatic
phase_shifts: None
is_save_align: False
is_match_silence: True
is_spec_match: False
is_mdx_c_seg_def: False
is_invert_spec: False
is_deverb_vocals: False
deverb_vocal_opt: Main Vocals Only
voc_split_save_opt: Lead Only
is_mixer_mode: False
mdx_batch_size: Default
mdx_voc_inst_secondary_model: No Model Selected
mdx_other_secondary_model: No Model Selected
mdx_bass_secondary_model: No Model Selected
mdx_drums_secondary_model: No Model Selected
mdx_is_secondary_model_activate: False
mdx_voc_inst_secondary_model_scale: 0.9
mdx_other_secondary_model_scale: 0.7
mdx_bass_secondary_model_scale: 0.5
mdx_drums_secondary_model_scale: 0.5
is_save_all_outputs_ensemble: True
is_append_ensemble_name: False
chosen_audio_tool: Manual Ensemble
choose_algorithm: Min Spec
time_stretch_rate: 2.0
pitch_rate: 2.0
is_time_correction: True
is_gpu_conversion: False
is_primary_stem_only: False
is_secondary_stem_only: False
is_testing_audio: False
is_auto_update_model_params: True
is_add_model_name: False
is_accept_any_input: False
is_task_complete: False
is_normalization: False
is_wav_ensemble: False
is_create_model_folder: False
mp3_bit_set: 320k
semitone_shift: 0
save_format: WAV
wav_type_set: PCM_16
cuda_set: Default
help_hints_var: True
set_vocal_splitter: No Model Selected
is_set_vocal_splitter: False
is_save_inst_set_vocal_splitter: False
model_sample_mode: False
model_sample_mode_duration: 30
demucs_stems: All Stems
mdx_stems: All Stems
| open | 2024-03-08T10:56:02Z | 2024-03-08T10:56:02Z | https://github.com/Anjok07/ultimatevocalremovergui/issues/1232 | [] | shivammetimbers | 0 |
proplot-dev/proplot | matplotlib | 456 | MatplotlibDeprecationWarning: savefig() got unexpected keyword argument "facecolor" which is no longer supported as of 3.3 and will become an error two minor releases later |
### Description
A deprecation warning from savefig() has been displayed for a long time and has not been fixed.
/Users/yk/anaconda3/lib/python3.11/site-packages/proplot/figure.py:469: MatplotlibDeprecationWarning: savefig() got unexpected keyword argument "facecolor" which is no longer supported as of 3.3 and will become an error two minor releases later
return func(self, *args, **kwargs)
### Steps to reproduce
```python
import proplot as pplt
import numpy as np
x = np.linspace(0, 10, 100)
y = np.sin(x)
fig, ax = pplt.subplots()
ax.plot(x,y)
ax.format(xlabel='x axis', ylabel='y axis')
fig.savefig('test.pdf')
```
**Expected behavior**: No warning
**Actual behavior**:
/Users/yk/anaconda3/lib/python3.11/site-packages/proplot/figure.py:469: MatplotlibDeprecationWarning: savefig() got unexpected keyword argument "facecolor" which is no longer supported as of 3.3 and will become an error two minor releases later
return func(self, *args, **kwargs)
/Users/yk/anaconda3/lib/python3.11/site-packages/proplot/figure.py:469: MatplotlibDeprecationWarning: savefig() got unexpected keyword argument "edgecolor" which is no longer supported as of 3.3 and will become an error two minor releases later
return func(self, *args, **kwargs)
/Users/yk/anaconda3/lib/python3.11/site-packages/proplot/figure.py:469: MatplotlibDeprecationWarning: savefig() got unexpected keyword argument "orientation" which is no longer supported as of 3.3 and will become an error two minor releases later
return func(self, *args, **kwargs)
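Until proplot stops forwarding these kwargs, a stdlib-only suppression sketch can mute the noise (this assumes the warning class derives from `DeprecationWarning`, which `MatplotlibDeprecationWarning` does in recent matplotlib):

```python
import warnings

def save_quietly(save_fn, *args, **kwargs):
    # e.g. save_quietly(fig.savefig, "test.pdf")
    with warnings.catch_warnings():
        warnings.filterwarnings(
            "ignore",
            message=r".*got unexpected keyword argument.*",
            category=DeprecationWarning,
        )
        return save_fn(*args, **kwargs)
```

This is only a workaround sketch; the real fix belongs in `proplot/figure.py`.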
### Proplot version
matplotlib: 3.4.3
proplot: 0.9.7
python: 3.11
Computer: macbook pro, m2pro | open | 2024-05-21T07:01:00Z | 2024-08-16T09:04:18Z | https://github.com/proplot-dev/proplot/issues/456 | [] | yykphy | 1 |
AirtestProject/Airtest | automation | 720 | Incorrect default value for the compress parameter in auto_setup() | The default value of the compress parameter in airtest.core.api's auto_setup() is set incorrectly.
The default value of compress is set to 0,
but its valid range is 1-100.
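A minimal sketch of the validation this implies (hypothetical helper and fallback default; not the Airtest source):

```python
def normalize_compress(compress, default=10):
    # Keep compress inside its documented 1-100 range; otherwise fall back.
    if not 1 <= compress <= 100:
        return default
    return compress

print(normalize_compress(0))  # 10 (the current default of 0 is out of range)
```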
| closed | 2020-04-14T12:11:43Z | 2020-04-15T01:59:02Z | https://github.com/AirtestProject/Airtest/issues/720 | [] | lincoln987 | 1 |
CorentinJ/Real-Time-Voice-Cloning | python | 1,119 | fd 3 failed with permission denied | 
| closed | 2022-09-26T07:24:43Z | 2023-01-08T08:55:13Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1119 | [] | mikemills254 | 0 |
mlfoundations/open_clip | computer-vision | 240 | Naming clash in new CLIP models | I just cloned this repository on a Windows computer and saw the following:
```
PS C:\Users\585491\documents\research> git clone https://github.com/mlfoundations/open_clip.git
Cloning into 'open_clip'...
remote: Enumerating objects: 1637, done.
remote: Counting objects: 100% (74/74), done.
remote: Compressing objects: 100% (53/53), done.
remote: Total 1637 (delta 25), reused 49 (delta 17), pack-reused 1563
Receiving objects: 100% (1637/1637), 8.06 MiB | 10.91 MiB/s, done.
Resolving deltas: 100% (934/934), done.
warning: the following paths have collided (e.g. case-sensitive paths
on a case-insensitive filesystem) and only one from the same
colliding group is in the working tree:
'src/open_clip/model_configs/ViT-G-14.json'
'src/open_clip/model_configs/ViT-g-14.json'
'tests/data/output/ViT-G-14_None_fp32_random_image.pt'
'tests/data/output/ViT-g-14_None_fp32_random_image.pt'
'tests/data/output/ViT-G-14_None_fp32_random_text.pt'
'tests/data/output/ViT-g-14_None_fp32_random_text.pt'
```
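Such collisions can be caught ahead of time with a case-folding pass over the tracked paths; a rough sketch (not part of the repo):

```python
from collections import Counter

def case_collisions(paths):
    # Group paths that differ only by letter case, as a
    # case-insensitive filesystem would see them.
    counts = Counter(p.lower() for p in paths)
    return sorted(p for p in paths if counts[p.lower()] > 1)
```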
It would be nice if the names could be adjusted to be compliant with case-insensitive file systems. | closed | 2022-11-21T16:04:43Z | 2022-11-27T23:47:06Z | https://github.com/mlfoundations/open_clip/issues/240 | [] | StellaAthena | 11 |
ultrafunkamsterdam/undetected-chromedriver | automation | 1,489 | driver_executable_path is not working | 
Error:

The ChromeDriver version at the location mentioned in the first screenshot is 112.0.5615, but it still shows an error even though the Chrome browser version matches. | open | 2023-08-17T18:31:08Z | 2023-08-17T19:39:46Z | https://github.com/ultrafunkamsterdam/undetected-chromedriver/issues/1489 | [] | odddkidout | 1 |
hankcs/HanLP | nlp | 651 | Can there be a custom stopword dictionary? | The latest version supports custom dictionaries; I am not sure whether custom stopword dictionaries are supported as well. I tried appending a path after the stopword path in the configuration file, but that is not supported. | closed | 2017-10-16T07:22:00Z | 2020-01-01T10:52:13Z | https://github.com/hankcs/HanLP/issues/651 | [
"ignored"
] | caleben | 2 |
twelvedata/twelvedata-python | matplotlib | 90 | [Feature Request] add validation for api key | **Is your feature request related to a problem? Please describe.**
when i tried my api key accidentally passed a `None` value and it didn't throw an error.
**Describe the solution you'd like**
throw an error or alert.
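A sketch of the requested guard (the class name and message here are placeholders, not the actual twelvedata client):

```python
class TDClient:
    # Stand-in client showing the validation this request asks for.
    def __init__(self, apikey):
        if not isinstance(apikey, str) or not apikey:
            raise ValueError("apikey must be a non-empty string")
        self.apikey = apikey
```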
**Describe alternatives you've considered**
i fixed the issue.
**Additional context**
💙
| closed | 2024-09-28T02:59:29Z | 2024-09-28T03:18:20Z | https://github.com/twelvedata/twelvedata-python/issues/90 | [] | AzulGarza | 1 |
deepspeedai/DeepSpeed | deep-learning | 5,779 | how to set "training_step" during training? | **Describe the bug**
I use ZeRO stage 1 to train a UNet with the following deepspeed_config. I set 10 epochs, and the output during training is as follows:
```json
{
"train_micro_batch_size_per_gpu": 1,
"gradient_accumulation_steps": 1,
"local_rank": 0,
"steps_per_print": 500,
"optimizer": {
"type": "Adam",
"params": {"lr": 0, "betas": [0.9, 0.98], "eps": 1e-9, "weight_decay": 3e-7}
},
"scheduler": {
"type": "WarmupDecayLR",
"params": {
"total_num_steps": 4000,
"warmup_min_lr": 0.00001,
"warmup_max_lr": 0.01,
"warmup_num_steps": 1000,
"warmup_type": "linear",
"last_batch_iteration": -1,
},
},
"bf16": {"enabled": false},
"fp16": {
"enabled": true,
"auto_cast": false,
"loss_scale": 0,
"initial_scale_power": 16,
"loss_scale_window": 1000,
"hysteresis": 2,
"consecutive_hysteresis": false,
"min_loss_scale": 1,
},
"zero_optimization": {
"stage": 1,
"reduce_bucket_size": 5e8,
"allgather_bucket_size": 5e8,
"reduce_scatter": true,
},
"logging": {"log_level": "INFO"},
}
```
output
```shell
[2024-07-17 19:44:06,829] [INFO] [loss_scaler.py:190:update_scale] [deepspeed] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 65536, but hysteresis is 2. Reducing hysteresis to 1
[2024-07-17 19:44:06,945] [INFO] [loss_scaler.py:183:update_scale] [deepspeed] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 65536, reducing to 32768
[2024-07-17 19:44:07,064] [INFO] [loss_scaler.py:183:update_scale] [deepspeed] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 32768, reducing to 16384
[2024-07-17 19:44:07,184] [INFO] [loss_scaler.py:183:update_scale] [deepspeed] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 16384, reducing to 8192
[2024-07-17 19:44:07,303] [INFO] [loss_scaler.py:183:update_scale] [deepspeed] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 8192, reducing to 4096
[2024-07-17 19:44:07,540] [INFO] [loss_scaler.py:183:update_scale] [deepspeed] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 4096, reducing to 2048
[2024-07-17 19:44:09,300] [INFO] [loss_scaler.py:183:update_scale] [deepspeed] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 2048, reducing to 1024
[2024-07-17 19:44:19,048] [INFO] [loss_scaler.py:183:update_scale] [deepspeed] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 1024, reducing to 512
[2024-07-17 19:44:50,382] [INFO] [loss_scaler.py:183:update_scale] [deepspeed] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 512, reducing to 256
[2024-07-17 19:45:07,165] [INFO] [logging.py:96:log_dist] [Rank 0] step=500, skipped=9, lr=[0.0049051], mom=[[0.9, 0.98]]
[2024-07-17 19:45:07,166] [INFO] [timer.py:258:stop] epoch=0/micro_step=500/global_step=500, RunningAvgSamplesPerSec=16.543493968672824, CurrSamplesPerSec=17.400610263292727, MemAllocated=0.02GB, MaxMemAllocated=0.97GB
[2024-07-17 19:46:07,521] [INFO] [logging.py:96:log_dist] [Rank 0] step=1000, skipped=9, lr=[0.0099001], mom=[[0.9, 0.98]]
[2024-07-17 19:46:07,522] [INFO] [timer.py:258:stop] epoch=0/micro_step=1000/global_step=1000, RunningAvgSamplesPerSec=16.558021050068817, CurrSamplesPerSec=17.281347468346606, MemAllocated=0.02GB, MaxMemAllocated=0.97GB
[2024-07-17 19:47:09,909] [INFO] [logging.py:96:log_dist] [Rank 0] step=1500, skipped=9, lr=[0.0083683], mom=[[0.9, 0.98]]
[2024-07-17 19:47:09,910] [INFO] [timer.py:258:stop] epoch=0/micro_step=1500/global_step=1500, RunningAvgSamplesPerSec=16.37900488784748, CurrSamplesPerSec=15.552286788003284, MemAllocated=0.02GB, MaxMemAllocated=0.97GB
[2024-07-17 19:48:14,992] [INFO] [logging.py:96:log_dist] [Rank 0] step=2000, skipped=9, lr=[0.006703300000000001], mom=[[0.9, 0.98]]
[2024-07-17 19:48:14,993] [INFO] [timer.py:258:stop] epoch=0/micro_step=2000/global_step=2000, RunningAvgSamplesPerSec=16.113892734914483, CurrSamplesPerSec=14.6544845971357, MemAllocated=0.02GB, MaxMemAllocated=0.97GB
[2024-07-17 19:49:19,610] [INFO] [logging.py:96:log_dist] [Rank 0] step=2500, skipped=9, lr=[0.0050383], mom=[[0.9, 0.98]]
[2024-07-17 19:49:19,610] [INFO] [timer.py:258:stop] epoch=0/micro_step=2500/global_step=2500, RunningAvgSamplesPerSec=15.982786991454645, CurrSamplesPerSec=16.03645984675853, MemAllocated=0.02GB, MaxMemAllocated=0.97GB
[2024-07-17 19:49:29,863] [INFO] [loss_scaler.py:190:update_scale] [deepspeed] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 1024, but hysteresis is 2. Reducing hysteresis to 1
[2024-07-17 19:50:22,010] [INFO] [logging.py:96:log_dist] [Rank 0] step=3000, skipped=10, lr=[0.0033766300000000003], mom=[[0.9, 0.98]]
[2024-07-17 19:50:22,011] [INFO] [timer.py:258:stop] epoch=0/micro_step=3000/global_step=3000, RunningAvgSamplesPerSec=15.99056605536519, CurrSamplesPerSec=15.759469462135302, MemAllocated=0.02GB, MaxMemAllocated=0.97GB
[2024-07-17 19:51:23,297] [INFO] [logging.py:96:log_dist] [Rank 0] step=3500, skipped=10, lr=[0.0017116300000000002], mom=[[0.9, 0.98]]
[2024-07-17 19:51:23,298] [INFO] [timer.py:258:stop] epoch=0/micro_step=3500/global_step=3500, RunningAvgSamplesPerSec=16.036943131294425, CurrSamplesPerSec=17.25923182644907, MemAllocated=0.02GB, MaxMemAllocated=0.97GB
[2024-07-17 19:51:35,403] [INFO] [loss_scaler.py:190:update_scale] [deepspeed] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 2048, but hysteresis is 2. Reducing hysteresis to 1
[2024-07-17 19:51:54,774] [INFO] [loss_scaler.py:183:update_scale] [deepspeed] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 2048, reducing to 1024
[2024-07-17 19:52:23,846] [INFO] [logging.py:96:log_dist] [Rank 0] step=4000, skipped=12, lr=[5.329e-05], mom=[[0.9, 0.98]]
[2024-07-17 19:52:23,847] [INFO] [timer.py:258:stop] epoch=0/micro_step=4000/global_step=4000, RunningAvgSamplesPerSec=16.095795852718908, CurrSamplesPerSec=17.06993117987245, MemAllocated=0.02GB, MaxMemAllocated=0.97GB
[2024-07-17 19:52:53,931] [INFO] [loss_scaler.py:183:update_scale] [deepspeed] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 1024, reducing to 512
[2024-07-17 19:53:24,980] [INFO] [logging.py:96:log_dist] [Rank 0] step=4500, skipped=13, lr=[1e-05], mom=[[0.9, 0.98]]
[2024-07-17 19:53:24,981] [INFO] [timer.py:258:stop] epoch=0/micro_step=4500/global_step=4500, RunningAvgSamplesPerSec=16.124955167964504, CurrSamplesPerSec=16.807402094161112, MemAllocated=0.02GB, MaxMemAllocated=0.97GB
/home/lg/miniconda3/envs/jaxtest/lib/python3.9/site-packages/torch/optim/lr_scheduler.py:138: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
warnings.warn("Detected call of `lr_scheduler.step()` before `optimizer.step()`. "
/home/lg/miniconda3/envs/jaxtest/lib/python3.9/site-packages/torch/optim/lr_scheduler.py:138: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
warnings.warn("Detected call of `lr_scheduler.step()` before `optimizer.step()`. "
epoch: 0, loss: 1.17383, t2-t1: 561.80249, trainL2: 17024.88278, testL2: 10.81152
epoch: 0, loss: 1.17383, t2-t1: 562.04985, trainL2: 17270.87160, testL2: 10.81152
```
This run prints many lines like "[2024-07-17 19:53:24,981] [INFO] [timer.py:258:stop] epoch=0/micro_step=4500/global_step=4500, RunningAvgSamplesPerSec=16.124955167964504, CurrSamplesPerSec=16.807402094161112, MemAllocated=0.02GB, MaxMemAllocated=0.97GB".
It looks like the Adam optimizer runs 4,500 steps per epoch, which makes a single epoch take a long time to train. Why is there so much output?
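For context, a per-epoch step count like 4,500 follows directly from the batching arithmetic: with `train_micro_batch_size_per_gpu: 1` and `gradient_accumulation_steps: 1`, every sample on every rank is one optimizer step (the 18,000-sample figure below is only an assumed example):

```python
def steps_per_epoch(num_samples, micro_batch, grad_accum, world_size):
    # Samples consumed per global optimizer step:
    samples_per_step = micro_batch * grad_accum * world_size
    return num_samples // samples_per_step

print(steps_per_epoch(18_000, 1, 1, 4))  # 4500
```

Raising the batch size reduces the step count, and raising `steps_per_print` (currently 500) reduces how often the timer line is logged.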
**ds_report output**
```shell
[2024-07-17 19:59:25,808] [INFO] [real_accelerator.py:203:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[WARNING] async_io requires the dev libaio .so object and headers but these were not found.
[WARNING] async_io: please install the libaio-dev package with apt
[WARNING] If libaio is already installed (perhaps from source), try setting the CFLAGS and LDFLAGS environment variables to where it can be found.
[WARNING] Please specify the CUTLASS repo directory as environment variable $CUTLASS_PATH
[WARNING] please install triton==1.0.0 if you want to use sparse attention
--------------------------------------------------
DeepSpeed C++/CUDA extension op report
--------------------------------------------------
NOTE: Ops not installed will be just-in-time (JIT) compiled at
runtime if needed. Op compatibility means that your system
meet the required dependencies to JIT install the op.
--------------------------------------------------
JIT compiled ops requires ninja
ninja .................. [OKAY]
--------------------------------------------------
op name ................ installed .. compatible
--------------------------------------------------
[WARNING] async_io requires the dev libaio .so object and headers but these were not found.
[WARNING] async_io: please install the libaio-dev package with apt
[WARNING] If libaio is already installed (perhaps from source), try setting the CFLAGS and LDFLAGS environment variables to where it can be found.
async_io ............... [NO] ....... [NO]
fused_adam ............. [NO] ....... [OKAY]
cpu_adam ............... [NO] ....... [OKAY]
cpu_adagrad ............ [NO] ....... [OKAY]
cpu_lion ............... [NO] ....... [OKAY]
[WARNING] Please specify the CUTLASS repo directory as environment variable $CUTLASS_PATH
evoformer_attn ......... [NO] ....... [NO]
fp_quantizer ........... [NO] ....... [OKAY]
fused_lamb ............. [NO] ....... [OKAY]
fused_lion ............. [NO] ....... [OKAY]
inference_core_ops ..... [NO] ....... [OKAY]
cutlass_ops ............ [NO] ....... [OKAY]
transformer_inference .. [NO] ....... [OKAY]
quantizer .............. [NO] ....... [OKAY]
ragged_device_ops ...... [NO] ....... [OKAY]
ragged_ops ............. [NO] ....... [OKAY]
random_ltd ............. [NO] ....... [OKAY]
[WARNING] please install triton==1.0.0 if you want to use sparse attention
sparse_attn ............ [NO] ....... [NO]
spatial_inference ...... [NO] ....... [OKAY]
transformer ............ [NO] ....... [OKAY]
stochastic_transformer . [NO] ....... [OKAY]
--------------------------------------------------
```
**System info (please complete the following information):**
- OS: Ubuntu 20.04
- GPU count and types: 4*3090
- Python version: 3.9.18
- deepspeed 0.14.4
| closed | 2024-07-17T12:11:02Z | 2024-10-11T16:26:42Z | https://github.com/deepspeedai/DeepSpeed/issues/5779 | [
"bug",
"training"
] | qwerfdsadad | 3 |
albumentations-team/albumentations | deep-learning | 1,807 | Fix available keys | We need some checks to be sure that proper targets are sent to the Compose.
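A rough shape of the check described below (hypothetical helper, not the library code):

```python
def filter_targets(data_keys, allowed, strict=True):
    # strict=True: reject anything outside the allowed targets;
    # strict=False: let unknown keys pass through untouched.
    extra = [k for k in data_keys if k not in allowed]
    if strict and extra:
        raise ValueError(f"Unexpected transform targets: {extra}")
    return [k for k in data_keys if k in allowed]
```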
Possible scenarios
- [x] `strict=True`
`ImageOnly`
will not trigger on
- `image`
- `mask`
- `masks`
- `bboxes` if bboxParams is specified
- `keypoints` if KeypointParams is specified
- parameters specified in transforms in `def targets_as_params(self)`
`Dual`
will not trigger on
- `image`
- `mask`
- `masks`
- `bboxes` if bboxParams is specified and **all transforms support bboxes**
- `keypoints` if KeypointParams is specified and **all transforms support keypoints**
- parameters specified in transforms in `def targets_as_params(self)`
- [x] `strict=False`, the same as `True`, but will allow any other keys, and they will not be affected by transforms
- [x] If bboxParams are specified and there are Dual transforms that support `bboxes` and/or `ImageOnly` transforms, but we do not pass `bboxes` => we do not return the bbox key
- [x] If keypointParams are specified and there are Dual transforms that support keypoints and/or `ImageOnly` transforms, but we do not pass `keypoints` => we do not return the keypoints key | closed | 2024-06-19T23:46:43Z | 2024-06-22T17:39:11Z | https://github.com/albumentations-team/albumentations/issues/1807 | [
"bug"
] | ternaus | 0 |
krish-adi/barfi | streamlit | 48 | Color of the Flow background and the Edges | Hi Aditya,
Hope you're well.
Have a Question, is there any way to apply the CSS code on the flow to change the look and feel of it?
| closed | 2025-02-03T09:23:36Z | 2025-02-04T07:11:32Z | https://github.com/krish-adi/barfi/issues/48 | [] | abrarzahoor004 | 1 |
ets-labs/python-dependency-injector | flask | 795 | Singleton provider throws RuntimeError: Task got bad yield for generators | When a generator is provided by `Factory`, I can inject my dependency synchronously, and **example_1** below works as expected. When a generator is provided by `Singleton`, I experience forced asynchronous behavior (why do I get that?) and encounter a *bad yield exception* (see **example_2** below).
Does anyone know why this happens and how I can fix it?
**System:**
Python 3.9.16
dependency-injector 4.41.0
**Exception:**
```
Traceback (most recent call last):
File "issue.py", line 37, in <module>
asyncio.run(example_2())
File "/usr/lib64/python3.9/asyncio/runners.py", line 44, in run
return loop.run_until_complete(main)
File "/usr/lib64/python3.9/asyncio/base_events.py", line 647, in run_until_complete
return future.result()
File ".venv/lib64/python3.9/site-packages/dependency_injector/wiring.py", line 994, in _patched
return await _async_inject(
File "src/dependency_injector/_cwiring.pyx", line 62, in _async_inject
File "src/dependency_injector/providers.pyx", line 2986, in dependency_injector.providers.BaseSingleton._async_init_instance
File "issue.py", line 8, in generator_factory
yield 1
RuntimeError: Task got bad yield: 1
```
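A plain-Python stand-in (independent of dependency-injector) shows why memoizing a generator object is already fragile before async enters the picture: the cached object is exhausted after one pass, which is why generator factories and singleton-style caching do not mix cleanly.

```python
def generator_factory():
    yield from (1, 2, 3)

class NaiveSingleton:
    # Caches whatever the factory returns. For a generator function that
    # is a one-shot generator object, not a reusable value.
    def __init__(self, factory):
        self._factory = factory
        self._instance = None

    def __call__(self):
        if self._instance is None:
            self._instance = self._factory()
        return self._instance

provider = NaiveSingleton(generator_factory)
print(list(provider()))  # [1, 2, 3]
print(list(provider()))  # [] (the cached generator is spent)
```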
**Program (issue.py):**
```python
import asyncio
from typing import AsyncIterator, Iterator
from dependency_injector import containers, providers, wiring
def generator_factory() -> Iterator[int]:
yield 1
yield 2
yield 3
yield 4
yield 5
class Container(containers.DeclarativeContainer):
generator_1 = providers.Factory(generator_factory)
generator_2 = providers.Singleton(generator_factory)
@wiring.inject
def example_1(generator_1: Iterator[int] = wiring.Provide[Container.generator_1]) -> None:
for i in generator_1:
print(i)
@wiring.inject
async def example_2(generator_2: AsyncIterator[int] = wiring.Provide[Container.generator_2]) -> None:
async for i in generator_2:
print(i)
if __name__ == "__main__":
container = Container()
container.wire(modules=[__name__])
example_1()
asyncio.run(example_2())
``` | open | 2024-04-22T09:09:31Z | 2024-11-13T19:48:44Z | https://github.com/ets-labs/python-dependency-injector/issues/795 | [] | jonaslalin | 1 |
simple-login/app | flask | 1,969 | Block registration with maskmy.id (it is violating ToS) | Provider is https://skiff.com/quick-alias and it's actually subdomains, each user can generate their own subdomain. I tested it, very easy.
The page title of https://skiff.com/quick-alias is "Quick alias burner email address". In the HTML I see
```
<meta name="description" content="Secure and quick-to-create burner email addresses with Skiff!
```
There is a browser plugin https://github.com/irazasyed/email-masker to create one email per website, skiff.com's twitter account promoted it, the github repository is tagged as 'burner-email'
You should add:
- `maskmy.id` because it is there main domain
- `anything.maskmy.id` because it allows you to create disposable addresses with random subdomains

 | closed | 2023-12-12T22:02:38Z | 2024-01-03T12:59:12Z | https://github.com/simple-login/app/issues/1969 | [] | ghost | 5 |
apachecn/ailearning | python | 593 | Python3 | Can the code be updated to fit Python3? | closed | 2020-05-21T07:55:17Z | 2021-09-07T17:44:54Z | https://github.com/apachecn/ailearning/issues/593 | [] | Harvey9610 | 1 |
streamlit/streamlit | deep-learning | 10,843 | The button type parameter causes the button function to be invalid | ### Checklist
- [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar issues.
- [x] I added a very descriptive title to this issue.
- [x] I have provided sufficient information below to help reproduce this issue.
### Summary
When I set the type parameter from a variable, the first trigger of the button does not work. Here are two test cases I conducted on the button: with `type='secondary'` the button triggers normally, and with `type=type()` the first click on the button has no effect.

### Reproducible Code Example
```Python
import streamlit as st
from datetime import datetime
def onClict_reset():
st.session_state
@st.fragment
def test_reset0():
if reset := st.button(
key='st_reset_secondary',
label='Reset',
type='secondary'
):
st.write(f'fragment: {datetime.now()}')
st.write(f'button: {reset}')
@st.fragment
def test_reset1():
type = lambda : 'primary' if 'st_reset_type' in st.session_state and st.session_state.st_reset_type else 'secondary'
if reset := st.button(
key='st_reset_type',
label='Reset',
type=type()
):
st.write(f'fragment: {datetime.now()}')
st.write(f'button: {reset}')
st.write(f'app: {datetime.now()}')
cols = st.columns([1, 1], border=True)
with cols[0]:
st.markdown("constants: `type='secondary'`")
test_reset0()
with cols[1]:
st.markdown("variable: `type=type()`")
test_reset1()
```
### Steps To Reproduce
Click the second button
### Expected Behavior
output
button: True
### Current Behavior
Clicking it for the first time still returns False; it only returns True on the second click.
### Is this a regression?
- [ ] Yes, this used to work in a previous version.
### Debug info
- Streamlit version:
- Python version:
- Operating System:
- Browser:
### Additional Information
_No response_ | closed | 2025-03-19T11:35:06Z | 2025-03-24T17:58:18Z | https://github.com/streamlit/streamlit/issues/10843 | [
"type:bug",
"priority:P3",
"feature:st.button"
] | lkdd-ao | 3 |
darrenburns/posting | automation | 113 | Importing from an openapi file doesn't override deleted endpoints | When running `posting import path/to/openapi.{yaml,json}` new endpoints are registered but deleted ones stay in place. | closed | 2024-09-27T18:15:00Z | 2024-11-18T17:17:53Z | https://github.com/darrenburns/posting/issues/113 | [] | StitiFatah | 1 |
coleifer/sqlite-web | flask | 39 | Edit with Spreadsheet UI | I've been looking for a hosted alternative to Google Sheets, for relatively small datasets. If a table could be edited like a spreadsheet, this would certainly be a candidate.
I know this opens up a whole can of worms on the concurrent editing side of things, but at least for our use case it would be fine to simply detect the concurrent edit and reject it with an error message. | closed | 2018-03-07T10:52:39Z | 2023-04-18T15:39:59Z | https://github.com/coleifer/sqlite-web/issues/39 | [] | mbarkhau | 2 |
tensorlayer/TensorLayer | tensorflow | 1,002 | A Writing errors | There may be a writing error in https://github.com/tensorlayer/tensorlayer/blob/master/examples/text_generation/tutorial_generate_text.py
line 250: `train_weights = net.weights`. I think this should be changed to `train_weights = net.trainable_weights`; otherwise, the program reports errors on my computer.
| closed | 2019-06-14T06:03:02Z | 2019-06-14T06:31:34Z | https://github.com/tensorlayer/TensorLayer/issues/1002 | [] | veraanddamao | 2 |
kymatio/kymatio | numpy | 347 | ENH warning when making doc | fresh clone + cd doc + make html leads to:
"WARNING: could not relabel citation reference [paper]"
I'm not sure what the issue is. | closed | 2019-02-24T20:29:37Z | 2019-02-25T19:15:40Z | https://github.com/kymatio/kymatio/issues/347 | [
"bug",
"doc"
] | edouardoyallon | 2 |
waditu/tushare | pandas | 1,374 | pro api: the stock_basic interface is missing stock 000670 | No matter whether the parameter passed to the stock_basic interface is L, D, or P, stock 000670 is never returned.
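A quick way to confirm which expected codes are absent from the `stock_basic` result (sketch only; the code format is assumed):

```python
def find_missing_codes(returned_codes, expected_codes):
    present = set(returned_codes)
    return [c for c in expected_codes if c not in present]

print(find_missing_codes(["000001.SZ"], ["000670.SZ"]))  # ['000670.SZ']
```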
id: 370310 | open | 2020-06-12T03:45:16Z | 2020-07-15T15:39:07Z | https://github.com/waditu/tushare/issues/1374 | [] | tomjamescn | 1 |
home-assistant/core | python | 141,120 | Roborock: Not setting up P20 Pro because the coordinator failed to get data for the first time using the offline client | ### The problem
Is there a way to avoid downloading the map data?
### What version of Home Assistant Core has the issue?
core-2025.3.4
### What was the last working version of Home Assistant Core?
_No response_
### What type of installation are you running?
Home Assistant OS
### Integration causing the issue
roborock
### Link to integration documentation on our website
_No response_
### Diagnostics information
_No response_
### Example YAML snippet
```yaml
```
### Anything in the logs that might be useful for us?
```txt
2025-03-22 16:23:36.142 ERROR (MainThread) [homeassistant.config_entries] Error setting up entry ****@****.com for roborock
Traceback (most recent call last):
File "/usr/local/lib/python3.13/site-packages/aiohttp/connector.py", line 1116, in _wrap_create_connection
sock = await aiohappyeyeballs.start_connection(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
...<5 lines>...
)
^
File "/usr/local/lib/python3.13/site-packages/aiohappyeyeballs/impl.py", line 73, in start_connection
sock = await _connect_sock(
^^^^^^^^^^^^^^^^^^^^
...<6 lines>...
)
^
File "/usr/local/lib/python3.13/site-packages/aiohappyeyeballs/impl.py", line 208, in _connect_sock
await loop.sock_connect(sock, address)
File "/usr/local/lib/python3.13/asyncio/selector_events.py", line 641, in sock_connect
return await fut
^^^^^^^^^
asyncio.exceptions.CancelledError
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.13/site-packages/aiohttp/client.py", line 703, in _request
conn = await self._connector.connect(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
req, traces=traces, timeout=real_timeout
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/usr/local/lib/python3.13/site-packages/aiohttp/connector.py", line 548, in connect
proto = await self._create_connection(req, traces, timeout)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.13/site-packages/aiohttp/connector.py", line 1056, in _create_connection
_, proto = await self._create_direct_connection(req, traces, timeout)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.13/site-packages/aiohttp/connector.py", line 1411, in _create_direct_connection
raise last_exc
File "/usr/local/lib/python3.13/site-packages/aiohttp/connector.py", line 1380, in _create_direct_connection
transp, proto = await self._wrap_create_connection(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
...<7 lines>...
)
^
File "/usr/local/lib/python3.13/site-packages/aiohttp/connector.py", line 1113, in _wrap_create_connection
async with ceil_timeout(
~~~~~~~~~~~~^
timeout.sock_connect, ceil_threshold=timeout.ceil_threshold
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
):
^
File "/usr/local/lib/python3.13/asyncio/timeouts.py", line 116, in __aexit__
raise TimeoutError from exc_val
TimeoutError
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/src/homeassistant/homeassistant/config_entries.py", line 753, in __async_setup_with_context
result = await component.async_setup_entry(hass, self)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/src/homeassistant/homeassistant/components/roborock/__init__.py", line 50, in async_setup_entry
home_data = await api_client.get_home_data_v2(user_data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.13/site-packages/roborock/web_api.py", line 317, in get_home_data_v2
home_id = await self._get_home_id(user_data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.13/site-packages/roborock/web_api.py", line 274, in _get_home_id
home_id_response = await home_id_request.request(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
...<3 lines>...
)
^
File "/usr/local/lib/python3.13/site-packages/roborock/web_api.py", line 454, in request
async with session.request(method, _url, params=params, data=data, headers=_headers, json=json) as resp:
~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.13/site-packages/aiohttp/client.py", line 1425, in __aenter__
self._resp: _RetType = await self._coro
^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.13/site-packages/aiohttp/client.py", line 707, in _request
raise ConnectionTimeoutError(
f"Connection timeout to host {url}"
) from exc
aiohttp.client_exceptions.ConnectionTimeoutError: Connection timeout to host https://cniot.roborock.com/api/v1/getHomeDetail
2025-03-22 16:26:07.458 WARNING (MainThread) [homeassistant.components.roborock] Not setting up P20 Pro because the coordinator failed to get data for the first time using the offline client Please create an issue with the following error included: Failed to get map data: {err}
```
### Additional information
_No response_ | open | 2025-03-22T15:51:13Z | 2025-03-23T15:32:39Z | https://github.com/home-assistant/core/issues/141120 | [
"needs-more-information",
"integration: roborock"
] | giorgiopogliani | 8 |
ray-project/ray | deep-learning | 50,883 | [Serve] Ray Serve APIs for users to define when the Ray Serve applications are ready to serve requests | ### Description
It'd be useful for the Ray Serve API to allow users to configure settings such as custom timeouts for when applications are ready to serve requests.
### Use case
This would be useful for scenarios such as: https://github.com/ray-project/enhancements/pull/58#discussion_r1968439611, where a large number of non-declaratively created applications which frequently update may make it difficult for the controller to find a state where all Serve apps are in a "Ready" state. | open | 2025-02-25T03:39:23Z | 2025-02-25T17:29:47Z | https://github.com/ray-project/ray/issues/50883 | [
"enhancement",
"triage",
"serve"
] | ryanaoleary | 0 |
babysor/MockingBird | deep-learning | 683 | Asking the experts: it currently takes one minute per run, how can I speed it up? (screenshots attached) | 

| closed | 2022-07-27T18:54:54Z | 2022-07-28T11:36:03Z | https://github.com/babysor/MockingBird/issues/683 | [] | yunqi777 | 1 |
gradio-app/gradio | deep-learning | 10,867 | Documentation for gr.Interface cache_examples is incorrect | ### Describe the bug
The documentation for gr.Interface cache_examples says:
> In HuggingFace Spaces, this parameter is True (as long as `fn` and `outputs` are also provided). The default option otherwise is False.
But in my experience HuggingFace Spaces uses lazy caching by default.
### Have you searched existing issues? 🔎
- [x] I have searched and found no existing issues
### Reproduction
NA
### Screenshot
_No response_
### Logs
```shell
```
### System Info
```shell
NA
```
### Severity
I can work around it | open | 2025-03-24T14:42:07Z | 2025-03-24T18:15:57Z | https://github.com/gradio-app/gradio/issues/10867 | [
"bug"
] | edmcman | 1 |
encode/databases | sqlalchemy | 369 | CockroachDB support | Hey, guys!
Do you have any plans to support CockroachDB, like a `sqlalchemy-cockroachdb`? | closed | 2021-08-17T09:14:27Z | 2021-08-26T12:18:45Z | https://github.com/encode/databases/issues/369 | [] | unittolabs | 1 |
aimhubio/aim | tensorflow | 2,592 | Add view/hide run via legend in metrics explorer | ## 🚀 Feature
Add the possibility to click on the elements of the legend in the metric explorers so that one can quickly view/hide the related runs or charts.
### Motivation
Quick exploration and visualisation of results are essential for a good metric explorer, especially when there are large numbers of hyperparameters. At the moment, to achieve this one has to either query the results again or go through the entire table and select the runs to hide or view one by one.
### Pitch
As attached in the screenshots it would be useful to have the elements of the legends for all subgroups (colours, charts and strokes) to be clickable in order to hide or view the correspondent run/chart from the panel.
### Additional context

| open | 2023-03-15T17:58:37Z | 2023-03-16T07:53:41Z | https://github.com/aimhubio/aim/issues/2592 | [
"type / enhancement",
"area / Web-UI"
] | dngfra | 1 |
serengil/deepface | deep-learning | 663 | 'deepface.commons.functions' has no attribute 'preprocess_face' | I'm trying to call DeepFace.stream() (library 0.0.78 installed from pip)
but get an error message
```
AttributeError: module 'deepface.commons.functions' has no attribute 'preprocess_face'
``` | closed | 2023-02-07T15:29:39Z | 2023-02-07T15:31:00Z | https://github.com/serengil/deepface/issues/663 | [
"bug"
] | noonv | 1 |
plotly/dash | jupyter | 2,517 | [BUG] Dash Design Kit's ddk.Notification does not render correctly on React 18.2.0 | **Describe your context**
Please provide us your environment, so we can easily reproduce the issue.
- replace the result of `pip list | grep dash` below
```
dash 2.9.3
dash-core-components 2.0.0
dash-html-components 2.0.0
dash-table 5.0.0
dash_cytoscape 0.2.0
```
- if frontend related, tell us your Browser, Version and OS
- OS: MacOS[e.g. iOS]
- Browser Firefox, Chrome
- Version [e.g. 22]
**Describe the bug**
ddk.Notification rendering is inconsistent: it does not render until the next UI event occurs. This is incredibly buggy on React 18.
Reproduction Steps on this Example:
1. Click on "Click Me"
2. Observe that "was inserted!" is added to the DOM, but the ddk.Notification does not show up.
```python
import dash
import dash_design_kit as ddk
from dash import Dash, dcc, html, Input, Output
app = Dash(__name__)
# Enable react 18
# See https://github.com/plotly/dash/pull/2260/files
dash._dash_renderer._set_react_version("18.2.0")
app.layout = ddk.App(
children=[
ddk.Header(ddk.Title("Hi")),
html.H1(children="Hello Dash"),
html.Button(id="click", children="Click Me!"),
html.Div(id="stuff"),
]
)
@app.callback(
Output("stuff", "children"), Input("click", "n_clicks"), prevent_initial_call=True
)
def insert_notification(n_clicks):
return html.Div(
children=[
html.Div("was inserted!"),
ddk.Notification(
type="danger",
title=f"n_clicks: {n_clicks}",
timeout=-1,
),
]
)
if __name__ == "__main__":
app.run_server(debug=True)
```
**Expected behavior**
It renders immediately on each key press.
**Screenshots**
I've included a screencapture of this behavior comparing React 16 and React 18.
React 16: https://user-images.githubusercontent.com/1694040/235269740-57d35c94-530e-432f-b052-0b7bf7de4302.mov
React 18: https://user-images.githubusercontent.com/1694040/235269470-159ee33b-994a-4ba3-a5a2-ae42eff829a5.mov
| closed | 2023-04-28T23:34:28Z | 2024-05-06T14:16:28Z | https://github.com/plotly/dash/issues/2517 | [] | rymndhng | 6 |
benbusby/whoogle-search | flask | 294 | [QUESTION] IP address query | This is just a random thing that popped into my mind.
When you usually google "my ip" there is a snippet on top of the results page that shows the current IP address. When you search that on whoogle, nothing shows up. Is that a limitation from google?
Was just curious in finding out from which IP results are fetched from google. Heroku normally has a pool of ip addresses. | closed | 2021-04-22T00:54:09Z | 2021-04-27T01:33:36Z | https://github.com/benbusby/whoogle-search/issues/294 | [
"question"
] | milachevalier | 2 |
plotly/dash-component-boilerplate | dash | 120 | Fix python-3.6 CircleCI tests | Seems like it's [currently failing](https://app.circleci.com/pipelines/github/plotly/dash-component-boilerplate/208/workflows/27e8f11f-098c-45b3-a776-13792dec311b/jobs/521):
```
#!/bin/bash -eo pipefail
. venv/bin/activate
pip install -r tests/requirements.txt --quiet
pytest-cookies 0.5.1 has requirement pytest<6.0.0,>=3.3.0, but you'll have pytest 6.2.2 which is incompatible.
Command "/home/circleci/project/venv/bin/python -u -c "import setuptools, tokenize;__file__='/tmp/pip-install-7nfc7x4u/cryptography/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmp/pip-record-fhb8jvs6/install-record.txt --single-version-externally-managed --compile --install-headers /home/circleci/project/venv/include/site/python3.6/cryptography" failed with error code 1 in /tmp/pip-install-7nfc7x4u/cryptography/
You are using pip version 18.1, however version 21.0.1 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
Exited with code exit status 1
CircleCI received exit code 1
``` | open | 2021-02-22T22:39:16Z | 2021-02-23T03:49:51Z | https://github.com/plotly/dash-component-boilerplate/issues/120 | [] | xhluca | 3 |
allenai/allennlp | data-science | 5,457 | AllenNLP Models 2.8 | I'm trying to upgrade AllenNLP to 2.8.0, but since AllenNLP Models wasn't upgraded accordingly, I can't keep the new version. Isn't the models project going to be released as 2.8.0 too?
Thanks! | closed | 2021-11-03T17:27:24Z | 2021-11-06T00:59:00Z | https://github.com/allenai/allennlp/issues/5457 | [
"bug"
] | pvcastro | 1 |
babysor/MockingBird | deep-learning | 592 | Error when continuing synthesizer training from the author's checkpoint | size mismatch for gst.stl.attention.W_query.weight: copying a param with shape torch.Size([512, 256]) from checkpoint, the shape in current model is torch.Size([512, 512]).
I was continuing training from my_run8_25k.pt.
Continuing from pretrained-11-7-21_75k.pt works fine. | closed | 2022-05-30T02:11:48Z | 2022-07-09T06:48:49Z | https://github.com/babysor/MockingBird/issues/592 | [] | Noct-Cp | 4 |
scikit-image/scikit-image | computer-vision | 7,107 | Morphological operations for label images | Hi all,
I’ve recently implemented some simple morphological operations for label images using scikit-image. Specifically, I’ve implemented erosion, dilation, opening and closing, taking into account potential overlaps among labels and treating all label values equally. To illustrate, I’ve extracted some code from my private repo to this gist: https://gist.github.com/jwindhager/71fa5b149e85d61f83c9613e57d5b3f4
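For concreteness, the dilation part can be sketched roughly like this (a minimal sketch assuming SciPy; the function name and the nearest-label tie-breaking are my own illustrative choices, not necessarily what the gist does):

```python
import numpy as np
from scipy import ndimage as ndi

def dilate_labels(label_image, distance=1.0):
    # Grow every label by up to `distance` pixels. Where two labels would
    # collide, the nearest label (by Euclidean distance) wins, so all label
    # values are treated equally rather than by numeric order.
    edt, indices = ndi.distance_transform_edt(
        label_image == 0, return_indices=True
    )
    out = label_image[tuple(indices)]  # nearest-label image
    out[edt > distance] = 0            # keep only pixels within reach
    return out

labels = np.zeros((5, 5), dtype=int)
labels[1, 1] = 1
labels[3, 3] = 2
print(dilate_labels(labels, distance=1))
```

Erosion could be built analogously by distance-transforming each label's complement, and opening/closing would then compose the two.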
Would there be interest in adding such functionality to scikit-image? I would be happy to make a more polished PR, but wanted to check here first, since this would be my first contribution to this project. | open | 2023-08-26T00:03:38Z | 2024-02-27T02:23:01Z | https://github.com/scikit-image/scikit-image/issues/7107 | [
":pray: Feature request"
] | jwindhager | 6 |
flasgger/flasgger | flask | 47 | Authorization Header + UI input view | Can anyone help me with how to implement custom headers, e.g. an "Authorization" header, in requests, plus add a UI element for it with flasgger?
I need to use JWT with some of the endpoints.
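For what it's worth, Swagger 2.0 (which flasgger renders) can declare a header-based scheme via `securityDefinitions`, and Swagger UI then shows an input for it. A hedged config sketch, assuming it is supplied through flasgger's template/config (the scheme name `Bearer` here is just a placeholder):

```yaml
securityDefinitions:
  Bearer:
    type: apiKey
    name: Authorization
    in: header
security:
  - Bearer: []
```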
I was able to achieve this by modifying flasgger source code, but that shouldn't be the way! | open | 2017-01-01T15:56:58Z | 2023-10-08T06:27:20Z | https://github.com/flasgger/flasgger/issues/47 | [
"enhancement",
"help wanted",
"hacktoberfest"
] | saeid | 4 |
jofpin/trape | flask | 271 | Dear Sensei! | I have been trying to run Trape one way or another; I need it badly!
Today, after installing it again and running requirements.txt, utils, and db, I got these results.
Note: I am running it simultaneously on a VM and a physical PC.
Both give the same results, even after changing rw permissions.
I know you are extremely busy, and your v3 is much acclaimed, but in the meantime, please help us get it back up and running.
Please ....SS.
_
| |_ ____ ____ ____ ____
| _) / ___) _ | _ \ / _ )
| |__| | ( ( | | | | ( (/ /
\___)_| \_||_| ||_/ \____)
|_| 2018 by Jose Pino (@jofpin)
-----------------------------------------------
People tracker on internet for OSINT research |=-
-----------------------------------------------
| v2.0 |
--------
@-=[ UPDATES: RUNNING RECENT VERSION
LOCAL INFORMATION
-------------------
>-=[ Lure for the users: http://192.168.100.28:8080/www.whatsapp.com
>-=[ Your REST API path: http://192.168.100.28:8080/016047da189a.js
>-=[ Control Panel Link: http://127.0.0.1:8080/ngrok
>-=[ Your Access key: c27af40ec449f6652bd633e9
Process Process-1:
Traceback (most recent call last):
File "/usr/lib/python2.7/multiprocessing/process.py", line 267, in _bootstrap
self.run()
File "/usr/lib/python2.7/multiprocessing/process.py", line 114, in run
self._target(*self._args, **self._kwargs)
File "/whatsapp/trape/core/ngrok.py", line 81, in start_ngrok
result = subprocess.check_output([str_ngrok, "http", port])
File "/usr/lib/python2.7/subprocess.py", line 223, in check_output
raise CalledProcessError(retcode, cmd, output=output)
CalledProcessError: Command '['./ngrok', 'http', '8080']' returned non-zero exit status 1
PUBLIC INFORMATION
-------------------
>-=[ Public lure: http://ba415709b359.ngrok.io/www.whatsapp.com
>-=[ Control Panel link: http://ba415709b359.ngrok.io/ngrok
[>] Start time: 2020-11-06 - 02:21:32
[?] Do not forget to close Trape, after use. Press Control C
[¡] Waiting for the users to fall...
^[[A^[[A^[[A^[[A^[[A^[[A^[[A^[[A^[[A^[[A^[[A^[[A^[[B^[[B^[[B^[[B^[[B^[[B^[[B^[[B^[[B^[[B^[[B^[[B[2020-11-06 02:24:06,775] ERROR in app: Exception on /www.whatsapp.com [GET]
Traceback (most recent call last):
File "/whatsapp/.local/lib/python2.7/site-packages/flask/app.py", line 2447, in wsgi_app
response = self.full_dispatch_request()
File "/whatsapp/.local/lib/python2.7/site-packages/flask/app.py", line 1952, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/whatsapp/.local/lib/python2.7/site-packages/flask_cors/extension.py", line 165, in wrapped_function
return cors_after_request(app.make_response(f(*args, **kwargs)))
File "/whatsapp/.local/lib/python2.7/site-packages/flask/app.py", line 1821, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/whatsapp/.local/lib/python2.7/site-packages/flask/app.py", line 1950, in full_dispatch_request
rv = self.dispatch_request()
File "/whatsapp/.local/lib/python2.7/site-packages/flask/app.py", line 1936, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/whatsapp/trape/core/user.py", line 69, in homeVictim
html = assignScripts(victim_inject_code(opener.open(trape.url_to_clone).read(), 'payload', trape.url_to_clone, trape.gmaps, trape.ipinfo))
File "/whatsapp/trape/core/dependence/urllib2.py", line 406, in open
response = meth(req, response)
File "/whatsapp/trape/core/dependence/urllib2.py", line 519, in http_response
'http', request, response, code, msg, hdrs)
File "/whatsapp/trape/core/dependence/urllib2.py", line 444, in error
return self._call_chain(*args)
File "/whatsapp/trape/core/dependence/urllib2.py", line 378, in _call_chain
result = func(*args)
File "/whatsapp/trape/core/dependence/urllib2.py", line 527, in http_error_default
raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
HTTPError: HTTP Error 400: Bad Request
[2020-11-06 02:24:07,145] ERROR in app: Exception on /www.whatsapp.com [GET]
Traceback (most recent call last):
File "/whatsapp/.local/lib/python2.7/site-packages/flask/app.py", line 2447, in wsgi_app
response = self.full_dispatch_request()
File "/whatsapp/.local/lib/python2.7/site-packages/flask/app.py", line 1952, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/whatsapp/.local/lib/python2.7/site-packages/flask_cors/extension.py", line 165, in wrapped_function
return cors_after_request(app.make_response(f(*args, **kwargs)))
File "/whatsapp/.local/lib/python2.7/site-packages/flask/app.py", line 1821, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/whatsapp/.local/lib/python2.7/site-packages/flask/app.py", line 1950, in full_dispatch_request
rv = self.dispatch_request()
File "/whatsapp/.local/lib/python2.7/site-packages/flask/app.py", line 1936, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/whatsapp/trape/core/user.py", line 69, in homeVictim
html = assignScripts(victim_inject_code(opener.open(trape.url_to_clone).read(), 'payload', trape.url_to_clone, trape.gmaps, trape.ipinfo))
File "/whatsapp/trape/core/dependence/urllib2.py", line 406, in open
response = meth(req, response)
File "/whatsapp/trape/core/dependence/urllib2.py", line 519, in http_response
'http', request, response, code, msg, hdrs)
File "/whatsapp/trape/core/dependence/urllib2.py", line 444, in error
return self._call_chain(*args)
File "/whatsapp/trape/core/dependence/urllib2.py", line 378, in _call_chain
result = func(*args)
File "/whatsapp/trape/core/dependence/urllib2.py", line 527, in http_error_default
raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
HTTPError: HTTP Error 400: Bad Request
| open | 2020-11-06T08:34:09Z | 2020-11-07T01:36:57Z | https://github.com/jofpin/trape/issues/271 | [] | Sallysheridan | 6 |
miguelgrinberg/Flask-SocketIO | flask | 1,345 | Problem emitting from one namespace to another | Hi,
I have 2 clients connecting to a server using WebSockets. One client connects to the server from a Python script on the "edge" namespace; the other connects from a ReactJS frontend on the "ui" namespace. For good measure, each also joins its own respective room.
```python
@socketio.on('connect', namespace='/ui')
def client_connect():
join_room("ui")
print("UI connecting")
emit('ui', {'message': 'Server connected 2'}, namespace='/ui')
@socketio.on('connect', namespace='/edge')
def edge_connect():
print("EDGE connecting")
if current_user.is_authenticated:
print("EDGE authenticated")
join_room("edge")
```
When the client connects via the UI namespace, the message the code emits is received by the ReactJS ui, and the same with the EDGE code. The problem I am having is emitting a message from the 'edge' namespace to the 'ui' namespace, as illustrated in the code (emit is the last line):
```python
@socketio.on('client_data', namespace='/edge')
# @authenticated_only
def client_message(message):
print("EDGE Received.. : " + request.sid + " " + message['quid'])
emit('ui', {'message': 'edge data received'}, namespace='/ui')
```
It would appear that the server can emit to both connected clients respectively, but the problem is sending between the namespaces: the UI client does not receive the message from the EDGE namespace.
Worst case, I could consider 2 separate servers, each dealing with its respective namespace clients, and then exchange messages between them using something like RabbitMQ/Kafka, but for my POC that seems a bit OTT.
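Not an authoritative answer, but a hedged sketch of one thing to check: inside a handler, a bare `emit` addresses the calling client's connection, which has no session in `/ui`, so the emit into the other namespace would need to be a broadcast (or target the `ui` room that UI clients join on connect):

```python
# Sketch only (assumes Flask-SocketIO's broadcast/room semantics):
emit('ui', {'message': 'edge data received'}, namespace='/ui', broadcast=True)
# or, addressing the room joined in client_connect():
emit('ui', {'message': 'edge data received'}, namespace='/ui', room='ui')
```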
Any suggestions would be appreciated | closed | 2020-07-31T15:16:55Z | 2020-08-04T11:34:51Z | https://github.com/miguelgrinberg/Flask-SocketIO/issues/1345 | [
"question"
] | dian-workz | 2 |
2noise/ChatTTS | python | 900 | Error when generating speech on Windows 11 with Python 3.11: The expanded size of the tensor (42) must match the existing size (41) at non-singleton dimension 3. Target sizes: [2, 12, 1, 42]. Tensor sizes: [2, 1, 1, 41] | ```
[INFO] #2 download copy: '[asset/gpt/config.json asset/gpt/model.safetensors]'.
[WARNING] #2.2 skip exist file D:\software\ChatTTS/asset/gpt/model.safetensors
[WARNING] #2.1 skip exist file D:\software\ChatTTS/asset/gpt/config.json
[INFO] #3 open target folder 'D:\software\ChatTTS/asset/tokenizer'.
[INFO] #3 download copy: '[asset/tokenizer/special_tokens_map.json asset/tokenizer/tokenizer_config.json asset/tokenizer/tokenizer.json]'.
[WARNING] #3.3 skip exist file D:\software\ChatTTS/asset/tokenizer/tokenizer.json
[WARNING] #3.1 skip exist file D:\software\ChatTTS/asset/tokenizer/special_tokens_map.json
[WARNING] #3.2 skip exist file D:\software\ChatTTS/asset/tokenizer/tokenizer_config.json
[INFO] all download tasks finished.
[+0800 20250218 11:39:50] [INFO] ChatTTS | dl | checking assets...
[+0800 20250218 11:39:51] [INFO] ChatTTS | dl | all assets are already latest.
[+0800 20250218 11:39:51] [INFO] ChatTTS | core | use device cuda:0
[+0800 20250218 11:39:51] [INFO] ChatTTS | core | vocos loaded.
[+0800 20250218 11:39:52] [INFO] ChatTTS | core | dvae loaded.
[+0800 20250218 11:39:52] [INFO] ChatTTS | core | embed loaded.
[+0800 20250218 11:39:52] [INFO] ChatTTS | core | gpt loaded.
[+0800 20250218 11:39:52] [INFO] ChatTTS | core | speaker loaded.
[+0800 20250218 11:39:52] [INFO] ChatTTS | core | decoder loaded.
[+0800 20250218 11:39:52] [INFO] ChatTTS | core | tokenizer loaded.
[+0800 20250218 11:39:52] [WARN] WebUI | funcs | Package nemo_text_processing not found!
[+0800 20250218 11:39:52] [WARN] WebUI | funcs | Run: conda install -c conda-forge pynini=2.1.5 && pip install nemo_text_processing
[+0800 20250218 11:39:52] [WARN] WebUI | funcs | Package WeTextProcessing not found!
[+0800 20250218 11:39:52] [WARN] WebUI | funcs | Run: conda install -c conda-forge pynini=2.1.5 && pip install WeTextProcessing
[+0800 20250218 11:39:52] [INFO] WebUI | webui | Models loaded successfully.
信息: 用提供的模式无法找到文件。
* Running on local URL: http://0.0.0.0:8080
To create a public link, set `share=True` in `launch()`.
[+0800 20250218 11:40:01] [INFO] ChatTTS | core | split text into 2 parts
text: 0%|▏ | 1/384(max) [00:00, 3.50it/s]Traceback (most recent call last):
File "C:\Users\test2\AppData\Roaming\Python\Python312\site-packages\gradio\queueing.py", line 625, in process_events
response = await route_utils.call_process_api(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\test2\AppData\Roaming\Python\Python312\site-packages\gradio\route_utils.py", line 322, in call_process_api
output = await app.get_blocks().process_api(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\test2\AppData\Roaming\Python\Python312\site-packages\gradio\blocks.py", line 2098, in process_api
result = await self.call_function(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\test2\AppData\Roaming\Python\Python312\site-packages\gradio\blocks.py", line 1645, in call_function
prediction = await anyio.to_thread.run_sync( # type: ignore
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\test2\AppData\Roaming\Python\Python312\site-packages\anyio\to_thread.py", line 56, in run_sync
return await get_async_backend().run_sync_in_worker_thread(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\test2\AppData\Roaming\Python\Python312\site-packages\anyio\_backends\_asyncio.py", line 2461, in run_sync_in_worker_thread
return await future
^^^^^^^^^^^^
File "C:\Users\test2\AppData\Roaming\Python\Python312\site-packages\anyio\_backends\_asyncio.py", line 962, in run
result = context.run(func, *args)
^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\test2\AppData\Roaming\Python\Python312\site-packages\gradio\utils.py", line 883, in wrapper
response = f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "D:\software\ChatTTS\examples\web\funcs.py", line 148, in refine_text
text = chat.infer(
^^^^^^^^^^^
File "C:\Users\test2\AppData\Roaming\Python\Python312\site-packages\ChatTTS\core.py", line 268, in infer
return next(res_gen)
^^^^^^^^^^^^^
File "C:\Users\test2\AppData\Roaming\Python\Python312\site-packages\ChatTTS\core.py", line 418, in _infer
refined = self._refine_text(
^^^^^^^^^^^^^^^^^^
File "C:\Users\test2\AppData\Roaming\Python\Python312\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\test2\AppData\Roaming\Python\Python312\site-packages\ChatTTS\core.py", line 728, in _refine_text
result = next(
^^^^^
File "C:\Users\test2\AppData\Roaming\Python\Python312\site-packages\torch\utils\_contextlib.py", line 36, in generator_context
response = gen.send(None)
^^^^^^^^^^^^^^
File "C:\Users\test2\AppData\Roaming\Python\Python312\site-packages\ChatTTS\model\gpt.py", line 415, in generate
outputs: BaseModelOutputWithPast = self.gpt(
^^^^^^^^^
File "C:\Users\test2\AppData\Roaming\Python\Python312\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\test2\AppData\Roaming\Python\Python312\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\test2\AppData\Roaming\Python\Python312\site-packages\transformers\models\llama\modeling_llama.py", line 594, in forward
layer_outputs = decoder_layer(
^^^^^^^^^^^^^^
File "C:\Users\test2\AppData\Roaming\Python\Python312\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\test2\AppData\Roaming\Python\Python312\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\test2\AppData\Roaming\Python\Python312\site-packages\transformers\models\llama\modeling_llama.py", line 336, in forward
hidden_states, self_attn_weights = self.self_attn(
^^^^^^^^^^^^^^^
File "C:\Users\test2\AppData\Roaming\Python\Python312\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\test2\AppData\Roaming\Python\Python312\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\test2\AppData\Roaming\Python\Python312\site-packages\transformers\models\llama\modeling_llama.py", line 292, in forward
attn_output, attn_weights = attention_interface(
^^^^^^^^^^^^^^^^^^^^
File "C:\Users\test2\AppData\Roaming\Python\Python312\site-packages\transformers\integrations\sdpa_attention.py", line 53, in sdpa_attention_forward
attn_output = torch.nn.functional.scaled_dot_product_attention(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: The expanded size of the tensor (42) must match the existing size (41) at non-singleton dimension 3. Target sizes: [2, 12, 1, 42]. Tensor sizes: [2, 1, 1, 41]
``` | closed | 2025-02-18T03:42:59Z | 2025-02-18T06:18:45Z | https://github.com/2noise/ChatTTS/issues/900 | [
"bug",
"documentation"
] | 625093700 | 1 |
flasgger/flasgger | rest-api | 194 | Request: release a new version (0.8.2?) with the latest bug fixes | Please publish a new release of Flasgger. I am currently relying on a local build of this repository at a fixed commit, and it would be nice to resume using an official release. Thank you! | closed | 2018-04-26T18:53:39Z | 2018-04-27T19:31:05Z | https://github.com/flasgger/flasgger/issues/194 | [] | abstiles | 2 |
FactoryBoy/factory_boy | django | 457 | Lost Dict params when using Traits | For explanation purposes, I've created some simple code:
```python
import factory


class Y:
def __init__(self, a: int, b: str):
self.a = a
self.b = b
class X:
def __init__(self, y: {str:[Y]}):
self.y = y
class YItemFactory(factory.Factory):
class Meta:
model = Y
a = 1
b = 'c'
class XItemFactory(factory.Factory):
class Meta:
model = X
y = factory.Dict({
'i': factory.List([factory.SubFactory(YItemFactory)]),
'j': factory.List([factory.SubFactory(YItemFactory)]),
})
class Params:
u = factory.Trait(
y__i=factory.List([
factory.SubFactory(YItemFactory, a=2)
]),
)
```
Now, creating instances of the X model using the u param loses part of the given dictionary. That is,
`print(XItemFactory.create().y)`
gives `{'j': [<tests.entity.Y object at 0x7f9887f6e710>]}`, while
`print(XItemFactory.create(u=True).y)`
gives `{'i': [<tests.entity.Y object at 0x7f9887f6eac8>], 'j': [<tests.entity.Y object at 0x7f9887f6ecc0>]}`
| open | 2018-02-19T11:04:38Z | 2023-02-24T18:00:24Z | https://github.com/FactoryBoy/factory_boy/issues/457 | [
"Bug"
] | marjanoitaljano | 2 |
ultralytics/ultralytics | computer-vision | 18,839 | Optimizer set to SGD: the SGD hyperparameters are not effective | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar bug report.
### Ultralytics YOLO Component
_No response_
### Bug
The optimizer was set to SGD to fine-tune the model, with some hyperparameters set, but these hyperparameters did not take effect, as shown on the left side of the figure below; the right side shows the case when using auto with SGD selected (normal).



### Environment
Ultralytics 8.3.62 🚀 Python-3.8.13 torch-2.0.0+cu117
CUDA:4 (NVIDIA GeForce RTX 3090, 24260MiB)
CUDA:5 (NVIDIA GeForce RTX 3090, 24260MiB)
CUDA:6 (NVIDIA GeForce RTX 3090, 24260MiB)
CUDA:7 (NVIDIA GeForce RTX 3090, 24260MiB)
### Minimal Reproducible Example
```yaml
# # SGD
# # lr0: 0.01 # (float) initial learning rate (i.e. SGD=1E-2, Adam=1E-3)
lr0: 0.000003467 # (float) initial learning rate (i.e. SGD=1E-2) # epoch=100->3.467e-06
lrf: 0.01 # (float) final learning rate (lr0 * lrf)
momentum: 0.937 # (float) SGD momentum/Adam beta1
weight_decay: 0.0005 # (float) optimizer weight decay 5e-4
warmup_epochs: 3.0 # (float) warmup epochs (fractions ok)
warmup_momentum: 0.8 # (float) warmup initial momentum
warmup_bias_lr: 0.1 # (float) warmup initial bias lr
```
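For reference, a minimal sketch of how these overrides would be passed in code (the model and dataset names are placeholders; with `optimizer="SGD"` the values below should be applied directly rather than chosen by the `auto` heuristic):

```python
# Hypothetical fine-tuning call; "yolo11n.pt" and "coco8.yaml" are placeholders.
overrides = {
    "optimizer": "SGD",
    "lr0": 3.467e-06,        # initial learning rate
    "lrf": 0.01,             # final LR fraction (lr0 * lrf)
    "momentum": 0.937,
    "weight_decay": 0.0005,
    "warmup_epochs": 3.0,
}

# from ultralytics import YOLO
# model = YOLO("yolo11n.pt")
# model.train(data="coco8.yaml", epochs=100, **overrides)

print(overrides["optimizer"], overrides["lr0"])
```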
### Additional
_No response_
### Are you willing to submit a PR?
- [ ] Yes I'd like to help by submitting a PR! | open | 2025-01-23T07:56:36Z | 2025-01-23T08:26:30Z | https://github.com/ultralytics/ultralytics/issues/18839 | [
"bug"
] | bluceliuljx | 2 |
FactoryBoy/factory_boy | django | 705 | Fix simple typo: charactes -> characters | # Issue Type
[x] Bug (Typo)
# Steps to Replicate
1. Examine docs/fuzzy.rst.
2. Search for `charactes`.
# Expected Behaviour
1. Should read `characters`.
| closed | 2020-02-26T07:18:08Z | 2020-02-26T10:37:20Z | https://github.com/FactoryBoy/factory_boy/issues/705 | [] | timgates42 | 1 |
tensorpack/tensorpack | tensorflow | 670 | Question about crop_and_resize in Faster RCNN | Hi, I have a simple question about your implementation of crop_and_resize. When you are deciding the initial sampling point, you used:
> nx0 = x0_box + spacing/2 - 0.5
https://github.com/tensorpack/tensorpack/blob/f417c49fe45759fb2c69cafcabe6613ea85ec469/examples/FasterRCNN/model_box.py#L105
What I don't understand is that why did you minus 0.5 here? One bad case I have encountered is that, when the size of ROI is quite small and the ROI locates near the boundary of the feature map (e.g., a ROI with x0=0, y0=0, w=0.5, h=0.5), -0.5 will result in a negative nx0, hence the output of the ROIAlign could be filled with zeros (tf.image.crop_and_resize will extrapolate the input feature map with zeros if the ROI is outside the feature map).
Although you commented that
> "-0.5 because bilinear sample assumes floating point coordinate (0.0, 0.0) is the same as pixel value (0, 0)"
I didn't fully understand it. Could you please explain why it is beneficial to subtract 0.5 here?
Thank you, looking forward to your reply.
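To make the boundary case concrete, here is a small numeric sketch of that sampling formula for a tiny ROI at the corner of the feature map (the numbers are illustrative):

```python
# Tiny ROI at the feature-map corner, as described above.
x0_box = 0.0
w_box = 0.5
crop_size = 7                      # illustrative ROIAlign output resolution
spacing = w_box / crop_size
nx0 = x0_box + spacing / 2 - 0.5   # the formula in question
print(nx0)                         # about -0.464: left of the map, so
                                   # tf.image.crop_and_resize pads with zeros
```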
| closed | 2018-02-23T09:23:58Z | 2019-03-10T10:30:55Z | https://github.com/tensorpack/tensorpack/issues/670 | [
"examples",
"upstream issue"
] | jiang1st | 6 |
ivy-llc/ivy | pytorch | 27,945 | Fixed complex dtype not supported at jax and troch backend | closed | 2024-01-17T19:40:44Z | 2024-01-17T23:08:02Z | https://github.com/ivy-llc/ivy/issues/27945 | [
"Sub Task"
] | samthakur587 | 0 | |
LibrePhotos/librephotos | django | 1,113 | [meta] invalidate cache after action | This issue is for refactoring to RTK tracking only. It will be closed when refactoring is complete. | open | 2024-01-02T15:56:34Z | 2025-01-04T13:41:00Z | https://github.com/LibrePhotos/librephotos/issues/1113 | [
"bug"
] | sickelap | 2 |
FactoryBoy/factory_boy | django | 505 | Add functionality to mute specific receivers of signals (not just all signals) | #### The problem
As described at https://factoryboy.readthedocs.io/en/latest/orms.html#disabling-signals, it is possible to mute all signals of a certain type using the `factory.django.mute_signals` decorator. In my use case, however, I have a specific receiver of `post_save` signals which calls an external API, and I would like to mute only this receiver (and not all `post_save` signals).
#### Proposed solution
We could write a `mute_receivers` decorator/context manager along the lines of [`mute_signals`](https://github.com/FactoryBoy/factory_boy/blob/2d735767b7f3e1f9adfc3f14c28eeef7acbf6e5a/factory/django.py#L256) which does this.
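A minimal sketch of what such a helper could look like. It works with any object exposing `connect()`/`disconnect()`, such as `django.dispatch.Signal`; Django's `dispatch_uid` and weak-reference subtleties are deliberately ignored here, and the demo uses a tiny stand-in signal so the sketch runs without Django installed:

```python
from contextlib import contextmanager

@contextmanager
def mute_receivers(signal, *receivers):
    # Temporarily disconnect the given receivers; reconnect on exit.
    for r in receivers:
        signal.disconnect(r)
    try:
        yield
    finally:
        for r in receivers:
            signal.connect(r)


class FakeSignal:  # minimal stand-in for django.dispatch.Signal
    def __init__(self):
        self.receivers = []
    def connect(self, r):
        self.receivers.append(r)
    def disconnect(self, r):
        self.receivers.remove(r)
    def send(self):
        for r in self.receivers:
            r()

calls = []
call_external_api = lambda: calls.append("api")
sig = FakeSignal()
sig.connect(call_external_api)

with mute_receivers(sig, call_external_api):
    sig.send()          # receiver muted: nothing recorded
sig.send()              # reconnected: records "api"
print(calls)            # ['api']
```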
| open | 2018-08-15T22:04:07Z | 2024-10-16T12:55:51Z | https://github.com/FactoryBoy/factory_boy/issues/505 | [
"Feature"
] | khpeek | 5 |
pydata/bottleneck | numpy | 143 | package bottleneck 1.1.0 for debian | @toobaz, do you plan to add bottleneck 1.1.0 to debian?
| closed | 2016-09-14T19:53:32Z | 2016-10-14T22:22:48Z | https://github.com/pydata/bottleneck/issues/143 | [] | kwgoodman | 6 |
jina-ai/clip-as-service | pytorch | 227 | Error with BERT server | Hi,
after installing the bert server with ```pip install bert-serving-server``` I am not able to launch the server anymore.
A few days ago this command worked, but today an error occurs:
Command:
```
bert-serving-start -model_dir multi_cased_L-12_H-768_A-12 -num_worker=12 -max_seq_len=100
```
Error:
```
File "/usr/local/bin/bert-serving-start", line 7, in <module>
from bert_serving.server.cli import main
File "/usr/local/lib/python3.5/dist-packages/bert_serving/server/__init__.py", line 286
logger.error(f'received a wrongly-formatted request (expected 4 frames, got {len(msg)})')
```
I am using python3.5 and tf 1.12.
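For context (my hypothesis, not confirmed beyond the traceback): the failing line is an f-string literal, and f-strings were only added in Python 3.6, so importing that module under Python 3.5 fails at parse time. A quick way to check whether the running interpreter supports them:

```python
import sys

code = "x = f'value: {1 + 1}'"      # same construct as the failing line
try:
    compile(code, "<demo>", "exec")
    supported = True
except SyntaxError:                  # what Python 3.5 raises here
    supported = False

print(sys.version_info[:2], "f-strings supported:", supported)
```

If that is indeed the cause, upgrading to Python 3.6+ should let the server start.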
Thanks | closed | 2019-02-01T12:03:02Z | 2019-02-03T15:43:30Z | https://github.com/jina-ai/clip-as-service/issues/227 | [] | simonefrancia | 5 |
marcomusy/vedo | numpy | 471 | Automatic transformation matrix | Hi @marcomusy,
I have a couple of point clouds (of a similar object but of different types, planes in this case) which are randomly positioned. Now I would like to find the transformation matrix so that all the point clouds have the same orientation/translation:
```
from glob import glob
import natsort
import vedo as vd
pcdFiles = natsort.natsorted(glob('**/*.xyz', recursive=True))
for i, pcdFile in enumerate(pcdFiles):
print(pcdFile)
s = vd.Points(pcdFile).alpha(0.2)
vd.show(s, axes=8)
```
Target position (airplane_0000.xyz):

Transformation to be found:



I am not sure how easy this can be done, possibly using a shape similarity metric or something similar.
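vedo itself exposes ICP-style alignment (`alignTo` in older releases, `align_to` in newer ones), but the core step of recovering a rigid transform from matched points is the Kabsch/Procrustes solution, sketched here with plain numpy under the assumption of known correspondences:

```python
import numpy as np

def rigid_transform(A, B):
    """Least-squares R, t with R @ a_i + t ≈ b_i (Kabsch algorithm)."""
    cA, cB = A.mean(axis=0), B.mean(axis=0)
    H = (A - cA).T @ (B - cB)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T   # guard against reflections
    t = cB - R @ cA
    return R, t

# Demo: recover a known z-rotation plus translation from matched points.
rng = np.random.default_rng(0)
A = rng.normal(size=(50, 3))
th = 0.3
Rz = np.array([[np.cos(th), -np.sin(th), 0.0],
               [np.sin(th),  np.cos(th), 0.0],
               [0.0,         0.0,        1.0]])
B = A @ Rz.T + np.array([1.0, 2.0, 3.0])
R, t = rigid_transform(A, B)
print(np.allclose(R, Rz), np.allclose(t, [1.0, 2.0, 3.0]))  # True True
```

Without known correspondences (the situation here) an ICP loop alternates nearest-neighbour matching with this step, which is what vedo's alignment call does internally.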
[testing_samples.zip](https://github.com/marcomusy/vedo/files/7245552/testing_samples.zip)
| closed | 2021-09-28T15:32:31Z | 2021-09-30T13:46:26Z | https://github.com/marcomusy/vedo/issues/471 | [] | ttsesm | 5 |
nerfstudio-project/nerfstudio | computer-vision | 3,087 | ns-render raises RuntimeError: stack expects a non-empty TensorList | **Describe the bug**
A clear and concise description of what the bug is.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Additional context**
Add any other context about the problem here.
| open | 2024-04-17T09:44:50Z | 2024-07-16T20:22:34Z | https://github.com/nerfstudio-project/nerfstudio/issues/3087 | [] | 713Lyf | 5 |
zihangdai/xlnet | nlp | 113 | How to export? | I needed to know how to write the serving function to export the trained xlnet model.
I have this right now:
```python
def serving_input_fn():
    with tf.variable_scope("model"):
        feature_spec = {
            "input_ids": tf.FixedLenFeature([MAX_SEQ_LENGTH], tf.int64),
            "input_mask": tf.FixedLenFeature([MAX_SEQ_LENGTH], tf.int64),
            "segment_ids": tf.FixedLenFeature([MAX_SEQ_LENGTH], tf.int64),
            "label_ids": tf.FixedLenFeature([], tf.int64),
        }
        serialized_tf_example = tf.placeholder(dtype=tf.string,
                                               shape=[None],
                                               name='input_example_tensor')
        receiver_tensors = {'examples': serialized_tf_example}
        features = tf.parse_example(serialized_tf_example, feature_spec)
        return tf.estimator.export.ServingInputReceiver(features, receiver_tensors)

EXPORT_DIR = 'gs://{}/export/{}'.format(BUCKET, TASK_VERSION)
estimator._export_to_tpu = False  # this is important
path = estimator.export_savedmodel(EXPORT_DIR, serving_input_fn)
```
This is throwing me errors.
Please note: this is the function that I used for Bert, and as I am no expert in tensorflow, I don't understand why it won't work.
It throws a type mismatch error | closed | 2019-07-03T14:43:53Z | 2020-01-03T20:25:23Z | https://github.com/zihangdai/xlnet/issues/113 | [] | jinamshah | 11 |
comfyanonymous/ComfyUI | pytorch | 6,809 | ComfyUI on Linux | How can I configure ComfyUI on Linux to start automatically in the background on boot? For example, how do I write it as a service? Thanks a lot! | closed | 2025-02-14T00:29:25Z | 2025-02-14T10:32:57Z | https://github.com/comfyanonymous/ComfyUI/issues/6809 | [] | wchuanxin | 1 |
deeppavlov/DeepPavlov | tensorflow | 1,003 | BERT Classification files | Sorry to bother you; I was wondering whether I could classify PDF files into different folders by the content of the file. For example, I have 3 folders: invoices, cv, and an input folder. Could I train BERT on the cv and invoices folders, so that when somebody puts a PDF file into the input folder it automatically decides whether it is a cv or an invoice? | closed | 2019-09-18T13:26:38Z | 2020-05-06T12:54:45Z | https://github.com/deeppavlov/DeepPavlov/issues/1003 | [] | AnnaTumanova | 5 |
pyjanitor-devs/pyjanitor | pandas | 450 | [INF] Need to sync up requirements-dev.txt and environment.yml | There are some packages in `requirements-dev.txt` that are not present inside `environment.yml`.
Need to manually check to synchronize the two of them.
Version numbers in `requirements-dev.txt` should not be put back into `environment.yml`.
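A quick way to spot the drift between the two files (the contents below are illustrative; a real check would parse both files from disk):

```python
# Illustrative contents; real files would be read from disk.
requirements_dev = ["black==19.3b0", "pytest>=5.0", "hypothesis"]
environment_yml_deps = ["pytest", "pandas"]

def name_of(spec):
    # Strip any version pin ("pkg==1.0", "pkg>=2") down to the bare name.
    for sep in ("==", ">=", "<=", ">", "<", "~="):
        spec = spec.split(sep)[0]
    return spec.strip().lower()

pip_names = {name_of(s) for s in requirements_dev}
conda_names = {name_of(s) for s in environment_yml_deps}
missing_from_env = sorted(pip_names - conda_names)
print(missing_from_env)   # ['black', 'hypothesis']
```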
I will work on this. | closed | 2019-07-14T16:01:53Z | 2019-07-14T16:29:55Z | https://github.com/pyjanitor-devs/pyjanitor/issues/450 | [
"infrastructure"
] | ericmjl | 0 |
Johnserf-Seed/TikTokDownload | api | 224 | [BUG]TikTokMulti.py: error: unrecognized arguments: https://v.douyin.com/xxxxxxxx/ | **Describe the bug**
A clear and concise description of the bug.
Running the command line reports this exception:
`TikTokMulti.py: error: unrecognized arguments: https://v.douyin.com/xxxxxx/`
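For context, this is the standard argparse failure when extra positional tokens are given to a parser that defines none; a minimal, generic reproduction (the flag name is made up):

```python
import argparse

parser = argparse.ArgumentParser(prog="TikTokMulti.py")
parser.add_argument("--uid")                       # flags only, no positional URL
try:
    parser.parse_args(["https://v.douyin.com/xxxxxx/"])
    errored = False
except SystemExit:                                  # argparse reports the error and exits
    errored = True
print("unrecognized arguments" if errored else "parsed")
```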
**To reproduce**
Steps to reproduce the behavior:
1. Changed such-and-such
2. Clicked such-and-such
3. "..."
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Desktop (please complete the following information):**
- OS: [e.g. Windows 10 64-bit]
- VPN proxy: [e.g. on, off]
- Version: [e.g. 1.2.3]
**Additional context**
Add any other text about the problem here.
| closed | 2022-09-28T11:55:46Z | 2022-10-08T12:47:21Z | https://github.com/Johnserf-Seed/TikTokDownload/issues/224 | [
"故障(bug)",
"额外求助(help wanted)",
"无效(invalid)"
] | bosen365 | 1 |
jupyterhub/repo2docker | jupyter | 937 | Support ignoring paths from a repo by reading a .dockerignore file | Is there a means to exclude files of the git repository from the docker image?
The use case is that of a (private) repository including solution notebooks to an assignment.
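For the use case above, a hypothetical `.dockerignore` honoured by repo2docker might look like this (the path names are illustrative):

```
# keep assignment solutions and VCS history out of the image
solutions/
*.solution.ipynb
.git/
```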
I presume this could be achieved with a post-build or so, but I am simply asking if we could have a configuration option to exclude e.g. directories from the build (as well as the .git folder I guess). | closed | 2020-08-04T11:50:47Z | 2024-01-24T10:39:24Z | https://github.com/jupyterhub/repo2docker/issues/937 | [] | SylvainCorlay | 4 |
jupyter/nbviewer | jupyter | 695 | plotly figure not rendered nicely with slide view | My plotly figure is not rendered well when viewed with slide mode.
Here is my plot [viewed in a slide.](https://nbviewer.jupyter.org/format/slides/github/tarokiritani/testjupyter/blob/master/test%20plotly.ipynb#/)
This looks better when [viewed as a notebook.](https://nbviewer.jupyter.org/github/tarokiritani/testjupyter/blob/master/test%20plotly.ipynb?flush_cash=true)
I suspected this was a problem in plotly in the beginning and already opened an issue on [their repo](https://github.com/plotly/plotly.py/issues/750). However, I feel more and more this might be an issue in nbviewer. I also had this problem when using `jupyter nbconvert --to slides` on my local machine. | closed | 2017-05-05T11:25:15Z | 2017-05-05T14:26:35Z | https://github.com/jupyter/nbviewer/issues/695 | [] | tarokiritani | 1 |
lexiforest/curl_cffi | web-scraping | 126 | Sometimes uses over 100% CPU | When sending some requests (through a proxy), the following errors keep appearing. CPU usage then stays above 100% and the system can barely run anything else.
**Failed to perform, ErrCode: 28, Reason: 'Operation timed out after 30001 milliseconds with 0 bytes received'. This may be a libcurl error, See https://curl.se/libcurl/c/libcurl-errors.html first for more details.
Failed to perform, ErrCode: 28, Reason: 'Operation timed out after 30001 milliseconds with 0 bytes received'. This may be a libcurl error, See https://curl.se/libcurl/c/libcurl-errors.html first for more details.
Failed to perform, ErrCode: 16, Reason: ''. This may be a libcurl error, See https://curl.se/libcurl/c/libcurl-errors.html first for more details.**
While the CPU is being hogged, code on other threads of the Python program is suspended; it seems the global lock is held by something the whole time. Each time the 30s timeout expires, the other code resumes running.
Once this error occurs, any request made using this session is reported as an error, even a normal one, or even one that does not use the proxy.
Btw: this does not happen after creating a new session and transferring the old session's cookies to it; the new session works normally.
At present, I haven't found code that reliably reproduces this condition, because it only happens occasionally (under heavy concurrency, maybe 5-6 times a month), but setting a timeout seems to reduce the abnormal CPU usage when it does occur.
I hope the situations I have discovered above can help you locate the problem. | closed | 2023-09-19T23:31:51Z | 2025-03-07T05:36:08Z | https://github.com/lexiforest/curl_cffi/issues/126 | [
"bug"
] | WhZzi | 9 |
eriklindernoren/ML-From-Scratch | machine-learning | 111 | No module named 'mlfromscratch.utils.loss_functions' | Traceback (most recent call last):
File "C:\G\ML-From-Scratch\mlfromscratch\examples\gradient_boosting_regressor.py", line 9, in <module>
from mlfromscratch.utils.loss_functions import SquareLoss
ModuleNotFoundError: No module named 'mlfromscratch.utils.loss_functions' | open | 2024-11-20T08:51:03Z | 2024-11-20T08:51:03Z | https://github.com/eriklindernoren/ML-From-Scratch/issues/111 | [] | LeiYangGH | 0 |
graphql-python/graphql-core | graphql | 34 | GraphQLError is unhashable | It seems that the `logging` library in python assumes that exceptions are hashable, in order to be logged. It'd be great if we could treat GraphQL errors the same way as other builtin exceptions.
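A plausible mechanism (my guess at the cause, not verified against the graphql-core source): defining `__eq__` on a class without also defining `__hash__` makes instances unhashable in Python 3, which breaks any dict/set use such as logging deduplication. A minimal illustration with a made-up error class:

```python
class DemoError(Exception):
    def __init__(self, message):
        super().__init__(message)
        self.message = message

    def __eq__(self, other):
        return isinstance(other, DemoError) and other.message == self.message
    # No __hash__ defined: Python 3 therefore sets __hash__ to None here.

try:
    {DemoError("boom"): 1}
    hashable = True
except TypeError:
    hashable = False
print("hashable:", hashable)   # hashable: False
```

The usual fix is to define a compatible `__hash__` (e.g. `__hash__ = Exception.__hash__`, or hashing the same fields `__eq__` compares).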
See also, a similar issue in the `schematics` project:
https://github.com/schematics/schematics/issues/452 | closed | 2019-05-30T03:11:20Z | 2019-05-31T22:40:44Z | https://github.com/graphql-python/graphql-core/issues/34 | [
"bug"
] | andrew-humu | 1 |
tfranzel/drf-spectacular | rest-api | 824 | @extend_schema has no effect on GET/retrieve extra actions in a ModelViewSet | DRF v3.11.1
drf-spectacular v0.15.0
When adding an extra `GET` action (not merely overriding `create` or `retrieve`) in a `ModelViewSet` and decorating it with `@extend_schema`, the specified request serializer is not represented as a `component` in the generated schema.
In the example below, the `search/` endpoint shows up under `paths`, but the fields under `SearchRequestSerializer` are not included under `components/schemas`. Changing the action to a `POST` method instead of a `GET` resolves the issue, but I believe the method really should be a `GET`.
Example View and method:
```python
class MyObjectViewSet(viewsets.ModelViewSet):
http_method_names = ['get', 'post']
serializer_class = MyObjectSerializer
@sensitive_variables('access_token')
@extend_schema(
request=SearchRequestSerializer,
responses={
(200,'application/gzip'): OpenApiTypes.BINARY,
401: OpenApiResponse(response=GeneralResponseSerializer),
404: OpenApiResponse(response=GeneralResponseSerializer)
},
)
@action(detail=False, methods=['GET'])
def search(self, request):
data = SearchRequestSerializer(data=self.request.query_params)
data.is_valid(raise_exception=True)
...
return response
```
Excerpt from the generated schema (as a GET):
```
paths:
/api/documents/search/:
get:
operationId: documents_search_retrieve
tags:
- documents
security:
- basicAuth: []
- tokenAuth: []
- {}
responses:
'200':
content:
application/gzip:
schema:
type: string
format: binary
description: ''
'401':
content:
application/json:
schema:
$ref: '#/components/schemas/GeneralResponse'
description: null
'404':
content:
application/json:
schema:
$ref: '#/components/schemas/GeneralResponse'
description: null
```
Excerpt from the generated schema (if I change it to a POST):
```
/api/documents/search/:
post:
operationId: documents_search_create
tags:
- documents
requestBody:
content:
application/json:
schema:
$ref: '#/components/schemas/SearchRequest'
application/x-www-form-urlencoded:
schema:
$ref: '#/components/schemas/SearchRequest'
multipart/form-data:
schema:
$ref: '#/components/schemas/SearchRequest'
required: true
security:
- basicAuth: []
- tokenAuth: []
- {}
responses:
'200':
content:
application/gzip:
schema:
type: string
format: binary
description: ''
'401':
content:
application/json:
schema:
$ref: '#/components/schemas/GeneralResponse'
description: null
'404':
content:
application/json:
schema:
$ref: '#/components/schemas/GeneralResponse'
description: null
...
components:
schemas:
SearchRequest:
<details from the SearchRequest serializer>
```
I would expect a `component` called `SearchRequest` to be included in the schema, along with the field metadata defined in `SearchRequestSerializer`.
I realize I may be misunderstanding the expected behavior here. If so, I could really use a pointer in the right direction. | closed | 2022-09-30T17:59:33Z | 2022-10-04T10:31:12Z | https://github.com/tfranzel/drf-spectacular/issues/824 | [] | cantus-firmus | 5 |
apify/crawlee-python | web-scraping | 767 | Add code coverage badge to README | - [covecov](https://about.codecov.io/)
- Apify service account?
- Same for the SDK & client | open | 2024-11-29T14:46:23Z | 2024-11-29T14:46:40Z | https://github.com/apify/crawlee-python/issues/767 | [
"t-tooling"
] | vdusek | 0 |
vipstone/faceai | tensorflow | 52 | Why was simple_CNN.530-0.65 chosen as the emotion-recognition model? | I see that besides this model, the original repo also has simple cnn 0.66 and xception; may I ask why this model was chosen in the end? | open | 2020-06-22T02:27:04Z | 2020-09-05T07:42:07Z | https://github.com/vipstone/faceai/issues/52 | [] | duchengyao | 1 |
marshmallow-code/apispec | rest-api | 4 | APISpec.add_path overwrites paths | I was toying around with writing a wsgi middleware that used smore while fleshing out some ideas and noticed that `smore.apispec.APISpec.add_path` does a [full replace via `dict.update`](https://github.com/marshmallow-code/smore/blob/dev/smore/apispec/core.py#L106) when adding a new path that matches an existing path. It ends up being a last one wins situation.
What do you think about storing the Path object in `_paths` and doing a recursive merge of all of its properties when calling `add_path` and then converting to dict when `smore.apispec.APISpec.to_dict` is called?
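A minimal sketch of such a recursive merge (names are illustrative; conflict handling here is simply last-write-wins at the leaves):

```python
def deep_merge(base, incoming):
    """Recursively fold `incoming` into `base`; nested dicts are merged,
    everything else is overwritten (last write wins)."""
    for key, value in incoming.items():
        if isinstance(base.get(key), dict) and isinstance(value, dict):
            deep_merge(base[key], value)
        else:
            base[key] = value
    return base

path = {"get": {"responses": {"200": {"description": "ok"}}}}
deep_merge(path, {"post": {"responses": {"201": {"description": "created"}}}})
print(sorted(path))   # ['get', 'post']; the GET operation survives
```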
| closed | 2015-04-03T18:57:28Z | 2015-04-11T20:49:14Z | https://github.com/marshmallow-code/apispec/issues/4 | [] | hello-josh | 1 |
jina-ai/clip-as-service | pytorch | 274 | Can bert-as-service be used with saved model? for example Saved Model of fine-tuned BERT works well via TF serving on CPU only machine. | **Prerequisites**
> Please fill in by replacing `[ ]` with `[x]`.
* [x] Are you running the latest `bert-as-service`?
* [x] Did you follow [the installation](https://github.com/hanxiao/bert-as-service#install) and [the usage](https://github.com/hanxiao/bert-as-service#usage) instructions in `README.md`?
* [x] Did you check the [FAQ list in `README.md`](https://github.com/hanxiao/bert-as-service#speech_balloon-faq)?
* [x] Did you perform [a cursory search on existing issues](https://github.com/hanxiao/bert-as-service/issues)?
**System information**
> Some of this information can be collected via [this script](https://github.com/tensorflow/tensorflow/tree/master/tools/tf_env_collect.sh).
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04):
- TensorFlow installed from (source or binary):
- TensorFlow version:
- Python version:
- `bert-as-service` version:
- GPU model and memory:
- CPU model and memory:
---
### Description
> Please replace `YOUR_SERVER_ARGS` and `YOUR_CLIENT_ARGS` accordingly. You can also write your own description for reproducing the issue.
I'm using this command to start the server:
```bash
bert-serving-start YOUR_SERVER_ARGS
```
and calling the server via:
```python
bc = BertClient(YOUR_CLIENT_ARGS)
bc.encode()
```
Then this issue shows up:
... | closed | 2019-03-15T02:05:44Z | 2019-03-20T01:30:55Z | https://github.com/jina-ai/clip-as-service/issues/274 | [] | dhanaji | 1 |
numba/numba | numpy | 9,734 | `np.random.Generator.binomial` produces invalid values for "intermediate" sized `n`. | <!--
Thanks for opening an issue! To help the Numba team handle your information
efficiently, please first ensure that there is no other issue present that
already describes the issue you have
(search at https://github.com/numba/numba/issues?&q=is%3Aissue).
-->
## Reporting a bug
<!--
Before submitting a bug report please ensure that you can check off these boxes:
-->
- [x] I have tried using the latest released version of Numba (most recent is
visible in the release notes
(https://numba.readthedocs.io/en/stable/release-notes-overview.html).
- [x] I have included a self contained code sample to reproduce the problem.
i.e. it's possible to run as 'python bug.py'.
As title, this:
```python
from numba import jit
import numpy as np
gen1 = np.random.default_rng(0)
gen2 = np.random.default_rng(0)
@jit
def foo(gen):
return gen.binomial(301, 0.1, 3)
print(foo(gen1))
print(foo.py_func(gen2))
```
gives:
```
[33, 31, 23]
[32, 31, 22]
``` | closed | 2024-09-27T15:30:39Z | 2024-10-17T20:27:25Z | https://github.com/numba/numba/issues/9734 | [
"numpy",
"bug - numerically incorrect"
] | stuartarchibald | 0 |