| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
pydantic/logfire | pydantic | 581 | Document `instrument_httpx(client)` | ### Description
Document the feature introduced in https://github.com/pydantic/logfire/pull/575. | closed | 2024-11-12T11:47:59Z | 2024-11-13T10:27:06Z | https://github.com/pydantic/logfire/issues/581 | [
"Feature Request"
] | Kludex | 0 |
n0kovo/fb_friend_list_scraper | web-scraping | 10 | Your Firefox profile cannot be loaded. It may be missing or inaccessible. | Hi. I wanted to try your script. I installed it via pip, but when I try to scrape something, the moment I enter my password I get the following error: "Your Firefox profile cannot be loaded. It may be missing or inaccessible".

A quick Google search appears to indicate that the root of the issue is that Firefox on Ubuntu is installed as a snap package (instead of a .deb file), which does not use the default profile path. In my case, I use Kubuntu 20.04 LTS.
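For context on the path difference: snap-packaged Firefox keeps its profiles under `~/snap/firefox/common/.mozilla/firefox` rather than the classic `~/.mozilla/firefox`, which is why tools that assume the default location fail. A small illustrative sketch (the helper names are made up and are not part of fb_friend_list_scraper) that builds both candidate locations:

```python
from pathlib import Path

def firefox_profile_dirs(home):
    """Candidate Firefox profile roots: the classic .deb path first, then the snap path."""
    home = Path(home)
    return [
        home / ".mozilla" / "firefox",  # .deb / tarball install
        home / "snap" / "firefox" / "common" / ".mozilla" / "firefox",  # snap install
    ]

def first_existing(dirs):
    """Return the first profile root that actually exists, or None."""
    for d in dirs:
        if Path(d).is_dir():
            return d
    return None

print(firefox_profile_dirs("/home/user")[1])
```

With the snap path in hand, one can point Selenium's Firefox options at it explicitly, or install Firefox from a .deb so the default path applies again.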
Is there any workaround? Thanks in advance. | open | 2024-03-02T22:30:01Z | 2024-03-02T22:30:01Z | https://github.com/n0kovo/fb_friend_list_scraper/issues/10 | [] | wonx | 0 |
modoboa/modoboa | django | 3,230 | 'Calendars' tab from webmail fails with "Error reading /srv/modoboa/...../modoboa_radicale/webpack-stats.json. Are you sure webpack has generated the file and the path is correct?" | # Impacted versions
* OS Type: Debian
* OS Version: 12 Bookworm
* Database Type: MySQL
* Database version: MariaDB 10.11.6
* Modoboa: 2.2.4
* installer used: Yes
* Webserver: Nginx
# Steps to reproduce
- Fresh Debian 12 VM
- Run installer as root: `./run.py mytest.mydomain.com`
- add MX and A record as prompted by installer
- have a coffee while it installs, as prompted by installer
- enable *DEBUG = True* in `/srv/modoboa/instance/instance/settings.py` and restart uwsgi service
- Log into the admin (admin:password) and set up a test domain with defaults
- create a test Simple User in the test domain
- in a new browser session, log in as the new test user.
- Click the Calendars link at the top and the error occurs.
# Current behavior
An error is thrown: `Error reading /srv/modoboa/env/lib/python3.11/site-packages/modoboa_radicale/static/modoboa_radicale/webpack-stats.json. Are you sure webpack has generated the file and the path is correct?`
There is no static folder to be found at all under `/srv/modoboa/env/lib/python3.11/site-packages/modoboa_radicale`:
```
root@mailtest03:/srv/modoboa# ls -l /srv/modoboa/env/lib/python3.11/site-packages/modoboa_radicale
total 124
-rw-r--r-- 1 modoboa modoboa 362 Apr 9 01:24 __init__.py
drwxr-xr-x 2 modoboa modoboa 4096 Apr 9 01:24 __pycache__
-rw-r--r-- 1 modoboa modoboa 260 Apr 9 01:24 apps.py
drwxr-xr-x 3 modoboa modoboa 4096 Apr 9 01:24 backends
-rw-r--r-- 1 modoboa modoboa 848 Apr 9 01:24 factories.py
-rw-r--r-- 1 modoboa modoboa 1780 Apr 9 01:24 forms.py
-rw-r--r-- 1 modoboa modoboa 2384 Apr 9 01:24 handlers.py
drwxr-xr-x 16 modoboa modoboa 4096 Apr 9 01:24 locale
drwxr-xr-x 4 modoboa modoboa 4096 Apr 9 01:24 management
drwxr-xr-x 3 modoboa modoboa 4096 Apr 9 01:24 migrations
-rw-r--r-- 1 modoboa modoboa 1841 Apr 9 01:24 mocks.py
-rw-r--r-- 1 modoboa modoboa 4683 Apr 9 01:24 models.py
-rw-r--r-- 1 modoboa modoboa 863 Apr 9 01:24 modo_extension.py
-rw-r--r-- 1 modoboa modoboa 7385 Apr 9 01:24 serializers.py
-rw-r--r-- 1 modoboa modoboa 783 Apr 9 01:24 settings.py
drwxr-xr-x 3 modoboa modoboa 4096 Apr 9 01:24 templates
drwxr-xr-x 2 modoboa modoboa 4096 Apr 9 01:24 test_data
-rw-r--r-- 1 modoboa modoboa 23653 Apr 9 01:24 tests.py
-rw-r--r-- 1 modoboa modoboa 210 Apr 9 01:24 urls.py
-rw-r--r-- 1 modoboa modoboa 1067 Apr 9 01:24 urls_api.py
-rw-r--r-- 1 modoboa modoboa 1340 Apr 9 01:24 views.py
-rw-r--r-- 1 modoboa modoboa 9588 Apr 9 01:24 viewsets.py
```
I also had this same problem on an upgraded 2.2.4 instance.
# Expected behavior
The Calendars tab should open and display the calendar content served by Radicale.
| closed | 2024-04-09T02:06:15Z | 2024-04-09T16:52:57Z | https://github.com/modoboa/modoboa/issues/3230 | [] | cantrust-hosting-cooperative | 3 |
BeanieODM/beanie | pydantic | 402 | [BUG] AWS DocumentDB does not work with 1.14.0 - Not found for _id: ... | **Describe the bug**
I noticed that since updating to beanie 1.14.0, my program no longer works with AWS DocumentDB.
This was not a problem before, and the same code works perfectly with 1.13.1.
Additionally, the code works perfectly fine with 1.14.0 against the local MongoDB test database, version 5.0.10.
The error message is not very helpful; the requested resources simply cannot be found (although they are there):
```
NotFound '<some_OID>' for '<class 'mongodb.model.user.odm.User'>'
not found in database 'User' with id '<some_OID>' not found in database
```
To verify that the resource is there, I use a tool like NoSQLBooster or Robo3T:
```
db.user.find( {"_id" : ObjectId("<some_OID>")} )
.projection({})
.sort({_id:-1})
.limit(100)
```
**To Reproduce**
```python
# Nothing special, just a simple find command
result = await model.find_one(model.id == oid)
```
**Expected behavior**
I expected beanie 1.14.0 to work with AWS DocumentDB the same way as 1.13.1
**Additional context**
I am glad to provide further information, or I can make some tests against DocumentDB if someone can give me hints what to do.
| closed | 2022-11-05T03:19:39Z | 2022-11-06T16:47:31Z | https://github.com/BeanieODM/beanie/issues/402 | [] | mickdewald | 7 |
iperov/DeepFaceLab | machine-learning | 966 | nadagit/lbfs linux port not utilizing GPU | I've been trying for days to get lbfs/DeepFaceLab_Linux to work. I mainly ran into user-error problems, but now when running ./4_data_src_extract_faces_S3FD.sh it only utilizes the CPU, and my GPU (Tesla M40 24GB) sits at idle when I open nvidia-smi. Can someone tell me what I am doing wrong?
anaconda3 (deepfacelab) meta:
nvidia-driver-455
python=3.6.8
cudnn=7.6.5
cudatoolkit=10.0.130
requirements-cuda.txt
-------------------------
tqdm
numpy==1.19.3
h5py
opencv-python==4.1.0.25
ffmpeg-python==0.1.17
scikit-image==0.14.2
colorama==0.4.4
tesnorflow-gpu==2.4.0rc1
pyqt5==5.15.2
^^^^^^^^^^^^^^^^^^^^ based on requirements-cuda.txt in https://libraries.io/github/iperov/DeepFaceLab
I have tried nagadit/DeepFaceLab_Linux as well with python==3.7 & cudatoolkit==10.1.243 but to no avail.
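One thing worth checking, given the pins above: TensorFlow 2.4.x was built against CUDA 11.0 and cuDNN 8, so pairing it with cudatoolkit 10.0.130 and cuDNN 7.6.5 would leave the GPU invisible to TensorFlow and force CPU execution. A small sanity-check sketch (the version table is a subset of TensorFlow's published tested-build matrix; the helper itself is illustrative, not DeepFaceLab code):

```python
# Subset of TensorFlow's published tested-build matrix: TF version -> (CUDA, cuDNN)
TF_CUDA_MATRIX = {
    "2.4": ("11.0", "8.0"),
    "2.3": ("10.1", "7.6"),
    "1.15": ("10.0", "7.4"),
}

def cuda_matches(tf_version, cuda_version, cudnn_version):
    """Rough check: do the installed CUDA/cuDNN versions match this TF build?"""
    want_cuda, want_cudnn = TF_CUDA_MATRIX[tf_version]
    cudnn_major = want_cudnn.split(".")[0]
    return cuda_version.startswith(want_cuda) and cudnn_version.startswith(cudnn_major)

# The environment above: tensorflow-gpu 2.4.0rc1 with cudatoolkit 10.0.130 / cuDNN 7.6.5
print(cuda_matches("2.4", "10.0.130", "7.6.5"))  # False
```

If the versions do line up, running `python -c "import tensorflow as tf; print(tf.test.is_gpu_available())"` is a reasonable next check on this TF generation.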
update 12/7/20 - installed nvidia-cuda-toolkit, no change in utilization. | open | 2020-12-07T17:14:19Z | 2023-06-08T21:44:25Z | https://github.com/iperov/DeepFaceLab/issues/966 | [] | TheGermanEngie | 5 |
prkumar/uplink | rest-api | 171 | Isn't asyncio.coroutine deprecated since Python 3.5? | **Describe the bug**
```
File "/home/alexv/.local/share/virtualenvs/boken-WCZqebO_/lib/python3.7/site-packages/uplink/clients/io/asyncio_strategy.py", line 14, in AsyncioStrategy
@asyncio.coroutine
AttributeError: module 'asyncio' has no attribute 'coroutine'
```
**To Reproduce**
Use uplink.AiohttpClient() with python 3.7
**Expected behavior**
It should work regardless of which currently supported version of Python is in use; see https://devguide.python.org/#status-of-python-branches
So what about adopting the new await syntax in https://github.com/prkumar/uplink/blob/master/uplink/clients/io/asyncio_strategy.py ?
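For reference, the generator-based decorator maps directly onto native syntax, which has been valid since Python 3.5. A minimal sketch of the rewrite (illustrative, not uplink's actual code):

```python
import asyncio

# Before (generator-based, removed from asyncio in Python 3.11):
#
#     @asyncio.coroutine
#     def execute(func):
#         response = yield from func()
#         return response

# After (native coroutine, valid since Python 3.5):
async def execute(func):
    response = await func()
    return response

async def fake_request():
    await asyncio.sleep(0)  # stand-in for real I/O
    return 200

print(asyncio.run(execute(fake_request)))  # 200
```

The same transformation applies throughout `asyncio_strategy.py`: replace the decorator with `async def` and each `yield from` with `await`.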
| closed | 2019-08-21T09:24:25Z | 2019-08-21T09:44:22Z | https://github.com/prkumar/uplink/issues/171 | [] | asmodehn | 1 |
dask/dask | scikit-learn | 11,229 | Removal of Sphinx context injection at build time | From Read The Docs
> We are announcing the deprecation of Sphinx context injection at build time for all the projects. The deprecation date is set on Monday, October 7th, 2024. After this date, Read the Docs won't install the readthedocs-sphinx-ext extension and won't manipulate the project's conf.py file.
>
> This will get us closer to our goal of having all projects build on Read the Docs be the exact same as on other build environments, making understanding of documentation builds much easier to understand.
>
> You can read our [blog post on the deprecation](https://about.readthedocs.com/blog/2024/07/addons-by-default/) for all the information about possible impacts of this change, in particular the READTHEDOCS and other variables in the Sphinx context are no longer set automatically. | open | 2024-07-16T18:24:11Z | 2024-07-24T00:44:32Z | https://github.com/dask/dask/issues/11229 | [
"needs triage"
] | aterrel | 1 |
NVIDIA/pix2pixHD | computer-vision | 226 | Can we get access to the interactive editing tool depicted in the original paper? | open | 2020-10-06T02:15:23Z | 2022-02-08T09:02:56Z | https://github.com/NVIDIA/pix2pixHD/issues/226 | [] | ghost | 2 | |
ydataai/ydata-profiling | data-science | 981 | TypeError From ProfileReport in Google Colab | ### Current Behaviour
In Google Colab, the `.to_notebook_iframe()` method on `ProfileReport` throws an error:
```Python
TypeError: concat() got an unexpected keyword argument 'join_axes'
```
This issue has been spotted in other contexts and there are questions in StackOverflow: https://stackoverflow.com/questions/61362942/concat-got-an-unexpected-keyword-argument-join-axes
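For background: `join_axes` was deprecated in pandas 0.25 and removed in pandas 1.0, while pandas-profiling 1.4.1 (the version shown below, preinstalled in Colab alongside pandas 1.3.5) still passes it to `pd.concat`. A standard-library sketch of the failure mode (the `concat` here is a stand-in, not pandas itself):

```python
def concat(objs, join="outer", **kwargs):
    """Stand-in for pandas >= 1.0 concat: join_axes is gone, so it is treated
    as an unknown keyword argument and raises TypeError."""
    if kwargs:
        bad = sorted(kwargs)[0]
        raise TypeError(f"concat() got an unexpected keyword argument '{bad}'")
    return list(objs)

try:
    concat([[1], [2]], join_axes=[None])  # the old pandas-profiling call style
except TypeError as exc:
    print(exc)  # concat() got an unexpected keyword argument 'join_axes'
```

Upgrading pandas-profiling to a release built for pandas >= 1.0 (anything newer than the 1.4.x line Colab shipped) should make the error go away.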
### Expected Behaviour
This section is not applicable; I am reporting a bug that throws an error.
### Data Description
You can reproduce the error with this data:
```
https://projects.fivethirtyeight.com/polls/data/favorability_polls.csv
```
### Code that reproduces the bug
```Python
import pandas as pd
from pandas_profiling import ProfileReport
df = pd.read_csv('https://projects.fivethirtyeight.com/polls/data/favorability_polls.csv')
profile = ProfileReport(df)
profile.to_notebook_iframe()
```
### pandas-profiling version
Version 1.4.1
### Dependencies
```Text
absl-py==1.0.0
alabaster==0.7.12
albumentations==0.1.12
altair==4.2.0
appdirs==1.4.4
argon2-cffi==21.3.0
argon2-cffi-bindings==21.2.0
arviz==0.12.0
astor==0.8.1
astropy==4.3.1
astunparse==1.6.3
atari-py==0.2.9
atomicwrites==1.4.0
attrs==21.4.0
audioread==2.1.9
autograd==1.4
Babel==2.10.1
backcall==0.2.0
beautifulsoup4==4.6.3
bleach==5.0.0
blis==0.4.1
bokeh==2.3.3
Bottleneck==1.3.4
branca==0.5.0
bs4==0.0.1
CacheControl==0.12.11
cached-property==1.5.2
cachetools==4.2.4
catalogue==1.0.0
certifi==2021.10.8
cffi==1.15.0
cftime==1.6.0
chardet==3.0.4
charset-normalizer==2.0.12
click==7.1.2
cloudpickle==1.3.0
cmake==3.22.4
cmdstanpy==0.9.5
colorcet==3.0.0
colorlover==0.3.0
community==1.0.0b1
contextlib2==0.5.5
convertdate==2.4.0
coverage==3.7.1
coveralls==0.5
crcmod==1.7
cufflinks==0.17.3
cvxopt==1.2.7
cvxpy==1.0.31
cycler==0.11.0
cymem==2.0.6
Cython==0.29.28
daft==0.0.4
dask==2.12.0
datascience==0.10.6
debugpy==1.0.0
decorator==4.4.2
defusedxml==0.7.1
descartes==1.1.0
dill==0.3.4
distributed==1.25.3
dlib @ file:///dlib-19.18.0-cp37-cp37m-linux_x86_64.whl
dm-tree==0.1.7
docopt==0.6.2
docutils==0.17.1
dopamine-rl==1.0.5
earthengine-api==0.1.307
easydict==1.9
ecos==2.0.10
editdistance==0.5.3
en-core-web-sm @ https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-2.2.5/en_core_web_sm-2.2.5.tar.gz
entrypoints==0.4
ephem==4.1.3
et-xmlfile==1.1.0
fa2==0.3.5
fastai==1.0.61
fastdtw==0.3.4
fastjsonschema==2.15.3
fastprogress==1.0.2
fastrlock==0.8
fbprophet==0.7.1
feather-format==0.4.1
filelock==3.6.0
firebase-admin==4.4.0
fix-yahoo-finance==0.0.22
Flask==1.1.4
flatbuffers==2.0
folium==0.8.3
future==0.16.0
gast==0.5.3
GDAL==2.2.2
gdown==4.4.0
gensim==3.6.0
geographiclib==1.52
geopy==1.17.0
gin-config==0.5.0
glob2==0.7
google==2.0.3
google-api-core==1.31.5
google-api-python-client==1.12.11
google-auth==1.35.0
google-auth-httplib2==0.0.4
google-auth-oauthlib==0.4.6
google-cloud-bigquery==1.21.0
google-cloud-bigquery-storage==1.1.1
google-cloud-core==1.0.3
google-cloud-datastore==1.8.0
google-cloud-firestore==1.7.0
google-cloud-language==1.2.0
google-cloud-storage==1.18.1
google-cloud-translate==1.5.0
google-colab @ file:///colabtools/dist/google-colab-1.0.0.tar.gz
google-pasta==0.2.0
google-resumable-media==0.4.1
googleapis-common-protos==1.56.0
googledrivedownloader==0.4
graphviz==0.10.1
greenlet==1.1.2
grpcio==1.44.0
gspread==3.4.2
gspread-dataframe==3.0.8
gym==0.17.3
h5py==3.1.0
HeapDict==1.0.1
hijri-converter==2.2.3
holidays==0.10.5.2
holoviews==1.14.8
html5lib==1.0.1
httpimport==0.5.18
httplib2==0.17.4
httplib2shim==0.0.3
humanize==0.5.1
hyperopt==0.1.2
ideep4py==2.0.0.post3
idna==2.10
imageio==2.4.1
imagesize==1.3.0
imbalanced-learn==0.8.1
imblearn==0.0
imgaug==0.2.9
importlib-metadata==4.11.3
importlib-resources==5.7.1
imutils==0.5.4
inflect==2.1.0
iniconfig==1.1.1
intel-openmp==2022.1.0
intervaltree==2.1.0
ipykernel==4.10.1
ipython==5.5.0
ipython-genutils==0.2.0
ipython-sql==0.3.9
ipywidgets==7.7.0
itsdangerous==1.1.0
jax==0.3.8
jaxlib @ https://storage.googleapis.com/jax-releases/cuda11/jaxlib-0.3.7+cuda11.cudnn805-cp37-none-manylinux2014_x86_64.whl
jedi==0.18.1
jieba==0.42.1
Jinja2==2.11.3
joblib==1.1.0
jpeg4py==0.1.4
jsonschema==4.3.3
jupyter==1.0.0
jupyter-client==5.3.5
jupyter-console==5.2.0
jupyter-core==4.10.0
jupyterlab-pygments==0.2.2
jupyterlab-widgets==1.1.0
kaggle==1.5.12
kapre==0.3.7
keras==2.8.0
Keras-Preprocessing==1.1.2
keras-vis==0.4.1
kiwisolver==1.4.2
korean-lunar-calendar==0.2.1
libclang==14.0.1
librosa==0.8.1
lightgbm==2.2.3
llvmlite==0.34.0
lmdb==0.99
LunarCalendar==0.0.9
lxml==4.2.6
Markdown==3.3.6
MarkupSafe==2.0.1
matplotlib==3.2.2
matplotlib-inline==0.1.3
matplotlib-venn==0.11.7
missingno==0.5.1
mistune==0.8.4
mizani==0.6.0
mkl==2019.0
mlxtend==0.14.0
more-itertools==8.12.0
moviepy==0.2.3.5
mpmath==1.2.1
msgpack==1.0.3
multiprocess==0.70.12.2
multitasking==0.0.10
murmurhash==1.0.7
music21==5.5.0
natsort==5.5.0
nbclient==0.6.2
nbconvert==5.6.1
nbformat==5.3.0
nest-asyncio==1.5.5
netCDF4==1.5.8
networkx==2.6.3
nibabel==3.0.2
nltk==3.2.5
notebook==5.3.1
numba==0.51.2
numexpr==2.8.1
numpy==1.21.6
nvidia-ml-py3==7.352.0
oauth2client==4.1.3
oauthlib==3.2.0
okgrade==0.4.3
opencv-contrib-python==4.1.2.30
opencv-python==4.1.2.30
openpyxl==3.0.9
opt-einsum==3.3.0
osqp==0.6.2.post0
packaging==21.3
palettable==3.3.0
pandas==1.3.5
pandas-datareader==0.9.0
pandas-gbq==0.13.3
pandas-profiling==1.4.1
pandocfilters==1.5.0
panel==0.12.1
param==1.12.1
parso==0.8.3
pathlib==1.0.1
patsy==0.5.2
pep517==0.12.0
pexpect==4.8.0
pickleshare==0.7.5
Pillow==7.1.2
pip-tools==6.2.0
plac==1.1.3
plotly==5.5.0
plotnine==0.6.0
pluggy==0.7.1
pooch==1.6.0
portpicker==1.3.9
prefetch-generator==1.0.1
preshed==3.0.6
prettytable==3.2.0
progressbar2==3.38.0
prometheus-client==0.14.1
promise==2.3
prompt-toolkit==1.0.18
protobuf==3.17.3
psutil==5.4.8
psycopg2==2.7.6.1
ptyprocess==0.7.0
py==1.11.0
pyarrow==6.0.1
pyasn1==0.4.8
pyasn1-modules==0.2.8
pycocotools==2.0.4
pycparser==2.21
pyct==0.4.8
pydata-google-auth==1.4.0
pydot==1.3.0
pydot-ng==2.0.0
pydotplus==2.0.2
PyDrive==1.3.1
pyemd==0.5.1
pyerfa==2.0.0.1
pyglet==1.5.0
Pygments==2.6.1
pygobject==3.26.1
pymc3==3.11.4
PyMeeus==0.5.11
pymongo==4.1.1
pymystem3==0.2.0
PyOpenGL==3.1.6
pyparsing==3.0.8
pyrsistent==0.18.1
pysndfile==1.3.8
PySocks==1.7.1
pystan==2.19.1.1
pytest==3.6.4
python-apt==0.0.0
python-chess==0.23.11
python-dateutil==2.8.2
python-louvain==0.16
python-slugify==6.1.2
python-utils==3.1.0
pytz==2022.1
pyviz-comms==2.2.0
PyWavelets==1.3.0
PyYAML==3.13
pyzmq==22.3.0
qdldl==0.1.5.post2
qtconsole==5.3.0
QtPy==2.1.0
regex==2019.12.20
requests==2.23.0
requests-oauthlib==1.3.1
resampy==0.2.2
rpy2==3.4.5
rsa==4.8
scikit-image==0.18.3
scikit-learn==1.0.2
scipy==1.4.1
screen-resolution-extra==0.0.0
scs==3.2.0
seaborn==0.11.2
semver==2.13.0
Send2Trash==1.8.0
setuptools-git==1.2
Shapely==1.8.1.post1
simplegeneric==0.8.1
six==1.15.0
sklearn==0.0
sklearn-pandas==1.8.0
smart-open==6.0.0
snowballstemmer==2.2.0
sortedcontainers==2.4.0
SoundFile==0.10.3.post1
soupsieve==2.3.2.post1
spacy==2.2.4
Sphinx==1.8.6
sphinxcontrib-serializinghtml==1.1.5
sphinxcontrib-websupport==1.2.4
SQLAlchemy==1.4.36
sqlparse==0.4.2
srsly==1.0.5
statsmodels==0.10.2
sympy==1.7.1
tables==3.7.0
tabulate==0.8.9
tblib==1.7.0
tenacity==8.0.1
tensorboard==2.8.0
tensorboard-data-server==0.6.1
tensorboard-plugin-wit==1.8.1
tensorflow @ file:///tensorflow-2.8.0-cp37-cp37m-linux_x86_64.whl
tensorflow-datasets==4.0.1
tensorflow-estimator==2.8.0
tensorflow-gcs-config==2.8.0
tensorflow-hub==0.12.0
tensorflow-io-gcs-filesystem==0.25.0
tensorflow-metadata==1.7.0
tensorflow-probability==0.16.0
termcolor==1.1.0
terminado==0.13.3
testpath==0.6.0
text-unidecode==1.3
textblob==0.15.3
Theano-PyMC==1.1.2
thinc==7.4.0
threadpoolctl==3.1.0
tifffile==2021.11.2
tinycss2==1.1.1
tomli==2.0.1
toolz==0.11.2
torch @ https://download.pytorch.org/whl/cu113/torch-1.11.0%2Bcu113-cp37-cp37m-linux_x86_64.whl
torchaudio @ https://download.pytorch.org/whl/cu113/torchaudio-0.11.0%2Bcu113-cp37-cp37m-linux_x86_64.whl
torchsummary==1.5.1
torchtext==0.12.0
torchvision @ https://download.pytorch.org/whl/cu113/torchvision-0.12.0%2Bcu113-cp37-cp37m-linux_x86_64.whl
tornado==5.1.1
tqdm==4.64.0
traitlets==5.1.1
tweepy==3.10.0
typeguard==2.7.1
typing-extensions==4.2.0
tzlocal==1.5.1
uritemplate==3.0.1
urllib3==1.24.3
vega-datasets==0.9.0
wasabi==0.9.1
wcwidth==0.2.5
webencodings==0.5.1
Werkzeug==1.0.1
widgetsnbextension==3.6.0
wordcloud==1.5.0
wrapt==1.14.0
xarray==0.18.2
xgboost==0.90
xkit==0.0.0
xlrd==1.1.0
xlwt==1.3.0
yellowbrick==1.4
zict==2.2.0
zipp==3.8.0
```
### OS
Google Colab
### Checklist
- [X] There is not yet another bug report for this issue in the [issue tracker](https://github.com/ydataai/pandas-profiling/issues)
- [X] The problem is reproducible from this bug report. [This guide](http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports) can help to craft a minimal bug report.
- [X] The issue has not been resolved by the entries listed under [Frequent Issues](https://pandas-profiling.ydata.ai/docs/master/rtd/pages/support.html#frequent-issues). | closed | 2022-05-13T19:00:16Z | 2022-05-16T18:06:54Z | https://github.com/ydataai/ydata-profiling/issues/981 | [
"documentation 📖"
] | adamrossnelson | 3 |
tox-dev/tox | automation | 2,422 | UnicodeDecodeError on line 100 of execute/stream.py | Currently using version 4.0.0b2 on Windows, and getting an error because tox tries to decode all output as "utf-8" even though other valid encodings may be in use.
This is the problematic code for me on Windows, from line 100 onwards of `execute/stream.py`:
```
    @property
    def text(self) -> str:
        with self._content_lock:
            return self._content.decode("utf-8")
```
When using breakpoint() there, it seems a UnicodeDecodeError is produced when one of the `self._content` values is encoded as Windows-1252 (as detected with the chardet library).
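A dependency-free alternative to charset detection is the standard library's decode error handlers, which never raise, at the cost of substituting or escaping the undecodable bytes. A small sketch:

```python
# b"caf\xe9" is "café" encoded as Windows-1252; the 0xE9 byte is invalid UTF-8
content = "café".encode("cp1252")

print(content.decode("utf-8", errors="replace"))           # 'caf' + U+FFFD marker
print(content.decode("utf-8", errors="backslashreplace"))  # caf\xe9
```

For tox's case, `errors="replace"` keeps the output readable without adding a chardet dependency; detection could still be layered on top where fidelity matters.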
My own workaround is to modify the function as follows, but I suppose someone else can put in a more elegant fix:
```
import chardet
...
...
    @property
    def text(self) -> str:
        with self._content_lock:
            encoding_detected: str | None = chardet.detect(self._content).get('encoding')
            if encoding_detected:
                return self._content.decode(encoding_detected)
            return self._content.decode('utf-8')
``` | closed | 2022-05-19T12:53:15Z | 2023-06-17T01:18:12Z | https://github.com/tox-dev/tox/issues/2422 | [
"bug:normal",
"help:wanted"
] | julzt0244 | 13 |
babysor/MockingBird | pytorch | 553 | Why are two .h files missing? I don't know what's going on; please advise | **Summary [one-sentence description]**
Why are .h files missing? Two of them are missing, I don't know what's going on; please advise.
**Env & To Reproduce [environment and reproduction]**
Python 3.9, PyTorch 11.0, CUDA 11.6; no model involved
**Screenshots [if any]**
Screenshot of the error from running `pip install -r requirements.txt`:


Screenshot of the error from running `pip install espnet`:


That is the situation: the VC build tools always fail to run. Am I missing some component, or am I missing some files?
| open | 2022-05-15T09:08:09Z | 2022-05-15T12:05:01Z | https://github.com/babysor/MockingBird/issues/553 | [] | NONAME-2121237 | 1 |
tensorpack/tensorpack | tensorflow | 1,561 | nr_tower | If you're asking about an unexpected problem which you do not know the root cause,
use this template. __PLEASE DO NOT DELETE THIS TEMPLATE, FILL IT__:
If you already know the root cause to your problem,
feel free to delete everything in this template.
### 1. What you did:
import argparse
import os
import tensorflow as tf
from tensorflow.contrib.layers import variance_scaling_initializer
from tensorpack import *
from tensorpack.dataflow import dataset
from tensorpack.tfutils.summary import *
from tensorpack.tfutils.symbolic_functions import *
from tensorpack.utils.gpu import get_nr_gpu
max_epoch=400,
nr_tower=max(get_nr_gpu(), 1),
session_init=SaverRestore(args.load) if args.load else None
launch_train_with_config(config,SyncMultiGPUTrainerParameterServer(nr_tower))
(1) **If you're using examples, what's the command you run:**
python /home/liuxp/LQ-Nets-master/cifar10-vgg-small.py --gpu 0 --qw 1 --qa 2
(2) **If you're using examples, have you made any changes to the examples? Paste `git status; git diff` here:**
(3) **If not using examples, help us reproduce your issue:**
### 2. What you observed:
(1) **Include the ENTIRE logs here:**
Traceback (most recent call last):
File "/home/liuxp/LQ-Nets-master/cifar10-vgg-small.py", line 157, in <module>
launch_train_with_config(config, SyncMultiGPUTrainerParameterServer(nr_tower))
NameError: name 'nr_tower' is not defined
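The traceback points at a plain Python scoping issue rather than a tensorpack one: lines such as `nr_tower=max(get_nr_gpu(), 1),` were presumably pasted from inside a `TrainConfig(...)` call, and a keyword argument never binds a module-level variable, so the later bare `nr_tower` is undefined. A minimal reproduction with the fix (an explicit assignment before use):

```python
config = dict(
    max_epoch=400,
    nr_tower=2,  # a keyword argument only: this does NOT bind a variable nr_tower
)

try:
    launch_args = (config, nr_tower)  # same shape as the failing call
except NameError as exc:
    print(exc)  # name 'nr_tower' is not defined

# Fix: bind the name explicitly before passing it along
nr_tower = config["nr_tower"]
launch_args = (config, nr_tower)
print(launch_args[1])  # 2
```

In the script above, the equivalent fix is to put `nr_tower = max(get_nr_gpu(), 1)` on its own line before constructing `SyncMultiGPUTrainerParameterServer(nr_tower)`.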
(2) **Other observations, if any:**
### 3. What you expected, if not obvious.
I would like to debug this.
### 4. Your environment:
python 3.6.13 h12debd9_1
python-dateutil 2.8.2
python-prctl 1.8.1
tensorflow 1.10.0 gpu_py36h8dbd23f_0
tensorflow-base 1.10.0 gpu_py36h3435052_0
tensorflow-gpu 1.10.0
tensorpack 0.9.9
| open | 2024-01-11T09:15:18Z | 2024-01-11T13:29:17Z | https://github.com/tensorpack/tensorpack/issues/1561 | [] | bon1996 | 0 |
roboflow/supervision | computer-vision | 1,554 | Allow TIFF (and more) image formats in `load_yolo_annotations` | ### Search before asking
- [X] I have searched the Supervision [issues](https://github.com/roboflow/supervision/issues) and found no similar feature requests.
### Description
* Currently, `load_yolo_annotations` only allows the `png`, `jpg`, and `jpeg` file formats. `load_yolo_annotations` is called internally by the `sv.DetectionDataset.from_yolo` functionality.
https://github.com/roboflow/supervision/blob/1860fdb0a4e21edc5fa03d973e9f31c055bdcf4f/supervision/dataset/formats/yolo.py#L156
* Ultralytics supports a wide variety of image formats. Copied the following table from [their website](https://docs.ultralytics.com/modes/predict/#image-and-video-formats:~:text=The%20below%20table%20contains%20valid%20Ultralytics%20image%20formats.).
| Image Suffix | Example Predict Command | Reference |
|---------------|-----------------------------------|---------------------------------|
| .bmp | yolo predict source=image.bmp | Microsoft BMP File Format |
| .dng | yolo predict source=image.dng | Adobe DNG |
| .jpeg | yolo predict source=image.jpeg | JPEG |
| .jpg | yolo predict source=image.jpg | JPEG |
| .mpo | yolo predict source=image.mpo | Multi Picture Object |
| .png | yolo predict source=image.png | Portable Network Graphics |
| .tif | yolo predict source=image.tif | Tag Image File Format |
| .tiff | yolo predict source=image.tiff | Tag Image File Format |
| .webp | yolo predict source=image.webp | WebP |
| .pfm | yolo predict source=image.pfm | Portable FloatMap |
* Use of TIFF files is common in satellite imagery such as Sentinel-2. One may prefer to preserve the TIFF format over converting it to PNG/JPG because TIFF allows the storage of georeferencing information.
* I see that the `load_yolo_annotations` uses `cv2.imread` to read the image files. [OpenCV seems to support](https://docs.opencv.org/4.x/d4/da8/group__imgcodecs.html#gacbaa02cffc4ec2422dfa2e24412a99e2) many of the Ultralytics-supported formats.
https://github.com/roboflow/supervision/blob/1860fdb0a4e21edc5fa03d973e9f31c055bdcf4f/supervision/dataset/formats/yolo.py#L170
### Proposals
* P1: We can expand the hardcoded list of allowed formats.
* P2: We may choose not to restrict the image format and let it fail later.
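Proposal P1 could be as small as widening an extension allow-list before globbing. A hypothetical sketch (the names are assumptions, not supervision's actual API):

```python
from pathlib import Path

# Hypothetical widened allow-list; supervision currently hardcodes png/jpg/jpeg
ALLOWED_SUFFIXES = {".png", ".jpg", ".jpeg", ".bmp", ".tif", ".tiff", ".webp"}

def list_image_files(paths):
    """Keep only paths whose extension (case-insensitive) is an allowed image format."""
    return [p for p in paths if Path(p).suffix.lower() in ALLOWED_SUFFIXES]

print(list_image_files(["a.png", "b.TIF", "c.txt", "d.jpeg"]))  # ['a.png', 'b.TIF', 'd.jpeg']
```

P2 would instead drop the filter entirely and let `cv2.imread` return `None` for unreadable files, which then requires a check at the call site.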
### Use case
* Everyone who'd like to use formats other than `png,` `jpg`, and `jpeg` will be able to use this extension.
### Additional
_No response_
### Are you willing to submit a PR?
- [X] Yes I'd like to help by submitting a PR! | closed | 2024-09-28T15:04:57Z | 2025-01-19T13:02:38Z | https://github.com/roboflow/supervision/issues/1554 | [
"enhancement",
"hacktoberfest"
] | patel-zeel | 11 |
nl8590687/ASRT_SpeechRecognition | tensorflow | 254 | Is it necessary to define two functions in file_dict.py? They seem functionally identical; wouldn't keeping only GetSymbolList_trash2() be more concise? | closed | 2021-08-24T11:55:44Z | 2021-09-02T14:14:02Z | https://github.com/nl8590687/ASRT_SpeechRecognition/issues/254 | [] | DQ2020scut | 2 |
seleniumbase/SeleniumBase | web-scraping | 2,251 | UC mode detected on PixelTest (Custom Script and Test Scripts) | Hello from YouTube. I've been working on a scraping bot that uses SeleniumBase, and I need a little assistance with it. It seems that when I check with pixelscan and other tools, my bot is being detected, even when using a simple script. Here is the Frankenstein script I mentioned. I have replaced the actual site with blahblahblah.com, but for reference, the basic concept is this:
**Python script that will collect card information for a trading card game, extract that info to a data frame, hold it, then search google with that information. Once the google search is complete the bot navigates to a website to get price data for that card relative the condition of the card. in the end it exports all of the information to an organized CSV file.**
You will probably think this code is hilarious, as I used many things I now realize are built into SeleniumBase, and I probably didn't need some of them, such as `time` and maybe even BeautifulSoup (still learning a ton).
Anyway, here is the script. You can modify it to stay open at the end and navigate to pixelscan if you want to test, but you might know right away what could be going on. It might be better to troubleshoot with a simple script, but I thought you might find this one interesting and would love feedback.
```
import time
import random
from seleniumbase import SB
from bs4 import BeautifulSoup
import pandas as pd

# Function to extract data from a given URL
def extract_data_from_url(url, data_frame):
    with SB(uc=True, incognito=True) as sb:
        sb.get(url)
        time.sleep(random.uniform(5, 8))  # Random delay after opening the URL
        page_source = sb.get_page_source()
        soup = BeautifulSoup(page_source, 'html.parser')
        data = {}
        label_selectors = {
            "Cert #": "div.certlookup-intro dl:nth-child(2) dd",
            "Card Name": "div.certlookup-intro dl:nth-child(3) dd",
            "Year": "div.certlookup-intro div:nth-child(4) dl:nth-child(2) dd",
            "Language": "div.certlookup-intro div:nth-child(4) dl:nth-child(3) dd",
            "Card Set": "div.certlookup-intro div:nth-child(5) dl:nth-child(1) dd",
            "Card Number": "div.certlookup-intro div:nth-child(5) dl:nth-child(2) dd",
            "Grade": "div.related-info.grades dl dd"
        }
        for label, selector in label_selectors.items():
            element = soup.select_one(selector)
            data[label] = element.text.strip() if element else "Data not found"
        data_frame = data_frame._append(data, ignore_index=True)
    return data_frame

# Function to get the price for a specific grade from the page source of blahblahblah123.com
def get_price_for_grade(page_source, grade):
    soup = BeautifulSoup(page_source, 'html.parser')
    price_rows = soup.select("#full-prices table tr")
    for row in price_rows:
        cells = row.find_all('td')
        if cells:
            grade_cell = cells[0].get_text(strip=True)
            price_cell = cells[1].get_text(strip=True)
            # Check for the special case of grade 10
            if grade == '10' and 'PSA 10' in grade_cell:
                return price_cell
            elif grade_cell == f"Grade {grade}":
                return price_cell
    return "Price not found"

# Read a list of URLs from a file (e.g., "urls.txt")
with open("urls.txt", "r") as url_file:
    urls = url_file.read().splitlines()

# Create an empty DataFrame to store the data
data_frame = pd.DataFrame(columns=["Cert #", "Year", "Language", "Card Set", "Card Name", "Card Number", "Grade", "Price"])

# Iterate through the list of URLs and extract data from each
for url in urls:
    print(f"Extracting data from URL: {url}")
    data_frame = extract_data_from_url(url, data_frame)

    # Construct the Google URL based on the mapping
    google_mapping = {
        1: "Card Name",
        2: "Year",
        3: "Language",
        4: "Card Set",
        5: "Card Number"
    }
    google_url = "https://www.google.com/search?q=" + "+".join([data_frame.iloc[-1][google_mapping[i]] for i in range(1, 6)])

    # Open the Google URL and click on the first hyperlink that goes to www.blahblahblah.com
    with SB(uc=True, incognito=True) as sb:
        sb.get(google_url)
        time.sleep(random.uniform(5, 8))  # Random delay before searching
        sb.wait_for_element('a[href*="www.blahblahblah.com"]')
        first_blahblahblah_link = sb.find_element('a[href*="www.blahblahblah.com"]')
        first_blahblahblah_link.click()
        time.sleep(random.uniform(5, 8))  # Random delay after clicking
        sb.wait_for_ready_state_complete()
        blahblahblah_page_source = sb.get_page_source()

    # Get the price for the grade from the page source of blahblahblah.com
    grade = data_frame.iloc[-1]['Grade']
    # Special handling for grade 10
    grade = '10' if 'Gem Mint 10' in grade else grade
    price = get_price_for_grade(blahblahblah_page_source, grade)
    data_frame.at[data_frame.index[-1], 'Price'] = price
    time.sleep(random.uniform(5, 8))  # Random delay after processing each URL

# Export the DataFrame to a CSV file
data_frame.to_csv('collected_data.csv', index=False)

# If you want to keep the browser open for inspection, otherwise remove this line
# sb.sleep(999)
```
So I decided to try a simpler script, but I still cannot get it to pass detection:
```
from seleniumbase import SB

# Create a function to navigate to the specified URL and keep the browser open
def navigate_to_url_and_keep_open(url):
    with SB(uc=True, incognito=True) as sb:
        # Open the specified URL
        sb.open(url)
        sb.sleep(999)

# Specify the URL you want to navigate to
url = "www.pixelscan.net"

# Navigate to the specified URL and keep the browser open
navigate_to_url_and_keep_open(url)
```
I attempted `sb.get` on the above script with no luck as well. I confirmed I am using the latest SeleniumBase.
Any ideas, thoughts, and recommendations are very welcome. I'm super new to Python and scripts as a whole but have been spending a ton of my spare time digging in.
Appreciate your hard work and dedication! | closed | 2023-11-08T04:43:17Z | 2023-11-09T18:00:25Z | https://github.com/seleniumbase/SeleniumBase/issues/2251 | [
"question",
"UC Mode / CDP Mode"
] | anthonyg45157 | 2 |
pytest-dev/pytest-mock | pytest | 61 | Fix tests for pytest 3 | Hey!
Many tests are failing when running with pytest 3.0.1:
```
_____________________________ TestMockerStub.test_failure_message_with_no_name _____________________________
self = <test_pytest_mock.TestMockerStub instance at 0x7f0b7f55c758>
mocker = <pytest_mock.MockFixture object at 0x7f0b7f51add0>
def test_failure_message_with_no_name(self, mocker):
> self.__test_failure_message(mocker)
test_pytest_mock.py:182:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <test_pytest_mock.TestMockerStub instance at 0x7f0b7f55c758>
mocker = <pytest_mock.MockFixture object at 0x7f0b7f51add0>, kwargs = {}, expected_name = 'mock'
expected_message = 'Expected call: mock()\nNot called'
stub = <MagicMock spec='function' id='139687357426384'>, exc_info = <ExceptionInfo AssertionError tblen=3>
@py_assert1 = AssertionError('Expected call: mock()\nNot called',)
def __test_failure_message(self, mocker, **kwargs):
expected_name = kwargs.get('name') or 'mock'
expected_message = 'Expected call: {0}()\nNot called'.format(expected_name)
stub = mocker.stub(**kwargs)
with pytest.raises(AssertionError) as exc_info:
stub.assert_called_with()
> assert exc_info.value.msg == expected_message
E AttributeError: 'exceptions.AssertionError' object has no attribute 'msg'
test_pytest_mock.py:179: AttributeError
```
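For what it's worth, the failing line points at a pytest 3 API change rather than a mock bug: a plain `AssertionError` carries its message in `args` and `str()`, not in a `.msg` attribute (my reading is that only older pytest-provided exception wrappers exposed one). A stdlib-only illustration of the portable spellings:

```python
# The message the mock assertion raises, and how to read it portably.
try:
    raise AssertionError("Expected call: mock()\nNot called")
except AssertionError as exc:
    assert not hasattr(exc, "msg")      # no .msg on the builtin exception
    portable = str(exc)                 # works regardless of pytest version
    assert portable == exc.args[0] == "Expected call: mock()\nNot called"
    print("portable spellings agree")
```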
| closed | 2016-09-03T10:42:23Z | 2016-09-15T01:48:48Z | https://github.com/pytest-dev/pytest-mock/issues/61 | [
"bug"
] | vincentbernat | 3 |
iperov/DeepFaceLab | deep-learning | 5,405 | Not able to detect GPU - Build_10_09_2021 | ## Expected behavior
DeepFaceLab_NVIDIA_RTX3000_series_build_10_09_2021 should be able to detect installed GPU
## Actual behavior
DeepFaceLab_NVIDIA_RTX3000_series_build_10_09_2021 is not able to detect GPU, **while the older build (build_09_06_2021) is able to detect them.**
## Steps to reproduce
GPU: 3060 RTX 12 GB
Windows 10 Home
Nvidia Studio Driver
| closed | 2021-10-10T07:49:45Z | 2021-10-11T14:44:54Z | https://github.com/iperov/DeepFaceLab/issues/5405 | [] | exploreTech32 | 6 |
microsoft/nni | data-science | 5,653 | Error: /lib64/libstdc++.so.6: version `CXXABI_1.3.8' not found | **Describe the issue**:
Hello, I encountered the following error while using NNI 2.10.1:
```
[2023-08-01 14:07:41] Creating experiment, Experiment ID: aj7wd2ey
[2023-08-01 14:07:41] Starting web server...
node:internal/modules/cjs/loader:1187
return process.dlopen(module, path.toNamespacedPath(filename));
^
Error: /lib64/libstdc++.so.6: version `CXXABI_1.3.8' not found (required by /home/zzdx/.local/lib/python3.10/site-packages/nni_node/node_modules/sqlite3/lib/binding/napi-v6-linux-glibc-x64/node_sqlite3.node)
at Object.Module._extensions..node (node:internal/modules/cjs/loader:1187:18)
at Module.load (node:internal/modules/cjs/loader:981:32)
at Function.Module._load (node:internal/modules/cjs/loader:822:12)
at Module.require (node:internal/modules/cjs/loader:1005:19)
at require (node:internal/modules/cjs/helpers:102:18)
at Object.<anonymous> (/home/zzdx/.local/lib/python3.10/site-packages/nni_node/node_modules/sqlite3/lib/sqlite3-binding.js:4:17)
at Module._compile (node:internal/modules/cjs/loader:1103:14)
at Object.Module._extensions..js (node:internal/modules/cjs/loader:1157:10)
at Module.load (node:internal/modules/cjs/loader:981:32)
at Function.Module._load (node:internal/modules/cjs/loader:822:12) {
code: 'ERR_DLOPEN_FAILED'
}
Thrown at:
at Module._extensions..node (node:internal/modules/cjs/loader:1187:18)
at Module.load (node:internal/modules/cjs/loader:981:32)
at Module._load (node:internal/modules/cjs/loader:822:12)
at Module.require (node:internal/modules/cjs/loader:1005:19)
at require (node:internal/modules/cjs/helpers:102:18)
at /home/zzdx/.local/lib/python3.10/site-packages/nni_node/node_modules/sqlite3/lib/sqlite3-binding.js:4:17
at Module._compile (node:internal/modules/cjs/loader:1103:14)
at Module._extensions..js (node:internal/modules/cjs/loader:1157:10)
at Module.load (node:internal/modules/cjs/loader:981:32)
at Module._load (node:internal/modules/cjs/loader:822:12)
[2023-08-01 14:07:42] WARNING: Timeout, retry...
[2023-08-01 14:07:43] WARNING: Timeout, retry...
[2023-08-01 14:07:44] ERROR: Create experiment failed
Traceback (most recent call last):
File "/home/zzdx/.local/lib/python3.10/site-packages/urllib3/connection.py", line 203, in _new_conn
sock = connection.create_connection(
File "/home/zzdx/.local/lib/python3.10/site-packages/urllib3/util/connection.py", line 85, in create_connection
raise err
File "/home/zzdx/.local/lib/python3.10/site-packages/urllib3/util/connection.py", line 73, in create_connection
sock.connect(sa)
ConnectionRefusedError: [Errno 111] Connection refused
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/zzdx/.local/lib/python3.10/site-packages/urllib3/connectionpool.py", line 790, in urlopen
response = self._make_request(
File "/home/zzdx/.local/lib/python3.10/site-packages/urllib3/connectionpool.py", line 496, in _make_request
conn.request(
File "/home/zzdx/.local/lib/python3.10/site-packages/urllib3/connection.py", line 395, in request
self.endheaders()
File "/usr/local/python3/lib/python3.10/http/client.py", line 1278, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "/usr/local/python3/lib/python3.10/http/client.py", line 1038, in _send_output
self.send(msg)
File "/usr/local/python3/lib/python3.10/http/client.py", line 976, in send
self.connect()
File "/home/zzdx/.local/lib/python3.10/site-packages/urllib3/connection.py", line 243, in connect
self.sock = self._new_conn()
File "/home/zzdx/.local/lib/python3.10/site-packages/urllib3/connection.py", line 218, in _new_conn
raise NewConnectionError(
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x7f2e65bc04c0>: Failed to establish a new connection: [Errno 111] Connection refused
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/zzdx/.local/lib/python3.10/site-packages/requests/adapters.py", line 486, in send
resp = conn.urlopen(
File "/home/zzdx/.local/lib/python3.10/site-packages/urllib3/connectionpool.py", line 844, in urlopen
retries = retries.increment(
File "/home/zzdx/.local/lib/python3.10/site-packages/urllib3/util/retry.py", line 515, in increment
raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type]
urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8080): Max retries exceeded with url: /api/v1/nni/check-status (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f2e65bc04c0>: Failed to establish a new connection: [Errno 111] Connection refused'))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/zzdx/search_api/area_api/3_5_1126_1623_711_715-30min_20230801140738/main.py", line 51, in <module>
experiment.run(8080)
File "/home/zzdx/.local/lib/python3.10/site-packages/nni/experiment/experiment.py", line 180, in run
self.start(port, debug)
File "/home/zzdx/.local/lib/python3.10/site-packages/nni/experiment/experiment.py", line 135, in start
self._start_impl(port, debug, run_mode, None, [])
File "/home/zzdx/.local/lib/python3.10/site-packages/nni/experiment/experiment.py", line 103, in _start_impl
self._proc = launcher.start_experiment(self._action, self.id, config, port, debug, run_mode,
File "/home/zzdx/.local/lib/python3.10/site-packages/nni/experiment/launcher.py", line 148, in start_experiment
raise e
File "/home/zzdx/.local/lib/python3.10/site-packages/nni/experiment/launcher.py", line 126, in start_experiment
_check_rest_server(port, url_prefix=url_prefix)
File "/home/zzdx/.local/lib/python3.10/site-packages/nni/experiment/launcher.py", line 196, in _check_rest_server
rest.get(port, '/check-status', url_prefix)
File "/home/zzdx/.local/lib/python3.10/site-packages/nni/experiment/rest.py", line 43, in get
return request('get', port, api, prefix=prefix)
File "/home/zzdx/.local/lib/python3.10/site-packages/nni/experiment/rest.py", line 31, in request
resp = requests.request(method, url, timeout=timeout)
File "/home/zzdx/.local/lib/python3.10/site-packages/requests/api.py", line 59, in request
return session.request(method=method, url=url, **kwargs)
File "/home/zzdx/.local/lib/python3.10/site-packages/requests/sessions.py", line 589, in request
resp = self.send(prep, **send_kwargs)
File "/home/zzdx/.local/lib/python3.10/site-packages/requests/sessions.py", line 703, in send
r = adapter.send(request, **kwargs)
File "/home/zzdx/.local/lib/python3.10/site-packages/requests/adapters.py", line 519, in send
raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8080): Max retries exceeded with url: /api/v1/nni/check-status (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f2e65bc04c0>: Failed to establish a new connection: [Errno 111] Connection refused'))
[2023-08-01 14:07:44] Stopping experiment, please wait...
[2023-08-01 14:07:44] Experiment stopped
```
**Environment**:
- NNI version: 2.10.1
- Training service (local|remote|pai|aml|etc): remote
- Client OS: Windows 11
- Server OS (for remote mode only): Centos 7.9
- Python version: 3.10
- PyTorch/TensorFlow version: None
- Is conda/virtualenv/venv used?: no
- Is running in Docker?: no
**Log message**:
- nnimanager.log:
```
[2023-08-01 14:38:48] INFO (nni.experiment) Creating experiment, Experiment ID: s1dz03t2
[2023-08-01 14:38:48] INFO (nni.experiment) Starting web server...
[2023-08-01 14:38:49] WARNING (nni.experiment) Timeout, retry...
[2023-08-01 14:38:50] WARNING (nni.experiment) Timeout, retry...
[2023-08-01 14:38:51] ERROR (nni.experiment) Create experiment failed
[2023-08-01 14:38:51] INFO (nni.experiment) Stopping experiment, please wait...
```
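The missing tag narrows this down: `CXXABI_1.3.8` first appears in the libstdc++ shipped with GCC 4.9, while stock CentOS 7 provides GCC 4.8.5, whose libstdc++ stops at `CXXABI_1.3.7`. In other words, the prebuilt `node_sqlite3.node` was compiled against a newer toolchain than the host offers (typical remedies are a newer libstdc++ via devtoolset/conda or a newer distro). A small lookup sketch; the version mapping is abridged from the GNU libstdc++ ABI history:

```python
# First GCC release whose libstdc++ exports each CXXABI tag
# (abridged from the GNU libstdc++ ABI history).
CXXABI_MIN_GCC = {
    "CXXABI_1.3.7": "4.8",
    "CXXABI_1.3.8": "4.9",
    "CXXABI_1.3.9": "5.1",
}

def explain(missing_tag: str, system_gcc: str) -> str:
    need = CXXABI_MIN_GCC[missing_tag]
    return (f"binary wants {missing_tag} (needs GCC >= {need}); "
            f"host toolchain is GCC {system_gcc}")

print(explain("CXXABI_1.3.8", "4.8.5 (CentOS 7 default)"))
```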
| closed | 2023-08-01T06:42:16Z | 2023-08-01T09:07:31Z | https://github.com/microsoft/nni/issues/5653 | [] | yifan-dadada | 0 |
zappa/Zappa | flask | 1064 | Lambda client read timeout should match maximum Lambda execution time |
## Context
The boto3 Lambda client used by the Zappa CLI has its read timeout set to [5 minutes](https://github.com/zappa/Zappa/blob/master/zappa/core.py#L340), while the maximum Lambda function execution time is now [15 minutes](https://aws.amazon.com/about-aws/whats-new/2018/10/aws-lambda-supports-functions-that-can-run-up-to-15-minutes). This means that `zappa invoke` or `zappa manage` could time out before the actual function execution completes. I encountered this recently with a long-running Django management command, using Zappa 0.54 on Python 3.8.
## Expected Behavior
The Lambda client used by the CLI should not time out before the execution completes.
## Actual Behavior
The client times out after 5 minutes regardless of the configured execution time, which can be up to 15 minutes.
## Possible Fix
The timeout should be updated to the maximum Lambda function execution time.
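A toy stdlib reproduction of the failure mode, scaled from minutes down to fractions of a second: a client read timeout shorter than the server-side work fires even though the work would eventually finish, which is exactly what the hard-coded 300-second boto3 timeout does to a Lambda configured near the 900-second ceiling.

```python
import socket
import threading
import time

def invoke_with_timeout(read_timeout: float, exec_time: float) -> str:
    """Mimic `zappa invoke` (client) against a slow Lambda (server)."""
    server = socket.socket()
    server.bind(("127.0.0.1", 0))
    server.listen(1)
    port = server.getsockname()[1]

    def handler():
        conn, _ = server.accept()
        time.sleep(exec_time)          # "function still executing"
        try:
            conn.sendall(b"done")
        except OSError:
            pass                       # client may already have given up
        conn.close()

    threading.Thread(target=handler, daemon=True).start()
    client = socket.create_connection(("127.0.0.1", port), timeout=read_timeout + exec_time + 1)
    client.settimeout(read_timeout)    # cf. 300 s client vs. 900 s Lambda max
    try:
        return client.recv(4).decode()
    except socket.timeout:
        return "timed out"
    finally:
        client.close()
        server.close()

print(invoke_with_timeout(read_timeout=0.2, exec_time=0.5))  # timed out
print(invoke_with_timeout(read_timeout=1.0, exec_time=0.1))  # done
```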
## Steps to Reproduce
1. Set up a Lambda function with a timeout greater than 5 minutes.
2. Deploy some code that takes more than 5 minutes to run.
3. Invoke the long-running code using `zappa invoke` or `zappa manage`.
## Your Environment
* Zappa version used: 0.54
* Operating System and Python version: macOS 11.6.1, Python 3.8.11
* The output of `pip freeze`: `zappa==0.54.0`
| closed | 2021-11-03T21:54:29Z | 2021-11-05T17:40:25Z | https://github.com/zappa/Zappa/issues/1064 | [] | rolandcrosby-check | 0 |
koxudaxi/fastapi-code-generator | pydantic | 323 | Specs with vendor extensions might lead to invalid Python code | Users can add additional information with x-fields. But Python does not accept identifiers containing dashes.
Here are some common vendor extensions documented: https://github.com/Redocly/redoc/blob/main/docs/redoc-vendor-extensions.md#x-logo
Here is an example spec:
```yaml
openapi: 3.0.0
info:
title: Example
version: 1.0.0
x-audience: company-internal
x-logo:
url: https://www.example.com/company-logo.png
```
The generated code looks like this
```python
app = FastAPI(
title='Example',
version='1.0.0',
x - audience='company-internal',
x - logo={'url': 'https://www.example.com/company-logo.png'},
)
```
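The constraint behind that broken output is easy to check: the template emits each `info` key as a Python keyword argument, and dashed vendor-extension keys are not valid identifiers, as `str.isidentifier()` confirms:

```python
info_keys = ["title", "version", "x-audience", "x-logo"]

# Only identifier-safe keys can be emitted as literal keyword arguments.
invalid = [k for k in info_keys if not k.isidentifier()]
print(invalid)  # ['x-audience', 'x-logo']
```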
I fixed it this way:
```patch
--- fastapi_code_generator/template/main.jinja2.orig 2023-02-13 16:11:00.677867572 +0100
+++ fastapi_code_generator/template/main.jinja2 2023-02-13 16:37:28.944897684 +0100
@@ -8,7 +8,9 @@
{% if info %}
{% for key,value in info.items() %}
{% set info_value= value.__repr__() %}
+ {%- if not key.startswith("x-") -%}
{{ key }} = {{info_value}},
+ {%- endif -%}
{% endfor %}
{% endif %}
)
```
First I tried to add all fields as a dict passed as kwargs:
```python
app = FastAPI(**{
'title': 'Example',
'version': '1.0.0',
'x-audience': 'company-internal',
'x-logo': {'url': 'https://www.example.com/company-logo.png'},
})
```
But ignoring these fields seems sufficient because FastAPI ignores them anyway. | open | 2023-02-13T15:51:05Z | 2023-02-13T15:51:05Z | https://github.com/koxudaxi/fastapi-code-generator/issues/323 | [] | hajs | 0 |
keras-team/keras | data-science | 20,335 | Floating point exception (core dumped) with onednn opt on tensorflow backend | As shown in this [colab](https://colab.research.google.com/drive/1XjoAtDP4SC2qyLWslW8qWzqQusn9eDOu?usp=sharing), the kernel, not the program, crashes if the OneDNN OPT is on and the output tensor shape contains a zero dimension.
As discussed in tensorflow/tensorflow#77131, and also shown in the above colab, we found that the error disappeared after downgrading from keras 3.0 to keras 2.0.
Therefore, I think some errors are introduced when updating from keras 2.0 to keras 3.0.
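If it helps anyone triangulate, TensorFlow exposes a documented kill-switch for the oneDNN custom ops, the `TF_ENABLE_ONEDNN_OPTS` environment variable, which must be set before `import tensorflow` runs anywhere in the process. The sketch stops short of the actual import so it stays runnable without TensorFlow installed:

```python
import os

# Must be set before TensorFlow is imported anywhere in the process.
os.environ["TF_ENABLE_ONEDNN_OPTS"] = "0"
print(os.environ["TF_ENABLE_ONEDNN_OPTS"])  # 0
# import tensorflow as tf  # oneDNN custom ops would now be disabled
```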
| closed | 2024-10-09T08:41:45Z | 2024-11-14T02:01:52Z | https://github.com/keras-team/keras/issues/20335 | [
"type:bug/performance",
"stat:awaiting response from contributor",
"stale",
"backend:tensorflow"
] | Shuo-Sun20 | 4 |
flasgger/flasgger | flask | 54 | cannot access the website using my host ip | Hi, flasgger made my task easier in creating documentation for my API, but I cannot access the documentation unless the Flask API is hosted on port 5000, and I also cannot access the documentation via my host IP (http://10.0.5.40:5000/apidocs/index.html); it only works when localhost is used. Any suggestions? | closed | 2017-03-23T12:39:21Z | 2017-03-23T19:00:46Z | https://github.com/flasgger/flasgger/issues/54 | [] | coolguy456 | 2 |
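On the flasgger question above: this is almost always the Flask dev server's bind address rather than anything flasgger does. By default `app.run()` binds only the loopback interface, so the docs are reachable via localhost but not via the machine's IP; starting the app with `app.run(host="0.0.0.0", port=5000)` binds all interfaces. The difference at the socket level, in stdlib terms:

```python
import socket

for addr in ("127.0.0.1", "0.0.0.0"):
    s = socket.socket()
    s.bind((addr, 0))            # port 0 = pick any free port
    print(s.getsockname()[0])    # 127.0.0.1 is loopback-only; 0.0.0.0 is all interfaces
    s.close()
```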
lux-org/lux | pandas | 144 | Speed up test by using shared global variable | Use pytest fixture global variable to share dataframes to prevent having to load in the test dataset every time across the different tests. For example:

This might also help resolve #97 . | closed | 2020-11-17T14:41:00Z | 2020-11-19T12:36:00Z | https://github.com/lux-org/lux/issues/144 | [
"easy",
"test"
] | dorisjlee | 1 |
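On the lux issue above: pytest's fixture scoping is the standard answer; a `scope="session"` fixture defined in `conftest.py` runs its loader once and hands the same dataframe to every test that requests it. The load-once mechanic, reduced to a stdlib toy:

```python
from functools import lru_cache

loads = {"count": 0}

@lru_cache(maxsize=None)
def load_dataset(name: str) -> str:
    loads["count"] += 1            # stands in for an expensive read_csv
    return f"<dataframe:{name}>"

for _ in range(10):                # ten "tests" sharing one load
    load_dataset("car.csv")

print(loads["count"])  # 1
```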
milesmcc/shynet | django | 263 | [Discussion] Support Docker Secrets | I recently discovered that passing secrets to Docker containers is discouraged, and that is the reason Docker does not support out of the shelf mounting secrets into env variables:
> Developers often rely on environment variables to store sensitive data, which is okay for some scenarios but not recommended for Docker containers. Environment variables are even less secure than files. They are vulnerable in more ways, such as:
> * [Linked containers](https://docs.docker.com/network/links/)
> * The [docker inspect](https://stackoverflow.com/questions/30342796/how-to-get-env-variable-when-doing-docker-inspect) command
> * [Child processes](https://devblogs.microsoft.com/oldnewthing/20150915-00/?p=91591)
> * Event log files
(https://snyk.io/blog/keeping-docker-secrets-secure/)
I've been using a utility I made for a while in my Django projects to easily get Docker secrets with fallback to Env environment, and even supporting custom environ objects:
https://gist.github.com/sergioisidoro/7972229bb5826c25f12e7a406f11e7cd
I'm wondering if you would be willing to accept a PR which uses this wrapper for most sensitive stuff (Django secret key, DB password, etc) | open | 2023-04-03T17:02:39Z | 2023-04-04T07:39:09Z | https://github.com/milesmcc/shynet/issues/263 | [] | sergioisidoro | 2 |
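A hedged sketch of the utility described above (reconstructed from the description, not copied from the gist): read the Docker secret file under `/run/secrets` first, then fall back to the environment, optionally through a custom environ mapping:

```python
import os

def get_secret(name, default=None, secrets_dir="/run/secrets", environ=os.environ):
    """Docker-secret lookup with environment-variable fallback."""
    path = os.path.join(secrets_dir, name)
    try:
        with open(path) as fh:
            return fh.read().strip()
    except OSError:
        return environ.get(name, default)

# With no secret file mounted, the environment fallback applies:
os.environ["DJANGO_SECRET_KEY"] = "dev-only"
print(get_secret("DJANGO_SECRET_KEY"))  # dev-only
```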
horovod/horovod | deep-learning | 3,115 | [MacOS] Race condition makes parallel tests hang on macOS | Sometimes our tests in `test/parallel` hang on macOS, e.g. [here](https://github.com/horovod/horovod/runs/3338155166?check_suite_focus=true):
[1,0]<stderr>:Missing ranks:
[1,0]<stderr>:0: [allgather.duplicate_name, allgather.noname.1362]
[1,0]<stderr>:1: [allgather.noname.1361]
[1,0]<stderr>:[2021-08-16 13:29:51.206828: W /private/var/folders/24/8k48jl6d249_n_qfxwsl6xvm0000gn/T/pip-req-build-
e3uh2f38/horovod/common/stall_inspector.cc:107] One or more tensors were submitted to be reduced, gathered or
broadcasted by subset of ranks and are waiting for remainder of ranks for more than 60 seconds. This may
indicate that different ranks are trying to submit different tensors or that only subset of ranks is submitting
tensors, which will cause deadlock.
This has been mitigated by retrying these tests but should be investigated, understood and fixed systematically. | open | 2021-08-17T14:01:50Z | 2021-08-17T14:03:28Z | https://github.com/horovod/horovod/issues/3115 | [
"bug"
] | EnricoMi | 0 |
marimo-team/marimo | data-visualization | 3,402 | Don't notify on save | When pressing the save button, a notification pops up.
Unless saving fails, I don't see a need for a notification, especially since it blocks the "run all stale cells" button.
 | closed | 2025-01-12T09:24:05Z | 2025-01-14T04:54:39Z | https://github.com/marimo-team/marimo/issues/3402 | [] | Hofer-Julian | 1 |
ymcui/Chinese-LLaMA-Alpaca | nlp | 413 | The trained LoRA model cannot be loaded | ```
HFValidationError: Repo id must be in the form 'repo_name' or 'namespace/repo_name': './out/lora/checkpoint-70000'. Use `repo_type` argument if needed.
```
I customized an output_dir and trained with the script bundled with this repo; after saving, the LoRA model can no longer be reloaded. | closed | 2023-05-23T08:29:03Z | 2023-06-05T22:02:19Z | https://github.com/ymcui/Chinese-LLaMA-Alpaca/issues/413 | [
"stale"
] | lucasjinreal | 5 |
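For readers hitting the same HFValidationError above: the string is only treated as a filesystem path if it resolves to an existing directory (relative paths resolve against the current working directory, not the training script's location); otherwise the loader falls back to interpreting it as a Hub repo id, and `./out/...` fails that validation. A rough stdlib approximation of the decision; the regex is an approximation of huggingface_hub's validator, not the real one:

```python
import os
import re

REPO_ID = re.compile(r"^[A-Za-z0-9][\w.-]*(/[A-Za-z0-9][\w.-]*)?$")

def classify(name_or_path: str) -> str:
    if os.path.isdir(name_or_path):
        return "local directory"
    if REPO_ID.match(name_or_path):
        return "hub repo id"
    return "HFValidationError"

print(classify("./out/lora/checkpoint-70000"))  # HFValidationError if the dir is absent
print(classify("namespace/repo_name"))          # hub repo id
```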
pydata/xarray | pandas | 9,582 | Disable methods implemented by map_over_subtree and inheritance from Dataset | ### What is your issue?
Today we discussed how there are a bunch of methods for DataTree that are implemented using the somewhat sketchy approach of taking the equivalent method from `Dataset`, then applying it using `map_over_subtree`. Because there are so many of these methods, and they are ultimately just syntactic sugar that saves the user about 3 lines of code each, these were originally implemented in `xarray-contrib/datatree` without tests.
We decided that it would be safer to disable these methods for now, and then add them back in as we double-check that they actually work / write unit tests for them. The logic being that it would be less painful to just force users to write out their loops over the tree in early versions than it would be for our new inheritance model to break the method in some subtle way without anyone realizing.
We will try to get some very important methods into the first release (particularly `isel`/`sel` and arithmetic).
The code with the lists of methods to be disabled is here
https://github.com/pydata/xarray/blob/main/xarray/core/datatree_ops.py
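For context on what users will write out by hand in the meantime, here is the shape of the sugar being disabled, with a plain dict standing in for a DataTree and list reversal standing in for a `Dataset` method such as `isel`:

```python
# Plain-dict stand-in for a DataTree whose leaves are Dataset-like objects.
tree = {"/a": [1, 2, 3], "/b": [4, 5]}

def map_over_subtree(tree, func):
    """The ~3 lines each auto-generated DataTree method was saving."""
    return {path: func(node) for path, node in tree.items()}

result = map_over_subtree(tree, lambda ds: ds[::-1])
print(result)  # {'/a': [3, 2, 1], '/b': [5, 4]}
```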
cc @shoyer, @flamingbear, @kmuehlbauer, @owenlittlejohns , @castelao, @keewis | closed | 2024-10-04T21:53:19Z | 2024-10-07T11:42:01Z | https://github.com/pydata/xarray/issues/9582 | [
"topic-DataTree"
] | TomNicholas | 0 |
FujiwaraChoki/MoneyPrinterV2 | automation | 100 | about to crashout over this tweet button , cant get the PR fix to work either . |
ℹ️ => Fetching songs...
============ OPTIONS ============
1. YouTube Shorts Automation
2. Twitter Bot
3. Affiliate Marketing
4. Outreach
5. Quit
=================================
Select an option: 2
ℹ️ Starting Twitter Bot...
+----+--------------------------------------+----------+-------------------+
| ID | UUID | Nickname | Account Topic |
+----+--------------------------------------+----------+-------------------+
| 1 | 4568db8e-8f05-4238-96d1-f0af9ebaff61 | rtts | music pop culture |
+----+--------------------------------------+----------+-------------------+
❓ Select an account to start: 1
============ OPTIONS ============
1. Post something
2. Reply to something
3. Show all Posts
4. Setup CRON Job
5. Quit
=================================
❓ Select an option: 1
ℹ️ Generating a post...
ℹ️ Length of post: 269
ℹ️ Generating a post...
ℹ️ Length of post: 225
=> Posting to Twitter: 🎶✨ The resurgence of 90s pop i...
Traceback (most recent call last):
File "/Users/acetwotimes/MoneyPrinterV2/src/classes/Twitter.py", line 83, in post
bot.find_element(By.XPATH, "//a[@data-testid='SideNav_NewTweet_Button']").click()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/anaconda3/lib/python3.11/site-packages/selenium/webdriver/remote/webdriver.py", line 888, in find_element
return self.execute(Command.FIND_ELEMENT, {"using": by, "value": value})["value"]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/anaconda3/lib/python3.11/site-packages/selenium/webdriver/remote/webdriver.py", line 429, in execute
self.error_handler.check_response(response)
File "/usr/local/anaconda3/lib/python3.11/site-packages/selenium/webdriver/remote/errorhandler.py", line 232, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.NoSuchElementException: Message: Unable to locate element: //a[@data-testid='SideNav_NewTweet_Button']; For documentation on this error, please visit: https://www.selenium.dev/documentation/webdriver/troubleshooting/errors#no-such-element-exception
Stacktrace:
RemoteError@chrome://remote/content/shared/RemoteError.sys.mjs:8:8
WebDriverError@chrome://remote/content/shared/webdriver/Errors.sys.mjs:197:5
NoSuchElementError@chrome://remote/content/shared/webdriver/Errors.sys.mjs:527:5
dom.find/</<@chrome://remote/content/shared/DOM.sys.mjs:136:16
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/acetwotimes/MoneyPrinterV2/src/main.py", line 438, in <module>
main()
File "/Users/acetwotimes/MoneyPrinterV2/src/main.py", line 277, in main
twitter.post()
File "/Users/acetwotimes/MoneyPrinterV2/src/classes/Twitter.py", line 86, in post
bot.find_element(By.XPATH, "//a[@data-testid='SideNav_NewTweet_Button']").click()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/anaconda3/lib/python3.11/site-packages/selenium/webdriver/remote/webdriver.py", line 888, in find_element
return self.execute(Command.FIND_ELEMENT, {"using": by, "value": value})["value"]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/anaconda3/lib/python3.11/site-packages/selenium/webdriver/remote/webdriver.py", line 429, in execute
self.error_handler.check_response(response)
File "/usr/local/anaconda3/lib/python3.11/site-packages/selenium/webdriver/remote/errorhandler.py", line 232, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.NoSuchElementException: Message: Unable to locate element: //a[@data-testid='SideNav_NewTweet_Button']; For documentation on this error, please visit: https://www.selenium.dev/documentation/webdriver/troubleshooting/errors#no-such-element-exception
Stacktrace:
RemoteError@chrome://remote/content/shared/RemoteError.sys.mjs:8:8
WebDriverError@chrome://remote/content/shared/webdriver/Errors.sys.mjs:197:5
NoSuchElementError@chrome://remote/content/shared/webdriver/Errors.sys.mjs:527:5
dom.find/</<@chrome://remote/content/shared/DOM.sys.mjs:136:16
##########################################################################
❓ Select an option: 3
+----+----------------------+-------------------------------------------------------------------+
| ID | Date | Content |
+----+----------------------+-------------------------------------------------------------------+
| 1 | 02/16/2025, 03:04:45 | 🎶✨ The resurgence of 90s pop icons in today's music scene is... |
+----+----------------------+-------------------------------------------------------------------+
============ OPTIONS ============
1. Post something
2. Reply to something
3. Show all Posts
4. Setup CRON Job
5. Quit
=================================
❓ Select an option: 1
ℹ️ Generating a post...
ℹ️ Length of post: 208
=> Preparing to post on Twitter: 🎶✨ The resurgence of 90s pop i...
Failed to find the text box element.
Tweet content (printed to terminal):
🎶✨ The resurgence of 90s pop icons in today's music scene is a nostalgic treat for fans! From remixes to surprise collaborations, it's a reminder that great music never truly fades away. #90sPop #MusicRevival
##########################################################################
So I can generate but not type or find the buttons or boxes; I even tried flipping the code so I can type first... that didn't work either. Any suggestions? I've been stuck on this part for a minute, and I've done all the fixes I've found so far, including using a CSS selector. I get nada.
twitter.py code
```python
import re
import g4f
import sys
import os
import json
from cache import *
from config import *
from status import *
from constants import *
from typing import List
from datetime import datetime
from termcolor import colored
from selenium_firefox import *
from selenium import webdriver
from selenium.common import exceptions
from selenium.webdriver.common import keys
from selenium.webdriver.common.by import By
from selenium.webdriver.firefox.service import Service
from selenium.webdriver.firefox.options import Options
from webdriver_manager.firefox import GeckoDriverManager
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# Import joblib for parallel processing
from joblib import Parallel, delayed


class Twitter:
    """
    Class for the Bot that grows a Twitter account.
    """

    def __init__(self, account_uuid: str, account_nickname: str, fp_profile_path: str, topic: str) -> None:
        """
        Initializes the Twitter Bot.

        Args:
            account_uuid (str): The account UUID
            account_nickname (str): The account nickname
            fp_profile_path (str): The path to the Firefox profile
            topic (str): The topic to generate posts about

        Returns:
            None
        """
        self.account_uuid: str = account_uuid
        self.account_nickname: str = account_nickname
        self.fp_profile_path: str = fp_profile_path
        self.topic: str = topic

        # Initialize the Firefox options
        self.options: Options = Options()

        # Set headless state of browser if enabled
        if get_headless():
            self.options.add_argument("--headless")

        # Set the Firefox profile path
        self.options.set_preference("profile", fp_profile_path)

        # Initialize the Firefox service
        self.service: Service = Service(GeckoDriverManager().install())

        # Initialize the browser
        self.browser: webdriver.Firefox = webdriver.Firefox(service=self.service, options=self.options)

        # Initialize the wait instance
        self.wait: WebDriverWait = WebDriverWait(self.browser, 40)

    def post(self, text: str = None) -> None:
        """
        Starts the Twitter Bot.

        Args:
            text (str): The text to post

        Returns:
            None
        """
        bot: webdriver.Firefox = self.browser
        verbose: bool = get_verbose()

        bot.get("https://x.com/compose/post")

        post_content: str = self.generate_post()
        now: datetime = datetime.now()

        # Show a preview of the tweet content
        print(colored(" => Preparing to post on Twitter:", "blue"), post_content[:30] + "...")

        # Determine the tweet content (either generated or provided)
        body = post_content if text is None else text

        # Try to find the text box and type the content
        text_box = None
        selectors = [
            (By.CSS_SELECTOR, "div.notranslate.public-DraftEditor-content[role='textbox']"),
            (By.XPATH, "//div[@data-testid='tweetTextarea_0']//div[@role='textbox']")
        ]
        for selector in selectors:
            try:
                text_box = self.wait.until(EC.element_to_be_clickable(selector))
                text_box.click()
                text_box.send_keys(body)
                break
            except exceptions.TimeoutException:
                continue

        # If the text box wasn't found, print error, cache the post, and exit gracefully
        if text_box is None:
            print(colored("Failed to find the text box element.", "red"))
            print(colored("Tweet content (printed to terminal):", "yellow"))
            print(body)
            self.add_post({
                "content": post_content,
                "date": now.strftime("%m/%d/%Y, %H:%M:%S")
            })
            return

        # Try to find the "Post" button and click it
        tweet_button = None
        selectors = [
            (By.XPATH, "//span[contains(@class, 'css-1jxf684') and text()='Post']"),
            (By.XPATH, "//*[text()='Post']")
        ]
        for selector in selectors:
            try:
                tweet_button = self.wait.until(EC.element_to_be_clickable(selector))
                tweet_button.click()
                break
            except exceptions.TimeoutException:
                continue

        # If the tweet button wasn't found, print error, cache the post, and exit gracefully
        if tweet_button is None:
            print(colored("Failed to find the tweet button element.", "red"))
            print(colored("Tweet content (printed to terminal):", "yellow"))
            print(body)
            self.add_post({
                "content": post_content,
                "date": now.strftime("%m/%d/%Y, %H:%M:%S")
            })
            return

        if verbose:
            print(colored(" => Pressed [ENTER] Button on Twitter..", "blue"))

        # Wait for confirmation that the tweet has been posted
        self.wait.until(EC.presence_of_element_located((By.XPATH, "//div[@data-testid='tweetButton']")))

        # Add the post to the cache
        self.add_post({
            "content": post_content,
            "date": now.strftime("%m/%d/%Y, %H:%M:%S")
        })

        success("Posted to Twitter successfully!")

    def get_posts(self) -> List[dict]:
        """
        Gets the posts from the cache.

        Returns:
            posts (List[dict]): The posts
        """
        if not os.path.exists(get_twitter_cache_path()):
            # Create the cache file if it doesn't exist
            with open(get_twitter_cache_path(), 'w') as file:
                json.dump({"posts": []}, file, indent=4)

        with open(get_twitter_cache_path(), 'r') as file:
            parsed = json.load(file)

        # Find our account and its posts
        accounts = parsed.get("accounts", [])
        for account in accounts:
            if account["id"] == self.account_uuid:
                posts = account.get("posts", [])
                return posts

        return []

    def add_post(self, post: dict) -> None:
        """
        Adds a post to the cache.

        Args:
            post (dict): The post to add

        Returns:
            None
        """
        posts = self.get_posts()
        posts.append(post)

        with open(get_twitter_cache_path(), 'r') as file:
            previous_json = json.load(file)

        # Find our account and append the new post
        accounts = previous_json.get("accounts", [])
        for account in accounts:
            if account["id"] == self.account_uuid:
                account["posts"].append(post)

        # Commit changes to the cache
        with open(get_twitter_cache_path(), "w") as f:
            json.dump(previous_json, f, indent=4)

    def generate_post(self) -> str:
        """
        Generates a post for the Twitter account based on the topic.

        Returns:
            post (str): The post
        """
        completion = g4f.ChatCompletion.create(
            model=parse_model(get_model()),
            messages=[
                {
                    "role": "user",
                    "content": f"Generate a Twitter post about: {self.topic} in {get_twitter_language()}. "
                               "The limit is 2 sentences. Choose a specific sub-topic of the provided topic."
                }
            ]
        )

        if get_verbose():
            info("Generating a post...")

        if completion is None:
            error("Failed to generate a post. Please try again.")
            sys.exit(1)

        # Remove asterisks and quotes from the generated content
        completion = re.sub(r"\*", "", completion).replace("\"", "")

        if get_verbose():
            info(f"Length of post: {len(completion)}")

        # Instead of recursively generating a new post, trim it if it's too long.
        max_length = 260
        if len(completion) > max_length:
            # Optionally, you could trim on a word boundary instead of a strict character limit.
            trimmed = completion[:max_length].rsplit(" ", 1)[0] + "..."
            if get_verbose():
                info(f"Trimmed post to {len(trimmed)} characters.")
            return trimmed

        return completion
```
| open | 2025-02-16T08:15:55Z | 2025-02-18T20:37:48Z | https://github.com/FujiwaraChoki/MoneyPrinterV2/issues/100 | [] | ragnorcap | 3 |
roboflow/supervision | machine-learning | 1,543 | [InferenceSlicer] Contradictory documentation regarding overlap_ratio_wh | ### Search before asking
- [X] I have searched the Supervision [issues](https://github.com/roboflow/supervision/issues) and found no similar bug report.
### Bug
There appears to be an inconsistency in the release notes regarding the `InferenceSlicer` object in the latest update. Specifically, the `overlap_ratio_wh` parameter is mentioned as both a new feature and deprecated in different sections of the release notes.
In the "Changed" section, it states:
> InferenceSlicer now features an `overlap_ratio_wh` parameter, making it easier to compute slice sizes when handling overlapping slices. [#1434](https://github.com/supervisely/supervisely/issues/1434)
However, in the "Deprecated" section, it mentions:
> `overlap_ratio_wh` in InferenceSlicer.__init__ is deprecated and will be removed in supervision-0.27.0. Use `overlap_wh` instead.
This seems contradictory: on one hand, `overlap_ratio_wh` is presented as a new feature, while on the other hand, it is marked for deprecation and removal in favor of `overlap_wh`.
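If it helps, my reading of the rename is that `overlap_ratio_wh` was relative to the slice size while `overlap_wh` is in absolute pixels, so migrating is just a multiplication (sketch only; the exact semantics should be confirmed against the supervision docs):

```python
# Hypothetical conversion from the deprecated ratio form to the absolute form.
slice_wh = (640, 640)
overlap_ratio_wh = (0.2, 0.2)
overlap_wh = tuple(int(s * r) for s, r in zip(slice_wh, overlap_ratio_wh))
print(overlap_wh)  # (128, 128)
```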
Could you clarify whether `overlap_ratio_wh` should be used, or if we should transition directly to `overlap_wh`? Additionally, updating the documentation to make this distinction clear would help avoid confusion for future users.
Thank you!
### Environment
- Supervision 0.23.0
- Python 3.11
### Minimal Reproducible Example
_No response_
### Additional
_No response_
### Are you willing to submit a PR?
- [x] Yes I'd like to help by submitting a PR! | closed | 2024-09-25T12:32:13Z | 2024-10-01T12:18:14Z | https://github.com/roboflow/supervision/issues/1543 | [
"bug"
] | tibeoh | 5 |
sktime/pytorch-forecasting | pandas | 1,032 | Can you please help in interpreting the output of plot_prediction_actual_by_variable()? | PyTorch-Forecasting version: 0.10.2
Torch: 1.10.1
Python version: 3.8
Operating System: Windows
I would need some help interpreting the plot produced by _plot_prediction_actual_by_variable()_
I take two examples hereafter (one related to a continuous variable and one to a categorical variable).


questions:
- are these the forecasts of the X variables? if yes, is such a forecast produced by the lstm-decoder? (all of it then fed into the self-attention)
- is this plot representing the validation set of the Xs, over which I compare with the actual (real) data? (the model should split each X into X_train and X_val)
- how should I interpret it? I mean, what do the axes (x, y, z) represent? since it is a prediction, I would expect the time index on the x-axis, which in my idea would correspond to the decoder length
- what does the average mean? in my idea, in every batch of training the model should forecast the Xs before doing the forecast for y. But in every batch the forecast of the Xs is probably different, so I guess I will end up with several estimates (equal to the number of batches) of X at t+1. Is the average simply the algebraic average of these estimates?
- regarding the categorical variable (months), what is the meaning of the prediction? what does the distance between the blue and the red dots mean? These are time-dependent variables which are known into the future, so why is the model predicting those? | open | 2022-06-15T15:06:13Z | 2022-06-15T15:06:13Z | https://github.com/sktime/pytorch-forecasting/issues/1032 | [] | LuigiDarkSimeone | 0 |
axnsan12/drf-yasg | django | 369 | readOnly on SchemaRef | Let's say I have a nested serializer like this:
```python
class AccountSerializer(Serializer):
    name = CharField()

class PostSerializer(Serializer):
    title = CharField()
    account = AccountSerializer(read_only=True)
```
The corresponding Schema generated for account field in Post is `SchemaRef([('$ref', '#/definitions/Account')])` but the "read_only=True" is lost.
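For what it's worth, Swagger 2.0 / JSON Reference ignores any sibling keys next to `$ref`, which is likely why the flag is dropped. A commonly used workaround shape wraps the reference so that `readOnly` survives (sketch of the desired output only, not necessarily what drf-yasg emits):

```python
# Hypothetical shape of the schema fragment we'd like generated for `account`.
account_schema = {
    "readOnly": True,
    "allOf": [
        {"$ref": "#/definitions/Account"},
    ],
}
print(account_schema["readOnly"])  # True
```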
How can I keep the readOnly attribute with nested serializer? | closed | 2019-05-17T15:21:01Z | 2019-06-12T23:50:31Z | https://github.com/axnsan12/drf-yasg/issues/369 | [] | luxcem | 1 |
autogluon/autogluon | scikit-learn | 4,756 | How to Use Focal Loss Function in Classification Problems with Autogluon? | First of all, thanks for such an excellent open-source project.
Currently, I'm using Autogluon for small-sample classification tasks and I've found that the Focal loss function is very helpful for learning with imbalanced samples.
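I'm not sure AutoGluon exposes a loss-function hook for every model backend, so treat this as background rather than an integration recipe: the binary focal loss itself is short enough to sketch (numpy, function name and defaults are mine, following Lin et al. 2017):

```python
import numpy as np

def binary_focal_loss(p, y, gamma=2.0, alpha=0.25, eps=1e-9):
    """Binary focal loss; p = predicted probability of class 1, y in {0, 1}."""
    p_t = np.where(y == 1, p, 1 - p)            # probability of the true class
    alpha_t = np.where(y == 1, alpha, 1 - alpha)
    return float(np.mean(-alpha_t * (1 - p_t) ** gamma * np.log(p_t + eps)))

easy = binary_focal_loss(np.array([0.9]), np.array([1]))  # well-classified: tiny loss
hard = binary_focal_loss(np.array([0.1]), np.array([1]))  # misclassified: large loss
print(easy < hard)  # True
```

The `(1 - p_t) ** gamma` factor is what down-weights easy examples relative to plain cross-entropy.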
So, how can I use the Focal loss function in classification problems with Autogluon? | closed | 2024-12-27T06:10:11Z | 2025-01-13T23:33:14Z | https://github.com/autogluon/autogluon/issues/4756 | [] | lovechang1986 | 1 |
Farama-Foundation/Gymnasium | api | 385 | [Question] Interpretation of ground contact forces in Ant-v4 | ### Question
I am trying to detect if the agent of the Ant-v4 environment has ground contact on any of its legs and had the idea to monitor the contact forces (via `use_contact_forces=True`). However, from the documentation it is not clear to me which force corresponds to what. My first guess was the "ground link", however these forces are constantly zero in my experiments.
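For reference, my understanding of the observation layout with `use_contact_forces=True` is that the trailing block is the flattened `cfrc_ext` buffer, one 6-vector (3 rotational + 3 translational external force components) per body — the sizes below (27 kinematic values + 14 bodies × 6 = 111) are my reading of the docs, not verified against the Gymnasium source:

```python
# Hypothetical slicing of the contact-force block out of the observation vector.
n_kinematic = 27
n_bodies = 14
obs = [0.0] * (n_kinematic + n_bodies * 6)  # stand-in for a real observation

contact_block = obs[n_kinematic:]
cfrc_ext = [contact_block[i * 6:(i + 1) * 6] for i in range(n_bodies)]
print(len(obs), len(cfrc_ext))  # 111 14
```

If this layout is right, a nonzero translational component in the row belonging to a leg's lowest body would indicate contact; which row maps to which body should follow MuJoCo's body ordering for the Ant model.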
Could you please clarify which forces are applicable to my problem?
Thank you very much in advance. | closed | 2023-03-14T10:45:48Z | 2023-03-15T09:52:51Z | https://github.com/Farama-Foundation/Gymnasium/issues/385 | [
"question"
] | BeFranke | 5 |
seanharr11/etlalchemy | sqlalchemy | 28 | UnicodeDecodeError with Postgres -> MSSQL migration | I tried to use etlalchemy@1.1.1 to copy some data from Postgres (via psycopg2@2.7.3.1) to MSSQL (via pyodbc@4.0.17) using python 2.7 (since per #14 python3 doesn't seem to be supported). (This is on Windows, but I expected this to become important later when running external commands.)
It failed with:
      File "...\sqlalchemy\sql\compiler.py", line 1895, in visit_insert
        for crud_param_set in crud_params
    UnicodeDecodeError: 'ascii' codec can't decode byte 0xd0 in position 101: ordinal not in range(128)
while trying to generate the `INSERT`s for the non-ASCII data to dump into the intermediate file.
While poking around I realized I don't understand how non-Unicode data is supposed to be handled.
The source database uses UTF8 encoding and has this table definition:
    CREATE TABLE public.TABLE_NAME
    (
        id numeric(18,0) NOT NULL,
        name character varying(255) COLLATE pg_catalog."default",
        ...
    )
The `client_encoding` is UTF8 (both with the `?client_encoding=utf8` in the connection string and without).
SQLAlchemy returns `unicode` strings, then in `standardize_column_type` (currently at https://github.com/seanharr11/etlalchemy/blob/master/etlalchemy/ETLAlchemySource.py#L201 ) they get converted to `str`:
    elif "STRING" in base_classes:
        ...
        if isinstance(row[idx], unicode):
            row[idx] = row[idx].encode('utf-8', 'ignore')
But SQLAlchemy really likes unicode, and in `visit_insert` in compiler.py it does:
    text = "INSERT "
    # ...
    text += "INTO "
    # ...
    table_text = preparer.format_table(insert_stmt.table)  # returns quoted_name
    # ...
    text += table_text  # gets converted to `unicode`
    # ...
    elif insert_stmt._has_multi_parameters:
        text += " VALUES %s" % (  # non-ascii characters passed here
...which raises the UnicodeDecodeError.
Removing the encoding logic from `standardize_column_type` and adding it to `dump_to_sql_statement` in literal_value_generator.py:
    fp.write(stmt.encode('UTF-8'))
...sorta works, but I'd like to understand what I'm missing.
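The workaround keeps everything unicode until the final write and encodes exactly once at the file boundary — e.g. the same idea in a Python 3 sketch (paths and payload are made up):

```python
import os
import tempfile

# Keep text as str (unicode) throughout; encode only when writing bytes out.
stmt = "INSERT INTO t VALUES ('\u0414')"  # 'Д' encodes to b'\xd0\x94' — the 0xd0 byte above

path = os.path.join(tempfile.mkdtemp(), "dump.sql")
with open(path, "wb") as fp:
    fp.write(stmt.encode("utf-8"))

with open(path, "rb") as fp:
    assert fp.read().decode("utf-8") == stmt
```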
The migration log:
    Sending source '<etlalchemy.ETLAlchemySource.ETLAlchemySource instance at 0x0000000004E663C8>' to destination 'mssql+pyodbc:///?odbc_connect=XXX'
    ETLAlchemySource (INFO) -
    *************************
    *** Total Tables: 1 ***
    *************************
    ETLAlchemySource (INFO) - Reading Table Schema 'TABLE_NAME'...
    ETLAlchemySource (INFO) - Loaded indexes and FKs for table 'TABLE_NAME'
    ETLAlchemySource (INFO) - Building query to fetch all rows from TABLE_NAME
    ETLAlchemySource (INFO) - Done. (315 total rows)
    ETLAlchemySource (INFO) - Loading all rows into memory...
    ETLAlchemySource (INFO) - Done
    ETLAlchemySource (INFO) - (id) NUMERIC
    ETLAlchemySource (INFO) - Bases: ['NUMERIC']
    ETLAlchemySource (INFO) - --> id...{'Decimal': 315}
    ETLAlchemySource (WARNING) - Column 'id' is of type 'Decimal', but contains no mantissas > 0. (i.e. 3.00, 2.00, etc..)
    ETLAlchemySource (WARNING) - Coercing to 'Integer'
    ETLAlchemySource (INFO) - Checking column for elimination status...
    ETLAlchemySource (INFO) - (pname) VARCHAR
    ETLAlchemySource (INFO) - Bases: ['STRING']
    ...
    ETLAlchemySource (WARNING) - Table '{0}' already exists - not creating table, reflecting to get new changes instead..
    ETLAlchemySource (INFO) - Transforming & Dumping 315 total rows from table 'TABLE_NAME' into 'path_to/TABLE_NAME.sql'.
    ETLAlchemySource (INFO) - (TABLE_NAME) -- Transforming rows: 0 -> 315...(315 Total)
    ETLAlchemySource (INFO) - (TABLE_NAME) -- Dumping rows: 0 -> 315 to 'TABLE_NAME.sql'...(315 Total)[Table 0/1]
    ETLAlchemySource (INFO) - Gathering unique columns for upsert.
    ETLAlchemySource (INFO) - Unique columns are '[Column('id', INTEGER(), table=<TABLE_NAME>, primary_key=True, nullable=False, default=Sequence('id_identity', start=1, increment=1, metadata=MetaData(bind=Engine(mssql+pyodbc:///?odbc_connect=XXX))))]'
    ETLAlchemySource (INFO) - Creating 'upsert' statements for '315' rows, and dumping to 'TABLE_NAME.sql'.
    ETLAlchemySource (INFO) - Creating 'insert' stmts for (the remaining)315 rows, and dumping to 'TABLE_NAME.sql' (because they DNE in the table!).
    ETLAlchemySource (INFO) - (315) -- Inserting remaining '315' rows.
    Traceback (most recent call last):
      File "test.py", line 12, in <module>
        tgt.migrate()
      File "...\etlalchemy\ETLAlchemyTarget.py", line 86, in migrate
        migrate_data=migrate_data)
      File "...\etlalchemy\ETLAlchemySource.py", line 1140, in migrate
        pks, Session)
      File "...\etlalchemy\ETLAlchemySource.py", line 871, in dump_data
        self.dst_engine, T.name)
      File "...\etlalchemy\literal_value_generator.py", line 234, in dump_to_sql_statement
        compiler = LiteralCompiler(dialect, statement)
      File "...\sqlalchemy\dialects\mssql\base.py", line 1043, in __init__
        super(MSSQLCompiler, self).__init__(*args, **kwargs)
      File "...\sqlalchemy\sql\compiler.py", line 395, in __init__
        Compiled.__init__(self, dialect, statement, **kwargs)
      File "...\sqlalchemy\sql\compiler.py", line 190, in __init__
        self.string = self.process(self.statement, **compile_kwargs)
      File "...\sqlalchemy\sql\compiler.py", line 213, in process
        return obj._compiler_dispatch(self, **kwargs)
      File "...\sqlalchemy\sql\visitors.py", line 81, in _compiler_dispatch
        return meth(self, **kw)
      File "...\sqlalchemy\sql\compiler.py", line 1895, in visit_insert
        for crud_param_set in crud_params
    UnicodeDecodeError: 'ascii' codec can't decode byte 0xd0 in position 101: ordinal not in range(128)
 | open | 2017-10-03T01:01:39Z | 2017-10-03T01:01:45Z | https://github.com/seanharr11/etlalchemy/issues/28 | [] | nickolay | 0 |
man-group/arctic | pandas | 357 | Python LZ4 0.9.2 breaks arctic | #### Arctic Version
```
All
```
#### Arctic Store
```
# All
```
The latest version of Python LZ4 requires inputs to compression to be bytes or bytearrays; see https://github.com/python-lz4/python-lz4/issues/35 | closed | 2017-05-17T13:07:51Z | 2017-05-31T01:01:34Z | https://github.com/man-group/arctic/issues/357 | [] | bmoscon | 1 |
facebookresearch/fairseq | pytorch | 5,559 | clarification on additional_special_tokens | Is there a map telling me which of these special tokens map to which language?
    "additional_special_tokens": [
      "ace_Arab", "ace_Latn", "acm_Arab", "acq_Arab", "aeb_Arab", "afr_Latn", "ajp_Arab", "aka_Latn",
      "amh_Ethi", "apc_Arab", "arb_Arab", "ars_Arab", "ary_Arab", "arz_Arab", "asm_Beng", "ast_Latn",
      "awa_Deva", "ayr_Latn", "azb_Arab", "azj_Latn", "bak_Cyrl", "bam_Latn", "ban_Latn", "bel_Cyrl",
      "bem_Latn", "ben_Beng", "bho_Deva", "bjn_Arab", "bjn_Latn", "bod_Tibt", "bos_Latn", "bug_Latn",
      "bul_Cyrl", "cat_Latn", "ceb_Latn", "ces_Latn", "cjk_Latn", "ckb_Arab", "crh_Latn", "cym_Latn",
      "dan_Latn", "deu_Latn", "dik_Latn", "dyu_Latn", "dzo_Tibt", "ell_Grek", "eng_Latn", "epo_Latn",
      "est_Latn", "eus_Latn", "ewe_Latn", "fao_Latn", "pes_Arab", "fij_Latn", "fin_Latn", "fon_Latn",
      "fra_Latn", "fur_Latn", "fuv_Latn", "gla_Latn", "gle_Latn", "glg_Latn", "grn_Latn", "guj_Gujr",
      "hat_Latn", "hau_Latn", "heb_Hebr", "hin_Deva", "hne_Deva", "hrv_Latn", "hun_Latn", "hye_Armn",
      "ibo_Latn", "ilo_Latn", "ind_Latn", "isl_Latn", "ita_Latn", "jav_Latn", "jpn_Jpan", "kab_Latn",
      "kac_Latn", "kam_Latn", "kan_Knda", "kas_Arab", "kas_Deva", "kat_Geor", "knc_Arab", "knc_Latn",
      "kaz_Cyrl", "kbp_Latn", "kea_Latn", "khm_Khmr", "kik_Latn", "kin_Latn", "kir_Cyrl", "kmb_Latn",
      "kon_Latn", "kor_Hang", "kmr_Latn", "lao_Laoo", "lvs_Latn", "lij_Latn", "lim_Latn", "lin_Latn",
      "lit_Latn", "lmo_Latn", "ltg_Latn", "ltz_Latn", "lua_Latn", "lug_Latn", "luo_Latn", "lus_Latn",
      "mag_Deva", "mai_Deva", "mal_Mlym", "mar_Deva", "min_Latn", "mkd_Cyrl", "plt_Latn", "mlt_Latn",
      "mni_Beng", "khk_Cyrl", "mos_Latn", "mri_Latn", "zsm_Latn", "mya_Mymr", "nld_Latn", "nno_Latn",
      "nob_Latn", "npi_Deva", "nso_Latn", "nus_Latn", "nya_Latn", "oci_Latn", "gaz_Latn", "ory_Orya",
      "pag_Latn", "pan_Guru", "pap_Latn", "pol_Latn", "por_Latn", "prs_Arab", "pbt_Arab", "quy_Latn",
      "ron_Latn", "run_Latn", "rus_Cyrl", "sag_Latn", "san_Deva", "sat_Beng", "scn_Latn", "shn_Mymr",
      "sin_Sinh", "slk_Latn", "slv_Latn", "smo_Latn", "sna_Latn", "snd_Arab", "som_Latn", "sot_Latn",
      "spa_Latn", "als_Latn", "srd_Latn", "srp_Cyrl", "ssw_Latn", "sun_Latn", "swe_Latn", "swh_Latn",
      "szl_Latn", "tam_Taml", "tat_Cyrl", "tel_Telu", "tgk_Cyrl", "tgl_Latn", "tha_Thai", "tir_Ethi",
      "taq_Latn", "taq_Tfng", "tpi_Latn", "tsn_Latn", "tso_Latn", "tuk_Latn", "tum_Latn", "tur_Latn",
      "twi_Latn", "tzm_Tfng", "uig_Arab", "ukr_Cyrl", "umb_Latn", "urd_Arab", "uzn_Latn", "vec_Latn",
      "vie_Latn", "war_Latn", "wol_Latn", "xho_Latn", "ydd_Hebr", "yor_Latn", "yue_Hant", "zho_Hans",
      "zho_Hant", "zul_Latn"
    ],
| open | 2024-10-21T09:15:03Z | 2024-10-21T09:17:15Z | https://github.com/facebookresearch/fairseq/issues/5559 | [
"question",
"needs triage"
] | hwang136 | 1 |
python-gitlab/python-gitlab | api | 3,052 | gitlab project-commit create does not work with valid actions | ## Description of the problem, including code/CLI snippet
It appears that the CLI may be passing the `actions` flag as a string instead of as JSON. I saw [this potentially related discussion](https://github.com/python-gitlab/python-gitlab/discussions/2145) (it's an old discussion, so it may be out of date), but I couldn't find a corresponding issue.
Command run: `gitlab -d project-commit create --project-id ****** --branch main --commit-message 'create multiple/' --actions '[{"action": "create", "file_path": "multiple/file-to-commit1.txt", "content": "@multiple/file-to-commit1.txt"}, {"action": "create", "file_path": "multiple/file-to-commit2.txt", "content": "@multiple/file-to-commit2.txt"}, {"action": "create", "file_path": "multiple/file-to-commit3.txt", "content": "@multiple/file-to-commit3.txt"}]'`
Output:
```
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): *************:443
DEBUG:http.client:send: b'GET /api/v4/user HTTP/1.1\r\nHost: *************\r\nUser-Agent: *********************\r\nAccept-Encoding: gzip, deflate\r\nAccept: */*\r\nConnection: keep-alive\r\nContent-type: application/json\r\nAuthorization: Bearer [MASKED]\r\n\r\n'
DEBUG:http.client:reply: 'HTTP/1.1 200 OK\r\n'
DEBUG:http.client:header: Server: nginx
DEBUG:http.client:header: Date: Fri, 29 Nov 2024 16:18:55 GMT
DEBUG:http.client:header: Content-Type: application/json
DEBUG:http.client:header: Transfer-Encoding: chunked
DEBUG:http.client:header: Connection: keep-alive
DEBUG:http.client:header: Vary: Accept-Encoding
DEBUG:http.client:header: Cache-Control: max-age=0, private, must-revalidate
DEBUG:http.client:header: Vary: Origin
DEBUG:http.client:header: X-Content-Type-Options: nosniff
DEBUG:http.client:header: X-Frame-Options: SAMEORIGIN
DEBUG:http.client:header: X-Gitlab-Meta: {"correlation_id":"***********","version":"1"}
DEBUG:http.client:header: X-Request-Id: ***************
DEBUG:http.client:header: Referrer-Policy: strict-origin-when-cross-origin
DEBUG:http.client:header: Content-Encoding: gzip
DEBUG:urllib3.connectionpool:https://*************:443 "GET /api/v4/user HTTP/1.1" 200 None
DEBUG:http.client:send: b'POST /api/v4/******/repository/commits HTTP/1.1\r\nHost: *************\r\nUser-Agent: **********\r\nAccept-Encoding: gzip, deflate\r\nAccept: */*\r\nConnection: keep-alive\r\nContent-type: application/json\r\nContent-Length: 368\r\nAuthorization: Bearer [MASKED]\r\n\r\n'
LINE OF POTENTIAL INTEREST
vvvvvvvvvvvvvvvvvvvvvvvvvvv
DEBUG:http.client:send: b'{"branch": "main", "commit_message": "create multiple/", "actions": "[{action: create, file_path: multiple/file-to-commit1.txt, content: @multiple/file-to-commit1.txt}, {action: create, file_path: multiple/file-to-commit2.txt, content: @multiple/file-to-commit2.txt}, {action: create, file_path: multiple/file-to-commit3.txt, content: @multiple/file-to-commit3.txt}]"}'
DEBUG:http.client:reply: 'HTTP/1.1 400 Bad Request\r\n'
DEBUG:http.client:header: Server: nginx
DEBUG:http.client:header: Date: Fri, 29 Nov 2024 16:18:55 GMT
DEBUG:http.client:header: Content-Type: application/json
DEBUG:http.client:header: Content-Length: 30
DEBUG:http.client:header: Connection: keep-alive
DEBUG:http.client:header: Cache-Control: no-cache
DEBUG:http.client:header: Vary: Origin
DEBUG:http.client:header: X-Content-Type-Options: nosniff
DEBUG:http.client:header: X-Frame-Options: SAMEORIGIN
DEBUG:http.client:header: X-Gitlab-Meta: {"correlation_id":"***************","version":"1"}
DEBUG:http.client:header: X-Request-Id: ***************
DEBUG:urllib3.connectionpool:https://*************:443 "POST /api/v4/******/repository/commits HTTP/1.1" 400 30
Impossible to create object (400: actions is invalid)
```
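The payload difference can be reproduced without the CLI: serializing the already-parsed list keeps `actions` a JSON array, while passing the list through as a string produces the rejected form seen in the debug line above (stdlib sketch):

```python
import json

actions = [{"action": "create",
            "file_path": "multiple/file-to-commit1.txt",
            "content": "@multiple/file-to-commit1.txt"}]

good = json.dumps({"branch": "main", "actions": actions})      # array, as the API expects
bad = json.dumps({"branch": "main", "actions": str(actions)})  # stringified, as in the log

print('"actions": [' in good)  # True
print('"actions": "' in bad)   # True
```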
## Expected Behavior
Create a new commit in the repo with the new files.
## Actual Behavior
Fails with a 400 error saying "actions is invalid" (and it's possible I could be missing something here. I checked that my JSON was valid and checked with the official Gitlab API spec as first steps in debugging).
## Specifications
- python-gitlab version: 5.0.0
- Gitlab server version (or gitlab.com):
| open | 2024-11-29T16:48:11Z | 2024-12-05T16:46:59Z | https://github.com/python-gitlab/python-gitlab/issues/3052 | [
"cli"
] | Anthony-Fiddes | 1 |
raphaelvallat/pingouin | pandas | 119 | t-test giving wrong output for 95% CI | Hi,
Thank you for the great package. I was doing a one-sample t-test using `pingouin` as follows,
`from pingouin import ttest`
`ttest([5.5, 2.4, 6.8, 9.6, 4.2], 4).round(2)`
Output,
T dof tail p-val CI95% cohen-d BF10 power
T-test 1.4 4 two-sided 0.23 **[-1.68, 5.08]** 0.62 0.766 0.19
I did the same in R and the output for 95% CI does not match,
`t.test(c(5.5, 2.4, 6.8, 9.6, 4.2), mu=4)`
Output,
One Sample t-test
data: c(5.5, 2.4, 6.8, 9.6, 4.2)
t = 1.3974, df = 4, p-value = 0.2348
alternative hypothesis: true mean is not equal to 4
95 percent confidence interval:
**2.322309 9.077691**
sample estimates:
mean of x
5.7
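The two intervals are identical up to a shift of mu = 4, which suggests pingouin is reporting the interval for the difference (mean - mu) rather than for the mean itself. A quick stdlib check of the numbers:

```python
import math

data = [5.5, 2.4, 6.8, 9.6, 4.2]
mu = 4
n = len(data)
mean = sum(data) / n
sd = math.sqrt(sum((x - mean) ** 2 for x in data) / (n - 1))
se = sd / math.sqrt(n)
t_crit = 2.7764  # two-sided 95% critical value of t(df=4), from tables

ci_mean = (mean - t_crit * se, mean + t_crit * se)            # centred on the mean, as in R
ci_diff = (mean - mu - t_crit * se, mean - mu + t_crit * se)  # centred on (mean - mu)

print([round(v, 2) for v in ci_mean])  # [2.32, 9.08]
print([round(v, 2) for v in ci_diff])  # [-1.68, 5.08]
```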
Is there something I am missing here? | closed | 2020-08-23T19:41:53Z | 2020-09-07T18:24:19Z | https://github.com/raphaelvallat/pingouin/issues/119 | [
"bug :boom:"
] | reneshbedre | 3 |
modin-project/modin | data-science | 7,178 | Add type hints for DataFrame | closed | 2024-04-13T13:55:13Z | 2024-04-15T12:13:59Z | https://github.com/modin-project/modin/issues/7178 | [
"new feature/request 💬"
] | anmyachev | 0 | |
matplotlib/matplotlib | data-science | 29,294 | [Doc]: Style methods documented as part of matplotlib but not pyplot | ### Documentation Link
https://matplotlib.org/stable/api/pyplot_summary.html
### Problem
The documentation on [style sheets](https://matplotlib.org/stable/users/explain/customizing.html#using-style-sheets) indicates that the methods are part of the `matplotlib` namespace and shows examples with the `pyplot` namespace. The methods are documented in the [`matplotlib.style`](https://matplotlib.org/stable/api/style_api.html#matplotlib.style.use) part of the API but not in [`matplotlib.pyplot`](https://matplotlib.org/stable/api/pyplot_summary.html).
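For example, both spellings resolve to the same module object (my understanding of the pyplot source; worth confirming before documenting it):

```python
import matplotlib
import matplotlib.pyplot as plt

# pyplot re-exports the style module, so these are one and the same:
print(plt.style is matplotlib.style)  # True
plt.style.use("ggplot")  # the form shown in the customizing guide
```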
### Suggested improvement
Include references in the `matplotlib.pyplot` API documentation to methods that it inherits from `matplotlib`. | open | 2024-12-12T18:20:05Z | 2024-12-14T08:44:19Z | https://github.com/matplotlib/matplotlib/issues/29294 | [
"Documentation"
] | jsdodge | 1 |
keras-team/keras | data-science | 20,106 | Unrecognized keyword arguments passed to LSTM: {'batch_input_shape' | model = Sequential()
model.add(LSTM(4, batch_input_shape=(1, X_train.shape[1], X_train.shape[2]), stateful=True))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')
model.fit(X_train, y_train, epochs=100, batch_size=1, verbose=1, shuffle=False)
ValueError: Unrecognized keyword arguments passed to LSTM: {'batch_input_shape': (1, 1, 7)}
My versions:
TensorFlow version: 2.17.0
Keras version: 3.4.1
I've seen a similar issue raised on Stack Overflow. I was able to run the code 2 weeks ago without error. What new keyword argument should I use?
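For what it's worth, Keras 3 removed `batch_input_shape` from layer constructors; passing the fixed batch shape through an `Input` layer appears to be the replacement (sketch only — shape values taken from the error message above, API per my reading of the Keras 3 docs):

```python
from keras import Input, Sequential
from keras.layers import LSTM, Dense

model = Sequential([
    Input(batch_shape=(1, 1, 7)),  # (batch, timesteps, features)
    LSTM(4, stateful=True),
    Dense(1),
])
model.compile(loss="mean_squared_error", optimizer="adam")
print(model.output_shape)
```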
https://stackoverflow.com/questions/78805181/valueerror-unrecognized-keyword-arguments-passed-to-lstm-batch-input-shape | closed | 2024-08-09T19:19:55Z | 2025-03-22T12:16:24Z | https://github.com/keras-team/keras/issues/20106 | [
"type:support",
"stat:awaiting response from contributor"
] | Ineedsomehelpah | 7 |
deezer/spleeter | deep-learning | 208 | Installing the new 16 khz cutoff Spletter 1.49? | Are new stems-16kHz folders installed when this is done?
Also, what & how are the newer finetune training models being used?
Is there a way to convert the output stems to mono channel flac files?
Thanks, Roger
| closed | 2019-12-28T15:33:45Z | 2019-12-30T14:57:09Z | https://github.com/deezer/spleeter/issues/208 | [
"question"
] | Mixerrog | 1 |
mars-project/mars | numpy | 3,274 | [BUG] Ray executor init ray twice? | <!--
Thank you for your contribution!
Please review https://github.com/mars-project/mars/blob/master/CONTRIBUTING.rst before opening an issue.
-->
**Describe the bug**
Starting a Mars computation via `mars.new_session(backend="ray")` without calling `ray.init` first may raise an "init ray twice" exception, or the core worker may crash because of the duplicate init.
Thread A
```python
File "/home/admin/.pyenv/versions/3.8.13/lib/python3.8/threading.py", line 890, in _bootstrap
self._bootstrap_inner()
File "/home/admin/.pyenv/versions/3.8.13/lib/python3.8/threading.py", line 932, in _bootstrap_inner
self.run()
File "/home/admin/.pyenv/versions/3.8.13/lib/python3.8/threading.py", line 870, in run
self._target(*self._args, **self._kwargs)
File "/home/admin/Work/mars/mars/lib/aio/isolation.py", line 36, in _run
self.loop.run_until_complete(self._stopped.wait())
File "/home/admin/Work/mars/mars/services/task/supervisor/processor.py", line 372, in run
await self._process_stage_chunk_graph(*stage_args)
File "/home/admin/Work/mars/mars/services/task/supervisor/processor.py", line 250, in _process_stage_chunk_graph
chunk_to_result = await self._executor.execute_subtask_graph(
File "/home/admin/Work/mars/mars/services/task/execution/ray/executor.py", line 524, in execute_subtask_graph
output_object_refs = self._ray_executor.options(
File "/home/admin/.pyenv/versions/3.8.13/lib/python3.8/site-packages/ray/remote_function.py", line 215, in remote
return func_cls._remote(args=args, kwargs=kwargs, **updated_options)
File "/home/admin/.pyenv/versions/3.8.13/lib/python3.8/site-packages/ray/util/tracing/tracing_helper.py", line 307, in _invocation_remote_span
return method(self, args, kwargs, *_args, **_kwargs)
File "/home/admin/.pyenv/versions/3.8.13/lib/python3.8/site-packages/ray/remote_function.py", line 235, in _remote
if client_mode_should_convert(auto_init=True):
File "/home/admin/.pyenv/versions/3.8.13/lib/python3.8/site-packages/ray/_private/client_mode_hook.py", line 124, in client_mode_should_convert
ray.init()
File "/home/admin/.pyenv/versions/3.8.13/lib/python3.8/site-packages/ray/_private/client_mode_hook.py", line 105, in wrapper
return func(*args, **kwargs)
File "/home/admin/.pyenv/versions/3.8.13/lib/python3.8/site-packages/ray/_private/worker.py", line 1050, in init
traceback.print_stack()
```
Thread B
```python
File "/home/admin/.pyenv/versions/3.8.13/lib/python3.8/threading.py", line 890, in _bootstrap
self._bootstrap_inner()
File "/home/admin/.pyenv/versions/3.8.13/lib/python3.8/threading.py", line 932, in _bootstrap_inner
self.run()
File "/home/admin/.pyenv/versions/3.8.13/lib/python3.8/threading.py", line 870, in run
self._target(*self._args, **self._kwargs)
File "/home/admin/.pyenv/versions/3.8.13/lib/python3.8/concurrent/futures/thread.py", line 80, in _worker
work_item.run()
File "/home/admin/.pyenv/versions/3.8.13/lib/python3.8/concurrent/futures/thread.py", line 57, in run
result = self.fn(*self.args, **self.kwargs)
File "/home/admin/.pyenv/versions/3.8.13/lib/python3.8/site-packages/ray/_private/client_mode_hook.py", line 100, in wrapper
if client_mode_should_convert(auto_init=auto_init):
File "/home/admin/.pyenv/versions/3.8.13/lib/python3.8/site-packages/ray/_private/client_mode_hook.py", line 124, in client_mode_should_convert
ray.init()
File "/home/admin/.pyenv/versions/3.8.13/lib/python3.8/site-packages/ray/_private/client_mode_hook.py", line 105, in wrapper
return func(*args, **kwargs)
File "/home/admin/.pyenv/versions/3.8.13/lib/python3.8/site-packages/ray/_private/worker.py", line 1050, in init
traceback.print_stack()
```
This is because GC calls `ray.wait` in a thread pool while the Ray executor is submitting Ray tasks, and `ray.init` is not thread safe.
**To Reproduce**
To help us reproducing this bug, please provide information below:
1. Your Python version
2. The version of Mars you use
3. Versions of crucial packages, such as numpy, scipy and pandas
4. Full stack of the error.
5. Minimized code to reproduce the error.
**Expected behavior**
A clear and concise description of what you expected to happen.
**Additional context**
Add any other context about the problem here.
| closed | 2022-10-11T08:28:27Z | 2022-10-13T03:41:38Z | https://github.com/mars-project/mars/issues/3274 | [
"type: bug",
"mod: ray integration"
] | fyrestone | 0 |
HumanSignal/labelImg | deep-learning | 155 | There is no resources.py program | <!--
Please provide as much as detail and example as you can.
You can add screenshots if appropriate.
-->
How do you expect to run this in windows!! Pls Fix this
- **OS:** Windows
- **PyQt version:** 4
| closed | 2017-08-29T14:00:56Z | 2017-09-25T01:53:04Z | https://github.com/HumanSignal/labelImg/issues/155 | [] | ASH1998 | 3 |
aleju/imgaug | machine-learning | 642 | Combination of EdgeDetection and SomeOf | Is it possible to apply the Edge Detection Augmenter always to the Image and with the SomeOf class? | closed | 2020-03-18T07:57:44Z | 2020-03-19T15:17:55Z | https://github.com/aleju/imgaug/issues/642 | [] | Trevirirus | 0 |
AntonOsika/gpt-engineer | python | 608 | Sweep: When specifying dependencies, use the packaging tool rather than generating the files | ### Details
For python the tool should create a new virtual environment and install the dependencies into it then pip freeze >requirements.txt to generate it.
For NPM the tool should "npm init" then "npm install" each of the dependencies to create the package.json.
Today, versions are often hallucinated and the packages specified do not exist.
<details>
<summary>Checklist</summary>
- [X] `Makefile`
> • Replace the `install-dependencies` target with the following commands to create a new virtual environment, install the dependencies, and generate the requirements.txt file:
> ```bash
> install-dependencies:
> 	@echo -e "$(COLOR_CYAN)Creating virtual environment...$(COLOR_RESET)" && \
> 	python -m venv venv && \
> 	source venv/bin/activate && \
> 	pip install -e . >> /dev/null && \
> 	pip freeze > requirements.txt
> ```
- [X] `scripts/create_package_json.sh`
> • Add the following script to initialize a new Node.js project and install the dependencies:
> ```bash
> #!/bin/bash
> npm init -y
> for dep in $@; do
>   npm install $dep
> done
> ```
> • This script takes a list of dependencies as arguments and installs each one using `npm install`.
</details>
| closed | 2023-08-16T22:24:06Z | 2023-09-11T18:46:21Z | https://github.com/AntonOsika/gpt-engineer/issues/608 | [
"sweep",
"triage"
] | spullara | 6 |
PrefectHQ/prefect | automation | 16,815 | Include additional metadata as deployment delete/update events | ### Describe the current behavior
As part of an internal audit review (for SOX compliance), we are required to conduct a periodic review of changes made to Prefect deployments.
We are attempting to use Events in the Prefect UI, but it appears that currently for the "prefect.deployment.updated" and "prefect.deployment.deleted" events, we can only filter by deployment ID. Additionally, the deployment ID seems to change every time a deployment is deleted.
It is not possible keep track for changes made to a deployment, for a given date range, if there are too many deployment and changes are frequent.
### Describe the proposed behavior
Having additional metadata like flow name, flow id attached with deployment deleted/updated events will help with auditing the deployment history.
### Example Use
_No response_
### Additional context
_No response_ | open | 2025-01-22T17:18:27Z | 2025-02-14T16:45:59Z | https://github.com/PrefectHQ/prefect/issues/16815 | [
"enhancement"
] | vijay-varikota | 0 |
flasgger/flasgger | rest-api | 613 | Swagger interface allows the injection of JavaScript code | Hello,
I've come across this security issue with **flasgger**.
The Swagger interface allows JavaScript to be injected via the remote Swagger **configUrl** and **url** query parameters. As a result, someone could execute arbitrary JavaScript code in the context of the domain that hosts the swagger file.
Examples:
* https://localhost:8000/swagger/index.html?url=https://jumpy-floor.surge.sh/test.yaml#/activationcode/updateActivationCode
* https://localhost:8000/swagger/index.html?configUrl=https://jumpy-floor.surge.sh/test.yaml#/activationcode/updateActivationCode
I've tried to remove the query parameters and to reset the values for `queryConfig` from _flasgger\ui3\static\swagger-ui-bundle.js.map_,
but it did not help.
How can I completely remove query parameters from swagger?
| open | 2024-03-14T13:07:14Z | 2024-03-14T13:07:14Z | https://github.com/flasgger/flasgger/issues/613 | [] | catalinapopa-uipath | 0 |
hyperspy/hyperspy | data-visualization | 2,707 | Improve s.rebin with integer dtype | When using `s.rebin` with an integer dtype, the output dtype is automatically set to `uint64` or `int64`. For example:
```python
import numpy as np
import hyperspy.api as hs
data = np.random.randint(0, 2**16, size=(20, 20, 100, 100), dtype=np.uint16)
s = hs.signals.Signal2D(data)
s_rebin = s.rebin(scale=(1, 1, 2, 2))
s.data.dtype # dtype('uint16')
s_rebin.data.dtype  # dtype('uint64')
```
Thus, the rebin upscales the dtype all the way to `uint64`, while we know that the values in `s_rebin` can't be higher than 2 * 2 * (2**16 - 1). Thus, in this case, `s.rebin` should ideally set the dtype to `uint32`.
This can easily be calculated from the amount of scaling. For example if `scale=(10, 10, 50, 50)` it should automatically pick `np.uint64`.
The reason for this is `dask.array.coarsen` automatically using `np.uint64` for this: https://github.com/hyperspy/hyperspy/blob/RELEASE_next_minor/hyperspy/misc/array_tools.py#L205. This can be sorted by passing a `dtype` argument to `dask.array.coarsen`.
```python
import numpy as np
import dask.array as da
dask_array = da.from_array(data, chunks=(10, 10, 50, 50))
data_coarsen0 = da.coarsen(np.sum, dask_array, axes={0: 1, 1: 1, 2: 2, 3: 2})
data_coarsen1 = da.coarsen(np.sum, dask_array, axes={0: 1, 1: 1, 2: 2, 3: 2}, dtype=np.uint32)
print(data_coarsen0.dtype) # uint64
print(data_coarsen1.dtype) # uint32
```
This also applies to signed integers.
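A hedged sketch of how the minimal output dtype could be computed from the scale factors (an illustrative helper, not hyperspy's actual code):

```python
import numpy as np

def rebin_output_dtype(dtype, scale):
    """Smallest integer dtype that can hold a sum of prod(scale) values of `dtype`."""
    dtype = np.dtype(dtype)
    if not np.issubdtype(dtype, np.integer):
        return dtype  # floating dtypes are kept as-is
    n = int(np.prod(scale))
    info = np.iinfo(dtype)
    hi, lo = n * int(info.max), n * int(info.min)
    candidates = ((np.uint8, np.uint16, np.uint32, np.uint64) if lo >= 0
                  else (np.int8, np.int16, np.int32, np.int64))
    for cand in candidates:
        ci = np.iinfo(cand)
        if ci.min <= lo and hi <= ci.max:
            return np.dtype(cand)
    return np.dtype(np.int64 if lo < 0 else np.uint64)

print(rebin_output_dtype(np.uint16, (1, 1, 2, 2)))      # uint32
print(rebin_output_dtype(np.uint16, (10, 10, 50, 50)))  # uint64
```

The second call reproduces the `scale=(10, 10, 50, 50)` example above, where `uint64` really is required.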
| open | 2021-04-13T16:31:22Z | 2021-04-13T16:31:22Z | https://github.com/hyperspy/hyperspy/issues/2707 | [
"type: proposal"
] | magnunor | 0 |
Miserlou/Zappa | flask | 1,728 | Is it Possible to handle sessions using zappa in AWS Lambda | Hi,
I have created a web application and deployed it on AWS using Zappa. I was unable to handle sessions. Is it possible to handle sessions using Zappa on AWS? If possible, how?
Thanks & Regards,
N Sai Kumar | open | 2018-12-11T15:16:10Z | 2018-12-18T05:43:57Z | https://github.com/Miserlou/Zappa/issues/1728 | [] | saikumar-neelam | 5 |
hbldh/bleak | asyncio | 805 | "Too many open files" error in BlueZ log after a while | I have a script using bleak to occasionally write to a BLE device, like:
```
async with BleakClient(address) as client:
await client.write_gatt_char(UUID_0, bytearray([0x01]))
await client.write_gatt_char(UUID_1, bytearray([0xff]))
```
Sometimes this hits an exception because the device has a long polling interval. My code catches the exception and retries a couple of times, with a backoff period.
Every couple of weeks I see this error in `journalctl -u bluetooth`:
```
Apr 13 21:09:26 localhost bluetoothd[942102]: ATT bt_io_connect(xx:xx:xx:xx:xx:xx): socket(SEQPACKET, L2CAP): Too many open files (24)
Apr 13 21:09:30 localhost bluetoothd[942102]: ATT bt_io_connect(xx:xx:xx:xx:xx:xx): socket(SEQPACKET, L2CAP): Too many open files (24)
Apr 13 21:09:32 localhost bluetoothd[942102]: ATT bt_io_connect(xx:xx:xx:xx:xx:xx): socket(SEQPACKET, L2CAP): Too many open files (24)
Apr 13 21:23:58 localhost bluetoothd[942102]: ATT bt_io_connect(xx:xx:xx:xx:xx:xx): socket(SEQPACKET, L2CAP): Too many open files (24)
Apr 13 21:24:02 localhost bluetoothd[942102]: ATT bt_io_connect(xx:xx:xx:xx:xx:xx): socket(SEQPACKET, L2CAP): Too many open files (24)
Apr 13 21:24:04 localhost bluetoothd[942102]: ATT bt_io_connect(xx:xx:xx:xx:xx:xx): socket(SEQPACKET, L2CAP): Too many open files (24)
```
The address in the log matches the device that bleak is trying to query.
This error seems to break all BLE connectivity on the machine. The error (and BLE outage) goes away immediately when I restart the Python script that is using bleak.
Is it possible that bleak is leaking resources, maybe when an operation fails? If you could advise on how to figure out what resources (e.g. dbus connections) are currently in use by bleak, I can instrument my code to monitor this. | closed | 2022-04-14T16:59:56Z | 2022-07-29T15:18:24Z | https://github.com/hbldh/bleak/issues/805 | [
"Backend: BlueZ"
] | nickrbogdanov | 4 |
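Regarding the bleak resource question above: a minimal, Linux-only sketch for instrumenting open file descriptors around each connect attempt (it reads `/proc`, so it will not work on macOS or Windows):

```python
import os

def open_fd_count() -> int:
    """Number of file descriptors currently open by this process (Linux only)."""
    return len(os.listdir('/proc/self/fd'))

# Call this before/after each BleakClient attempt; a count that grows steadily
# across failed connects would point at leaked sockets or D-Bus connections.
baseline = open_fd_count()
f = open('/dev/null')            # simulate acquiring a resource
assert open_fd_count() == baseline + 1
f.close()
assert open_fd_count() == baseline
```

Logging this number alongside each retry should make any leak visible long before `bluetoothd` hits its descriptor limit.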
proplot-dev/proplot | matplotlib | 181 | Issue with autoformat for 2D plot with xarray | ### Description
Passing autoformat=False to subplots gives errors for 2D plot with xarray
### Steps to reproduce
```python
import xarray as xr
import numpy as np
import proplot as plot
da = xr.DataArray(
np.array(
[[ 0, 1, 2],
[ 3, 4, 5],
[ 6, 7, 8],
[ 9, 10, 11]]),
dims=["x", "y"],
coords={"x": [0, 1, 2, 4]})
fig, ax = plot.subplots(autoformat=False)
ax.contourf(da)
```
**Expected behavior**: [What you expected to happen]
Disable auto format
**Actual behavior**: [What actually happened]
```
---------------------------------------------------------------------------
UnboundLocalError                         Traceback (most recent call last)
<ipython-input-3-a3cf3db361dc> in <module>
     39 fig, ax = plot.subplots(autoformat=False)
     40
---> 41 ax.contourf(da)

~/miniconda3/envs/basic2/lib/python3.7/site-packages/proplot/ui.py in _iterator(*args, **kwargs)
    670             result = []
    671             for func in objs:
--> 672                 result.append(func(*args, **kwargs))
    673             if len(self) == 1:
    674                 return result[0]

~/miniconda3/envs/basic2/lib/python3.7/site-packages/proplot/axes/plot.py in _wrapper(self, *args, **kwargs)
   3818     @functools.wraps(func)
   3819     def _wrapper(self, *args, **kwargs):
-> 3820         return driver(self, func, *args, **kwargs)
   3821     name = func.__name__
   3822     if name not in proplot_methods:

~/miniconda3/envs/basic2/lib/python3.7/site-packages/proplot/axes/plot.py in standardize_2d(self, func, order, globe, *args, **kwargs)
    868         # was stripped by globe=True.
    869         colorbar_kw = kwargs.pop('colorbar_kw', None) or {}
--> 870         colorbar_kw.setdefault('label', colorbar_label)
    871         return func(self, x, y, *Zs, colorbar_kw=colorbar_kw, **kwargs)
    872

UnboundLocalError: local variable 'colorbar_label' referenced before assignment
```
### Equivalent steps in matplotlib
```python
import xarray as xr
import numpy as np
import matplotlib.pyplot as plt
da = xr.DataArray(
np.array(
[[ 0, 1, 2],
[ 4, 4, 5],
[ 6, 7, 8],
[ 13, 10, 11]]),
dims=["x", "y"],
coords={"x": [0, 1, 2, 4]})
fig, ax = plt.subplots()
da.plot.contourf(ax=ax, add_labels=False)
```
### Proplot version
0.6.1 | closed | 2020-06-02T18:05:53Z | 2020-06-02T18:27:17Z | https://github.com/proplot-dev/proplot/issues/181 | [
"bug"
] | kinyatoride | 2 |
LAION-AI/Open-Assistant | machine-learning | 3,190 | Use Nairaland as training data for African answering questions | Nairaland could provide a good source for training data since a lot of African people depend on Nairaland for highly valuable information and content . | closed | 2023-05-18T00:44:35Z | 2023-06-11T08:34:11Z | https://github.com/LAION-AI/Open-Assistant/issues/3190 | [
"data"
] | x64x2 | 0 |
PokemonGoF/PokemonGo-Bot | automation | 6,177 | Bot is stopping/crashing when "TransferPokemon" in config.json is set to true | ### Expected Behavior
The expected behavior is that the bot starts and keeps only one Pokémon of each type with the best CP; all Pokémon of that type with lower CP will be transferred.
### Actual Behavior
If I enable the task "Transfer Pokémon" and start the bot I get this error:
> see below in **Output when issue occurred**
### Your FULL config.json (remove your username, password, gmapkey and any other private info)
```
{
"websocket_server": true,
"heartbeat_threshold": 10,
"enable_social": false,
"check_niantic_api": true,
"solve_captcha": false,
"live_config_update": {
"enabled": false,
"tasks_only": false
},
"tasks": [
{
"type": "TelegramTask",
"config": {
"enabled": true,
"master": "user",
"password": "pass",
"// old syntax, still supported: alert_catch": [
"all"
],
"// new syntax:": {},
"alert_catch": {
"all": {
"operator": "and",
"cp": 1300,
"iv": 0.95
},
"Snorlax": {
"operator": "or",
"cp": 900,
"iv": 0.9
}
}
}
},
{
"type": "DiscordTask",
"config": {
"enabled": false,
"master": null,
"// old syntax, still supported: alert_catch": [
"all"
],
"// new syntax:": {},
"alert_catch": {
"all": {
"operator": "and",
"cp": 1300,
"iv": 0.95
},
"Snorlax": {
"operator": "or",
"cp": 900,
"iv": 0.9
}
}
}
},
{
"//NOTE: This task MUST be placed on the top of task list": {},
"type": "RandomAlivePause",
"config": {
"enabled": false,
"min_duration": "00:00:10",
"max_duration": "00:10:00",
"min_interval": "00:05:00",
"max_interval": "01:30:00"
}
},
{
"type": "HandleSoftBan"
},
{
"type": "RandomPause",
"config": {
"enabled": false,
"min_duration": "00:00:10",
"max_duration": "00:10:00",
"min_interval": "00:10:00",
"max_interval": "02:00:00"
}
},
{
"type": "CompleteTutorial",
"config": {
"enabled": false,
"// set a name": "",
"nickname": "",
"// 0 = No Team, 1 = Blue, 2 = Red, 3 = Yellow": "",
"team": 0
}
},
{
"type": "CollectLevelUpReward",
"config": {
"collect_reward": true,
"level_limit": -1
}
},
{
"type": "BuddyPokemon",
"config": {
"enabled": true,
"buddy_list": "dratini, magikarp",
"best_in_family": true,
"// candy_limit = 0 means no limit, so it will never change current buddy": {},
"candy_limit": 0,
"candy_limit_absolute": 0,
"// force_first_change = true will always change buddy at start removing current one": {},
"force_first_change": false,
"buddy_change_wait_min": 3,
"buddy_change_wait_max": 5,
"min_interval": 120
}
},
{
"type": "IncubateEggs",
"config": {
"enabled": true,
"infinite_longer_eggs_first": false,
"infinite_random_eggs": false,
"breakable_longer_eggs_first": true,
"min_interval": 120,
"infinite": [
2,
5,
10
],
"breakable": [
2,
5,
10
]
}
},
{
"type": "UpdateLiveStats",
"config": {
"enabled": false,
"min_interval": 10,
"stats": [
"username",
"uptime",
"stardust_earned",
"xp_earned",
"xp_per_hour",
"stops_visited",
"total_stardust"
],
"terminal_log": true,
"terminal_title": true
}
},
{
"type": "UpdateLiveInventory",
"config": {
"enabled": false,
"min_interval": 120,
"show_all_multiple_lines": false,
"items": [
"pokemon_bag",
"space_info",
"pokeballs",
"greatballs",
"ultraballs",
"razzberries",
"luckyegg"
]
}
},
{
"type": "UpdateHashStats",
"config": {
"enabled": true,
"min_interval": 60,
"stats" : ["period", "remaining", "maximum", "expiration"]
}
},
{
"type": "ShowBestPokemon",
"config": {
"enabled": true,
"min_interval": 60,
"amount": 5,
"order_by": "cp",
"info_to_show": [
"cp",
"ivcp",
"dps",
"hp"
]
}
},
{
"type": "TransferPokemon",
"config": {
"enabled": true,
"min_free_slot": 5,
"transfer_wait_min": 3,
"transfer_wait_max": 5
}
},
{
"type": "NicknamePokemon",
"config": {
"enabled": false,
"nickname_above_iv": 0.9,
"nickname_template": "{iv_pct}-{iv_ads}",
"nickname_wait_min": 3,
"nickname_wait_max": 5
}
},
{
"type": "EvolvePokemon",
"config": {
"enabled": true,
"log_interval": 120,
"// evolve only pidgey and drowzee": "",
"// evolve_list": [
"pidgey, drowzee",
"all"
],
"// donot_evolve_list": [
"none",
"pidgey, drowzee"
],
"// evolve all but pidgey and drowzee": "",
"evolve_list": "all",
"donot_evolve_list": "none",
"first_evolve_by": "cp",
"evolve_above_cp": 500,
"evolve_above_iv": 0.8,
"logic": "or",
"min_evolve_speed": 25,
"max_evolve_speed": 30,
"min_pokemon_to_be_evolved": 1,
"use_lucky_egg": false
}
},
{
"type": "UseIncense",
"config": {
"use_incense": false,
"use_order": [
"ordinary",
"spicy",
"cool",
"floral"
]
}
},
{
"type": "RecycleItems",
"config": {
"enabled": true,
"min_empty_space": 15,
"max_balls_keep": 200,
"max_potions_keep": 0,
"max_berries_keep": 0,
"max_revives_keep": 0,
"item_filter": {
"Pokeball": {
"keep": 200
},
"Potion": {
"keep": 0
},
"Super Potion": {
"keep": 0
},
"Hyper Potion": {
"keep": 0
},
"Revive": {
"keep": 0
},
"Razz Berry": {
"keep": 20
}
},
"recycle_wait_min": 3,
"recycle_wait_max": 5,
"recycle_force": true,
"recycle_force_min": "00:01:00",
"recycle_force_max": "00:05:00"
}
},
{
"type": "CatchLimiter",
"config": {
"enabled": false,
"min_balls": 20,
"duration": 15
}
},
{
"type": "Sniper",
"config": {
"enabled": false,
"mode": "social",
"bullets": 1,
"homing_shots": true,
"cooldown_enabled": false,
"loiter_after_snipe": false,
"special_iv": 100,
"order": [
"missing",
"vip",
"priority"
],
"teleport_back_to_last_location": false,
"sources": [
{
"enabled": false,
"url": "http://localhost:5000/raw_data",
"timeout": 3,
"key": "pokemons",
"mappings": {
"id": {
"param": "pokemon_id"
},
"name": {
"param": "pokemon_name"
},
"latitude": {
"param": "latitude"
},
"longitude": {
"param": "longitude"
},
"expiration": {
"param": "disappear_time",
"format": "milliseconds"
}
}
},
{
"enabled": false,
"url": "https://pokewatchers.com/grab/",
"timeout": 10,
"mappings": {
"iv": {
"param": "iv"
},
"id": {
"param": "pid"
},
"name": {
"param": "pokemon"
},
"latitude": {
"param": "cords"
},
"longitude": {
"param": "cords"
},
"expiration": {
"param": "timeend",
"format": "seconds"
}
}
},
{
"enabled": false,
"url": "http://pokesnipers.com/api/v1/pokemon.json",
"timeout": 10,
"key": "results",
"mappings": {
"iv": {
"param": "iv"
},
"name": {
"param": "name"
},
"latitude": {
"param": "coords"
},
"longitude": {
"param": "coords"
},
"expiration": {
"param": "until",
"format": "utc"
}
}
}
],
"catch": {
"Snorlax": 1000,
"Dragonite": 1000,
"Growlithe": 600,
"Clefairy": 500,
"Kabuto": 500,
"Dratini": 500,
"Dragonair": 500,
"Mr. Mime": 500,
"Magmar": 500,
"Electabuzz": 500,
"Tangela": 500,
"Tauros": 500,
"Primeape": 500,
"Chansey": 500,
"Pidgey": 100,
"Caterpie": 100,
"Weedle": 100
}
}
},
{
"type": "CatchPokemon",
"config": {
"enabled": true,
"catch_visible_pokemon": true,
"catch_lured_pokemon": true,
"catch_incensed_pokemon": true,
"min_ultraball_to_keep": 5,
"berry_threshold": 0.35,
"use_pinap_on_vip": false,
"pinap_on_level_below": 0,
"pinap_operator": "or",
"pinap_ignore_threshold": false,
"smart_pinap_enabled": true,
"smart_pinap_threshold": 0.85,
"smart_pinap_to_keep": 3,
"vip_berry_threshold": 0.9,
"treat_unseen_as_vip": true,
"daily_catch_limit": 500,
"exit_on_limit_reached": false,
"vanish_settings": {
"consecutive_vanish_limit": 10,
"rest_duration_min": "02:00:00",
"rest_duration_max": "04:00:00"
},
"catch_throw_parameters": {
"excellent_rate": 0.1,
"great_rate": 0.5,
"nice_rate": 0.3,
"normal_rate": 0.1,
"spin_success_rate": 0.6,
"hit_rate": 0.75
},
"catch_simulation": {
"flee_count": 3,
"flee_duration": 2,
"catch_wait_min": 3,
"catch_wait_max": 6,
"berry_wait_min": 3,
"berry_wait_max": 5,
"changeball_wait_min": 3,
"changeball_wait_max": 5,
"newtodex_wait_min": 20,
"newtodex_wait_max": 30
}
}
},
{
"type": "SpinFort",
"config": {
"enabled": true,
"spin_wait_min": 3,
"spin_wait_max": 5,
"daily_spin_limit": 1900,
"use_lure": false
}
},
{
"type": "UpdateWebInventory",
"config": {
"enabled": true
}
},
{
"type": "GymPokemon",
"config": {
"enabled": false,
"order_by": "cp",
"min_interval":360,
"min_recheck":30,
"max_recheck":120,
"chain_fill_gyms": true,
"ignore_max_cp_pokemon": ["Blissey"],
"never_place": ["Machamp"],
"leave_at_least_spots": 1,
"take_at_most": 10,
"pick_random_pokemon": true,
"can_be_disabled_by_catch_limter": false
}
},
{
"type": "MoveToFort",
"config": {
"enabled": true,
"lure_attraction": true,
"lure_max_distance": 2000,
"walker": "StepWalker",
"log_interval": 5
}
},
{
"type": "FollowSpiral",
"config": {
"enabled": true,
"diameter": 4,
"step_size": 70
}
}
],
"map_object_cache_time": 5,
"forts": {
"avoid_circles": true,
"max_circle_size": 50,
"cache_recent_forts": true
},
"pokemon_bag": {
"// if 'show_at_start' is true, it will log all the pokemons in the bag (not eggs) at bot start": {},
"show_at_start": true,
"// if 'show_count' is true, it will show the amount of each pokemon (minimum 1)": {},
"show_count": false,
"// if 'show_candies' is true, it will show the amount of candies for each pokemon": {},
"show_candies": false,
"// 'pokemon_info' parameter define which info to show for each pokemon": {},
"// the available options are": {},
"// ['cp', 'iv_ads', 'iv_pct', 'ivcp', 'ncp', 'level', 'hp', 'moveset', 'dps']": {},
"pokemon_info": [
"cp",
"iv_pct"
]
},
"walk_max": 9.2,
"walk_min": 6.12,
"alt_min": 500,
"alt_max": 1000,
"sleep_schedule": {
"enabled": true,
"enable_reminder": false,
"reminder_interval": 600,
"entries": [
{
"enabled": true,
"time": "00:10",
"duration": "3:30",
"time_random_offset": "00:30",
"duration_random_offset": "00:30",
"wake_up_at_location": ""
},
{
"enabled": true,
"time": "13:45",
"duration": "3:00",
"time_random_offset": "01:00",
"duration_random_offset": "00:30",
"wake_up_at_location": ""
}
]
},
"gps_default_altitude": 8,
"replicate_gps_xy_noise": false,
"replicate_gps_z_noise": false,
"gps_xy_noise_range": 1.25E-4,
"gps_z_noise_range": 12.5,
"debug": false,
"test": false,
"walker_limit_output": false,
"health_record": true,
"location_cache": true,
"distance_unit": "km",
"reconnecting_timeout": 15,
"logging": {
"color": true,
"show_datetime": true,
"show_process_name": true,
"show_log_level": true,
"show_thread_name": false
},
"catch": {
"any": {
"candy_threshold": 400,
"catch_above_cp": 0,
"catch_above_iv": 0,
"logic": "or"
},
"// Example of always catching Rattata:": {},
"// Rattata": {
"always_catch": true
},
"// Example of catching only Diglett and Horsea needed for Bubblestrat; you might want to also configure sniping (MoveToMap - helps you get hold of neede pokemons faster) and disable SpinFort (or you will advance past level 2 which will make it impossible to catch level-1 pokemons)": {},
"Diglett": {
"candy_threshold": 1,
"catch_below_cp": 11,
"catch_above_iv": 0,
"logic": "and",
"fast_attack": [
"Scratch",
"Mud Slap"
]
},
"Horsea": {
"candy_threshold": 1,
"catch_below_cp": 11,
"catch_above_iv": 0,
"logic": "and",
"fast_attack": [
"Bubble"
]
},
"// Example of catching Vaporeon only with Water Gun and Hydro Pump": {},
"Vaporeon": {
"catch_above_iv": 0.99,
"charged_attack": [
"Hydro Pump"
],
"fast_attack": [
"Water Gun"
]
}
},
"release": {
"any": {
"release_below_cp": 500,
"release_below_iv": 0,
"release_below_ivcp": 0,
"logic": "or"
},
"// Example of always releasing Rattata:": {},
"// Rattata": {
"always_release": true
},
"// Example of keeping 3 stronger (based on CP) Pidgey:": {},
"// Pidgey": {
"keep_best_cp": 3
},
"// Example of keeping 2 best (based on IV) Zubat:": {},
"// Zubat": [
{
"keep_best_iv": 2
},
{
"keep_best_cp": 2,
"keep_best_iv": 3
},
{
"keep_best_custom": "iv, cp, hp_max",
"amount": 2
}
],
"// Keep no more than 3 best IV pokemon for every pokemon type": {},
"//any": [
{
"keep_best_iv": 1
},
{
"keep_best_ivcp": 1
}
],
"// Keep no more than 3 best IVCP pokemon for every pokemon type": {},
"// Discard all pokemon in bag except 100 pokemon with best CP": {},
"// all": {
"keep_best_cp": 100
},
"// Example of keeping the 2 strongest (based on CP) and 3 best (based on IV) Zubat:": {},
"// Example of custom order of static criterion": {}
},
"vips": {
"Any pokemon put here directly force to use Berry & Best Ball to capture, to secure the capture rate": {},
"any": {
"catch_above_cp": 1200,
"catch_above_iv": 0.9,
"logic": "or"
},
"Lapras": {},
"Moltres": {},
"Zapdos": {},
"Articuno": {},
"// S-Tier pokemons (if pokemon can be evolved into tier, list the representative)": {},
"Mewtwo": {},
"Dragonite": {},
"Snorlax": {},
"// Mew evolves to Mewtwo": {},
"Mew": {},
"Arcanine": {},
"Vaporeon": {},
"Gyarados": {},
"Exeggutor": {},
"Muk": {},
"Weezing": {},
"Flareon": {}
},
"websocket": {
"start_embedded_server": true,
"server_url": "127.0.0.1:4000"
}
}
```
### Output when issue occurred
```
Traceback (most recent call last):
File "pokecli.py", line 884, in <module>
main()
File "pokecli.py", line 206, in main
bot.tick()
File "/home/antongericke44/PokemonGo-Bot/pokemongo_bot/__init__.py", line 834, in tick
if worker.work() == WorkerResult.RUNNING:
File "/home/antongericke44/PokemonGo-Bot/pokemongo_bot/cell_workers/transfer_pokemon.py", line 32, in work
self._release_pokemon_worst_in_group(group, pokemon_name)
File "/home/antongericke44/PokemonGo-Bot/pokemongo_bot/cell_workers/transfer_pokemon.py", line 60, in _release_pokemon_worst_in_group
pokemon_name)
File "/home/antongericke44/PokemonGo-Bot/pokemongo_bot/cell_workers/transfer_pokemon.py", line 318, in _validate_keep_best_config
keep_best_cp = release_config.get('keep_best_cp', 0)
AttributeError: 'list' object has no attribute 'get'
[2017-08-01 10:55:02] [sentry.errors] [ERROR] Sentry responded with an error: 'ascii' codec can't decode byte 0x9c in position 1: ordinal not in range(128) (url: https://app.getsentry.com/api/90254/store/)
Traceback (most recent call last):
File "/home/antongericke44/PokemonGo-Bot/local/lib/python2.7/site-packages/raven/transport/threaded.py", line 174, in send_sync
super(ThreadedHTTPTransport, self).send(data, headers)
File "/home/antongericke44/PokemonGo-Bot/local/lib/python2.7/site-packages/raven/transport/http.py", line 47, in send
ca_certs=self.ca_certs,
File "/home/antongericke44/PokemonGo-Bot/local/lib/python2.7/site-packages/raven/utils/http.py", line 66, in urlopen
return opener.open(url, data, timeout)
File "/home/antongericke44/PokemonGo-Bot/local/lib/python2.7/site-packages/future/backports/urllib/request.py", line 494, in open
response = self._open(req, data)
File "/home/antongericke44/PokemonGo-Bot/local/lib/python2.7/site-packages/future/backports/urllib/request.py", line 512, in _open
'_open', req)
File "/home/antongericke44/PokemonGo-Bot/local/lib/python2.7/site-packages/future/backports/urllib/request.py", line 466, in _call_chain
result = func(*args)
File "/home/antongericke44/PokemonGo-Bot/local/lib/python2.7/site-packages/raven/utils/http.py", line 46, in https_open
return self.do_open(ValidHTTPSConnection, req)
File "/home/antongericke44/PokemonGo-Bot/local/lib/python2.7/site-packages/future/backports/urllib/request.py", line 1284, in do_open
h.request(req.get_method(), req.selector, req.data, headers)
File "/usr/lib/python2.7/httplib.py", line 1042, in request
self._send_request(method, url, body, headers)
File "/usr/lib/python2.7/httplib.py", line 1082, in _send_request
self.endheaders(body)
File "/usr/lib/python2.7/httplib.py", line 1038, in endheaders
self._send_output(message_body)
File "/usr/lib/python2.7/httplib.py", line 880, in _send_output
msg += message_body
UnicodeDecodeError: 'ascii' codec can't decode byte 0x9c in position 1: ordinal not in range(128)
[2017-08-01 10:55:02] [sentry.errors.uncaught] [ERROR] [u"AttributeError: 'list' object has no attribute 'get'", u' File "pokecli.py", line 884, in <module>', u' File "pokecli.py", line 206, in main', u' File "pokemongo_bot/__init__.py", line 834, in tick', u' File "pokemongo_bot/cell_workers/transfer_pokemon.py", line 32, in work', u' File "pokemongo_bot/cell_workers/transfer_pokemon.py", line 60, in _release_pokemon_worst_in_group', u' File "pokemongo_bot/cell_workers/transfer_pokemon.py", line 318, in _validate_keep_best_config']
Tue Aug 1 10:55:02 UTC 2017 Pokebot Stopped.
Press any button or wait 20 seconds to continue.
```
### Steps to Reproduce
Go into the config.json and enable "TransferPokemon". I did this in 2 different VMs with a clean installation, same error.
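Judging from the traceback, `_validate_keep_best_config` calls `.get` on a release entry that is a list of dicts (the `Zubat`-style list syntax shown in the config above) rather than a dict. A hypothetical sketch of the normalization the validator would need — the names are illustrative, not the bot's actual code:

```python
def validate_keep_best(release_config):
    """Accept a dict or a list of dicts, since both forms appear in config.json."""
    if isinstance(release_config, list):
        merged = {}
        for entry in release_config:
            if isinstance(entry, dict):
                merged.update(entry)   # later entries win on conflicting keys
        release_config = merged
    keep_best_cp = release_config.get('keep_best_cp', 0)
    keep_best_iv = release_config.get('keep_best_iv', 0)
    return keep_best_cp, keep_best_iv

# the list syntax from the config no longer raises AttributeError
print(validate_keep_best([{"keep_best_iv": 2}, {"keep_best_cp": 2, "keep_best_iv": 3}]))  # (2, 3)
```

As a workaround until the bot handles this, rewriting the list-style `release` entries as plain dicts should avoid the crash.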
### Other Information
OS: **Debian GNU/Linux 9 (stretch)**
Branch: master
Git Commit: **e974e276d6d52375cb24882c1d1f40ce731f2fcd**
Python Version: **Python 2.7.13**
Any other relevant files/configs (eg: path files): **none**
| closed | 2017-08-01T11:27:40Z | 2017-08-01T22:39:25Z | https://github.com/PokemonGoF/PokemonGo-Bot/issues/6177 | [] | twisteddebian | 6 |
jina-ai/clip-as-service | pytorch | 172 | Data used in example4 | **Prerequisites**
> Please fill in by replacing `[ ]` with `[x]`.
* [x] Are you running the latest `bert-as-service`?
* [x] Did you follow [the installation](https://github.com/hanxiao/bert-as-service#install) and [the usage](https://github.com/hanxiao/bert-as-service#usage) instructions in `README.md`?
* [x] Did you check the [FAQ list in `README.md`](https://github.com/hanxiao/bert-as-service#speech_balloon-faq)?
* [x] Did you perform [a cursory search on existing issues](https://github.com/hanxiao/bert-as-service/issues)?
**System information**
> Some of this information can be collected via [this script](https://github.com/tensorflow/tensorflow/tree/master/tools/tf_env_collect.sh).
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04):
- TensorFlow installed from (source or binary):pip it
- TensorFlow version: 1.12.0 GPU
- Python version: 3.6.7
- `bert-as-service` version:
- GPU model and memory: GeForce GTX TITANS * 4
- CPU model and memory:
---
### Description
> Please replace `YOUR_SERVER_ARGS` and `YOUR_CLIENT_ARGS` accordingly. You can also write your own description for reproducing the issue.
I'm using this command to start the server:
```bash
bert-serving-start YOUR_SERVER_ARGS
```
and calling the server via:
```bash
python example4.py
```
Then this issue shows up:
```bash
NotFoundError (see above for traceback): /data/cips/data/lab/data/dataset/final_all_data/exercise_contest/data_train.json; No such file or directory
[[node IteratorGetNext (defined at example4.py:47) = IteratorGetNext[output_shapes=[<unknown>, <unknown>], output_types=[DT_FLOAT, DT_INT64], _device="/job:localhost/replica:0/task:0/device:CPU:0"](OneShotIterator)]]
```
I did not find any of the data described in the code. If I want to run the example, should I use my own data instead?
If that is the case, how should I preprocess the data (i.e., in what format) so that I can use the code?
| closed | 2019-01-04T09:08:34Z | 2019-01-05T07:57:26Z | https://github.com/jina-ai/clip-as-service/issues/172 | [] | PaulZhangIsing | 1 |
pandas-dev/pandas | python | 60,770 | BUG: arrow backend get wrong result | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
### Describe the bug, including details regarding any error messages, version, and platform.
when pandas has a null column,compare will get a False,
import duckdb as dd
df=dd.sql("select null as id").df()
df['id']>1
0 False
Name: id, dtype: bool
but change to arrow, will get NA, how to get False?
import pyarrow as pa
import pandas as pd
df2=pa.Table.from_pandas(df).to_pandas(types_mapper=pd.ArrowDtype,use_threads=True)
df2['id']>1
0 <NA>
Name: id, dtype: bool[pyarrow]
### Component(s)
Python
```
### Issue Description
pandas2.2.3,use arrow backend,
got NA,need False,
how to got same result?
### Expected Behavior
df['id']>1,want return False
### Installed Versions
pandas2.2.3
| closed | 2025-01-23T07:37:39Z | 2025-01-25T02:12:09Z | https://github.com/pandas-dev/pandas/issues/60770 | [
"Bug",
"Missing-data",
"Arrow"
] | wonb168 | 1 |
arogozhnikov/einops | tensorflow | 296 | [Feature suggestion] Allow performing a view instead of a reshape | # Context
First thank you for maintaining this great software.
I would like to be able to call rearrange on a tensor and be sure that it will not copy any data (just change the stride) and if the operation is impossible to raise an error. Currently the behavior is to try do not copy but copy if it is impossible. I would like to have a bit more granularity here.
To speak in pytorch term I want to be able to perform a `view` operation instead of a `reshape`.
Implementation-wise, it seems that it would actually be quite easy: I would just need to add a view operation to the backend, like the reshape [one](https://github.com/arogozhnikov/einops/blob/a6e93530ec2dce44f473e6065fad4d8236cda4f3/einops/_backends.py#L439).
In term of interface it could either be
### 1 - rearrange_view
```python
from einops import rearrange_view
rearrange_view(images, 'b h w c -> b h w c')
```
### 2 - view as a param
```python
from einops import rearrange
rearrange(images, 'b h w c -> b h w c', view=True)
```
## Downside
Only PyTorch supports `view`; NumPy and JAX do not support `view`, only `reshape`.
So if this feature is too PyTorch-specific, I can understand. Wdyt?
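The view/copy distinction can be illustrated with NumPy, where `reshape` silently falls back to a copy when no view is possible:

```python
import numpy as np

a = np.arange(6).reshape(2, 3)
flat_view = a.reshape(-1)        # C-contiguous: expressible by strides, so a view
assert np.shares_memory(a, flat_view)

t = a.T                          # transpose is itself a view with swapped strides
flat_copy = t.reshape(-1)        # not expressible by strides: numpy silently copies
assert not np.shares_memory(a, flat_copy)
```

The proposed `rearrange_view` would turn the second case into an explicit error instead of a silent copy.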
## Alternative
If I could change the backend on the fly, I could create a new backend that calls `view` instead of `reshape`.
Unfortunately, right now there is a direct mapping between tensor type and backend, and I cannot change the backend myself.
Second feature request: being able to pass a different backend to any einops operation.
example
```python
from einops import rearrange
from einops._backends import TorchBackend
class TorchViewBackend(TorchBackend):
def reshape(self, x, shape):
return x.view(shape)
rearrange(images, 'b h w c -> b h w c', backend=TorchViewBackend)
```
Thanks in advance :pray:
(happy to open a PR if needed) | closed | 2023-12-06T10:37:44Z | 2023-12-07T20:40:54Z | https://github.com/arogozhnikov/einops/issues/296 | [
"feature suggestion"
] | samsja | 3 |
microsoft/unilm | nlp | 1,095 | Will you provide the pre-training code for BEiT-3? | Will you provide the pre-training code for BEiT-3? And when will it be available?
| open | 2023-05-17T06:48:48Z | 2023-05-24T03:22:28Z | https://github.com/microsoft/unilm/issues/1095 | [] | cbigeyes | 0 |
pytest-dev/pytest-html | pytest | 822 | Result table column value is not selectable (4.x) | Since v4.x, each column value (text) is no longer selectable. It opens the log area instead. This is very inconvenient especially when I want to copy the test ID value.
Please consider changing column values selectable again, or add an option to switch back to the previous behavior at least. | open | 2024-07-23T22:01:44Z | 2024-07-23T22:01:44Z | https://github.com/pytest-dev/pytest-html/issues/822 | [] | yugokato | 0 |
dhaitz/mplcyberpunk | matplotlib | 4 | grid toggle side effect | Using the theme toggles the grid. Not really a bug, but something users should be aware of when putting in the lines that make the effects work.
Put differently, I used the style and wondered where my grid had gone. | closed | 2020-04-06T11:44:55Z | 2021-07-29T17:13:54Z | https://github.com/dhaitz/mplcyberpunk/issues/4 | [] | BMaxV | 2 |
jina-ai/serve | machine-learning | 5,939 | Endgame | **Describe the bug**
<!-- A clear and concise description of what the bug is. -->
**Describe how you solve it**
<!-- copy past your code/pull request link -->
---
<!-- Optional, but really help us locate the problem faster -->
**Environment**
<!-- Run `jina --version-full` and copy paste the output here -->
**Screenshots**
<!-- If applicable, add screenshots to help explain your problem. --> | closed | 2023-06-30T01:04:55Z | 2023-06-30T05:42:03Z | https://github.com/jina-ai/serve/issues/5939 | [] | VUAdapp | 0 |
PrefectHQ/prefect | data-science | 17,186 | prefect server alembic migrations are failing to run with SQLite 3.49.1 | ### Bug summary
Using latest Prefect ephemeral storage with sqlite 3.49.1 results in alembic migration failing due to
```python
sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) no such column: "Debug Print Notification" - should this be a string literal in single-quotes?
[SQL:
DELETE FROM block_type WHERE name = "Debug Print Notification"
]
(Background on this error at: https://sqlalche.me/e/20/e3q8)
21:00:27.537 | ERROR | uvicorn.error - Application startup failed. Exiting.
```
Seems related to this line: https://github.com/PrefectHQ/prefect/blob/d8b0ade72a6a28bf404124d9036dd0887e8b806c/src/prefect/server/database/_migrations/versions/sqlite/2022_07_07_111208_061c7e518b40_removes_debugprintnotification_block_.py#L21
### Version info
```Text
Prefect 3.2.4
```
### Additional context
The sqlite3 package just became stricter about quoting string literals, but some of the older alembic migrations Prefect server uses do not escape quotes properly. | closed | 2025-02-19T02:48:46Z | 2025-02-20T18:30:22Z | https://github.com/PrefectHQ/prefect/issues/17186 | [
"bug"
] | pvaezi | 4 |
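Regarding the migration quoting issue above: the fix is to quote the literal with single quotes, which every SQLite build accepts regardless of the legacy double-quoted-string setting. A minimal stdlib sketch of the corrected statement:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE block_type (name TEXT)")
conn.execute("INSERT INTO block_type (name) VALUES ('Debug Print Notification')")

# Single-quoted string literal: valid SQL in every SQLite build. The migration's
# double-quoted "Debug Print Notification" is only accepted when the legacy
# double-quoted-string misfeature is enabled, which newer builds turn off.
cur = conn.execute("DELETE FROM block_type WHERE name = 'Debug Print Notification'")
print(cur.rowcount)  # 1
```

Under stricter builds such as SQLite 3.49.1, the double-quoted version is parsed as a column identifier, producing exactly the "no such column" error in the report.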
mwaskom/seaborn | data-visualization | 3,529 | How do I save it as a SVG format, when using p.on(ax).show() in seaborn 0.13 | 
```python
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import seaborn.objects as so

data = sns.load_dataset('penguins')
sns.set('paper', style='ticks', font_scale=1.8)
plt.rcParams['font.sans-serif'] = 'Arial'  # set the font; must come after sns.set, otherwise sns.set overrides it
plt.rcParams['svg.fonttype'] = 'none'  # keep fonts as text rather than paths when saving SVG; works before or after sns.set
fig, ax = plt.subplots(figsize=(4, 4))
sns.despine()
p = so.Plot(data, x="bill_length_mm", y="bill_depth_mm").layout(size=(3, 3))\
    .add(so.Dot(), color="species").label(x="a", y="b", title="c")\
    .add(so.Line(color="black"), so.PolyFit(), y="bill_depth_mm", label="depth")
p.on(ax).show()
```
THANK YOU | closed | 2023-10-18T13:36:55Z | 2023-10-18T18:47:43Z | https://github.com/mwaskom/seaborn/issues/3529 | [] | z626093820 | 3 |
jmcnamara/XlsxWriter | pandas | 720 | Deprecation notice for Python 2.7 (and 3.5) support. Target July 2021 | I have just added the following notice to the [Changes](https://xlsxwriter.readthedocs.io/changes.html) page of the XlsxWriter docs:
> **Deprecation Notice**: Python 2.7 reached the end of its life on January 1st,
2020 and is no longer being supported in the community. XlsxWriter support for
Python 2.7 will end by mid-year 2021 (probably in July 2021). No new features
or fixes for Python 2.7 will be added to XlsxWriter after that date/release.
If anyone has any concerns about this please raise them now.
| closed | 2020-05-29T17:22:45Z | 2021-08-10T11:20:51Z | https://github.com/jmcnamara/XlsxWriter/issues/720 | [
"awaiting user feedback"
] | jmcnamara | 18 |
junyanz/pytorch-CycleGAN-and-pix2pix | computer-vision | 1,162 | train.py: error: unrecognized arguments: --epoch_count | Hi, I'm trying to continue training with this options:
--continue_train --epoch_count 200
But I'm getting "train.py: error: unrecognized arguments: --epoch_count"
Is there anything wrong with that?
Thanks | closed | 2020-10-10T12:35:14Z | 2020-10-10T18:24:56Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1162 | [] | smithee771 | 2 |
gradio-app/gradio | python | 10,676 | Support for Logarithmic Scale in Slider | - [X] I have searched to see if a similar issue already exists.
**Is your feature request related to a problem? Please describe.**
I want to use a slider with a logarithmic scale, but currently, the steps can only be constant.
**Describe the solution you'd like**
Add support for a slider with a logarithmic scale or allow custom step sizes.
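Until native support lands, a common workaround (sketched here; only the mapping matters, and any slider wiring around it is up to the app) is to keep the slider linear and map its position onto the value range logarithmically:

```python
def slider_to_value(pos: float, lo: float = 1.0, hi: float = 10_000.0) -> float:
    """Map a linear slider position in [0, 1] onto [lo, hi] on a log scale."""
    return lo * (hi / lo) ** pos

# Endpoints map exactly; the linear midpoint maps to the geometric midpoint.
assert slider_to_value(0.0) == 1.0
assert slider_to_value(1.0) == 10_000.0
assert round(slider_to_value(0.5)) == 100
```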
**Additional context**
Related issue: https://github.com/embeddings-benchmark/mteb/issues/2149
| open | 2025-02-25T20:20:38Z | 2025-02-25T22:17:38Z | https://github.com/gradio-app/gradio/issues/10676 | [
"enhancement"
] | Samoed | 0 |
widgetti/solara | fastapi | 104 | DataFrame Widget: Navigation controls not visible on wide dataframes in Jupyter Lab | It seems the navigation controls are remaining under the rightmost portion of the dataframe, even when the frame cannot be displayed completely due to it wide width. This leaves the navigation controls not accessible. Also, there is no scrollbar to view the rest of the dataframe columns. Here is some sample code to demo the problem:
```python
import solara
import pandas as pd
import plotly
df = plotly.data.iris()
for col in range(60):
df[str(col)] = str(col + 1)
@solara.component
def Page():
solara.DataFrame(df, items_per_page=5)
``` | closed | 2023-05-20T14:44:04Z | 2023-05-22T17:15:37Z | https://github.com/widgetti/solara/issues/104 | [] | babazaroni | 2 |
andy-landy/traceback_with_variables | jupyter | 25 | Is "NoReturn" the proper annotation for global_print_exc()? | The `NoReturn` type indicates the function either never terminates or always throws an exception:
From [PEP-484](https://www.python.org/dev/peps/pep-0484/#the-noreturn-type):
> The typing module provides a special type NoReturn to annotate functions that never return normally. For example, a function that unconditionally raises an exception..
By having it set to `NoReturn`, all the code after the call to `global_print_exc()` is "dimmed" in type-aware IDEs like VS Code. (For example, `NoReturn` makes sense for annotating `sys.exit()` with the visual indication that none of the code after it will ever execute.)
Since the function is simply setting `sys.excepthook`, shouldn't the return type simply be `None`, or am I missing something?
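A minimal illustration of the distinction (generic names, not the library's actual code): a function that installs a hook returns normally, so `None` is the accurate annotation, while `NoReturn` is reserved for functions that always raise or never terminate:

```python
import sys
from typing import NoReturn

def install_excepthook(hook) -> None:  # returns normally -> annotate None
    sys.excepthook = hook

def fatal(message: str) -> NoReturn:   # always raises -> annotate NoReturn
    raise SystemExit(message)

result = install_excepthook(sys.__excepthook__)
```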
| closed | 2022-10-06T13:16:23Z | 2024-10-31T17:38:38Z | https://github.com/andy-landy/traceback_with_variables/issues/25 | [] | eddyg | 1 |
ibis-project/ibis | pandas | 10,021 | feat: unify streaming and batch OVER window | ### Is your feature request related to a problem?
Ibis has a streaming specific over window:
```
over_window_streaming = bid_table.filter(_ is not None)[_.price.mean().over(range=(-ibis.interval(seconds=10), 0), order_by=_.datetime).name("avg_price")]
```
and a batch OVER window:
```
over_window_batch = bid_table.filter(_ is not None).mutate(avg_price=_.price.mean().over(rows=(-2,0), order_by=_.datetime))
```
which generates a different expression tree. The syntax differs only in minor details (mutate + time range vs. row range). Are there opportunities to converge the two APIs to avoid user confusion? (I am putting myself in the users' shoes.)
This issue is originally surfaced in https://github.com/ibis-project/ibis-substrait/issues/1117
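For readers less familiar with the two window semantics above, here is a backend-agnostic illustration in pandas (not ibis itself) of what a rows-based window computes versus a time-range-based one:

```python
import pandas as pd

prices = pd.DataFrame(
    {"price": [10.0, 20.0, 30.0, 40.0]},
    index=pd.to_datetime(
        ["2024-01-01 00:00:00", "2024-01-01 00:00:04",
         "2024-01-01 00:00:08", "2024-01-01 00:00:30"]
    ),
)
# rows=(-2, 0): the current row plus the two preceding rows
rows_avg = prices["price"].rolling(window=3, min_periods=1).mean()
# range=(-10s, 0): all rows within the last 10 seconds of the current row
range_avg = prices["price"].rolling("10s").mean()
```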
### What is the motivation behind your request?
Improve stream/batch unification on Ibis
### Describe the solution you'd like
I'd like to see a single unified API for both batch and streaming OVER window, with backend specific rewrite logic into expressions that can work across.
### What version of ibis are you running?
9.0.0
### What backend(s) are you using, if any?
DuckDB, Flink
### Code of Conduct
- [X] I agree to follow this project's Code of Conduct | closed | 2024-09-04T22:17:25Z | 2024-09-05T18:45:28Z | https://github.com/ibis-project/ibis/issues/10021 | [
"feature"
] | zhenzhongxu | 4 |
tensorpack/tensorpack | tensorflow | 855 | How does tensorpack compare to the tf.estimators? | I can see a lot of similarities between the tensorpack's trainer and its callbacks and `tf.estimators`/`tf.train` with all of its hooks. What are some benefits of using tensorpack instead of tensorflow API in case of this inversion of control? Why aren't `Callback` and `tf.train.SessionRunHook` compatible?
| closed | 2018-08-06T23:19:47Z | 2019-02-17T19:29:06Z | https://github.com/tensorpack/tensorpack/issues/855 | [
"usage"
] | pkubik | 6 |
plotly/dash-core-components | dash | 666 | Error installing dash-core-components in R | Hey, Attempting to install dash-core-components in R I got this error. Any help will be appreciated.
```
>remotes::install_github("plotly/dash-core-components")
Downloading GitHub repo plotly/dash-core-components@master
✔ checking for file ‘/tmp/Rtmp3t2YC5/remotes1be9200e50f2/plotly-dash-core-components-474f196/DESCRIPTION’ ...
─ preparing ‘dashCoreComponents’:
E checking DESCRIPTION meta-information ...
Malformed package version.
See section 'The DESCRIPTION file' in the 'Writing R Extensions'
manual.
Error: Failed to install 'dashCoreComponents' from GitHub:
System command error, exit status: 1, stdout + stderr:
E> * checking for file ‘/tmp/Rtmp3t2YC5/remotes1be9200e50f2/plotly-dash-core-components-474f196/DESCRIPTION’ ... OK
E> * preparing ‘dashCoreComponents’:
E> * checking DESCRIPTION meta-information ... ERROR
E> Malformed package version.
E>
E> See section 'The DESCRIPTION file' in the 'Writing R Extensions'
E> manual.
E>
```
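For context, R's check rejects any `Version:` field in `DESCRIPTION` that is not dot- or dash-separated integers, which is what "Malformed package version" refers to. A hypothetical well-formed fragment (illustrative values, not the package's real metadata) looks like:

```
Package: dashCoreComponents
Version: 1.2.0
License: MIT
```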
My session Info:
```
>sessionInfo()
R version 3.6.1 (2019-07-05)
Platform: x86_64-pc-linux-gnu (64-bit)
Running under: Ubuntu 18.04.3 LTS
Matrix products: default
BLAS: /usr/lib/x86_64-linux-gnu/atlas/libblas.so.3.10.3
LAPACK: /usr/lib/x86_64-linux-gnu/atlas/liblapack.so.3.10.3
locale:
[1] LC_CTYPE=en_US.UTF-8 LC_NUMERIC=C LC_TIME=en_US.UTF-8 LC_COLLATE=en_US.UTF-8
[5] LC_MONETARY=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 LC_PAPER=en_US.UTF-8 LC_NAME=C
[9] LC_ADDRESS=C LC_TELEPHONE=C LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C
attached base packages:
[1] stats graphics grDevices utils datasets methods base
loaded via a namespace (and not attached):
[1] Rcpp_1.0.2 rstudioapi_0.10 magrittr_1.5 usethis_1.5.1 devtools_2.2.1 pkgload_1.0.2
[7] R6_2.4.0 rlang_0.4.0 tools_3.6.1 pkgbuild_1.0.5 sessioninfo_1.1.1 cli_1.1.0
[13] withr_2.1.2 ellipsis_0.3.0 remotes_2.1.0 yaml_2.2.0 assertthat_0.2.1 digest_0.6.21
[19] rprojroot_1.3-2 crayon_1.3.4 processx_3.4.1 callr_3.3.1 fs_1.3.1 ps_1.3.0
[25] curl_4.2 testthat_2.2.1 memoise_1.1.0 glue_1.3.1.9000 compiler_3.6.1 desc_1.2.0
[31] backports_1.1.4 prettyunits_1.0.2
``` | closed | 2019-10-01T14:47:04Z | 2023-08-17T23:22:24Z | https://github.com/plotly/dash-core-components/issues/666 | [] | Ebedthan | 7 |
PokemonGoF/PokemonGo-Bot | automation | 5,959 | inventory was not initialized | I run the bot on OS X and while it works with one of my accounts it crashes on startup when I use my other account (where I have over 900 items in my inventory). This is the output:
[2017-03-09 13:10:30] [PokemonGoBot] [INFO] Login procedure started.
[2017-03-09 13:10:36] [PokemonGoBot] [INFO] Login successful.
[2017-03-09 13:10:36] [PokemonGoBot] [INFO]
_inventory was not initialized
_inventory was not initialized
[2017-03-09 13:10:38] [ cli] [INFO]
[2017-03-09 13:10:38] [ cli] [INFO] Ran for 0:00:08
[2017-03-09 13:10:38] [ cli] [INFO] Total XP Earned: 0 Average: 0.00/h
[2017-03-09 13:10:38] [ cli] [INFO] Travelled 0.00km
[2017-03-09 13:10:38] [ cli] [INFO] Visited 0 stops
[2017-03-09 13:10:38] [ cli] [INFO] Encountered 0 pokemon, 0 caught, 0 released, 0 evolved, 0 never seen before ()
[2017-03-09 13:10:38] [ cli] [INFO] Threw 0 pokeballs
[2017-03-09 13:10:38] [ cli] [INFO] Earned 0 Stardust
[2017-03-09 13:10:38] [ cli] [INFO] Hatched eggs 0
[2017-03-09 13:10:38] [ cli] [INFO]
[2017-03-09 13:10:38] [ cli] [INFO] Highest CP Pokemon:
[2017-03-09 13:10:38] [ cli] [INFO] Most Perfect Pokemon:
Traceback (most recent call last):
File "pokecli.py", line 865, in <module>
main()
File "pokecli.py", line 195, in main
bot = start_bot(bot, config)
File "pokecli.py", line 147, in start_bot
bot.start()
File "/Users/user/Desktop/PokemonGo-Bot-master/pokemongo_bot/__init__.py", line 149, in start
init_inventory(self)
File "/Users/user/Desktop/PokemonGo-Bot-master/pokemongo_bot/inventory.py", line 1422, in init_inventory
_inventory = Inventory(bot)
File "/Users/user/Desktop/PokemonGo-Bot-master/pokemongo_bot/inventory.py", line 1266, in __init__
self.refresh()
File "/Users/user/Desktop/PokemonGo-Bot-master/pokemongo_bot/inventory.py", line 1276, in refresh
i.refresh(inventory)
File "/Users/user/Desktop/PokemonGo-Bot-master/pokemongo_bot/inventory.py", line 75, in refresh
self._data = self.retrieve_data(inventory)
File "/Users/user/Desktop/PokemonGo-Bot-master/pokemongo_bot/inventory.py", line 71, in retrieve_data
ret[key] = self.parse(item)
File "/Users/user/Desktop/PokemonGo-Bot-master/pokemongo_bot/inventory.py", line 256, in parse
return Item(item_id, item_count)
File "/Users/user/Desktop/PokemonGo-Bot-master/pokemongo_bot/inventory.py", line 190, in __init__
self.name = Items.name_for(self.id)
File "/Users/user/Desktop/PokemonGo-Bot-master/pokemongo_bot/inventory.py", line 283, in name_for
return cls.STATIC_DATA[str(item_id)]
KeyError: '1101'
Thu Mar 9 13:10:38 EET 2017 Pokebot Stopped.
| closed | 2017-03-09T11:23:24Z | 2017-03-27T08:46:07Z | https://github.com/PokemonGoF/PokemonGo-Bot/issues/5959 | [] | mrsm44 | 3 |
ets-labs/python-dependency-injector | flask | 726 | Compatibility Issue with Pydantic 2 | The major version of `Pydantic` was recently released and lost backward compatibility. In particular, the `.from_pydantic` method stopped working for `providers.Configuration` due to the fact that the `BaseSettings` class now needs to be imported from a new package called `pydantic-settings`.
```toml
[tool.poetry.dependencies]
python = "^3.11"
pydantic = "^2.0.3"
pydantic-settings = "^2.0.2"
```
```python
Traceback (most recent call last):
File "<frozen runpy>", line 189, in _run_module_as_main
File "<frozen runpy>", line 112, in _get_module_details
File "/home/pentusha/projects/kyt.service-template/app/__init__.py", line 4, in <module>
from app.core.containers import app_container
File "/home/pentusha/projects/kyt.service-template/app/core/containers.py", line 80, in <module>
app_container = AppContainer()
^^^^^^^^^^^^^^
File "src/dependency_injector/containers.pyx", line 742, in dependency_injector.containers.DeclarativeContainer.__new__
File "src/dependency_injector/containers.pyx", line 393, in dependency_injector.containers.DynamicContainer.load_config
File "src/dependency_injector/providers.pyx", line 2106, in dependency_injector.providers.Configuration.load
File "src/dependency_injector/providers.pyx", line 2381, in dependency_injector.providers.Configuration.from_pydantic
File "/home/pentusha/.cache/pypoetry/virtualenvs/kyt-service-template-cl7ERtKI-py3.11/lib/python3.11/site-packages/pydantic/__init__.py", line 207, in __getattr__
return _getattr_migration(attr_name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/pentusha/.cache/pypoetry/virtualenvs/kyt-service-template-cl7ERtKI-py3.11/lib/python3.11/site-packages/pydantic/_migration.py", line 288, in wrapper
raise PydanticImportError(
pydantic.errors.PydanticImportError: `BaseSettings` has been moved to the `pydantic-settings` package. See https://docs.pydantic.dev/2.0.3/migration/#basesettings-has-moved-to-pydantic-settings for more details.
```
| closed | 2023-07-17T11:20:43Z | 2024-12-08T00:56:37Z | https://github.com/ets-labs/python-dependency-injector/issues/726 | [
"feature"
] | Pentusha | 8 |
jupyterlab/jupyter-ai | jupyter | 974 | Install option for individual model providers instead of all | When installing `jupyter-ai` I'd like to be able to install specific classes of model providers without having to install all, e.g.:
`uv pip install jupyter-ai[anthropic]`
Instead of `uv pip install jupyter-ai[all]`
Note I'm assuming this doesn't already exist (I tried and got errors).
I'd be happy to help with this if possible (caveat I don't have much experience with this kind of thing so there would be a learning curve). It does seem like keeping track of all these optional dependencies could be a maintenance headache, so if the maintainers don't like this idea I get it. 😄
Related to #958
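Mechanically this is just declaring per-provider extras in the packaging metadata; a hypothetical `pyproject.toml` fragment (names are illustrative, not jupyter-ai's actual dependency list) would look like:

```toml
[project.optional-dependencies]
anthropic = ["langchain-anthropic"]
openai = ["langchain-openai"]
all = ["langchain-anthropic", "langchain-openai"]
```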
| open | 2024-09-03T12:00:16Z | 2024-11-27T19:37:23Z | https://github.com/jupyterlab/jupyter-ai/issues/974 | [
"enhancement"
] | EricThomson | 1 |
mlflow/mlflow | machine-learning | 14,707 | [SETUP-BUG] Azure in MLFLOW | ### System information
- **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**:
- **MLflow installed from (source or binary)**:
- **MLflow version (run ``mlflow --version``)**:
- **Python version**:
### Code to reproduce issue
mlflow==2.20.2
### Describe the problem
I want to use an Azure model with MLflow. Previously I used OpenAI, and that worked correctly.
Here, I changed to the Azure model.
os.environ["OPENAI_API_TYPE"] = "azure"
os.environ["OPENAI_API_BASE"] = ""
os.environ["OPENAI_DEPLOYMENT_NAME"] = "gpt-4o-mini"
os.environ["OPENAI_API_VERSION"] = "2024-05-01-preview"
os.environ["OPENAI_API_KEY"] = ""
but, I get the null values while evaluating.
dataframe
2025/02/24 15:28:47 INFO mlflow.models.evaluation.default_evaluator: Testing metrics on first row...
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 9.34it/s]
C:\Users\Rohith\Envs\repoai\lib\site-packages\numpy\core\fromnumeric.py:3504: RuntimeWarning: Mean of empty slice.
return _methods._mean(a, axis=axis, dtype=dtype,
C:\Users\Rohith\Envs\repoai\lib\site-packages\numpy\core\_methods.py:129: RuntimeWarning: invalid value encountered in scalar divide
ret = ret.dtype.type(ret / rcount)
C:\Users\Rohith\Envs\repoai\lib\site-packages\numpy\core\fromnumeric.py:3787: RuntimeWarning: Degrees of freedom <= 0 for slice
return _methods._var(a, axis=axis, dtype=dtype, out=out, ddof=ddof,
C:\Users\Rohith\Envs\repoai\lib\site-packages\numpy\core\_methods.py:163: RuntimeWarning: invalid value encountered in divide
arrmean = um.true_divide(arrmean, div, out=arrmean,
C:\Users\Rohith\Envs\repoai\lib\site-packages\numpy\core\_methods.py:198: RuntimeWarning: invalid value encountered in scalar divide
ret = ret.dtype.type(ret / rcount)
### Other info / logs
| open | 2025-02-24T10:14:05Z | 2025-03-11T16:28:14Z | https://github.com/mlflow/mlflow/issues/14707 | [
"bug"
] | RohithDAces | 3 |
deezer/spleeter | deep-learning | 342 | [Bug] name your bug |
## Description
Used in both cases: SpleetGUI
Result:
Spleeter works fine on Windows 7, but produces the following on Windows 10:
## Step to reproduce
Installed:
python-3.8.2.exe
Miniconda3-latest-Windows-x86_64.exe
then (without errors):
pip install spleeter
conda install numba
## Output
Informationen über das Aufrufen von JIT-Debuggen
anstelle dieses Dialogfelds finden Sie am Ende dieser Meldung.
************** Ausnahmetext **************
System.Security.SecurityException: Der angeforderte Registrierungszugriff ist unzulässig.
bei System.ThrowHelper.ThrowSecurityException(ExceptionResource resource)
bei Microsoft.Win32.RegistryKey.OpenSubKey(String name, Boolean writable)
bei System.Environment.SetEnvironmentVariable(String variable, String value, EnvironmentVariableTarget target)
bei spleetGUI.Form1.addtopath()
bei spleetGUI.Form1.InstallFFMPEG()
bei spleetGUI.Form1.button1_Click(Object sender, EventArgs e)
bei System.Windows.Forms.Control.OnClick(EventArgs e)
bei System.Windows.Forms.Button.OnClick(EventArgs e)
bei System.Windows.Forms.Button.OnMouseUp(MouseEventArgs mevent)
bei System.Windows.Forms.Control.WmMouseUp(Message& m, MouseButtons button, Int32 clicks)
bei System.Windows.Forms.Control.WndProc(Message& m)
bei System.Windows.Forms.ButtonBase.WndProc(Message& m)
bei System.Windows.Forms.Button.WndProc(Message& m)
bei System.Windows.Forms.Control.ControlNativeWindow.OnMessage(Message& m)
bei System.Windows.Forms.Control.ControlNativeWindow.WndProc(Message& m)
bei System.Windows.Forms.NativeWindow.Callback(IntPtr hWnd, Int32 msg, IntPtr wparam, IntPtr lparam)
Die Zone der Assembly, bei der ein Fehler aufgetreten ist:
MyComputer
************** Geladene Assemblys **************
mscorlib
Assembly-Version: 4.0.0.0.
Win32-Version: 4.6.1063.1 built by: NETFXREL3STAGE.
CodeBase: file:///C:/Windows/Microsoft.NET/Framework/v4.0.30319/mscorlib.dll.
spleetGUI
Assembly-Version: 1.0.0.0.
Win32-Version: 1.0.0.0.
CodeBase: file:///C:/OSTRIP/%5BTOOLS%5D/SpleetGUI.v2/SpleetGUI.exe.
System.Windows.Forms
Assembly-Version: 4.0.0.0.
Win32-Version: 4.6.1038.0 built by: NETFXREL2.
CodeBase: file:///C:/Windows/Microsoft.Net/assembly/GAC_MSIL/System.Windows.Forms/v4.0_4.0.0.0__b77a5c561934e089/System.Windows.Forms.dll.
System
Assembly-Version: 4.0.0.0.
Win32-Version: 4.6.1038.0 built by: NETFXREL2.
CodeBase: file:///C:/Windows/Microsoft.Net/assembly/GAC_MSIL/System/v4.0_4.0.0.0__b77a5c561934e089/System.dll.
System.Drawing
Assembly-Version: 4.0.0.0.
Win32-Version: 4.6.1068.2 built by: NETFXREL3STAGE.
CodeBase: file:///C:/Windows/Microsoft.Net/assembly/GAC_MSIL/System.Drawing/v4.0_4.0.0.0__b03f5f7f11d50a3a/System.Drawing.dll.
Accessibility
Assembly-Version: 4.0.0.0.
Win32-Version: 4.6.1038.0 built by: NETFXREL2.
CodeBase: file:///C:/Windows/Microsoft.Net/assembly/GAC_MSIL/Accessibility/v4.0_4.0.0.0__b03f5f7f11d50a3a/Accessibility.dll.
mscorlib.resources
Assembly-Version: 4.0.0.0.
Win32-Version: 4.6.1038.0 built by: NETFXREL2.
CodeBase: file:///C:/Windows/Microsoft.Net/assembly/GAC_MSIL/mscorlib.resources/v4.0_4.0.0.0_de_b77a5c561934e089/mscorlib.resources.dll.
System.Windows.Forms.resources
Assembly-Version: 4.0.0.0.
Win32-Version: 4.6.1038.0 built by: NETFXREL2.
CodeBase: file:///C:/Windows/Microsoft.Net/assembly/GAC_MSIL/System.Windows.Forms.resources/v4.0_4.0.0.0_de_b77a5c561934e089/System.Windows.Forms.resources.dll.
************** JIT-Debuggen **************
Um das JIT-Debuggen (Just-In-Time) zu aktivieren, muss in der
Konfigurationsdatei der Anwendung oder des Computers
(machine.config) der jitDebugging-Wert im Abschnitt system.windows.forms festgelegt werden.
Die Anwendung muss mit aktiviertem Debuggen kompiliert werden.
Zum Beispiel:
<configuration>
<system.windows.forms jitDebugging="true" />
</configuration>
Wenn das JIT-Debuggen aktiviert ist, werden alle nicht behandelten
Ausnahmen an den JIT-Debugger gesendet, der auf dem
Computer registriert ist, und nicht in diesem Dialogfeld behandelt.
## Environment
Firewall: disabled.
Host file: untouched from stock windows 10
<!-- Fill the following table -->
| | |
| ----------------- | ------------------------------- |
| OS | Windows 10|
| Installation type | Conda / pip |
| RAM available | 4Go |
| Hardware spec | Fujitsu Q702, GPU: Intel HD Graphics 4000, Intel(R) i3-3217U1.80Ghz |
## Additional context
 | closed | 2020-04-25T18:33:39Z | 2020-04-27T08:32:31Z | https://github.com/deezer/spleeter/issues/342 | [
"bug",
"invalid"
] | Ry3yr | 0 |
huggingface/transformers | pytorch | 36,783 | Throw messages in text-generation task with deepseek r1 with PEFTModel | ### System Info
Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.
- `transformers` version: 4.49.0
- Platform: Linux-5.15.0-134-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.29.3
- Safetensors version: 0.5.3
- Accelerate version: 1.3.0
- Accelerate config: - compute_environment: LOCAL_MACHINE
- distributed_type: DEEPSPEED
- use_cpu: False
- debug: False
- num_processes: 1
- machine_rank: 0
- num_machines: 0
- rdzv_backend: static
- same_network: True
- main_training_function: main
- enable_cpu_affinity: False
- deepspeed_config: {'deepspeed_config_file': '/opt/config/train_config.json', 'zero3_init_flag': True}
- downcast_bf16: no
- tpu_use_cluster: False
- tpu_use_sudo: False
- tpu_env: []
- DeepSpeed version: 0.16.4
- PyTorch version (GPU?): 2.5.1+cu124 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: No
- Using GPU in script?: No
- GPU type: NVIDIA H100 80GB HBM3
### Who can help?
@ArthurZucker @Rocketknight1 @muellerzr
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
import torch
from transformers import pipeline, AutoModelForCausalLM, BitsAndBytesConfig, AutoTokenizer
from peft import PeftModel
ADAPTER_PATH = "./output/adapter/mnc_adapter"
BASE_PATH = "./output/model"
BNB_CONFG = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_quant_type="nf4",
bnb_4bit_use_double_quant=True,
)
# input
text = "Who is a Elon Musk?"
model = AutoModelForCausalLM.from_pretrained(
BASE_PATH,
quantization_config=BNB_CONFG,
torch_dtype=torch.float16,
device_map = 'auto',
)
tokenizer = AutoTokenizer.from_pretrained(BASE_PATH)
lora_model = PeftModel.from_pretrained(
model,
ADAPTER_PATH,
quantization_config=BNB_CONFG,
torch_dtype=torch.float16,
device_map = 'auto',
)
default_generator = pipeline(
task="text-generation",
model=model,
tokenizer=tokenizer,
device_map="auto",
torch_dtype=torch.float16
)
print(f"this is base model result: {default_generator(text)}")
lora_generator = pipeline(
task="text-generation",
model=lora_model,
tokenizer=tokenizer,
device_map="auto",
torch_dtype=torch.float16
)
print(f"this is lora model result: {lora_generator(text)}")
```
1. execute `lora_generator(text)`
2. output warning messages with followings
3. With my debugging, `transformers/pipelines/base.py` that section was problems
```python
def check_model_type(self, supported_models: Union[List[str], dict]):
"""
Check if the model class is in supported by the pipeline.
Args:
supported_models (`List[str]` or `dict`):
The list of models supported by the pipeline, or a dictionary with model class values.
"""
if not isinstance(supported_models, list): # Create from a model mapping
supported_models_names = []
for _, model_name in supported_models.items():
# Mapping can now contain tuples of models for the same configuration.
if isinstance(model_name, tuple):
supported_models_names.extend(list(model_name))
else:
supported_models_names.append(model_name)
if hasattr(supported_models, "_model_mapping"):
for _, model in supported_models._model_mapping._extra_content.items():
if isinstance(model_name, tuple):
supported_models_names.extend([m.__name__ for m in model])
else:
supported_models_names.append(model.__name__)
supported_models = supported_models_names
if self.model.__class__.__name__ not in supported_models:
logger.error(
f"The model '{self.model.__class__.__name__}' is not supported for {self.task}. Supported models are"
f" {supported_models}."
)
```
### Expected behavior
without unsupported models message.
This error might be occured the deepseek model was not in `supported_models` List
* The pipeline was successfully worked, but I wanna remove this annoying message
```python
python hug_inference.py
/root/workspace/lora_test/.venv/lib/python3.10/site-packages/transformers/quantizers/auto.py:206: UserWarning: You passed `quantization_config` or equivalent parameters to `from_pretrained` but the model you're loading already has a `quantization_config` attribute. The `quantization_config` from the model will be used.
warnings.warn(warning_msg)
Loading checkpoint shards: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 8/8 [00:07<00:00, 1.12it/s]
Device set to use cuda:0
/root/workspace/lora_test/.venv/lib/python3.10/site-packages/bitsandbytes/nn/modules.py:451: UserWarning: Input type into Linear4bit is torch.float16, but bnb_4bit_compute_dtype=torch.float32 (default). This will lead to slow inference or training speed.
warnings.warn(
this is base model result: [{'generated_text': "Who is a Elon Musk? Well, he's a business magnate, investor, and entrepreneur. He's known for his ambitious"}]
Device set to use cuda:0
The model 'PeftModel' is not supported for text-generation. Supported models are ['AriaTextForCausalLM', 'BambaForCausalLM', 'BartForCausalLM', 'BertLMHeadModel', 'BertGenerationDecoder', 'BigBirdForCausalLM', 'BigBirdPegasusForCausalLM', 'BioGptForCausalLM', 'BlenderbotForCausalLM', 'BlenderbotSmallForCausalLM', 'BloomForCausalLM', 'CamembertForCausalLM', 'LlamaForCausalLM', 'CodeGenForCausalLM', 'CohereForCausalLM', 'Cohere2ForCausalLM', 'CpmAntForCausalLM', 'CTRLLMHeadModel', 'Data2VecTextForCausalLM', 'DbrxForCausalLM', 'DiffLlamaForCausalLM', 'ElectraForCausalLM', 'Emu3ForCausalLM', 'ErnieForCausalLM', 'FalconForCausalLM', 'FalconMambaForCausalLM', 'FuyuForCausalLM', 'GemmaForCausalLM', 'Gemma2ForCausalLM', 'GitForCausalLM', 'GlmForCausalLM', 'GotOcr2ForConditionalGeneration', 'GPT2LMHeadModel', 'GPT2LMHeadModel', 'GPTBigCodeForCausalLM', 'GPTNeoForCausalLM', 'GPTNeoXForCausalLM', 'GPTNeoXJapaneseForCausalLM', 'GPTJForCausalLM', 'GraniteForCausalLM', 'GraniteMoeForCausalLM', 'GraniteMoeSharedForCausalLM', 'HeliumForCausalLM', 'JambaForCausalLM', 'JetMoeForCausalLM', 'LlamaForCausalLM', 'MambaForCausalLM', 'Mamba2ForCausalLM', 'MarianForCausalLM', 'MBartForCausalLM', 'MegaForCausalLM', 'MegatronBertForCausalLM', 'MistralForCausalLM', 'MixtralForCausalLM', 'MllamaForCausalLM', 'MoshiForCausalLM', 'MptForCausalLM', 'MusicgenForCausalLM', 'MusicgenMelodyForCausalLM', 'MvpForCausalLM', 'NemotronForCausalLM', 'OlmoForCausalLM', 'Olmo2ForCausalLM', 'OlmoeForCausalLM', 'OpenLlamaForCausalLM', 'OpenAIGPTLMHeadModel', 'OPTForCausalLM', 'PegasusForCausalLM', 'PersimmonForCausalLM', 'PhiForCausalLM', 'Phi3ForCausalLM', 'PhimoeForCausalLM', 'PLBartForCausalLM', 'ProphetNetForCausalLM', 'QDQBertLMHeadModel', 'Qwen2ForCausalLM', 'Qwen2MoeForCausalLM', 'RecurrentGemmaForCausalLM', 'ReformerModelWithLMHead', 'RemBertForCausalLM', 'RobertaForCausalLM', 'RobertaPreLayerNormForCausalLM', 'RoCBertForCausalLM', 'RoFormerForCausalLM', 'RwkvForCausalLM', 'Speech2Text2ForCausalLM', 
'StableLmForCausalLM', 'Starcoder2ForCausalLM', 'TransfoXLLMHeadModel', 'TrOCRForCausalLM', 'WhisperForCausalLM', 'XGLMForCausalLM', 'XLMWithLMHeadModel', 'XLMProphetNetForCausalLM', 'XLMRobertaForCausalLM', 'XLMRobertaXLForCausalLM', 'XLNetLMHeadModel', 'XmodForCausalLM', 'ZambaForCausalLM', 'Zamba2ForCausalLM'].
this is lora model result: [{'generated_text': "Who is a Elon Musk? I mean, I know he's a business magnate or something, but what has he actually done"}]
``` | open | 2025-03-18T04:54:56Z | 2025-03-21T16:17:18Z | https://github.com/huggingface/transformers/issues/36783 | [
"bug"
] | falconlee236 | 9 |
scikit-optimize/scikit-optimize | scikit-learn | 913 | Integer Dimension ignores log-uniform prior | It seems like specifying `prior="log-uniform"` in an `Integer` dimension has no effect in the optimization process. Here is some code to reproduce the issue:
```python
# Tested using scikit-optimize 0.7.4
from skopt import gp_minimize
from skopt.space.space import Integer
def fopt_test(x):
return x[0]**2 + x[1]**2
# Optimize in uniform space
dimensions = [
Integer(1, 100, prior="uniform"),
Integer(1, 100, prior="uniform")
]
res = gp_minimize(fopt_test, dimensions, n_calls=51, n_random_starts=10, verbose=False, random_state=123)
# Optimize in logarithm space
log_dimensions = [
Integer(1, 100, prior="log-uniform"),
Integer(1, 100, prior="log-uniform")
]
res_log = gp_minimize(fopt_test, log_dimensions, n_calls=51, n_random_starts=10, verbose=False, random_state=123)
# Compare
res.x_iters == res_log.x_iters # returns True
```
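As a numpy-only sanity check of what integer log-uniform sampling should look like (independent of skopt): exponentiate a uniform draw over `[log(lo), log(hi)]`. Roughly half the mass should land at or below the geometric midpoint `sqrt(lo*hi) = 10`, which a uniform prior clearly would not produce:

```python
import numpy as np

rng = np.random.default_rng(0)
lo, hi = 1, 100
# exp of a uniform draw over [log(lo), log(hi)], rounded to integers
samples = np.rint(np.exp(rng.uniform(np.log(lo), np.log(hi), size=100_000))).astype(int)
```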
The same code using Real dimensions does produce an observable difference in the points chosen during the optimization. | open | 2020-06-10T16:18:05Z | 2020-06-10T16:18:05Z | https://github.com/scikit-optimize/scikit-optimize/issues/913 | [] | albarji | 0 |
KevinMusgrave/pytorch-metric-learning | computer-vision | 276 | miner gives zero weights | https://github.com/KevinMusgrave/pytorch-metric-learning/blob/19133d928cdc0aa4d1b49ddf30d5f9c81198649b/src/pytorch_metric_learning/miners/distance_weighted_miner.py#L44
I was using this distance-weighted miner, and sometimes it fails because the returned weights are all zeros. Printing out all the tensors inside, I found that the log_weights, after exponentiation, are almost all zeros except for the diagonal entries, whose distances were cut off to 0.5; the distances of the other entries in the distance matrix after clamping are 0.7-0.9.
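A small numpy sketch (toy numbers, not the miner's real tensors) of the underflow mechanism being described: once the diagonal carries the max log-weight, subtracting it drives every off-diagonal entry to exp(large negative), and masking out the diagonal then leaves an all-zero weight matrix:

```python
import numpy as np

# Toy log-weights where the diagonal (self-pairs) dominates by a wide margin.
log_weights = np.array([[50.0, -40.0],
                        [-40.0, 50.0]])
weights = np.exp(log_weights - log_weights.max())  # subtract max for stability
mask = ~np.eye(2, dtype=bool)   # self-pairs must be excluded from mining
masked = weights * mask         # exp(-90) ~= 8e-40: everything usable is ~0
```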
After `weights = torch.exp(log_weights - torch.max(log_weights[~inf_or_nan]))`, the weights matrix becomes almost all zeros except for the diagonal but the diagonal are the same input points so they will be masked out. ` weights * mask` will lead to a matrix with all 0s. This happens under several different scenarios. | closed | 2021-02-05T16:04:45Z | 2021-02-12T16:17:31Z | https://github.com/KevinMusgrave/pytorch-metric-learning/issues/276 | [
"bug"
] | z1w | 12 |
keras-team/keras | deep-learning | 20,848 | Keras3 with JAX backend results in AttributeError: 'jaxlib.xla_extension.ArrayImpl' | Hi,
I am using JAX as the backend with Keras 3. I followed the guide at https://keras.io/guides/custom_train_step_in_jax/, but encountered the error "AttributeError: 'jaxlib.xla_extension.ArrayImpl' " when trying to save the model after training.
To reproduce the issue please use the script below:
```
import os
# This guide can only be run with the JAX backend.
os.environ["KERAS_BACKEND"] = "jax"
import jax
import keras
import numpy as np
class CustomModel(keras.Model):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.loss_tracker = keras.metrics.Mean(name="loss")
self.mae_metric = keras.metrics.MeanAbsoluteError(name="mae")
self.loss_fn = keras.losses.MeanSquaredError()
def compute_loss_and_updates(
self,
trainable_variables,
non_trainable_variables,
x,
y,
training=False,
):
y_pred, non_trainable_variables = self.stateless_call(
trainable_variables,
non_trainable_variables,
x,
training=training,
)
loss = self.loss_fn(y, y_pred)
return loss, (y_pred, non_trainable_variables)
def train_step(self, state, data):
(
trainable_variables,
non_trainable_variables,
optimizer_variables,
metrics_variables,
) = state
x, y = data
# Get the gradient function.
grad_fn = jax.value_and_grad(self.compute_loss_and_updates, has_aux=True)
# Compute the gradients.
(loss, (y_pred, non_trainable_variables)), grads = grad_fn(
trainable_variables,
non_trainable_variables,
x,
y,
training=True,
)
# Update trainable variables and optimizer variables.
(
trainable_variables,
optimizer_variables,
) = self.optimizer.stateless_apply(
optimizer_variables, grads, trainable_variables
)
# Update metrics.
loss_tracker_vars = metrics_variables[: len(self.loss_tracker.variables)]
mae_metric_vars = metrics_variables[len(self.loss_tracker.variables) :]
loss_tracker_vars = self.loss_tracker.stateless_update_state(
loss_tracker_vars, loss
)
mae_metric_vars = self.mae_metric.stateless_update_state(
mae_metric_vars, y, y_pred
)
logs = {}
logs[self.loss_tracker.name] = self.loss_tracker.stateless_result(
loss_tracker_vars
)
logs[self.mae_metric.name] = self.mae_metric.stateless_result(mae_metric_vars)
new_metrics_vars = loss_tracker_vars + mae_metric_vars
# Return metric logs and updated state variables.
state = (
trainable_variables,
non_trainable_variables,
optimizer_variables,
new_metrics_vars,
)
return logs, state
@property
def metrics(self):
# We list our `Metric` objects here so that `reset_states()` can be
# called automatically at the start of each epoch
# or at the start of `evaluate()`.
return [self.loss_tracker, self.mae_metric]
def test_step(self, state, data):
# Unpack the data.
x, y = data
(
trainable_variables,
non_trainable_variables,
metrics_variables,
) = state
# Compute predictions and loss.
y_pred, non_trainable_variables = self.stateless_call(
trainable_variables,
non_trainable_variables,
x,
training=False,
)
loss = self.compute_loss(x, y, y_pred)
# Update metrics.
new_metrics_vars = []
for metric in self.metrics:
this_metric_vars = metrics_variables[
len(new_metrics_vars) : len(new_metrics_vars) + len(metric.variables)
]
if metric.name == "loss":
this_metric_vars = metric.stateless_update_state(this_metric_vars, loss)
else:
this_metric_vars = metric.stateless_update_state(
this_metric_vars, y, y_pred
)
logs = metric.stateless_result(this_metric_vars)
new_metrics_vars += this_metric_vars
# Return metric logs and updated state variables.
state = (
trainable_variables,
non_trainable_variables,
new_metrics_vars,
)
return logs, state
# Construct an instance of CustomModel
inputs = keras.Input(shape=(32,))
outputs = keras.layers.Dense(1)(inputs)
model = CustomModel(inputs, outputs)
# We don't pass a loss or metrics here.
model.compile(optimizer="adam")
# Just use `fit` as usual -- you can use callbacks, etc.
x = np.random.random((1000, 32))
y = np.random.random((1000, 1))
model.fit(x, y, epochs=5)
model.evaluate(x, y)
model.export("./best.pbuf")
```
Error:
```
Traceback (most recent call last):
File "/home/perfuser/shailesh/openfl_jax_latest_3_feb/jax_demo.py", line 153, in <module>
model.evaluate(x, y)
File "/home/perfuser/shailesh/jax_3feb/lib/python3.11/site-packages/keras/src/utils/traceback_utils.py", line 122, in error_handler
raise e.with_traceback(filtered_tb) from None
File "/usr/lib/python3.11/concurrent/futures/_base.py", line 449, in result
return self.__get_result()
^^^^^^^
File "/usr/lib/python3.11/concurrent/futures/_base.py", line 401, in __get_result
raise self._exception
^^^^^^^^^^^
File "/usr/lib/python3.11/concurrent/futures/thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
^^^^
AttributeError: 'jaxlib.xla_extension.ArrayImpl' object has no attribute 'items'. Did you mean: 'item'?
```
Output of `pip freeze`:
```
absl-py==2.1.0
certifi==2025.1.31
cffi==1.17.1
charset-normalizer==3.4.1
click==8.1.8
cloudpickle==3.1.1
cryptography==44.0.0
dynaconf==3.2.7
flatten-json==0.1.14
grpcio==1.65.5
h5py==3.12.1
idna==3.10
jax==0.4.38
jaxlib==0.4.38
joblib==1.4.2
keras==3.8.0
markdown-it-py==3.0.0
mdurl==0.1.2
ml_dtypes==0.5.1
namex==0.0.8
numpy==2.2.2
openfl @ file:///home/perfuser/shailesh/openfl_jax_latest_3_feb
opt_einsum==3.4.0
optree==0.14.0
packaging==24.2
pandas==2.2.3
protobuf==5.29.3
psutil==6.1.1
pycparser==2.22
Pygments==2.19.1
python-dateutil==2.9.0.post0
pytz==2025.1
PyYAML==6.0.2
requests==2.32.3
rich==13.9.4
scikit-learn==1.6.1
scipy==1.15.1
six==1.17.0
tensorboardX==2.6.2.2
threadpoolctl==3.5.0
tqdm==4.67.1
typing_extensions==4.12.2
tzdata==2025.1
urllib3==2.3.0
```
| closed | 2025-02-03T10:33:09Z | 2025-02-04T06:03:46Z | https://github.com/keras-team/keras/issues/20848 | [
"type:Bug",
"backend:jax"
] | tanwarsh | 3 |
deepfakes/faceswap | machine-learning | 633 | Tools GUI does not work | **Describe the bug**
Looks like there is no tools GUI at all, so either the option description is misleading or something is broken.
**To Reproduce**
run `python .\tools.py gui`
**Expected behavior**
I should see the tools GUI, I guess.
**Screenshots**
BTW, tools.py mentions that there is a GUI option, which results in:

```
02/27/2019 20:46:38 INFO     Log level set to: INFO
02/27/2019 20:46:40 ERROR    Got Exception on main handler:
Traceback (most recent call last):
  File "D:\faceswap\lib\cli.py", line 88, in execute_script
    script = self.import_script()
  File "D:\faceswap\lib\cli.py", line 34, in import_script
    module = import_module(mod)
  File "D:\faceswap\locvirtual\lib\importlib\__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 994, in _gcd_import
  File "<frozen importlib._bootstrap>", line 971, in _find_and_load
  File "<frozen importlib._bootstrap>", line 953, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'tools.gui'
02/27/2019 20:46:40 CRITICAL An unexpected crash has occurred.
```
**Desktop (please complete the following information):**
Windows 10
hash a71fd4234848c746cde1554e2a73c64416972401
**Additional context**
Looks like there is no GUI in `tools`, contrary to the `scripts` used by faceswap.py.
`D:\faceswap\lib\cli.py` is trying to get the GUI from `tools` via:

```python
src = "tools" if cmd == "tools.py" else "scripts"
```
| closed | 2019-02-27T19:53:07Z | 2019-03-09T14:05:22Z | https://github.com/deepfakes/faceswap/issues/633 | [] | berniejerom | 2 |
davidsandberg/facenet | computer-vision | 290 | Python3 or Python2? | Hi, I have a question:
Python 3 or Python 2?
Which version of Python should I use?
Thanks!!! | closed | 2017-05-24T01:22:53Z | 2017-05-30T01:43:36Z | https://github.com/davidsandberg/facenet/issues/290 | [] | luckyboysmith | 1 |
plotly/plotly.py | plotly | 4,620 | Add Animation Support for go.Sankey Plots Similar to go.Scatter and go.Bar | **Description**
I would like to request the addition of animation support for go.Sankey plots, similar to the functionality available in go.Scatter and go.Bar.
**Expected Outcome**
The go.Sankey plot should have a parameter in the layout for animations, similar to the updatemenus feature. An example of the desired configuration is shown below:
```
"updatemenus": [{
"type": "buttons",
"buttons": [{
"label": "Your Label",
"method": "animate",
"args": [/* Animation arguments here */]
}]
}]
```
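For reference, this is roughly the structure that go.Scatter/go.Bar animations rely on today — a plain-dict sketch (no Plotly import needed; `go.Figure` accepts equivalent dicts) of a figure with frames plus the `updatemenus` play button. Extending go.Sankey would mean letting it participate in this same frames mechanism:

```python
fig = {
    "data": [{"type": "bar", "y": [1, 2, 3]}],
    "frames": [
        {"name": "step0", "data": [{"type": "bar", "y": [1, 2, 3]}]},
        {"name": "step1", "data": [{"type": "bar", "y": [3, 2, 1]}]},
    ],
    "layout": {
        "updatemenus": [{
            "type": "buttons",
            "buttons": [{
                "label": "Play",
                "method": "animate",
                # None means "animate through all frames in order"
                "args": [None, {"frame": {"duration": 500}}],
            }],
        }],
    },
}

assert [f["name"] for f in fig["frames"]] == ["step0", "step1"]
assert fig["layout"]["updatemenus"][0]["buttons"][0]["method"] == "animate"
```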
This feature would allow users to add animations to go.Sankey plots, enhancing their interactivity and visual appeal. | open | 2024-05-31T16:45:23Z | 2024-08-13T13:14:54Z | https://github.com/plotly/plotly.py/issues/4620 | [
"feature",
"P3"
] | andre996 | 1 |
joke2k/django-environ | django | 458 | Invalid line: ÿþsecret_key= unknown error | When configuring `SECRET_KEY = os.environ['secret_key']` in my settings file, I get:
`Invalid line: ÿþsecret_key=`
Has anyone had this problem? | closed | 2023-03-29T20:21:37Z | 2023-03-29T20:25:26Z | https://github.com/joke2k/django-environ/issues/458 | [] | eyalbi | 0 |
tensorpack/tensorpack | tensorflow | 691 | how to use LMDBData() | I want to know how to use `LMDBData('/path/to/ILSVRC-train.lmdb', shuffle=False)`.
The ImageNet dataflow is written as: link

```python
if isTrain:
    ds = dataset.ILSVRC12(datadir, name, shuffle=True)
    ds = AugmentImageComponent(ds, augmentors, copy=False)
    ds = PrefetchDataZMQ(ds, cpu)
    ds = BatchData(ds, batch_size, remainder=False)
```

I use the following code to replace it:

```python
ds = LMDBData('/path/to/ILSVRC-train.lmdb', shuffle=False)
ds = BatchData(ds, 256, use_list=True)
```

But when I run alexnet-dorefa.py, it raises an error:

```
NameError: global name 'LMDBData' is not defined
```

So I want to know whether I am missing some file. | closed | 2018-03-08T13:12:26Z | 2019-03-11T07:47:46Z | https://github.com/tensorpack/tensorpack/issues/691 | [
"usage"
] | liuxiaowei199345 | 4 |
LibreTranslate/LibreTranslate | api | 369 | Error while downloading language models | When I run the `./install_models.py` script, it downloads some models but then crashes.
I am also using a Raspberry Pi (Raspberry Pi OS).
```
Updating language models
Found 58 models
Downloading Arabic → English (1.0) ...
Downloading Azerbaijani → English (1.5) ...
Downloading Catalan → English (1.7) ...
Downloading Chinese → English (1.7) ...
Downloading Czech → English (1.5) ...
Downloading Danish → English (1.3) ...
Downloading Dutch → English (1.4) ...
Downloading English → Arabic (1.0) ...
Downloading English → Azerbaijani (1.5) ...
Downloading English → Catalan (1.7) ...
Downloading English → Chinese (1.7) ...
Downloading English → Czech (1.5) ...
Downloading English → Danish (1.3) ...
Downloading English → Dutch (1.4) ...
Downloading English → Esperanto (1.5) ...
Downloading English → Finnish (1.5) ...
Downloading English → French (1.0) ...
Downloading English → German (1.0) ...
Downloading English → Greek (1.5) ...
Downloading English → Hebrew (1.5) ...
Downloading English → Hindi (1.1) ...
Downloading English → Hungarian (1.5) ...
Downloading English → Indonesian (1.2) ...
Downloading English → Irish (1.1) ...
Downloading English → Italian (1.0) ...
Downloading English → Japanese (1.1) ...
Traceback (most recent call last):
File "/home/vaggos/vaggos_hdd/LibreTranslate/./install_models.py", line 12, in <module>
check_and_install_models(force=True, load_only_lang_codes=lang_codes)
File "/home/vaggos/vaggos_hdd/LibreTranslate/app/init.py", line 53, in check_and_install_models
package.install_from_path(download_path)
File "/home/vaggos/.local/lib/python3.9/site-packages/argostranslate/package.py", line 183, in install_from_path
raise Exception("Not a valid Argos Model (must be a zip archive)")
Exception: Not a valid Argos Model (must be a zip archive)
``` | open | 2022-12-28T09:49:45Z | 2022-12-28T18:59:43Z | https://github.com/LibreTranslate/LibreTranslate/issues/369 | [
"possible bug"
] | vaggos-thanos | 0 |
streamlit/streamlit | machine-learning | 10,538 | Make `OAUTH2_CALLBACK_ENDPOINT` in `streamlit.web.server.server` configurable | ### Checklist
- [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar feature requests.
- [x] I added a descriptive title and summary to this issue.
### Summary
Currently the OATH2_CALLBACK_ENDPOINT is hardcoded to `/oauth2callback`. For a project I'm working on I want to be able to set this to `/oauth/microsoft/callback`.
**Proposed solution**
It would be nice if I can set this with a configuration variable, e.g.:
```toml
[server]
oauth2_callback_endpoint = "/oauth/microsoft/callback"
```
**Workaround**
I was able to get this to work by creating a custom version of `streamlit/__main__.py` like this:
```python
import streamlit.web.server.server as server
server.OAUTH2_CALLBACK_ENDPOINT = "/oauth/microsoft/callback"
if __name__ == "__main__":
from streamlit.web.cli import main
main()
```
### Why?
_No response_
### How?
_No response_
### Additional Context
_No response_ | open | 2025-02-27T10:10:49Z | 2025-03-19T17:24:42Z | https://github.com/streamlit/streamlit/issues/10538 | [
"type:enhancement",
"feature:authentication"
] | Riezebos | 2 |
biolab/orange3 | pandas | 6,741 | Add Report capability for Orange-Spectroscopy Widgets | **What's your use case?**
I am using the report functionality to build a short summary of my analysis work and I would like to include figures of Spectra in them. I also want to capture preprocessing done in the 'Preprocess Spectra' Widget. Most, if not all, base orange widgets support adding their summaries to a report page, but it is not so for orange-spectroscopy.
This would be helpful to have all relevant information for my report in one place, as opposed to _most_ of it in the report, and some images saved separately that a reader may have to look up on their own.
**What's your proposed solution?**
Add the Report button functionality from the base widgets to the orange-spectroscopy widgets.
Visualization focused widgets would save the current canvas plot, akin to the Scatter Plot widget.
Operations focused widgets (Preprocess Spectra, Peak Fit, etc.) would save text of their selected parameters, perhaps in addition to the plot canvas.
**Are there any alternative solutions?**
Add support to import images, pdfs, into the orange reports window. This would allow the addition of any relevant information into reports at least through user generated screenshots.
Bonus points to allow reordering of elements in the reports window.
| closed | 2024-02-19T19:39:31Z | 2024-02-23T14:04:06Z | https://github.com/biolab/orange3/issues/6741 | [] | AdamOpps | 1 |
xlwings/xlwings | automation | 2,511 | sample code to copy data from a table in webpage into a spreadsheet? | Do we have sample code to copy data from a table in a webpage into a spreadsheet? Even better if it is with selenium.
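Not an official recipe, but a minimal sketch of the scraping half using only the standard library — with selenium, `driver.page_source` would supply the HTML, and writing the rows into Excel with xlwings is then a one-liner like `sheet.range("A1").value = rows`:

```python
from html.parser import HTMLParser

class TableParser(HTMLParser):
    """Collect the rows of the <table> elements on a page as lists of cell strings."""
    def __init__(self):
        super().__init__()
        self.rows, self._row, self._cell = [], None, None

    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self._row = []
        elif tag in ("td", "th"):
            self._cell = ""

    def handle_data(self, data):
        if self._cell is not None:
            self._cell += data.strip()

    def handle_endtag(self, tag):
        if tag in ("td", "th") and self._row is not None:
            self._row.append(self._cell)
            self._cell = None
        elif tag == "tr" and self._row is not None:
            self.rows.append(self._row)
            self._row = None

html = "<table><tr><th>name</th><th>qty</th></tr><tr><td>apples</td><td>3</td></tr></table>"
p = TableParser()
p.feed(html)
rows = p.rows
assert rows == [["name", "qty"], ["apples", "3"]]
```

In practice `pandas.read_html` plus xlwings does the same job in fewer lines, if pandas is an option.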
Thanks! | open | 2024-09-08T14:50:45Z | 2024-09-08T20:11:03Z | https://github.com/xlwings/xlwings/issues/2511 | [] | jerronl | 1 |
FactoryBoy/factory_boy | sqlalchemy | 193 | debug print (django.py, line 197) | `print("Returning file with filename=%r, contents=%r" % (filename, content))`
| closed | 2015-03-27T12:47:15Z | 2015-03-27T12:52:31Z | https://github.com/FactoryBoy/factory_boy/issues/193 | [] | kwist-sgr | 1 |
QuivrHQ/quivr | api | 3,568 | Retrieval + generation eval: run quivr RAG on dataset questions | We should take all questions in the reference dataset, and perform RAG using a given retrieval/generation workflow.
CRAG uses the following prompt, which allows for three different types of answer:
1. Full answer
2. An answer of the type "I don’t know"
3. An answer of the type "invalid question"
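A hypothetical helper (names are mine, not from quivr) for bucketing generated answers into these three categories when scoring:

```python
def classify_answer(answer: str) -> str:
    """Map a raw model answer onto the three CRAG answer categories."""
    normalized = answer.strip().lower().replace("’", "'").rstrip(".")
    if normalized == "invalid question":
        return "invalid_question"
    if normalized in ("i don't know", "i dont know"):
        return "missing"        # the "I don't know" bucket
    return "answer"             # a full answer, to be graded for correctness

assert classify_answer("Invalid question.") == "invalid_question"
assert classify_answer("I don’t know") == "missing"
assert classify_answer("42 km") == "answer"
```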
```python
""" You are given a Question, References and
the time when it was asked in the Pacific Time Zone (PT), referred to as "Query Time". The query
time is formatted as "mm/dd/yyyy, hh:mm:ss PT". The references may or may not help answer the
question. Your task is to answer the question in as few words as possible.
Please follow these guidelines when formulating your answer:
1. If the question contains a false premise or assumption, answer “invalid question”.
2. If you are uncertain or don’t know the answer, respond with “I don’t know”.
### Question
{query}
### Query Time
{query_time}
### References
{references}
### Answer
"""
``` | closed | 2025-01-28T15:17:03Z | 2025-02-19T08:52:04Z | https://github.com/QuivrHQ/quivr/issues/3568 | [] | jacopo-chevallard | 1 |
google-research/bert | nlp | 873 | WWM for Multilingual | Creating an issue to track Whole Word Masking for Multilingual.
The [WWM update](https://github.com/google-research/bert/commit/0fce551b55caabcfba52c61e18f34b541aef186a) came in Q2, but the [multilingual model](https://github.com/google-research/bert/blob/master/multilingual.md) is still from Q4 2018.
In #841 another user requested it for Hindi. If only for environmental reasons it would be ideal to have an official one for all supported languages. | open | 2019-10-08T13:14:53Z | 2019-10-08T13:17:50Z | https://github.com/google-research/bert/issues/873 | [] | bittlingmayer | 0 |
piskvorky/gensim | data-science | 2,983 | track training loss while using doc2vec issue. |
#### Problem description
I am trying to track training loss with the doc2vec algorithm, and it failed. Is there a way to track training loss in doc2vec?
Also, I didn't find any documentation on performing early stopping during the doc2vec training phase.
The similarity score varies a lot depending on the number of epochs, and I want to stop training via callbacks once the model has reached optimal capacity. I have used Keras, which has an EarlyStopping feature; I'm not sure how to do this with gensim models.
Any response is appreciated. Thank you!
#### Steps/code/corpus to reproduce
```
# Imports and constants (missing from the original snippet; the values for
# max_epochs / vec_size are taken from the run output below)
import multiprocessing

from nltk.tokenize import word_tokenize
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from gensim.models.callbacks import CallbackAny2Vec

cores = multiprocessing.cpu_count()
max_epochs = 50
vec_size = 100


class EpochLogger(CallbackAny2Vec):
    '''Callback to log information about training'''

    def __init__(self):
        self.epoch = 0

    def on_epoch_begin(self, model):
        print("Epoch #{} start".format(self.epoch))

    def on_epoch_end(self, model):
        print("Epoch #{} end".format(self.epoch))
        self.epoch += 1


epoch_logger = EpochLogger()


class LossLogger(CallbackAny2Vec):
    '''Output loss at each epoch'''

    def __init__(self):
        self.epoch = 1
        self.losses = []

    def on_epoch_begin(self, model):
        print(f'Epoch: {self.epoch}', end='\t')

    def on_epoch_end(self, model):
        loss = model.get_latest_training_loss()
        self.losses.append(loss)
        print(f' Loss: {loss}')
        self.epoch += 1


loss_logger = LossLogger()


def train_model(data, ids, destination, alpha):
    print('\tTagging data .. ')
    tagged_data = [TaggedDocument(words=word_tokenize(str(_d).lower()), tags=[str(ids[i])])
                   for i, _d in enumerate(data)]
    print('\tPreparing model with the following parameters: epochs = {}, vector_size = {}, alpha = {} .. '
          .format(max_epochs, vec_size, alpha))
    model = Doc2Vec(vector_size=vec_size,
                    workers=cores // 2,
                    alpha=alpha,   # initial learning rate
                    min_count=2,   # ignore words having a total frequency below this
                    dm_mean=1,     # take mean of word2vec and doc2vec
                    dm=1,          # PV-DM over PV-DBOW
                    callbacks=[epoch_logger, loss_logger])
    model.build_vocab(tagged_data, keep_raw_vocab=False, progress_per=100000)
```
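On the early-stopping question: gensim has no built-in equivalent of Keras's EarlyStopping, but the patience logic itself is tiny and framework-agnostic — a sketch one could drive from an `on_epoch_end` callback, assuming you have some per-epoch score (e.g. a validation similarity metric) to feed it:

```python
class EarlyStopper:
    """Track a metric across epochs; signal stop after `patience` epochs without improvement."""

    def __init__(self, patience=3, min_delta=0.0):
        self.patience = patience
        self.min_delta = min_delta
        self.best = float("inf")
        self.bad_epochs = 0

    def step(self, value):
        # lower is better; returns True when training should stop
        if value < self.best - self.min_delta:
            self.best = value
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience

stopper = EarlyStopper(patience=2)
losses = [1.0, 0.8, 0.79, 0.79, 0.79]
stops = [stopper.step(l) for l in losses]
assert stops == [False, False, False, False, True]
```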
#### Versions
The run output:
```
2017 4673
Tagging data ..
Preparing model with the following parameters: epochs = 50, vector_size = 100, alpha = 0.01 ..
Beginning model training ..
Iteration 0
Learning Rate = 0.01
Epoch #0 start
Epoch: 1 Epoch #0 end
Traceback (most recent call last):
loss = model.get_latest_training_loss()
AttributeError: 'Doc2Vec' object has no attribute 'get_latest_training_loss'
```
| open | 2020-10-18T14:18:58Z | 2021-06-28T17:57:34Z | https://github.com/piskvorky/gensim/issues/2983 | [] | skwolvie | 5 |
christabor/flask_jsondash | flask | 78 | Consider export json config option | For saving the raw config field. This is simply a downloadable option; a simple route with a Content-Disposition header would solve the problem.
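A sketch of what such a route could return — the helper name and default filename here are made up; in Flask you would wrap the result in `Response(body, headers=headers)` or use `send_file`:

```python
import json

def export_config_response(config, filename="dashboard-config.json"):
    """Build the (body, headers) pair for a raw-JSON download."""
    body = json.dumps(config, indent=2)
    headers = {
        "Content-Type": "application/json",
        # Content-Disposition: attachment is what triggers the browser download
        "Content-Disposition": 'attachment; filename="{}"'.format(filename),
    }
    return body, headers

body, headers = export_config_response({"name": "my-dashboard", "modules": []})
assert json.loads(body)["name"] == "my-dashboard"
assert headers["Content-Disposition"].startswith("attachment")
```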
| closed | 2016-12-07T18:09:17Z | 2017-03-02T20:53:14Z | https://github.com/christabor/flask_jsondash/issues/78 | [
"enhancement",
"new feature"
] | christabor | 0 |
huggingface/diffusers | deep-learning | 10,180 | Can't load multiple loras when using Flux Control LoRA | ### Describe the bug
I was trying out the FluxControlPipeline with the Control LoRA introduced in #9999, but had issues loading multiple LoRAs.
For example, if I load the depth lora first and then the 8-step lora, it errors on the 8-step lora, and if I load the 8-step lora first and then the depth lora, it errors when loading the depth lora.
### Reproduction
```
from diffusers import FluxControlPipeline
from huggingface_hub import hf_hub_download
import torch
control_pipe = FluxControlPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16).to("cuda")
control_pipe.load_lora_weights("black-forest-labs/FLUX.1-Depth-dev-lora")
control_pipe.load_lora_weights(hf_hub_download("ByteDance/Hyper-SD", "Hyper-FLUX.1-dev-8steps-lora.safetensors"))
```
### Logs
```shell
AttributeError Traceback (most recent call last)
Cell In[6], line 8
5 control_pipe = FluxControlPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16).to("cuda")
7 control_pipe.load_lora_weights("black-forest-labs/FLUX.1-Depth-dev-lora")
----> 8 control_pipe.load_lora_weights(
9 hf_hub_download(
10 "ByteDance/Hyper-SD", "Hyper-FLUX.1-dev-8steps-lora.safetensors"
11 ),
12 adapter_name="HyperFlux",
13 )
File ~/.venv/lib/python3.10/site-packages/diffusers/loaders/lora_pipeline.py:1856, in FluxLoraLoaderMixin.load_lora_weights(self, pretrained_model_name_or_path_or_dict, adapter_name, **kwargs)
1849 transformer_norm_state_dict = {
1850 k: state_dict.pop(k)
1851 for k in list(state_dict.keys())
1852 if "transformer." in k and any(norm_key in k for norm_key in self._control_lora_supported_norm_keys)
1853 }
1855 transformer = getattr(self, self.transformer_name) if not hasattr(self, "transformer") else self.transformer
-> 1856 has_param_with_expanded_shape = self._maybe_expand_transformer_param_shape_or_error_(
1857 transformer, transformer_lora_state_dict, transformer_norm_state_dict
1858 )
1860 if has_param_with_expanded_shape:
1861 logger.info(
1862 "The LoRA weights contain parameters that have different shapes that expected by the transformer. "
1863 "As a result, the state_dict of the transformer has been expanded to match the LoRA parameter shapes. "
1864 "To get a comprehensive list of parameter names that were modified, enable debug logging."
1865 )
File ~/.venv/lib/python3.10/site-packages/diffusers/loaders/lora_pipeline.py:2316, in FluxLoraLoaderMixin._maybe_expand_transformer_param_shape_or_error_(cls, transformer, lora_state_dict, norm_state_dict, prefix)
2314 if isinstance(module, torch.nn.Linear):
2315 module_weight = module.weight.data
-> 2316 module_bias = module.bias.data if hasattr(module, "bias") else None
2317 bias = module_bias is not None
2319 lora_A_weight_name = f"{name}.lora_A.weight"
AttributeError: 'NoneType' object has no attribute 'data'
```
### System Info
- 🤗 Diffusers version: 0.32.0.dev0
- Platform: Linux-5.15.0-124-generic-x86_64-with-glibc2.35
- Running on Google Colab?: No
- Python version: 3.10.12
- PyTorch version (GPU?): 2.5.1+cu124 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Huggingface_hub version: 0.26.5
- Transformers version: 4.47.0
- Accelerate version: 1.2.0
- PEFT version: 0.14.0
- Bitsandbytes version: not installed
- Safetensors version: 0.4.5
- xFormers version: not installed
- Accelerator: NVIDIA H100 80GB HBM3, 81559 MiB
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
@a-r-r-o-w @sayakpaul | closed | 2024-12-10T21:40:24Z | 2024-12-20T09:00:33Z | https://github.com/huggingface/diffusers/issues/10180 | [
"bug",
"help wanted",
"lora"
] | jonathanyin12 | 11 |
ultralytics/yolov5 | pytorch | 12,429 | Multi-node multi-GPU training wont run after loading images | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
Hello,
Thank you for taking the time to read my post. I have been trying to run training on a supercomputer using torch.distributed.run in a multi-node, multi-GPU setup on over 130,000 images at 1536x2048 resolution. It seems the nodes are having trouble communicating with each other to actually start training. In this example, I have 2 nodes with 1 GPU per node. I use the following bash script with SLURM commands to request the necessary resources for this job:
```bash
#!/bin/bash
#SBATCH --job-name=yolov5_training
#SBATCH --partition=xeon-g6-volta
#SBATCH --output=./jobs/train%A.out
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=1
#SBATCH --gres=gpu:volta:1
#SBATCH --exclusive

#Load necessary modules
source /etc/profile
module load anaconda/2023a-pytorch cuda/11.8

srun --nodes=$SLURM_NNODES --ntasks-per-node=$SLURM_NTASKS_PER_NODE bash -c '
#Get the total number of nodes allocated
N=$(scontrol show hostname | wc -l)

#Get the hostname of the current node
current_node=$(hostname)

#Assign node_rank based on the current node
if [ "$current_node" = "$(scontrol show hostnames | head -n1)" ]; then
    node_rank=0  # Set node_rank to 0 for the master node
else
    # Determine the node rank for non-master nodes
    node_rank=$(($(scontrol show hostnames | grep -n "$current_node" | cut -d":" -f1) - 1))
fi

#Print the node_rank for each node
echo "Node $current_node has rank $node_rank"

#Set the master address and port only for the first task (index 0)
if [ $node_rank -eq 0 ]; then
    MASTER_ADDR=$(hostname -I)
    rm -f shared_ip.sh
    echo "export MASTER_ADDR=$MASTER_ADDR" > shared_ip.sh
else
    # For other tasks, wait for a short duration to ensure the master has set the variable
    sleep 5
fi

#Wait for the master to set the variable
while [ ! -f shared_ip.sh ]; do
    sleep 5
done

#Source the shared file to set the MASTER_ADDR variable
source shared_ip.sh
echo "MASTER_ADDR="$MASTER_ADDR

MY_ADDR=$(hostname -I)
echo "MY_ADDRESS="$MY_ADDR
MASTER_PORT=43829

echo python -m torch.distributed.run \
    --nproc_per_node $SLURM_NTASKS_PER_NODE \
    --nnodes $SLURM_NNODES \
    --node_rank $node_rank \
    --master_addr "$MASTER_ADDR" \
    --master_port $MASTER_PORT \
    train.py --data training.yaml --weights yolov5s.pt --img 2048 --project 'runs/train/11-15'

echo "Begin Training: Node "$node_rank
python -m torch.distributed.run \
    --nproc_per_node $SLURM_NTASKS_PER_NODE \
    --nnodes $SLURM_NNODES \
    --node_rank $node_rank \
    --master_addr "$MASTER_ADDR" \
    --master_port $MASTER_PORT \
    train.py --data training.yaml --weights yolov5s.pt --img 2048 --project 'runs/train/11-15'
```
In this bash script, I allocate the resources, retrieve the Master Address for the master node and share it with the secondary node. Here are the outputs for each node:
Node 0:

```
Node d-8-3-2 has rank 0
MASTER_ADDR=172.31.130.37
MY_ADDRESS=172.31.130.37
python -m torch.distributed.run --nproc_per_node 1 --nnodes 2 --node_rank 0 --master_addr 172.31.130.37 --master_port 43829 train.py --data training.yaml --weights yolov5s.pt --img 2048 --project runs/train/11-15
Begin Training: Node 0
```

Node 1:

```
Node d-8-4-1 has rank 1
MASTER_ADDR=172.31.130.37
MY_ADDRESS=172.31.130.38
python -m torch.distributed.run --nproc_per_node 1 --nnodes 2 --node_rank 1 --master_addr 172.31.130.37 --master_port 43829 train.py --data training.yaml --weights yolov5s.pt --img 2048 --project runs/train/11-15
Begin Training: Node 1
```
Everything seems to be correct so far. The master address is being shared correctly and the node ranks are displayed properly. The master node then outputs the training (weights, data, epochs, etc.) and loads in the images.
```
train: weights=yolov5s.pt, cfg=, data=training.yaml, hyp=data/hyps/hyp.scratch-low.yaml, epochs=100, batch_size=16, imgsz=2048, rect=False, resume=False, nosave=False, noval=False, noautoanchor=False, noplots=False, evolve=None, bucket=, cache=None, image_weights=False, device=, multi_scale=False, single_cls=False, optimizer=SGD, sync_bn=False, workers=8, project=runs/train/11-15, name=exp, exist_ok=False, quad=False, cos_lr=False, label_smoothing=0.0, patience=100, freeze=[0], save_period=-1, seed=0, local_rank=-1, entity=None, upload_dataset=False, bbox_interval=-1, artifact_alias=latest
github: skipping check (offline), for updates see https://github.com/ultralytics/yolov5
YOLOv5 🚀 v7.0-232-g1c60c53 Python-3.9.16 torch-2.0.0+cu117 CUDA:0 (Tesla V100-PCIE-32GB, 32501MiB)
hyperparameters: lr0=0.01, lrf=0.01, momentum=0.937, weight_decay=0.0005, warmup_epochs=3.0, warmup_momentum=0.8, warmup_bias_lr=0.1, box=0.05, cls=0.5, cls_pw=1.0, obj=1.0, obj_pw=1.0, iou_t=0.2, anchor_t=4.0, fl_gamma=0.0, hsv_h=0.015, hsv_s=0.7, hsv_v=0.4, degrees=0.0, translate=0.1, scale=0.5, shear=0.0, perspective=0.0, flipud=0.0, fliplr=0.5, mosaic=1.0, mixup=0.0, copy_paste=0.0
Comet: run 'pip install comet_ml' to automatically track and visualize YOLOv5 🚀 runs in Comet
TensorBoard: Start with 'tensorboard --logdir runs/train/11-15', view at http://localhost:6006/
Overriding model.yaml nc=80 with nc=1

from n params module arguments
0 -1 1 3520 models.common.Conv [3, 32, 6, 2, 2]
1 -1 1 18560 models.common.Conv [32, 64, 3, 2]
2 -1 1 18816 models.common.C3 [64, 64, 1]
3 -1 1 73984 models.common.Conv [64, 128, 3, 2]
4 -1 2 115712 models.common.C3 [128, 128, 2]
5 -1 1 295424 models.common.Conv [128, 256, 3, 2]
6 -1 3 625152 models.common.C3 [256, 256, 3]
7 -1 1 1180672 models.common.Conv [256, 512, 3, 2]
8 -1 1 1182720 models.common.C3 [512, 512, 1]
9 -1 1 656896 models.common.SPPF [512, 512, 5]
10 -1 1 131584 models.common.Conv [512, 256, 1, 1]
11 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
12 [-1, 6] 1 0 models.common.Concat [1]
13 -1 1 361984 models.common.C3 [512, 256, 1, False]
14 -1 1 33024 models.common.Conv [256, 128, 1, 1]
15 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
16 [-1, 4] 1 0 models.common.Concat [1]
17 -1 1 90880 models.common.C3 [256, 128, 1, False]
18 -1 1 147712 models.common.Conv [128, 128, 3, 2]
19 [-1, 14] 1 0 models.common.Concat [1]
20 -1 1 296448 models.common.C3 [256, 256, 1, False]
21 -1 1 590336 models.common.Conv [256, 256, 3, 2]
22 [-1, 10] 1 0 models.common.Concat [1]
23 -1 1 1182720 models.common.C3 [512, 512, 1, False]
24 [17, 20, 23] 1 16182 models.yolo.Detect [1, [[10, 13, 16, 30, 33, 23], [30, 61, 62, 45, 59, 119], [116, 90, 156, 198, 373, 326]], [128, 256, 512]]
Model summary: 214 layers, 7022326 parameters, 7022326 gradients, 15.9 GFLOPs

Transferred 343/349 items from yolov5s.pt
AMP: checks passed ✅
optimizer: SGD(lr=0.01) with parameter groups 57 weight(decay=0.0), 60 weight(decay=0.0005), 60 bias
train: Scanning /data1/groups/arclab_optnav/yolov5/training/labels/train2023/trial-11-02-23/povray_images.cache... 133669 images, 44 backgrounds, 0 cotrain: Scanning /data1/groups/arclab_optnav/yolov5/training/labels/train2023/trial-11-02-23/povray_images.cache... 133669 images, 44 backgrounds, 0 corrupt: 100%|██████████| 133713/133713 [00:00<?, ?it/s]
train: Scanning /data1/groups/arclab_optnav/yolov5/training/labels/train2023/trial-11-02-23/povray_images.cache... 133669 images, 44 backgrounds, 0 cotrain: Scanning /data1/groups/arclab_optnav/yolov5/training/labels/train2023/trial-11-02-23/povray_images.cache... 133669 images, 44 backgrounds, 0 corrupt: 100%|██████████| 133713/133713 [00:00<?, ?it/s]
val: Scanning /data1/groups/arclab_optnav/yolov5/training/labels/validation/trial-11-02-23.cache... 32550 images, 13121 backgrounds, 0 corrupt: 100%|█val: Scanning /data1/groups/arclab_optnav/yolov5/training/labels/validation/trial-11-02-23.cache... 32550 images, 13121 backgrounds, 0 corrupt: 100%|██████████| 45671/45671 [00:00<?, ?it/s]
AutoAnchor: 4.03 anchors/target, 0.995 Best Possible Recall (BPR). Current anchors are a good fit to dataset ✅
Plotting labels to runs/train/11-15/exp7/labels.jpg...
```
One thing I'll ask real quick: Given that we are doing multi-GPU training and that the master node is the only machine outputting information, should it show all the GPUs? It only shows one in the output. Also, I noticed that it outputs the scanning for train 4x and val 2x. Is that correct?
Anyway, the error occurs after all of that due to a timeout:
----------------------------------------------------------------------------------------------------------
[E ProcessGroupNCCL.cpp:828] [Rank 1] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=4, OpType=ALLGATHER, Timeout(ms)=1800000) ran for 1800081 milliseconds before timing out.
[E ProcessGroupNCCL.cpp:455] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
[E ProcessGroupNCCL.cpp:460] To avoid data inconsistency, we are taking the entire process down.
terminate called after throwing an instance of 'std::runtime_error'
what(): [Rank 1] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=4, OpType=ALLGATHER, Timeout(ms)=1800000) ran for 1800081 milliseconds before timing out.
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: -6) local_rank: 0 (pid: 95313) of binary: /state/partition1/llgrid/pkg/anaconda/anaconda3-2023a-pytorch/bin/python
[E ProcessGroupNCCL.cpp:455] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
[E ProcessGroupNCCL.cpp:460] To avoid data inconsistency, we are taking the entire process down.
terminate called after throwing an instance of 'std::runtime_error'
what(): NCCL error: remote process exited or there was a network error, NCCL version 2.14.3
ncclRemoteError: A call failed possibly due to a network error or a remote process exiting prematurely.
Last error:
NET/IB : Got completion from peer 172.31.130.38<42217> with error 12, opcode 32753, len 0, vendor err 129
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: -6) local_rank: 0 (pid: 95884) of binary: /state/partition1/llgrid/pkg/anaconda/anaconda3-2023a-pytorch/bin/python
Traceback (most recent call last):
File "/state/partition1/llgrid/pkg/anaconda/anaconda3-2023a-pytorch/lib/python3.9/runpy.py", line 197, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/state/partition1/llgrid/pkg/anaconda/anaconda3-2023a-pytorch/lib/python3.9/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/state/partition1/llgrid/pkg/anaconda/anaconda3-2023a-pytorch/lib/python3.9/site-packages/torch/distributed/run.py", line 798, in <module>
Traceback (most recent call last):
File "/state/partition1/llgrid/pkg/anaconda/anaconda3-2023a-pytorch/lib/python3.9/runpy.py", line 197, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/state/partition1/llgrid/pkg/anaconda/anaconda3-2023a-pytorch/lib/python3.9/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/state/partition1/llgrid/pkg/anaconda/anaconda3-2023a-pytorch/lib/python3.9/site-packages/torch/distributed/run.py", line 798, in <module>
main()
File "/state/partition1/llgrid/pkg/anaconda/anaconda3-2023a-pytorch/lib/python3.9/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 346, in wrapper
main()
File "/state/partition1/llgrid/pkg/anaconda/anaconda3-2023a-pytorch/lib/python3.9/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 346, in wrapper
return f(*args, **kwargs)
File "/state/partition1/llgrid/pkg/anaconda/anaconda3-2023a-pytorch/lib/python3.9/site-packages/torch/distributed/run.py", line 794, in main
return f(*args, **kwargs)
File "/state/partition1/llgrid/pkg/anaconda/anaconda3-2023a-pytorch/lib/python3.9/site-packages/torch/distributed/run.py", line 794, in main
run(args)
File "/state/partition1/llgrid/pkg/anaconda/anaconda3-2023a-pytorch/lib/python3.9/site-packages/torch/distributed/run.py", line 785, in run
run(args)
File "/state/partition1/llgrid/pkg/anaconda/anaconda3-2023a-pytorch/lib/python3.9/site-packages/torch/distributed/run.py", line 785, in run
elastic_launch(
File "/state/partition1/llgrid/pkg/anaconda/anaconda3-2023a-pytorch/lib/python3.9/site-packages/torch/distributed/launcher/api.py", line 134, in __call__
elastic_launch(
File "/state/partition1/llgrid/pkg/anaconda/anaconda3-2023a-pytorch/lib/python3.9/site-packages/torch/distributed/launcher/api.py", line 134, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/state/partition1/llgrid/pkg/anaconda/anaconda3-2023a-pytorch/lib/python3.9/site-packages/torch/distributed/launcher/api.py", line 250, in launch_agent
return launch_agent(self._config, self._entrypoint, list(args))
File "/state/partition1/llgrid/pkg/anaconda/anaconda3-2023a-pytorch/lib/python3.9/site-packages/torch/distributed/launcher/api.py", line 250, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
======================================================
train.py FAILED
------------------------------------------------------
Failures:
<NO_OTHER_FAILURES>
------------------------------------------------------
Root Cause (first observed failure):
[0]:
time : 2023-11-15_23:13:49
host : d-8-3-2.supercloud.mit.edu
rank : 0 (local_rank: 0)
exitcode : -6 (pid: 95884)
error_file: <N/A>
traceback : Signal 6 (SIGABRT) received by PID 95884
======================================================
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
======================================================
train.py FAILED
------------------------------------------------------
Failures:
<NO_OTHER_FAILURES>
------------------------------------------------------
Root Cause (first observed failure):
[0]:
time : 2023-11-15_23:09:34
host : d-8-4-1.supercloud.mit.edu
rank : 1 (local_rank: 0)
exitcode : -6 (pid: 95313)
error_file: <N/A>
traceback : Signal 6 (SIGABRT) received by PID 95313
======================================================
srun: error: d-8-4-1: task 1: Exited with exit code 1
srun: error: d-8-3-2: task 0: Exited with exit code 1
----------------------------------------------------------------------------------------------------------
Would somebody be able to assist me in figuring out the issue? My guess is that the nodes are not correctly communicating with each other. I have been struggling with this for weeks now :( Training on a single node works no problem, but multi-node seems to be an issue.
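A hedged sketch of one common mitigation (an assumption, not a confirmed fix for this cluster): the `Timeout(ms)=1800000` in the watchdog output is the default 30-minute NCCL collective timeout, and `torch.distributed.init_process_group` accepts a `timeout` argument to raise it, so a long rank-0 dataset scan is less likely to trip the watchdog on the waiting ranks.

```python
import datetime

# The watchdog message above reports Timeout(ms)=1800000 -- the default
# 30-minute NCCL collective timeout.
DEFAULT_NCCL_TIMEOUT = datetime.timedelta(milliseconds=1_800_000)

# Assumption: two hours comfortably covers the slow rank-0 dataset scan.
EXTENDED_TIMEOUT = datetime.timedelta(hours=2)

def process_group_kwargs(timeout: datetime.timedelta) -> dict:
    """Kwargs one would pass to torch.distributed.init_process_group to
    raise the collective timeout (the torch import is omitted here so the
    sketch stays runnable on its own)."""
    return {"backend": "nccl", "timeout": timeout}

print(DEFAULT_NCCL_TIMEOUT.total_seconds())  # 1800.0
print(process_group_kwargs(EXTENDED_TIMEOUT)["timeout"] > DEFAULT_NCCL_TIMEOUT)  # True
```

Raising the timeout only hides a slow startup, of course; if the real cause is broken inter-node networking (as the `NET/IB` completion error suggests), the hang will just time out later.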
----------------------------------------------------------------------------------------------------------
### Additional
_No response_ | closed | 2023-11-25T20:46:20Z | 2024-01-21T00:23:40Z | https://github.com/ultralytics/yolov5/issues/12429 | [
"question",
"Stale"
] | Tmkilduff | 8 |
ultralytics/yolov5 | pytorch | 12,723 | Retraining yolov5 for additional data | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
I have trained a YOLOv5 model on my custom dataset starting from the pre-trained weights. It contains approximately 13,000 images for training and validation, spread over 8 classes. For one class, more data needs to be added to generalize prediction across various cases. How can I retrain the model to fit the newly added data without retraining it from scratch? It takes around half an hour to complete one epoch.
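A hedged sketch (the paths and dataset yaml below are placeholders, not taken from the issue): the usual approach is to launch a new training run whose `--weights` argument points at the previous run's checkpoint (`best.pt` or `last.pt`) instead of the stock `yolov5s.pt`, so training continues from the learned state rather than from scratch.

```python
# Build a train.py invocation that fine-tunes from a previous checkpoint.
# Assumption: "runs/train/exp/weights/best.pt" stands in for your own run.
def finetune_cmd(weights: str = "runs/train/exp/weights/best.pt",
                 data: str = "data/custom.yaml",
                 epochs: int = 30) -> list:
    return [
        "python", "train.py",
        "--weights", weights,   # previous checkpoint, not the stock yolov5s.pt
        "--data", data,         # dataset yaml now including the enlarged class
        "--epochs", str(epochs),
    ]

print(" ".join(finetune_cmd()))
```

Fewer epochs and a reduced learning rate (via a custom `--hyp` file with a lower `lr0`) help keep the already-learned classes from drifting; that tweak is likewise a suggestion, not something YOLOv5 enforces.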
### Additional
_No response_ | closed | 2024-02-09T07:21:18Z | 2024-03-22T00:20:02Z | https://github.com/ultralytics/yolov5/issues/12723 | [
"question",
"Stale"
] | humairaneha | 2 |
tensorly/tensorly | numpy | 465 | fetch_indian_pines broken | fetch_indian_pines (and therefore it's test) fail, e.g. https://github.com/tensorly/tensorly/actions/runs/3660881883/jobs/6198153331
This seems to be due to the server hosting the data not supporting Open SSL 3
https://stackoverflow.com/questions/71603314/ssl-error-unsafe-legacy-renegotiation-disabled
# Todo
- [x] Remove test skip when fixed #465 | closed | 2022-12-11T00:06:13Z | 2023-01-15T00:38:09Z | https://github.com/tensorly/tensorly/issues/465 | [] | JeanKossaifi | 4 |
wkentaro/labelme | computer-vision | 968 | [BUG] Deleting a point sometimes deletes the adjacent point instead | **Describe the bug**
When deleting a point by pressing shift and clicking on the point, the adjacent point gets auto-selected. Moving the mouse even a little causes the adjacent point to snap to the cursor position. This effectively behaves as if the adjacent point was deleted and not the one that was clicked on.
**To Reproduce**
Steps to reproduce the behavior:
1. Create a polygon
2. Press and hold the shift key
3. Click on a point to delete it. The point will be deleted. Note how the adjacent point gets auto-selected.
4. While keeping the shift key held down, move the mouse a little bit. Note how the adjacent point was moved to the current position.
**Expected behavior**
After deleting a point, all points should remain deselected and not snap to the mouse when it is moved.
**Desktop (please complete the following information):**
- OS: Windows 10 64-bit
- Labelme Version: 4.6.0 | open | 2021-12-25T22:30:46Z | 2022-09-26T14:36:56Z | https://github.com/wkentaro/labelme/issues/968 | [
"issue::bug",
"priority: medium"
] | armenforget | 0 |