| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
dfm/corner.py | data-visualization | 155 | Overplot_lines and overplot_points do not deal with reversed corner plots | Hi there,
If you create a reversed corner plot using `reverse=True`, the helpful new `overplot_` functions do not account for this and merrily put their output in the transposed axes of the graph, as though the plot were not reversed.
A simple MWE of the problem would be:
```python3
# Taken from the docs
import corner
import numpy as np
ndim, nsamples = 4, 50000
np.random.seed(1234)
data1 = np.random.randn(ndim * 4 * nsamples // 5).reshape([4 * nsamples // 5, ndim])
mean = 4*np.random.rand(ndim)
data2 = (mean[None, :] + np.random.randn(ndim * nsamples // 5).reshape([nsamples // 5, ndim]))
samples = np.vstack([data1, data2])
value1 = mean
# This is the empirical mean of the sample:
value2 = np.mean(samples, axis=0)
###################################################
# Make the base corner plot -- but with reverse=True!
###################################################
figure = corner.corner(samples, reverse=True)
corner.overplot_lines(figure, value1, color="C1")
corner.overplot_points(figure, value1[None], marker="s", color="C1")
corner.overplot_lines(figure, value2, color="C2")
corner.overplot_points(figure, value2[None], marker="s", color="C2")
```
This produces the obviously wrong result:
*(attached screenshot showing the misplaced lines and points omitted)*
I believe this is quite easy to fix by adding a boolean kwarg, `reverse=False` by default, to both `overplot_lines` and `overplot_points`, and altering the loop when it is `True`. I could easily submit a PR for this if it would be helpful. Alternatively, the `overplot_*` functions might be able to inspect the figure itself and detect whether it was created with `reverse=True`, but that is a bit more involved and I am not completely sure how general it would be -- thoughts welcome! | closed | 2021-03-17T21:41:51Z | 2021-03-18T17:20:39Z | https://github.com/dfm/corner.py/issues/155 | [] | NeutralKaon | 2 |
dynaconf/dynaconf | fastapi | 1149 | [RFC] Remove upperfy() | In 4.0 the schema will be the source of truth, so we must follow the casing defined in the schema.
- `find_correct_casing` will fetch the proper casing from the schema
- settings sources will be case-insensitive, so both `FOO` and `foo` will feed the same `foo: str` field on the schema. | open | 2024-07-07T14:47:13Z | 2024-07-08T18:38:24Z | https://github.com/dynaconf/dynaconf/issues/1149 | [
"Not a Bug",
"RFC",
"typed_dynaconf",
"4.0-breaking-change"
] | rochacbruno | 0 |
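A minimal sketch of the case-insensitive feeding this RFC describes, assuming a schema that exposes its field names (`find_correct_casing` and `feed_settings` here are illustrative stand-ins, not dynaconf's actual API):

```python
from typing import Any, Dict, Optional, Set

def find_correct_casing(key: str, schema_fields: Set[str]) -> Optional[str]:
    """Return the schema's canonical casing for `key`, if any."""
    lowered = key.lower()
    for field in schema_fields:
        if field.lower() == lowered:
            return field
    return None

def feed_settings(raw: Dict[str, Any], schema_fields: Set[str]) -> Dict[str, Any]:
    """Map raw source keys (any casing) onto the schema's field names."""
    out: Dict[str, Any] = {}
    for key, value in raw.items():
        canonical = find_correct_casing(key, schema_fields)
        if canonical is not None:
            out[canonical] = value
    return out
```

Under this scheme an env source providing `FOO=1` and a file source providing `foo: 1` both land on the same schema field, so no `upperfy()` step is needed.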
inventree/InvenTree | django | 9030 | Logging in creates Permission Denied | ### Please verify that this bug has NOT been raised before.
- [x] I checked and didn't find a similar issue
### Describe the bug*
I have InvenTree installed on Ubuntu 22.4, using the one-line installer. Yesterday I updated InvenTree from 16.5 to 17.4, logged in, worked productively, and experienced no problems.

Late in the afternoon, a user reported a login failure. When he tried to log in, the login screen persisted without showing any messages; when he used a bad password, the login screen showed a message that the password was not correct. I went to the admin section and changed his password. As a side note, I also removed a few users who were no longer using InvenTree. Subsequently, I was logged out of InvenTree. When I tried to log in with my username and password (superuser) or as the admin, I saw the same issue where the system would not let me in.

This morning I checked and still could not log in, with the same results. I then updated InvenTree to V17.5 and cleared the last 24 hours of my cookies and cache. Now I log in and then get a blank screen and a tab title that says Permission Denied. I have tried both Edge and Chrome and get the same message. I reviewed the install log and did not see any errors, except that collecting plugins failed. I am now not sure what the next step should be. Thanks for any pointers on what to try next.
### Steps to Reproduce
Log in
### Expected behaviour
To log in
### Deployment Method
- [ ] Docker
- [ ] Package
- [x] Bare metal
- [ ] Other - added info in Steps to Reproduce
### Version Information
V0.17.5
### Please verify if you can reproduce this bug on the demo site.
- [ ] I can reproduce this bug on the demo site.
### Relevant log output
```shell
inventree@inventree-VM40B:~$ sudo apt-get reinstall inventree
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
0 upgraded, 0 newly installed, 1 reinstalled, 0 to remove and 1 not upgraded.
Need to get 0 B/74.3 MB of archives.
After this operation, 0 B of additional disk space will be used.
(Reading database ... 222347 files and directories currently installed.)
Preparing to unpack .../inventree_0.17.5-1738551257.0f9bddbc.focal_amd64.deb ...
# PRI01| Running preinstall script - start - Tue Feb 4 07:24:08 AM EST 2025
# PRI02| Clearing precompiled files
# PRI03| Running preinstall script - done - Tue Feb 4 07:24:10 AM EST 2025
# PRI01| Running preinstall script - start - Tue Feb 4 07:24:10 AM EST 2025
# PRI02| Clearing precompiled files
# PRI03| Running preinstall script - done - Tue Feb 4 07:24:11 AM EST 2025
Unpacking inventree (0.17.5-1738551257.0f9bddbc.focal) over (0.17.5-1738551257.0f9bddbc.focal) ...
Setting up inventree (0.17.5-1738551257.0f9bddbc.focal) ...
# POI01| Running postinstall script - start - Tue Feb 4 07:24:34 AM EST 2025
# POI01| Importing functions
# POI01| Functions imported
# POI03| Setting base environment variables
# POI03| Using existing config file: /etc/inventree/config.yaml
# POI03| Installing requirements
# POI03| Installed requirements
# POI03| Collected environment variables:
# POI03| INVENTREE_MEDIA_ROOT=/opt/inventree/data/media
# POI03| INVENTREE_STATIC_ROOT=/opt/inventree/data/static
# POI03| INVENTREE_BACKUP_DIR=/opt/inventree/data/backup
# POI03| INVENTREE_PLUGINS_ENABLED=true
# POI03| INVENTREE_PLUGIN_FILE=/etc/inventree/plugins.txt
# POI03| INVENTREE_SECRET_KEY_FILE=/etc/inventree/secret_key.txt
# POI03| INVENTREE_DB_ENGINE=sqlite3
# POI03| INVENTREE_DB_NAME=/opt/inventree/data/database.sqlite3
# POI03| INVENTREE_DB_USER=sampleuser
# POI03| INVENTREE_DB_HOST=samplehost
# POI03| INVENTREE_DB_PORT=sampleport
# POI04| Running in docker: no
# POI05| Using init command: systemctl
# POI06| Getting the IP address of the server via web service
# POI06| IP address is 64.4.107.154
# POI07| Python environment already present
# POI07| Found earlier used version: /opt/inventree/env/bin/python
# POI07| Using python command: /opt/inventree/env/bin/python
# POI08| Checking if update checks are needed
# POI08| Running upgrade
# POI08| Old version is: 0.17.5-1738551257.0f9bddbc.focal | 17 - updating to 0.17.5-1738551257.0f9bddbc.focal | 17
# POI09| Setting up python environment
Requirement already satisfied: invoke in ./env/lib/python3.9/site-packages (2.2.0)
Requirement already satisfied: wheel in ./env/lib/python3.9/site-packages (0.42.0)
# POI09| Stopping nginx
# POI09| Stopped nginx
# POI09| Setting up nginx to /etc/nginx/sites-enabled/inventree.conf
# POI09| Starting nginx
# POI09| Started nginx
# POI09| (Re)creating init scripts
Nothing to do.
Nothing to do.
# POI09| Enabling InvenTree on boot
# POI09| Enabled InvenTree on boot
# POI10| Admin data already exists - skipping
# POI11| Stopping InvenTree
# POI11| Stopped InvenTree
# POI12| Updating InvenTree
Requirement already satisfied: wheel in ./env/lib/python3.9/site-packages (0.42.0)
# POI12| u | Updating InvenTree installation...
# POI12| u | Installing required python packages from '/opt/inventree/src/backend/requirements.txt'
# POI12| u | Requirement already satisfied: pip in ./env/lib/python3.9/site-packages (25.0)
# POI12| u | Requirement already satisfied: setuptools in ./env/lib/python3.9/site-packages (75.6.0)
# POI12| u | Collecting setuptools
# POI12| u | Downloading setuptools-75.8.0-py3-none-any.whl.metadata (6.7 kB)
# POI12| u | Downloading setuptools-75.8.0-py3-none-any.whl (1.2 MB)
# POI12| u | ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.2/1.2 MB 8.7 MB/s eta 0:00:00
# POI12| u | Installing collected packages: setuptools
# POI12| u | Attempting uninstall: setuptools
# POI12| u | Found existing installation: setuptools 75.6.0
# POI12| u | Uninstalling setuptools-75.6.0:
# POI12| u | Successfully uninstalled setuptools-75.6.0
# POI12| u | Successfully installed setuptools-75.8.0
# POI12| u | Requirement already satisfied: asgiref==3.8.1 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 3)) (3.8.1)
# POI12| u | Requirement already satisfied: async-timeout==5.0.1 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 11)) (5.0.1)
# POI12| u | Requirement already satisfied: attrs==24.2.0 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 15)) (24.2.0)
# POI12| u | Requirement already satisfied: babel==2.16.0 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 21)) (2.16.0)
# POI12| u | Requirement already satisfied: bleach==6.2.0 in ./env/lib/python3.9/site-packages (from bleach[css]==6.2.0->-r /opt/inventree/src/backend/requirements.txt (line 25)) (6.2.0)
# POI12| u | Requirement already satisfied: brotli==1.1.0 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 29)) (1.1.0)
# POI12| u | Requirement already satisfied: certifi==2024.8.30 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 156)) (2024.8.30)
# POI12| u | Requirement already satisfied: cffi==1.17.1 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 162)) (1.17.1)
# POI12| u | Requirement already satisfied: charset-normalizer==3.4.0 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 233)) (3.4.0)
# POI12| u | Requirement already satisfied: coreapi==2.3.3 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 340)) (2.3.3)
# POI12| u | Requirement already satisfied: coreschema==0.0.4 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 344)) (0.0.4)
# POI12| u | Requirement already satisfied: cryptography==43.0.3 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 348)) (43.0.3)
# POI12| u | Requirement already satisfied: cssselect2==0.7.0 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 379)) (0.7.0)
# POI12| u | Requirement already satisfied: defusedxml==0.7.1 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 383)) (0.7.1)
# POI12| u | Requirement already satisfied: deprecated==1.2.15 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 389)) (1.2.15)
# POI12| u | Requirement already satisfied: diff-match-patch==20241021 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 397)) (20241021)
# POI12| u | Requirement already satisfied: dj-rest-auth==7.0.0 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 401)) (7.0.0)
# POI12| u | Requirement already satisfied: django==4.2.17 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 404)) (4.2.17)
# POI12| u | Requirement already satisfied: django-allauth==65.2.0 in ./env/lib/python3.9/site-packages (from django-allauth[openid,saml]==65.2.0->-r /opt/inventree/src/backend/requirements.txt (line 440)) (65.2.0)
# POI12| u | Requirement already satisfied: django-allauth-2fa==0.11.1 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 445)) (0.11.1)
# POI12| u | Requirement already satisfied: django-cleanup==9.0.0 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 449)) (9.0.0)
# POI12| u | Requirement already satisfied: django-cors-headers==4.6.0 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 453)) (4.6.0)
# POI12| u | Requirement already satisfied: django-crispy-forms==1.14.0 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 457)) (1.14.0)
# POI12| u | Requirement already satisfied: django-dbbackup==4.2.1 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 461)) (4.2.1)
# POI12| u | Requirement already satisfied: django-error-report-2==0.4.2 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 465)) (0.4.2)
# POI12| u | Requirement already satisfied: django-filter==24.3 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 469)) (24.3)
# POI12| u | Requirement already satisfied: django-flags==5.0.13 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 473)) (5.0.13)
# POI12| u | Requirement already satisfied: django-formtools==2.5.1 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 477)) (2.5.1)
# POI12| u | Requirement already satisfied: django-ical==1.9.2 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 481)) (1.9.2)
# POI12| u | Requirement already satisfied: django-import-export==3.3.9 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 485)) (3.3.9)
# POI12| u | Requirement already satisfied: django-ipware==7.0.1 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 489)) (7.0.1)
# POI12| u | Requirement already satisfied: django-js-asset==2.2.0 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 493)) (2.2.0)
# POI12| u | Requirement already satisfied: django-maintenance-mode==0.21.1 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 497)) (0.21.1)
# POI12| u | Requirement already satisfied: django-markdownify==0.9.5 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 501)) (0.9.5)
# POI12| u | Requirement already satisfied: django-money==3.2.0 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 505)) (3.2.0)
# POI12| u | Requirement already satisfied: django-mptt==0.16.0 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 509)) (0.16.0)
# POI12| u | Requirement already satisfied: django-otp==1.5.4 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 513)) (1.5.4)
# POI12| u | Requirement already satisfied: django-picklefield==3.2 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 517)) (3.2)
# POI12| u | Requirement already satisfied: django-q-sentry==0.1.6 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 521)) (0.1.6)
# POI12| u | Requirement already satisfied: django-q2==1.7.4 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 524)) (1.7.4)
# POI12| u | Requirement already satisfied: django-recurrence==1.11.1 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 528)) (1.11.1)
# POI12| u | Requirement already satisfied: django-redis==5.4.0 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 532)) (5.4.0)
# POI12| u | Requirement already satisfied: django-sesame==3.2.2 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 536)) (3.2.2)
# POI12| u | Requirement already satisfied: django-sql-utils==0.7.0 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 540)) (0.7.0)
# POI12| u | Requirement already satisfied: django-sslserver==0.22 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 544)) (0.22)
# POI12| u | Requirement already satisfied: django-stdimage==6.0.2 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 547)) (6.0.2)
# POI12| u | Requirement already satisfied: django-structlog==8.1.0 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 551)) (8.1.0)
# POI12| u | Requirement already satisfied: django-taggit==6.1.0 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 555)) (6.1.0)
# POI12| u | Requirement already satisfied: django-user-sessions==2.0.0 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 559)) (2.0.0)
# POI12| u | Requirement already satisfied: django-weasyprint==2.3.0 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 563)) (2.3.0)
# POI12| u | Requirement already satisfied: django-xforwardedfor-middleware==2.0 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 567)) (2.0)
# POI12| u | Requirement already satisfied: djangorestframework==3.14.0 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 570)) (3.14.0)
# POI12| u | Requirement already satisfied: djangorestframework-simplejwt==5.3.1 in ./env/lib/python3.9/site-packages (from djangorestframework-simplejwt[crypto]==5.3.1->-r /opt/inventree/src/backend/requirements.txt (line 578)) (5.3.1)
# POI12| u | Requirement already satisfied: drf-spectacular==0.27.2 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 582)) (0.27.2)
# POI12| u | Requirement already satisfied: dulwich==0.22.6 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 586)) (0.22.6)
# POI12| u | Requirement already satisfied: et-xmlfile==2.0.0 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 630)) (2.0.0)
# POI12| u | Requirement already satisfied: feedparser==6.0.11 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 634)) (6.0.11)
# POI12| u | Requirement already satisfied: flexcache==0.3 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 638)) (0.3)
# POI12| u | Requirement already satisfied: flexparser==0.4 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 642)) (0.4)
# POI12| u | Requirement already satisfied: fonttools==4.55.0 in ./env/lib/python3.9/site-packages (from fonttools[woff]==4.55.0->-r /opt/inventree/src/backend/requirements.txt (line 646)) (4.55.0)
# POI12| u | Requirement already satisfied: googleapis-common-protos==1.66.0 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 698)) (1.66.0)
# POI12| u | Requirement already satisfied: grpcio==1.68.0 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 704)) (1.68.0)
# POI12| u | Requirement already satisfied: gunicorn==23.0.0 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 763)) (23.0.0)
# POI12| u | Requirement already satisfied: html5lib==1.1 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 767)) (1.1)
# POI12| u | Requirement already satisfied: icalendar==6.1.0 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 771)) (6.1.0)
# POI12| u | Requirement already satisfied: idna==3.10 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 775)) (3.10)
# POI12| u | Requirement already satisfied: importlib-metadata==8.5.0 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 779)) (8.5.0)
# POI12| u | Requirement already satisfied: inflection==0.5.1 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 786)) (0.5.1)
# POI12| u | Requirement already satisfied: isodate==0.7.2 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 790)) (0.7.2)
# POI12| u | Requirement already satisfied: itypes==1.2.0 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 794)) (1.2.0)
# POI12| u | Requirement already satisfied: jinja2==3.1.4 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 798)) (3.1.4)
# POI12| u | Requirement already satisfied: jsonschema==4.23.0 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 802)) (4.23.0)
# POI12| u | Requirement already satisfied: jsonschema-specifications==2024.10.1 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 806)) (2024.10.1)
# POI12| u | Requirement already satisfied: lxml==5.3.0 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 810)) (5.3.0)
# POI12| u | Requirement already satisfied: markdown==3.7 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 952)) (3.7)
# POI12| u | Requirement already satisfied: markuppy==1.14 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 956)) (1.14)
# POI12| u | Requirement already satisfied: markupsafe==3.0.2 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 959)) (3.0.2)
# POI12| u | Requirement already satisfied: odfpy==1.4.1 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 1022)) (1.4.1)
# POI12| u | Requirement already satisfied: openpyxl==3.1.5 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 1025)) (3.1.5)
# POI12| u | Requirement already satisfied: opentelemetry-api==1.28.2 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 1029)) (1.28.2)
# POI12| u | Requirement already satisfied: opentelemetry-exporter-otlp==1.28.2 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 1043)) (1.28.2)
# POI12| u | Requirement already satisfied: opentelemetry-exporter-otlp-proto-common==1.28.2 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 1047)) (1.28.2)
# POI12| u | Requirement already satisfied: opentelemetry-exporter-otlp-proto-grpc==1.28.2 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 1053)) (1.28.2)
# POI12| u | Requirement already satisfied: opentelemetry-exporter-otlp-proto-http==1.28.2 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 1057)) (1.28.2)
# POI12| u | Requirement already satisfied: opentelemetry-instrumentation==0.49b2 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 1061)) (0.49b2)
# POI12| u | Requirement already satisfied: opentelemetry-instrumentation-django==0.49b2 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 1069)) (0.49b2)
# POI12| u | Requirement already satisfied: opentelemetry-instrumentation-redis==0.49b2 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 1073)) (0.49b2)
# POI12| u | Requirement already satisfied: opentelemetry-instrumentation-requests==0.49b2 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 1077)) (0.49b2)
# POI12| u | Requirement already satisfied: opentelemetry-instrumentation-wsgi==0.49b2 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 1081)) (0.49b2)
# POI12| u | Requirement already satisfied: opentelemetry-proto==1.28.2 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 1085)) (1.28.2)
# POI12| u | Requirement already satisfied: opentelemetry-sdk==1.28.2 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 1092)) (1.28.2)
# POI12| u | Requirement already satisfied: opentelemetry-semantic-conventions==0.49b2 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 1099)) (0.49b2)
# POI12| u | Requirement already satisfied: opentelemetry-util-http==0.49b2 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 1109)) (0.49b2)
# POI12| u | Requirement already satisfied: packaging==24.2 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 1116)) (24.2)
# POI12| u | Requirement already satisfied: pdf2image==1.17.0 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 1122)) (1.17.0)
# POI12| u | Requirement already satisfied: pillow==11.0.0 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 1126)) (11.0.0)
# POI12| u | Requirement already satisfied: pint==0.24.4 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 1209)) (0.24.4)
# POI12| u | Requirement already satisfied: pip-licenses==5.0.0 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 1213)) (5.0.0)
# POI12| u | Requirement already satisfied: platformdirs==4.3.6 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 1217)) (4.3.6)
# POI12| u | Requirement already satisfied: prettytable==3.12.0 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 1221)) (3.12.0)
# POI12| u | Requirement already satisfied: protobuf==5.28.3 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 1225)) (5.28.3)
# POI12| u | Requirement already satisfied: py-moneyed==3.0 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 1240)) (3.0)
# POI12| u | Requirement already satisfied: pycparser==2.22 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 1244)) (2.22)
# POI12| u | Requirement already satisfied: pydyf==0.10.0 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 1248)) (0.10.0)
# POI12| u | Requirement already satisfied: pyjwt==2.10.1 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 1254)) (2.10.1)
# POI12| u | Requirement already satisfied: pyphen==0.17.0 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 1258)) (0.17.0)
# POI12| u | Requirement already satisfied: python-barcode==0.15.1 in ./env/lib/python3.9/site-packages (from python-barcode[images]==0.15.1->-r /opt/inventree/src/backend/requirements.txt (line 1262)) (0.15.1)
# POI12| u | Requirement already satisfied: python-dateutil==2.9.0.post0 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 1266)) (2.9.0.post0)
# POI12| u | Requirement already satisfied: python-dotenv==1.0.1 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 1272)) (1.0.1)
# POI12| u | Requirement already satisfied: python-fsutil==0.14.1 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 1276)) (0.14.1)
# POI12| u | Requirement already satisfied: python-ipware==3.0.0 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 1280)) (3.0.0)
# POI12| u | Requirement already satisfied: python3-openid==3.2.0 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 1284)) (3.2.0)
# POI12| u | Requirement already satisfied: python3-saml==1.16.0 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 1288)) (1.16.0)
# POI12| u | Requirement already satisfied: pytz==2024.2 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 1293)) (2024.2)
# POI12| u | Requirement already satisfied: pyyaml==6.0.2 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 1299)) (6.0.2)
# POI12| u | Requirement already satisfied: qrcode==8.0 in ./env/lib/python3.9/site-packages (from qrcode[pil]==8.0->-r /opt/inventree/src/backend/requirements.txt (line 1357)) (8.0)
# POI12| u | Requirement already satisfied: rapidfuzz==3.10.1 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 1363)) (3.10.1)
# POI12| u | Requirement already satisfied: redis==5.2.0 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 1453)) (5.2.0)
# POI12| u | Requirement already satisfied: referencing==0.35.1 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 1457)) (0.35.1)
# POI12| u | Requirement already satisfied: requests==2.32.3 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 1463)) (2.32.3)
# POI12| u | Requirement already satisfied: rpds-py==0.21.0 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 1469)) (0.21.0)
# POI12| u | Requirement already satisfied: sentry-sdk==2.19.0 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 1563)) (2.19.0)
# POI12| u | Collecting setuptools==75.6.0 (from -r /opt/inventree/src/backend/requirements.txt (line 1569))
# POI12| u | Downloading setuptools-75.6.0-py3-none-any.whl (1.2 MB)
# POI12| u | ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.2/1.2 MB 8.2 MB/s eta 0:00:00
# POI12| u | Requirement already satisfied: sgmllib3k==1.0.0 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 1575)) (1.0.0)
# POI12| u | Requirement already satisfied: six==1.16.0 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 1578)) (1.16.0)
# POI12| u | Requirement already satisfied: sqlparse==0.5.2 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 1584)) (0.5.2)
# POI12| u | Requirement already satisfied: structlog==24.4.0 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 1590)) (24.4.0)
# POI12| u | Requirement already satisfied: tablib==3.5.0 in ./env/lib/python3.9/site-packages (from tablib[html,ods,xls,xlsx,yaml]==3.5.0->-r /opt/inventree/src/backend/requirements.txt (line 1594)) (3.5.0)
# POI12| u | Requirement already satisfied: tinycss2==1.4.0 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 1600)) (1.4.0)
# POI12| u | Requirement already satisfied: tomli==2.1.0 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 1607)) (2.1.0)
# POI12| u | Requirement already satisfied: typing-extensions==4.12.2 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 1611)) (4.12.2)
# POI12| u |2025-02-04 07:25:36,613 ERROR {'event': 'Invalid number for INVENTREE_DB_PORT: sampleport', 'timestamp': '2025-02-04T12:25:36.613180Z', 'logger': 'inventree', 'level': 'error', 'exception': 'Traceback (most recent call last):\n File "/opt/inventree/src/backend/InvenTree/InvenTree/settings.py", line 670, in <module>\n env_var = int(env_var)\nValueError: invalid literal for int() with base 10: \'sampleport\''}
2025-02-04 07:25:36,652 WARNING {'event': 'No SITE_URL specified. Some features may not work correctly', 'timestamp': '2025-02-04T12:25:36.652199Z', 'logger': 'inventree', 'level': 'warning'}
2025-02-04 07:25:36,652 WARNING {'event': 'Specify a SITE_URL in the configuration file or via an environment variable', 'timestamp': '2025-02-04T12:25:36.652421Z', 'logger': 'inventree', 'level': 'warning'}
2025-02-04 07:25:36,652 ERROR {'event': 'No CSRF_TRUSTED_ORIGINS specified. Please provide a list of trusted origins, or specify INVENTREE_SITE_URL', 'timestamp': '2025-02-04T12:25:36.652690Z', 'logger': 'inventree', 'level': 'error'}
Requirement already satisfied: tzdata==2024.2 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 1622)) (2024.2)
# POI12| u | Requirement already satisfied: uritemplate==4.1.1 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 1626)) (4.1.1)
# POI12| u | Requirement already satisfied: urllib3==2.2.3 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 1632)) (2.2.3)
# POI12| u | Requirement already satisfied: wcwidth==0.2.13 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 1639)) (0.2.13)
# POI12| u | Requirement already satisfied: weasyprint==62.3 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 1643)) (62.3)
# POI12| u | Requirement already satisfied: webencodings==0.5.1 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 1649)) (0.5.1)
# POI12| u | Requirement already satisfied: whitenoise==6.8.2 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 1657)) (6.8.2)
# POI12| u | Requirement already satisfied: wrapt==1.17.0 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 1661)) (1.17.0)
# POI12| u | Requirement already satisfied: xlrd==2.0.1 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 1731)) (2.0.1)
# POI12| u | Requirement already satisfied: xlwt==1.3.0 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 1735)) (1.3.0)
# POI12| u | Requirement already satisfied: xmlsec==1.3.14 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 1739)) (1.3.14)
# POI12| u | Requirement already satisfied: zipp==3.21.0 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 1799)) (3.21.0)
# POI12| u | Requirement already satisfied: zopfli==0.2.3.post1 in ./env/lib/python3.9/site-packages (from -r /opt/inventree/src/backend/requirements.txt (line 1803)) (0.2.3.post1)
# POI12| u | Installing collected packages: setuptools
# POI12| u | Attempting uninstall: setuptools
# POI12| u | Found existing installation: setuptools 75.8.0
# POI12| u | Uninstalling setuptools-75.8.0:
# POI12| u | Successfully uninstalled setuptools-75.8.0
# POI12| u | Successfully installed setuptools-75.6.0
# POI12| u | Installing plugin packages from '/etc/inventree/plugins.txt'
# POI12| u | Requirement already satisfied: inventree-kicad-plugin in ./env/lib/python3.9/site-packages (from -r /etc/inventree/plugins.txt (line 3)) (1.4.3)
# POI12| u | Python version 3.9.21 - /opt/inventree/env/bin/python3
# POI12| u | ERROR: InvenTree command failed: 'python3 manage.py collectplugins'
# POI12| u | - Refer to the error messages in the log above for more information
# POI12| Set permissions for data dir and media: /opt/inventree/data
# POI14| Site URL already set - skipping
# POI15| Starting InvenTree
# POI15| Started InvenTree
# POI16| Printing Final message
####################################################################################
This InvenTree install uses nginx, the settings for the webserver can be found in
/etc/nginx/sites-enabled/inventree.conf
Try opening InvenTree with either
http://localhost/ or http://64.4.107.154/
Admin user data:
Email:
Username:
Password:
####################################################################################
# POI17| Running postinstall script - done - Tue Feb 4 07:25:37 AM EST 2025
``` | closed | 2025-02-04T12:52:28Z | 2025-02-07T19:00:24Z | https://github.com/inventree/InvenTree/issues/9030 | [
"question",
"setup"
] | SparkeyinVA | 5 |
dadadel/pyment | numpy | 80 | Add option for output to STDOUT | It seems that there is currently no option to output to STDOUT. That would be a nice feature in my opinion, since my usual workflow with tools like this is to first get a quick preview of the proposed changes on STDOUT (usually piped to `less`) and, if satisfied, apply the changes in-place. Of course I can do that using the generated `.patch` file, but it's more cumbersome. So I'd love to have a CLI parameter to redirect the output to STDOUT. | closed | 2019-06-20T09:57:20Z | 2019-11-19T20:36:43Z | https://github.com/dadadel/pyment/issues/80 | [] | torfsen | 3 |
mwaskom/seaborn | data-visualization | 3,105 | Typo in Seaborn Documentation | There is a small typo in the following [page](https://seaborn.pydata.org/generated/seaborn.pairplot.html) of documentation. The typo is in this section at the top of the page:
> By default, this function will create a grid of Axes such that each numeric variable in data will **by** shared across the y-axes across a single row and the x-axes across a single column. The diagonal plots are treated differently: a univariate distribution plot is drawn to show the marginal distribution of the data in each column.
| closed | 2022-10-22T13:04:48Z | 2022-10-22T13:17:57Z | https://github.com/mwaskom/seaborn/issues/3105 | [] | msharara1998 | 1 |
HumanSignal/labelImg | deep-learning | 839 | No module named 'libs.resources' |
- **OS:**
Archlinux amd64
- **PyQt version:**
5.14.1 | closed | 2022-01-23T05:51:26Z | 2022-01-28T06:19:42Z | https://github.com/HumanSignal/labelImg/issues/839 | [] | dmzlingyin | 3 |
sinaptik-ai/pandas-ai | pandas | 569 | Remove OpenAI as required dependency | ### 🚀 The feature
We are using [JupyterLite](https://jupyterlite.readthedocs.io/), a Pyodide based Jupyter notebook that runs entirely in the browser. Due to various reasons, the OpenAI package doesn't work in this environment. It would be great if we can install and import `pandasai` without this as a required dependency.
### Motivation, pitch
This could enable PandasAI to be run in browser based Python environments.
### Alternatives
Hot patch PandasAI / OpenAI.
### Additional context
_No response_ | closed | 2023-09-17T20:04:11Z | 2024-06-01T00:21:48Z | https://github.com/sinaptik-ai/pandas-ai/issues/569 | [] | andeplane | 0 |
alirezamika/autoscraper | web-scraping | 60 | Scraping skips item | Hello,
I am really delighted with this fantastic module. However, I want to signal a little annoying bug:
1. I am trying to scrape wsj.com.
2. In the wanted list I put the left column 1st article title and left colum 1st article summary.
3. The results list does not include the head (up center) article summary but it includes its title.
4. Number of list elements is 33 where it should be 34 (17 titles + 17 summaries).
I suspect that since the head summary is center-aligned and the other article summaries are left-aligned, this might cause the skipping.
Thank you indeed, all the best,
Pierre-Emmanuel FEGA | closed | 2021-04-25T09:25:32Z | 2022-07-17T17:11:08Z | https://github.com/alirezamika/autoscraper/issues/60 | [] | zepef | 1 |
pyeventsourcing/eventsourcing | sqlalchemy | 19 | 1.10 release? | Any chance you could release the 1.10 version to PyPI? 1.09 doesn't seem to work with MySQL and SQLAlchemy.
`sqlalchemy.exc.CompileError: (in table 'stored_events', column 'event_id'): VARCHAR requires a length on dialect mysql`
This is already fixed in the main branch.
| closed | 2016-10-05T10:30:07Z | 2016-10-05T20:56:31Z | https://github.com/pyeventsourcing/eventsourcing/issues/19 | [] | rogoman | 3 |
nolar/kopf | asyncio | 944 | CTRL C - delete resource is stuck | ### Keywords
_No response_
### Problem
Hello,
I have a basic example
```
#!/usr/bin/env python3
import kopf
import kubernetes.config as k8s_config
import kubernetes.client as k8s_client
import logging
text_analyzer_crd = k8s_client.V1CustomResourceDefinition(
api_version="apiextensions.k8s.io/v1",
kind="CustomResourceDefinition",
metadata=k8s_client.V1ObjectMeta(name="textanalyzers.operators.mytest.it", namespace="mytest"),
spec=k8s_client.V1CustomResourceDefinitionSpec(
        ...
    ),
)
try:
k8s_config.load_kube_config()
except k8s_config.ConfigException:
k8s_config.load_incluster_config()
api_instance = k8s_client.ApiextensionsV1Api()
logging.info("Trying CRD install")
try:
api_instance.create_custom_resource_definition(text_analyzer_crd)
except k8s_client.rest.ApiException as e:
if e.status == 409:
logging.info("CRD already exists")
else:
raise e
@kopf.on.create('operators.mytest.it', 'v1', 'textanalyzers')
def on_create(spec, name, namespace, logger, **kwargs):
logging.info("CREATE")
@kopf.on.update('operators.mytest.it', 'v1', 'textanalyzers')
def on_update(spec, name, namespace, logger, **kwargs):
logging.info("UPDATE")
@kopf.on.delete('operators.mytest.it', 'v1', 'textanalyzers')
def on_delete(spec, name, namespace, logger, **kwargs):
logging.info("DELETE")
```
After `$ kopf run text-analyzer.py -n mytest`, I can create/update/delete resources and everything works as expected (log appears). After stopping kopf (CTRL-C), if I delete a resource, the command hangs even with `--force`:
```
...
[2022-08-07 15:16:32,073] kopf._core.reactor.r [INFO ] Signal SIGINT is received. Operator is stopping.
```
```
$ k delete textanalyzers.operators.mytest.it test-analyzer-1 --force
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
textanalyzer.operators.mytest.it "test-analyzer-1" force deleted
```
If I restart kopf the command terminates.
```
[2022-08-07 15:31:25,078] root [INFO ] DELETE - AAA BBB CCC
[2022-08-07 15:31:25,078] kopf.objects [INFO ] [mytest/test-analyzer-1] Handler 'delete' succeeded.
[2022-08-07 15:31:25,079] kopf.objects [INFO ] [mytest/test-analyzer-1] Deletion is processed: 1 succeeded; 0
```
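For context — this is a guess from the symptom, not confirmed against the kopf internals: the hang looks like standard Kubernetes finalizer behaviour. Kopf adds a finalizer to the resources it handles, and Kubernetes will not remove an object while `metadata.finalizers` is non-empty, so with the operator stopped nothing ever clears the marker. A plain-dict sketch of that mechanic (the finalizer name below is an assumption):

```python
# Hypothetical sketch: Kubernetes keeps an object in "Terminating" state
# until metadata.finalizers is empty; a stopped operator never removes its marker.
FINALIZER = "kopf.zalando.org/KopfFinalizerMarker"  # assumed default marker name

def can_be_deleted(obj):
    # Kubernetes only garbage-collects the object once no finalizers remain
    return not obj["metadata"].get("finalizers")

obj = {"metadata": {"finalizers": [FINALIZER]}}
assert not can_be_deleted(obj)                    # operator stopped -> delete hangs
obj["metadata"]["finalizers"].remove(FINALIZER)   # what a running operator does on delete
assert can_be_deleted(obj)                        # now the delete completes
```

If that is the cause, restarting the operator (as you observed) or manually patching the finalizer out of the object would both unblock the deletion.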
Is there a way to exclude/detach the operator from resources when it is killed?
Riccardo | closed | 2022-08-07T13:32:49Z | 2022-08-08T12:28:17Z | https://github.com/nolar/kopf/issues/944 | [
"question"
] | ric79 | 2 |
xorbitsai/xorbits | numpy | 333 | BUG: Cudf 'Series' object has no attribute 'agg' |
Cudf 22.10.0 'Series' object has no attribute 'agg'
Encountered this while running TPCH Q22 on GPU.
Reproduce:
```python
import cudf
a = cudf.Series([1,2,3,4])
a.agg('sum')
``` | closed | 2023-04-06T06:56:06Z | 2023-04-16T08:18:30Z | https://github.com/xorbitsai/xorbits/issues/333 | [
"bug",
"gpu"
] | ChengjieLi28 | 0 |
skforecast/skforecast | scikit-learn | 631 | Feature request: Add ability to skip steps in backtesting | My current understanding is that in all the backtesters the forecast origin moves forward by one time step for each fold of backtesting. I think it would be helpful if users could set the forecast origin to move forward by N steps rather than just one step for each fold. This can help reduce the time for backtesting.
Apologies if this feature is already available, but from reading the [docs](https://skforecast.org/0.11.0/api/model_selection_multiseries.html#skforecast.model_selection_multiseries.model_selection_multiseries.backtesting_forecaster_multiseries) I'm unsure whether it is.
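To make the request concrete, here is a rough sketch of the fold origins I have in mind — a hypothetical helper, not part of skforecast:

```python
def fold_origins(n_obs, initial_train_size, step):
    # forecast origins when the origin advances `step` observations per fold;
    # step=1 reproduces the current one-step-per-fold behaviour
    return list(range(initial_train_size, n_obs, step))

# 100 observations, 50 used for initial training, origin moved 10 steps at a time:
print(fold_origins(100, 50, 10))  # → [50, 60, 70, 80, 90]
```

With `step > 1` the number of folds (and therefore refits/predictions) drops roughly by a factor of `step`, which is where the time saving would come from.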
Thank you!
Kishan | open | 2024-01-27T14:20:33Z | 2024-06-18T08:08:45Z | https://github.com/skforecast/skforecast/issues/631 | [] | KishManani | 7 |
keras-team/keras | machine-learning | 20,497 | Will Keras 3 support intel gpu with oneapi | Dear Keras Team,
The latest version of [pytorch 2.5](https://pytorch.org/blog/pytorch2-5/) already supports Intel GPU. Users can use pytorch on Intel GPU with [minimum code change.](https://pytorch.org/docs/main/notes/get_start_xpu.html).
```
# CUDA CODE
tensor = torch.tensor([1.0, 2.0]).to("cuda")
# CODE for Intel GPU
tensor = torch.tensor([1.0, 2.0]).to("xpu")
```
Will the Keras team follow up to support the [Intel oneAPI](https://www.intel.com/content/www/us/en/developer/articles/tool/pytorch-prerequisites-for-intel-gpu/2-5.html) ecosystem? Thank you very much! | closed | 2024-11-15T10:03:57Z | 2024-11-17T08:11:12Z | https://github.com/keras-team/keras/issues/20497 | [] | HaroldYin1024 | 7 |
deepspeedai/DeepSpeed | machine-learning | 7,031 | Fix - Update DeepSpeed to be PEP517 compliant, update to `pyproject.toml` | We need to update DeepSpeed to be compliant with [PEP 517](https://peps.python.org/pep-0517/) as well as for handling of [future changes to pip](https://github.com/pypa/pip/issues/11457). | open | 2025-02-13T16:59:26Z | 2025-02-14T16:52:59Z | https://github.com/deepspeedai/DeepSpeed/issues/7031 | [
"build",
"install"
] | loadams | 0 |
serengil/deepface | deep-learning | 730 | Confirm that deepface/tests/dataset/img1.jpg exists | closed | 2023-04-25T13:49:02Z | 2023-04-25T14:43:28Z | https://github.com/serengil/deepface/issues/730 | [] | T1m3x | 0 | |
Farama-Foundation/Gymnasium | api | 1,160 | [Bug Report] pip remove ale-py==0.8.1 | ### Describe the bug
I'm attempting to install the release version of Gymnasium using PyPI, which requires ale-py==0.8.1. However, when I try to install it using:
`pip install ale-py==0.8.1`
I receive the following error:
```
ERROR: Could not find a version that satisfies the requirement ale-py==0.8.1 (from versions: 0.9.0, 0.9.1)
ERROR: No matching distribution found for ale-py==0.8.1
```
I'm aware that Gymnasium version 1 is in development, but at the moment, installing the release (pypi) version v0.29.1 seems to be problematic due to this issue.
Is there a workaround or solution available to resolve this?
### Code example
_No response_
### System info
_No response_
### Additional context
_No response_
### Checklist
- [X] I have checked that there is no similar [issue](https://github.com/Farama-Foundation/Gymnasium/issues) in the repo
| closed | 2024-09-13T17:08:46Z | 2024-09-13T18:17:40Z | https://github.com/Farama-Foundation/Gymnasium/issues/1160 | [
"bug"
] | Hananel-Hazan | 2 |
satwikkansal/wtfpython | python | 317 | Image for tricky strings is not visible in dark mode | https://github.com/satwikkansal/wtfpython#-strings-can-be-tricky-sometimes
Please, invert the following image to white color for dark mode users:

| closed | 2023-09-14T10:41:14Z | 2024-10-16T08:59:17Z | https://github.com/satwikkansal/wtfpython/issues/317 | [] | dimaklepikov | 0 |
wandb/wandb | tensorflow | 8,942 | [Bug]: log_params flag in the wandb_callback() of the lightgbm integration is not working | ### Describe the bug
The `log_params` argument does not appear to be passed to the `_WandbCallback` constructor; I think the following fix is needed:
```diff
- return _WandbCallback(define_metric)
+ return _WandbCallback(log_params, define_metric)
```
See:
https://github.com/wandb/wandb/blob/4e78d9dfa4a3e86d968d89a75a7eac8eaa6e03c2/wandb/integration/lightgbm/__init__.py#L155-L185 | closed | 2024-11-25T05:41:39Z | 2024-12-03T22:31:52Z | https://github.com/wandb/wandb/issues/8942 | [
"ty:bug",
"a:app"
] | i-aki-y | 3 |
MagicStack/asyncpg | asyncio | 581 | Cannot pass a list or tuple! | ```
mylist = [2,3,4,1]
conn.fetch("SELECT * FROM test WHERE ID IN ($1)", mylist)
```
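The `IN ($1)` query above raises the DataError quoted below because asyncpg sends a Python list as a single PostgreSQL array value rather than expanding it. A hedged workaround sketch (the usual pattern, though untested against this exact schema) is to compare with `= ANY($1)`:

```python
# Sketch: compare the column against an array parameter instead of IN (...)
QUERY = "SELECT * FROM test WHERE ID = ANY($1::int[])"

async def fetch_by_ids(conn, ids):
    # usage (inside a coroutine): rows = await fetch_by_ids(conn, [2, 3, 4, 1])
    return await conn.fetch(QUERY, ids)
```

(Sketch only — `conn` and the `test` table come from the original snippet; I haven't run this against a live database.)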
Error: `asyncpg.exceptions.DataError: invalid input for query argument $1: [2,3,4,1] (an integer is required (got type list))` | closed | 2020-05-31T13:27:18Z | 2020-06-08T16:16:35Z | https://github.com/MagicStack/asyncpg/issues/581 | [] | jzr-supove | 2 |
graphistry/pygraphistry | pandas | 539 | databricks: when calling register using SSO, databricks does not display the HTML link to the SSO login | **Describe the bug**
When calling register using SSO, databricks does not display the HTML link to the SSO login until after the timer has expired, which prevents the user from clicking the link. Only the text of the auth_url is presented.
**To Reproduce**
```
graphistry.register(api=3, protocol="https", server="...", is_sso_login=True)
```
**Expected behavior**
The html link should appear while timer is running, not after expiration
**Screenshots**

update May 08:
Should be able to run in every one of these scenarios:
- [ ] notebook author is able to run notebook, then convert to dashboard, and present dashboard, and hit update
- [ ] notebook author, clears state and output from notebook and then view and run dashboard
notebook is shared with another user, and all the following permissions should also work:
- [ ] `can run` user2 runs the notebook and dashboard from notebook author's workspace
- [ ] `can manage` user2 runs the notebook and dashboard from notebook author's workspace
- [ ] `can view` user2 clones the notebook into their own workspace, effectively creating a copy of the notebook and they should be able to run notebook and dashboard.
| open | 2024-01-16T20:32:42Z | 2024-05-08T15:08:32Z | https://github.com/graphistry/pygraphistry/issues/539 | [
"bug"
] | DataBoyTX | 0 |
tflearn/tflearn | data-science | 1,125 | dask example not working | Running the first few lines of https://github.com/tflearn/tflearn/blob/master/examples/basics/use_dask.py gives me an error in line 29:
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-13-220cc54080f8> in <module>
1 X = da.from_array(np.asarray(X), chunks=(1000, 1000, 1000, 1000))
----> 2 Y = da.from_array(np.asarray(Y), chunks=(1000, 1000, 1000, 1000))
3 X_test = da.from_array(np.asarray(X_test), chunks=(1000, 1000, 1000, 1000))
4 Y_test = da.from_array(np.asarray(Y_test), chunks=(1000, 1000, 1000, 1000))
/opt/conda/lib/python3.7/site-packages/dask/array/core.py in from_array(x, chunks, name, lock, asarray, fancy, getitem)
2110 x = np.array(x)
2111
-> 2112 chunks = normalize_chunks(chunks, x.shape, dtype=x.dtype)
2113 if name in (None, True):
2114 token = tokenize(x, chunks)
/opt/conda/lib/python3.7/site-packages/dask/array/core.py in normalize_chunks(chunks, shape, limit, dtype, previous_chunks)
1895 raise ValueError(
1896 "Chunks and shape must be of the same length/dimension. "
-> 1897 "Got chunks=%s, shape=%s" % (chunks, shape))
1898 if -1 in chunks:
1899 chunks = tuple(s if c == -1 else c for c, s in zip(chunks, shape))
ValueError: Chunks and shape must be of the same length/dimension. Got chunks=(1000, 1000, 1000, 1000), shape=(50000, 10)
```
Also: I had to fix the `to_categorical()` calls in my code, that were also not working.
```
(X, Y), (X_test, Y_test) = cifar10.load_data()
classes = da.unique(Y).compute()
Y = to_categorical(Y, nb_classes=len(classes))
Y_test = to_categorical(Y_test, nb_classes=len(classes))
```
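For what it's worth, the `ValueError` above happens because the `chunks` tuple has four entries while `Y` is 2-D — dask expects one chunk size per array dimension. A small standalone sketch of that length check (a hypothetical helper mirroring dask's validation, not dask's actual code):

```python
def check_chunks(chunks, shape):
    # dask's normalize_chunks requires one chunk size per array dimension
    if len(chunks) != len(shape):
        raise ValueError(
            "Chunks and shape must be of the same length/dimension. "
            "Got chunks=%s, shape=%s" % (chunks, shape)
        )
    return chunks

# Y has shape (50000, 10), so a 2-entry chunks tuple is valid:
check_chunks((1000, 10), (50000, 10))
```

So something like `da.from_array(np.asarray(Y), chunks=(1000, 10))` should avoid the error, assuming the labels really are 2-D.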
| open | 2019-04-05T13:36:05Z | 2019-04-05T13:36:05Z | https://github.com/tflearn/tflearn/issues/1125 | [] | r0f1 | 0 |
pyro-ppl/numpyro | numpy | 1,379 | [Feature Request] Show params in `render_model()` | Hi @fehiepsi and team,
Prof. @nipunbatra explained the [issue](https://github.com/pyro-ppl/pyro/issues/3023) (in `pyro`) of rendering params in `pyro.render_model()` and then I added this feature in `pyro` ([PR](https://github.com/pyro-ppl/pyro/pull/3039)). I would like to add the same feature in `numpyro`.
I have tried to add this feature for `numpyro` and tested it on some examples. Consider the following model,
```python
def model(data):
m = numpyro.param("m", jnp.array(0))
sd = numpyro.param("sd", jnp.array([1]),constraint=constraints.positive)
lambd = numpyro.sample("lambda",dist.Normal(m,sd))
with numpyro.plate("N", len(data)):
numpyro.sample("obs", dist.Exponential(lambd), obs=data)
```
I have added `sample_params` and `params_constraint` keys in dictionary returned by `contrib.render.get_model_relations()`. I added Provenance Tracking for params in `get_model_relations` to add values in `sample_param`.
```python
numpyro.contrib.render.get_model_relations(model,model_args=(data,))
```

```python
data = jnp.ones(10)
numpyro.render_model(model, model_args=(data,), render_distributions=True)
```

I have added optional argument `render_params` in `render_model()` to show params in the plot. Also, `render_distributions=True` shows the constraints of params with distributions of the sample.
```python
numpyro.render_model(model, model_args=(data,), render_params=True, render_distributions=True)
```

**@fehiepsi, can you please review this? If it meets your expectation, I would like to create a PR for the same.**
| closed | 2022-03-28T10:49:40Z | 2022-04-06T14:54:28Z | https://github.com/pyro-ppl/numpyro/issues/1379 | [
"enhancement"
] | karm-patel | 1 |
Gozargah/Marzban | api | 763 | Connection errors in nodes | How do I fix node connection issues?
Any other ideas? | closed | 2024-01-26T12:14:01Z | 2024-06-18T07:53:10Z | https://github.com/Gozargah/Marzban/issues/763 | [] | ImMohammad20000 | 4 |
google-research/bert | nlp | 523 | Check whether your GraphDef-interpreting binary is up to date with your GraphDef-generating binary | I am using bert-serving to deploy tf-serving: https://github.com/bigboNed3/bert_serving. Exporting the model works fine; however, inference doesn't work. Does anybody know how to fix this problem:
Traceback (most recent call last):
File "request_queryclassifier_client.py", line 183, in <module>
tf.app.run()
File "/data/anaconda3/lib/python3.6/site-packages/tensorflow/python/platform/app.py", line 125, in run
_sys.exit(main(argv))
File "request_queryclassifier_client.py", line 167, in main
result = stub.Predict(request, 1000.0) # 10 secs timeout
File "/data/anaconda3/lib/python3.6/site-packages/grpc/_channel.py", line 533, in __call__
return _end_unary_response_blocking(state, call, False, None)
File "/data/anaconda3/lib/python3.6/site-packages/grpc/_channel.py", line 467, in _end_unary_response_blocking
raise _Rendezvous(state, None, None, deadline)
grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with:
status = StatusCode.INVALID_ARGUMENT
details = "NodeDef mentions attr 'Truncate' not in Op<name=Cast; signature=x:SrcT -> y:DstT; attr=SrcT:type; attr=DstT:type>; NodeDef: bert/encoder/Cast = Cast[DstT=DT_FLOAT, SrcT=DT_INT32, Truncate=false, _output_shapes=[[?,1,16]], _device="/job:localhost/replica:0/task:0/device:CPU:0"](bert/encoder/Reshape). (Check whether your GraphDef-interpreting binary is up to date with your GraphDef-generating binary.).
[[Node: bert/encoder/Cast = Cast[DstT=DT_FLOAT, SrcT=DT_INT32, Truncate=false, _output_shapes=[[?,1,16]], _device="/job:localhost/replica:0/task:0/device:CPU:0"](bert/encoder/Reshape)]]"
debug_error_string = "{"created":"@1553569490.204528585","description":"Error received from peer","file":"src/core/lib/surface/call.cc","file_line":1017,"grpc_message":"NodeDef mentions attr 'Truncate' not in Op<name=Cast; signature=x:SrcT -> y:DstT; attr=SrcT:type; attr=DstT:type>; NodeDef: bert/encoder/Cast = Cast[DstT=DT_FLOAT, SrcT=DT_INT32, Truncate=false, _output_shapes=[[?,1,16]], _device="/job:localhost/replica:0/task:0/device:CPU:0"](bert/encoder/Reshape). (Check whether your GraphDef-interpreting binary is up to date with your GraphDef-generating binary.).\n\t [[Node: bert/encoder/Cast = Cast[DstT=DT_FLOAT, SrcT=DT_INT32, Truncate=false, _output_shapes=[[?,1,16]], _device="/job:localhost/replica:0/task:0/device:CPU:0"](bert/encoder/Reshape)]]","grpc_status":3}"
PS: the TensorFlow version is 1.12.0 in both the model-export phase and the inference phase. | open | 2019-03-26T03:45:06Z | 2020-11-13T08:50:11Z | https://github.com/google-research/bert/issues/523 | [] | iiapache | 5 |
miguelgrinberg/Flask-Migrate | flask | 359 | ImportError when creating the migration repository | Hello!
Could you help me, please?
I am trying to create the migration repository. Flask-Migrate and Flask-SQLAlchemy were installed into the venv without any problems. But when I run 'flask db init' in the command window, an ImportError is raised. I can't understand why. I checked the code in __init__.py and config.py. Everything seems to be OK. The structure of the project also seems to be OK, like in the course.
I will copy here the screen part with Import Error.
And here is the code of config.py and __init__.py.
config.py is located in microblog
__init__.py is in app

[config.py]
```python
import os
basedir = os.path.abspath(os.path.dirname(__file__))
class Config(object):
SECRET_KEY = os.environ.get('SECRET_KEY') or 'you-will-never-guess'
SQLALCHEMY_DATABASE_URI = os.environ.get('DATABASE_URL') or \
'sqlite:///' + os.path.join(basedir, 'app.db')
SQLALCHEMY_TRACK_MODIFICATIONS = False
```
[__init__.py]
```python
from flask import Flask
from config import Сonfig
from flask_sqlalchemy import SQLAlchemy
from flask_migrate import Migrate
app = Flask(__name__)
app.config.from_object("Config")
db = SQLAlchemy(app)
migrate = Migrate(app, db)
from app import routes, models
```
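One thing worth ruling out (purely a guess from the symptom): an invisible look-alike character in the imported name — e.g. a Cyrillic 'С' instead of a Latin 'C' — produces exactly this kind of `ImportError` even though the code looks correct. A small self-contained check:

```python
def non_ascii_chars(line):
    # list every non-ASCII character in a source line with its codepoint
    return [(ch, hex(ord(ch))) for ch in line if ord(ch) > 127]

# example: '\u0421' is the Cyrillic letter Es, visually identical to Latin 'C'
print(non_ascii_chars("from config import \u0421onfig"))  # → [('С', '0x421')]
```

Running this over the import lines of `__init__.py` would either confirm or rule that out.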
| closed | 2020-08-08T11:00:37Z | 2020-10-09T18:47:26Z | https://github.com/miguelgrinberg/Flask-Migrate/issues/359 | [
"question"
] | PavelDymov | 3 |
mwaskom/seaborn | data-visualization | 3,239 | [Feature Request] sns.histplot() automatically supports discreted axis labeling. | # Background / related function desc
As the doc says:
When both x and y are assigned, a bivariate histogram is computed and shown as a heatmap:
```python
sns.histplot(penguins, x="bill_depth_mm", y="body_mass_g")
```

---
# The requested feature is that
**If the provided `x=` is a `Pandas Categorical DType`, then since it's a cat dtype, I would expect `seaborn` to treat it as a discrete variable instead of a continuous variable.**
Recall that a `Pandas Categorical DType` could be a `numeric cat` like `Categories (5, int64): [1 < 10 < 100 < 1000 < 10000]`, or a `str cat` like `Categories (3, object): ['0', '10ms', '20ms', '30ms']`.
- If it's `str cat`, the `histplot` gives the expected result.

- However, if it's a `num cat`, it just follows the numeric axis labeling.

Indeed, maybe we can work around this by using `log_scale=True`, but what if the `num cat` is not a log-scale cat? Like `[1 < 200 < 1000 < 1200]`.
# Currently the best workaround I found
For those `num cat` dtype columns in the dataframe, just convert to `str cat` first, then apply `histplot()`.
```python
df[col] = df[col].astype(str).astype('category')
sns.histplot(df, x=x, y=col)
``` | closed | 2023-01-29T07:53:32Z | 2023-02-06T17:29:56Z | https://github.com/mwaskom/seaborn/issues/3239 | [] | kaimo455 | 4 |
viewflow/viewflow | django | 31 | Transaction context | ```python
with activation.transaction_context:
    done()
```
| closed | 2014-04-03T11:53:47Z | 2014-05-12T06:58:55Z | https://github.com/viewflow/viewflow/issues/31 | [
"request/enhancement"
] | kmmbvnr | 2 |
JaidedAI/EasyOCR | pytorch | 358 | easyocr.recognize is significantly slower when given several boxes to estimate, rather than running it several time with one box each time | Hello,
Thank you for this tool, it is great. I want to build on top of it, and execution time is a matter of importance for me (even on CPU).
I don't know if it's a bug or a *feature*, but I've noticed that `easyocr.recognize` is significantly slower when called once and given `n` boxes to estimate the text in, rather than called `n` times with one box each time.
### How to reproduce ###
1/ Download

Run
```python
import easyocr
reader = easyocr.Reader(['ja'], gpu=False)
filename = "path/to/example.png"
result = reader.detect(filename)
print(result)
# returns ([[117, 471, -3, 37], [120, 596, 32, 64]], [])
```
Then run
```python
import time
L = result[0]
start_time = time.time()
estimate = reader.recognize(filename,
horizontal_list=L,
free_list=[]
)
print(estimate)
print("--- %s seconds ---" % (time.time() - start_time))
```
and
```python
start_time = time.time()
for box in L:
estimate = reader.recognize(filename,
horizontal_list=[box],
free_list=[]
)
print(estimate)
print("--- %s seconds ---" % (time.time() - start_time))
```
**For me the first one takes ~3.45s, and the second one takes ~2.95s, i.e. about 15% faster.**
Is this expected behavior?
Many thanks! | closed | 2021-01-28T20:50:10Z | 2021-02-22T01:04:12Z | https://github.com/JaidedAI/EasyOCR/issues/358 | [] | fxmarty | 4 |
vitalik/django-ninja | rest-api | 479 | How do I perform google social authentication? | I need to implement a social login for my API with Django Ninja, but I can't find any example for this.
What is the best approach?
InstaPy/InstaPy | automation | 6,813 | from clarifai.rest import ClarifaiApp, Workflow ModuleNotFoundError: No module named 'clarifai.rest' | Can anyone fix this?
```
  File "C:\Users\Christian\Desktop\Instapy\instagram.py", line 2, in <module>
    from instapy import InstaPy
  File "C:\Users\Christian\AppData\Local\Programs\Python\Python39\lib\site-packages\instapy\__init__.py", line 6, in <module>
    from .instapy import InstaPy
  File "C:\Users\Christian\AppData\Local\Programs\Python\Python39\lib\site-packages\instapy\instapy.py", line 35, in <module>
    from .clarifai_util import check_image
  File "C:\Users\Christian\AppData\Local\Programs\Python\Python39\lib\site-packages\instapy\clarifai_util.py", line 3, in <module>
    from clarifai.rest import ClarifaiApp, Workflow
ModuleNotFoundError: No module named 'clarifai.rest'
``` | open | 2024-06-06T01:43:20Z | 2024-10-01T13:29:47Z | https://github.com/InstaPy/InstaPy/issues/6813 | [] | FlowCodee | 3 |
scrapy/scrapy | web-scraping | 6,017 | handling exceptions from the previous middleware in process_exception | ## Summary
I hope that a middleware's `process_exception` method can handle exceptions thrown by the previous middleware.
## Motivation
I raise an exception in the first middleware's `process_response` method, and I want to record these exceptions in the next middleware, but the next middleware cannot capture them.
I am a beginner, and I don't know if there is a problem with the way I wrote this or if there is a better solution. Please help me!
This is sample code:
```python
class MyMiddleware1:
def process_response(self, request, response, spider):
# I will handle some unexpected request results here uniformly,
# and then throw an error to prevent my spider from processing this result.
if len(response.body) < 10000:
raise Exception('Bad data!')
return response
class MyMiddleware2:
def _track(self, msg):
pass
def process_exception(self, request, exception, spider):
# Then I hope to uniformly record these errors in the next layer of middleware.
        # But this middleware will not receive exceptions raised in the previous middleware's process_response.
self._track(str(type(exception)))
```
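For what it's worth, this matches my reading of Scrapy's documented contract: `process_exception()` is called for exceptions raised by the download handler or by `process_request()`, not for exceptions raised in another middleware's `process_response()`. One hedged workaround sketch (the class name is hypothetical) is to track and raise in the same middleware:

```python
class CombinedMiddleware:
    """Validate the response and record the failure in one place."""

    def __init__(self):
        self.tracked = []

    def _track(self, msg):
        self.tracked.append(msg)

    def process_response(self, request, response, spider):
        if len(response.body) < 10000:
            exc = Exception("Bad data!")
            self._track(str(type(exc)))  # record before raising
            raise exc
        return response
```

Another option would be to attach an errback to the request and record the failure there.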
settings.py
```python
DOWNLOADER_MIDDLEWARES = {
"scrapy_demo.middlewares.MyMiddleware1": 99,
"scrapy_demo.middlewares.MyMiddleware2": 98,
}
```
| closed | 2023-08-17T17:21:47Z | 2023-08-18T15:29:40Z | https://github.com/scrapy/scrapy/issues/6017 | [] | cscxj | 1 |
python-visualization/folium | data-visualization | 1,473 | Create a folium map with a pop up window appearing when clicking a shapefile feature (converted to GeoJSON) | #### Problem description
I'm trying to create a folium map that will have the following features:
- A layer will be created from a shapefile including multiple polygon features.
- When clicking on a single feature a pop up window will appear that will either show a clickable link directing to a locally saved image file or will view the image.
How is this possible (if it is even possible at all)?
The image paths are saved in a specific column of the shapefile (one image per feature), as shown in the following picture.

#### My code is as follows:
```python
import folium
import geopandas

fid_gdf = geopandas.read_file(r'J:\P268\ypol\DamageCostCalcs\LandUses\LandUses.shp')
m = folium.Map(location=[37.8895, 23.7345], zoom_start=14)
fid = folium.GeoJson(
data=fid_gdf,
name='FID_',
    popup=folium.GeoJsonPopup(
fields=['FID_','CLASS','AREA','PATH'],
aliases=['FID','Class','Area (sq.m)', 'Path']
)
).add_to(m)
m
```
which leads to the following result (where I would like the path shown to be clickable):

| closed | 2021-04-28T12:39:06Z | 2022-11-18T13:53:30Z | https://github.com/python-visualization/folium/issues/1473 | [] | pdim21 | 3 |
QingdaoU/OnlineJudge | django | 258 | How should I fix this compiler error? | Compile Error
g++: internal compiler error: Killed (program cc1plus)
Please submit a full bug report,
with preprocessed source if appropriate.
See <file:///usr/share/doc/gcc-5/README.Bugs> for instructions. | closed | 2019-08-28T15:53:31Z | 2019-09-22T06:55:56Z | https://github.com/QingdaoU/OnlineJudge/issues/258 | [] | Akvicor | 15 |
pydata/xarray | pandas | 9,928 | DataTree attribute-like access bug | ### What happened?
I have a datatree with a dataset in a group. I tried to filter this dataset and store it back in the datatree. The dataset shown was either the filtered or the unfiltered one, depending on the method used to get the group: `dt.variable1` or `dt['variable1']`.
### What did you expect to happen?
I expect that both datasets are equal.
### Minimal Complete Verifiable Example
Note the difference when `dt.variable1` and `dt['variable1']` are printed. When the datatree is exported the value of `dt['variable1']` is used.

```Python
import numpy as np
import xarray as xr

ds1 = xr.Dataset(
    dict(
        variable1=xr.DataArray(
            np.random.randn(10),
            dims="x",
            coords=dict(x=np.linspace(1, 10, num=10)),
        )
    )
)

dt = xr.DataTree(ds1)
print("Datatree with 1 dataset (variable1)")
print(dt)

# filter on x
dt.variable1 = dt.variable1.sel(x=slice(1, 5))
print("\nPrint sliced dataset variable 1 using 'dt.variable1'")
print(dt.variable1)
print("\nPrint sliced dataset variable 1 using 'dt['variable1']'")
print(dt["variable1"])
print("\nThe whole datatree how it is stored (see variable1, which is not sliced)")
print(dt)
```

Output:

```
Datatree with 1 dataset (variable1)
<xarray.DataTree>
Group: /
Dimensions: (x: 10)
Coordinates:
* x (x) float64 80B 1.0 2.0 3.0 4.0 5.0 6.0 7.0 8.0 9.0 10.0
Data variables:
variable1 (x) float64 80B -0.03606 -0.5738 -1.331 ... -0.3654 1.669 0.9354

Print sliced dataset variable 1 using 'dt.variable1'
<xarray.DataArray 'variable1' (x: 5)> Size: 40B
array([-0.03605657, -0.57384656, -1.3312692 , 1.0832746 , -0.07725834])
Coordinates:
* x (x) float64 40B 1.0 2.0 3.0 4.0 5.0

Print sliced dataset variable 1 using 'dt['variable1']'
<xarray.DataArray 'variable1' (x: 10)> Size: 80B
array([-0.03605657, -0.57384656, -1.3312692 , 1.0832746 , -0.07725834,
       -0.45822326, -1.32865998, -0.36541352, 1.66870598, 0.93536491])
Coordinates:
* x (x) float64 80B 1.0 2.0 3.0 4.0 5.0 6.0 7.0 8.0 9.0 10.0

The whole datatree how it is stored (see variable1, which is not sliced)
<xarray.DataTree>
Group: /
Dimensions: (x: 10)
Coordinates:
* x (x) float64 80B 1.0 2.0 3.0 4.0 5.0 6.0 7.0 8.0 9.0 10.0
Data variables:
variable1 (x) float64 80B -0.03606 -0.5738 -1.331 ... -0.3654 1.669 0.9354
```

Version check:

```
>>> import xarray
>>> xarray.__version__
'2025.1.0'
```
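The divergence between `dt.variable1` and `dt['variable1']` is consistent with attribute assignment creating a plain instance attribute that shadows the tree entry, while `__getitem__` still reads the underlying store. A minimal pure-Python sketch of that shadowing pattern (not xarray's actual implementation):

```python
class Node:
    def __init__(self):
        self._store = {"variable1": [1, 2, 3]}

    def __getattr__(self, name):
        # Only consulted when normal attribute lookup fails.
        try:
            return self._store[name]
        except KeyError:
            raise AttributeError(name) from None

    def __getitem__(self, key):
        return self._store[key]


n = Node()
n.variable1 = [1, 2]                 # plain __setattr__: new instance attribute
assert n.variable1 == [1, 2]         # the shadow wins for attribute access
assert n["variable1"] == [1, 2, 3]   # the store is unchanged
```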
### MVCE confirmation
- [X] Minimal example — the example is as focused as reasonably possible to demonstrate the underlying issue in xarray.
- [X] Complete example — the example is self-contained, including all data and the text of any traceback.
- [X] Verifiable example — the example copy & pastes into an IPython prompt or [Binder notebook](https://mybinder.org/v2/gh/pydata/xarray/main?urlpath=lab/tree/doc/examples/blank_template.ipynb), returning the result.
- [X] New issue — a search of GitHub Issues suggests this is not a duplicate.
- [X] Recent environment — the issue occurs with the latest version of xarray and its dependencies.
### Relevant log output
_No response_
### Anything else we need to know?
_No response_
### Environment
<details>
INSTALLED VERSIONS
------------------
commit: None
python: 3.13.1 | packaged by Anaconda, Inc. | (main, Dec 11 2024, 17:02:46) [MSC v.1929 64 bit (AMD64)]
python-bits: 64
OS: Windows
OS-release: 11
machine: AMD64
processor: Intel64 Family 6 Model 186 Stepping 3, GenuineIntel
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: ('English_United States', '1252')
libhdf5: None
libnetcdf: None
xarray: 2025.1.0
pandas: 2.2.3
numpy: 2.2.1
scipy: None
netCDF4: None
pydap: None
h5netcdf: None
h5py: None
zarr: None
cftime: None
nc_time_axis: None
iris: None
bottleneck: None
dask: None
distributed: None
matplotlib: None
cartopy: None
seaborn: None
numbagg: None
fsspec: None
cupy: None
pint: None
sparse: None
flox: None
numpy_groupies: None
setuptools: 75.1.0
pip: 24.2
conda: None
pytest: None
mypy: None
IPython: None
sphinx: None
</details>
| open | 2025-01-07T13:33:28Z | 2025-01-09T01:12:30Z | https://github.com/pydata/xarray/issues/9928 | [
"bug",
"topic-DataTree"
] | pepijnvtol | 9 |
learning-at-home/hivemind | asyncio | 199 | Add mixed precision support to ExpertBackend/RemoteExpert | Right now, we don't fully utilize the Tensor Core capabilities of modern NVIDIA GPUs due to making all server-side computations in mixed precision. It might be possible to switch to PyTorch native mixed precision here and improve the performance | open | 2021-03-29T07:24:01Z | 2021-03-29T07:24:01Z | https://github.com/learning-at-home/hivemind/issues/199 | [
"enhancement"
] | mryab | 0 |
coqui-ai/TTS | python | 4,039 | [Bug] inference with mix arabic and english words (or persian and english words) | ### Describe the bug
Consider you have a long text which is arabic or persian. But you have some english words in it. what should you do in inference?
### To Reproduce
Consider you have a long text which is Arabic or Persian. But you have some English words in it. what should you do in inference?
### Expected behavior
generate english in high quality but enver happend
### Logs
_No response_
### Environment
```shell
latest version
```
### Additional context
_No response_ | closed | 2024-10-28T08:11:14Z | 2024-12-28T11:58:20Z | https://github.com/coqui-ai/TTS/issues/4039 | [
"bug",
"wontfix"
] | cod3r0k | 1 |
ghtmtt/DataPlotly | plotly | 126 | Provide sample dataset in plugin | A small dataset to play with could be interesting to provide to the users | closed | 2019-07-06T17:02:55Z | 2019-10-11T13:00:12Z | https://github.com/ghtmtt/DataPlotly/issues/126 | [
"enhancement"
] | ghtmtt | 1 |
joeyespo/grip | flask | 262 | Internal Server Error, HTTPSConnectionPool Error | ```
grip -b
* Using personal access token
* Running on http://localhost:6419/ (Press CTRL+C to quit)
* Error: could not retrieve styles: HTTPSConnectionPool(host='github.com', port=443): Max retries exceeded with url: /joeyespo/grip (Caused by SSLError(SSLError(1, u'[SSL: TLSV1_ALERT_PROTOCOL_VERSION] tlsv1 alert protocol version (_ssl.c:590)'),))
[2018-02-24 10:22:23,435] ERROR in app: Exception on / [GET]
Traceback (most recent call last):
File "/usr/local/Cellar/grip/4.4.0/libexec/vendor/lib/python2.7/site-packages/flask/app.py", line 1982, in wsgi_app
response = self.full_dispatch_request()
File "/usr/local/Cellar/grip/4.4.0/libexec/vendor/lib/python2.7/site-packages/flask/app.py", line 1614, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/usr/local/Cellar/grip/4.4.0/libexec/vendor/lib/python2.7/site-packages/flask/app.py", line 1517, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/usr/local/Cellar/grip/4.4.0/libexec/vendor/lib/python2.7/site-packages/flask/app.py", line 1612, in full_dispatch_request
rv = self.dispatch_request()
File "/usr/local/Cellar/grip/4.4.0/libexec/vendor/lib/python2.7/site-packages/flask/app.py", line 1598, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/usr/local/Cellar/grip/4.4.0/libexec/lib/python2.7/site-packages/grip/app.py", line 174, in _render_page
content = self.renderer.render(text, self.auth)
File "/usr/local/Cellar/grip/4.4.0/libexec/lib/python2.7/site-packages/grip/renderers.py", line 77, in render
r = requests.post(url, headers=headers, data=data, auth=auth)
File "/usr/local/Cellar/grip/4.4.0/libexec/vendor/lib/python2.7/site-packages/requests/api.py", line 112, in post
return request('post', url, data=data, json=json, **kwargs)
File "/usr/local/Cellar/grip/4.4.0/libexec/vendor/lib/python2.7/site-packages/requests/api.py", line 58, in request
return session.request(method=method, url=url, **kwargs)
File "/usr/local/Cellar/grip/4.4.0/libexec/vendor/lib/python2.7/site-packages/requests/sessions.py", line 508, in request
resp = self.send(prep, **send_kwargs)
File "/usr/local/Cellar/grip/4.4.0/libexec/vendor/lib/python2.7/site-packages/requests/sessions.py", line 618, in send
r = adapter.send(request, **kwargs)
File "/usr/local/Cellar/grip/4.4.0/libexec/vendor/lib/python2.7/site-packages/requests/adapters.py", line 506, in send
raise SSLError(e, request=request)
SSLError: HTTPSConnectionPool(host='api.github.com', port=443): Max retries exceeded with url: /markdown/raw (Caused by SSLError(SSLError(1, u'[SSL: TLSV1_ALERT_PROTOCOL_VERSION] tlsv1 alert protocol version (_ssl.c:590)'),))
127.0.0.1 - - [24/Feb/2018 10:22:23] "GET / HTTP/1.1" 500 -
```
The message in the browser:
```
Internal Server Error
The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.
```
I don't think there is any problem with the connection to GitHub.

## version
```
grip -V
Grip 4.4.0
``` | closed | 2018-02-24T02:27:19Z | 2018-03-21T19:33:52Z | https://github.com/joeyespo/grip/issues/262 | [
"question"
] | adoyle-h | 8 |
litestar-org/litestar | asyncio | 3,785 | Bug: `return_dto` with optional nested `Struct` field raises 500 error when no default exists | ### Description
I read through every open bug and couldn't find anything directly comparable (though in general it seems like nested objects in DTOs are a recurring source of issues).
The crux is that I don't want to provide a default for the `None` case of a field because I want to explicitly set it, even if it's `None`. That works fine if the field is a simple type like `str`, but when it's a nested type like a `Struct` of its own, an error appears stating `TypeError: Missing required argument` _even when the argument is provided_.
### URL to code causing the issue
_No response_
### MCVE
```python
from litestar import Litestar, get
from litestar.dto import MsgspecDTO
from msgspec import Struct
class Nested(Struct):
item: str
class IWorkOut(Struct):
foo: Nested | None = None
@get('/works', return_dto=MsgspecDTO[IWorkOut])
async def works() -> IWorkOut:
return IWorkOut(foo=None)
class IFail(Struct):
# the only difference here vs `IWorkOut` above is not including `= None`
# also note that changing `Nested` to a simple type like `str` here makes the error go away
foo: Nested | None
@get('/fails', return_dto=MsgspecDTO[IFail])
async def fails() -> IFail:
return IFail(foo=None)
app = Litestar([works, fails])
```

Worth noting: I'm reporting this here and not with msgspec because just running `IFail(foo=None)` by itself works correctly. The issue appears to only occur because of the `return_dto`.
### Steps to reproduce
```bash
1. Run the app, I use uvicorn: `uvicorn main:app`
2. `GET /works` and it correctly returns a 200 with `{"foo": null}`
3. `GET /fails` and it returns a 500 with `{"status_code":500,"detail":"Internal Server Error"}`
4. See error logging below
```
### Screenshots
_No response_
### Logs
```bash
INFO: 127.0.0.1:49410 - "GET /fails HTTP/1.1" 500 Internal Server Error
ERROR - 2024-10-12 21:39:00,401 - root - example - Missing required argument 'foo'
Traceback (most recent call last):
File "[...]/litestar/middleware/_internal/exceptions/middleware.py", line 159, in __call__
await self.app(scope, receive, capture_response_started)
File "[...]/litestar/_asgi/asgi_router.py", line 100, in __call__
await asgi_app(scope, receive, send)
File "[...]/litestar/routes/http.py", line 80, in handle
response = await self._get_response_for_request(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "[...]/litestar/routes/http.py", line 132, in _get_response_for_request
return await self._call_handler_function(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "[...]/litestar/routes/http.py", line 156, in _call_handler_function
response: ASGIApp = await route_handler.to_response(app=scope["app"], data=response_data, request=request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "[...]/litestar/handlers/http_handlers/base.py", line 555, in to_response
data = return_dto_type(request).data_to_encodable_type(data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "[...]/litestar/dto/base_dto.py", line 119, in data_to_encodable_type
return backend.encode_data(data)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "[...]/litestar/dto/_codegen_backend.py", line 161, in encode_data
return cast("LitestarEncodableType", self._encode_data(data))
^^^^^^^^^^^^^^^^^^^^^^^
File "<string>", line 33, in func
TypeError: Missing required argument 'foo'
```
### Litestar Version
2.12.1
### Platform
- [X] Linux
- [ ] Mac
- [ ] Windows
- [ ] Other (Please specify in the description above) | open | 2024-10-13T02:45:53Z | 2025-03-20T15:54:58Z | https://github.com/litestar-org/litestar/issues/3785 | [
"Bug :bug:"
] | bdoms | 0 |
Yorko/mlcourse.ai | scikit-learn | 654 | mlcourse.ai is down | Hello, is https://mlcourse.ai down? | closed | 2020-02-24T11:30:12Z | 2020-02-24T11:34:50Z | https://github.com/Yorko/mlcourse.ai/issues/654 | [] | vitvakatu | 3 |
cupy/cupy | numpy | 8,451 | BUG: `cupyx.scipy.special.gammainc`: returns finite results with NaN input | ### Description
`cupyx.scipy.special.chdtr` sometimes returns finite results when one of the arguments is NaN. As one might expect, there are similar cases in `chdtrc`, `gammainc`, and `gammaincc`.
### To Reproduce
```py
import cupy as cp
from cupyx.scipy import special
arg = cp.asarray([-cp.inf, -1., 0., 1., cp.inf, cp.nan])
print(special.gammainc(arg, cp.nan)) # [nan nan nan nan 0. nan]
print(special.gammainc(cp.nan, arg)) # [nan nan 0. nan 1. nan]
```
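The behavior the report expects, any NaN argument producing NaN, can be expressed as a simple wrapper. A framework-free sketch of that contract (not CuPy's implementation):

```python
import math

def with_nan_propagation(f):
    # Decorator sketch: force a NaN result whenever any argument is NaN,
    # which is the contract gammainc/gammaincc are expected to follow.
    def wrapped(*args):
        if any(isinstance(a, float) and math.isnan(a) for a in args):
            return math.nan
        return f(*args)
    return wrapped

ratio = with_nan_propagation(lambda a, x: x / (a + x))
assert math.isnan(ratio(math.nan, 1.0))
assert ratio(1.0, 1.0) == 0.5
```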
### Installation
Conda-Forge (`conda install ...`)
### Environment
```
OS : Windows-11-10.0.22631-SP0
Python Version : 3.12.3
CuPy Version : 13.2.0
CuPy Platform : NVIDIA CUDA
NumPy Version : 2.0.0
SciPy Version : 1.15.0.dev0+1322.e584f78
Cython Build Version : 0.29.37
Cython Runtime Version : 3.0.10
CUDA Root : None
nvcc PATH : None
CUDA Build Version : 12050
CUDA Driver Version : 12040
CUDA Runtime Version : 12050 (linked to CuPy) / RuntimeError("CuPy failed to load cudart64_12.dll: FileNotFoundError: Could not find module 'cudart64_12.dll' (or one of its dependencies). Try using the full path with constructor syntax.") (locally installed)
cuBLAS Version : (available)
cuFFT Version : 11203
cuRAND Version : 10306
cuSOLVER Version : (11, 6, 2)
cuSPARSE Version : (available)
NVRTC Version : (12, 5)
Thrust Version : 200200
CUB Build Version : 200200
Jitify Build Version : <unknown>
cuDNN Build Version : None
cuDNN Version : None
NCCL Build Version : None
NCCL Runtime Version : None
cuTENSOR Version : None
cuSPARSELt Build Version : None
Device 0 Name : NVIDIA GeForce RTX 2060 SUPER
Device 0 Compute Capability : 75
Device 0 PCI Bus ID : 0000:01:00.0
```
### Additional Information
scipy/scipy#21317 | open | 2024-08-04T13:35:22Z | 2025-02-06T05:59:52Z | https://github.com/cupy/cupy/issues/8451 | [
"contribution welcome",
"cat:numpy-compat"
] | mdhaber | 2 |
kizniche/Mycodo | automation | 489 | All measurements returned failed CRC | ## Mycodo Issue Report:
- 6.1.2
#### Problem Description
"All measurements returned failed CRC" Shows up in Daemon log for all 3 sensors. Wanna know why. No problem per se. Sensor readings are ok for now. (3 x AM2315 sensors)
- I use fairly long cables (12 meter is the longest) I started with unshielded UTP cat5 with pullup resistors 10k. These gave me errors after some time (NO DATA) After some googling I changed the setup to s-ftp (shielded and foil) CAT6 to counter EMI and 4.7k pullup to decrease the raise time to counter bus capacitance.
### Errors
2018-06-05 14:41:18,879 - mycodo.inputs.am2315_2 - ERROR - All measurements returned failed CRC
2018-06-05 14:46:14,199 - mycodo.inputs.am2315_1 - ERROR - All measurements returned failed CRC
2018-06-05 14:48:18,784 - mycodo.inputs.am2315_2 - ERROR - All measurements returned failed CRC
2018-06-05 15:04:59,157 - mycodo.inputs.am2315_1 - ERROR - All measurements returned failed CRC
2018-06-05 15:05:59,183 - mycodo.inputs.am2315_1 - ERROR - All measurements returned failed CRC
2018-06-05 15:21:25,631 - mycodo.inputs.am2315_3 - ERROR - All measurements returned failed CRC
2018-06-05 15:27:48,884 - mycodo.inputs.am2315_2 - ERROR - All measurements returned failed CRC
2018-06-05 15:30:48,821 - mycodo.inputs.am2315_2 - ERROR - All measurements returned failed CRC
| closed | 2018-06-05T13:40:21Z | 2018-10-13T14:50:02Z | https://github.com/kizniche/Mycodo/issues/489 | [] | Gossen1 | 6 |
aleju/imgaug | deep-learning | 46 | Rotate Affine augment Keypoints | Hi,
I am using sequential augment with only rotate affine transform with -30, 30 range.
Then I wanted to augment keypoints. I did this with the point (100, 100), but the augmented point is not in the correct position. I ran the keypoint augmentation for other sequential non-affine augmentations and they seemed to work fine.
I used the same visualization technique in the README.md with the "draw_on_image" method
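One common cause (worth confirming against the imgaug documentation for your version) is augmenting the image and the keypoints in separate calls: an augmenter with a random range like `Affine(rotate=(-30, 30))` samples a new rotation on every call, so the keypoints get a different rotation than the image. imgaug's remedy is `seq.to_deterministic()`, which freezes the sampled parameters so both calls share them. A framework-free sketch of the mechanism:

```python
import random

class RandomRotate:
    """Toy stand-in for iaa.Affine(rotate=(-30, 30)): a new angle per call."""

    def __init__(self, lo=-30.0, hi=30.0, seed=None):
        self.lo, self.hi = lo, hi
        self.rng = random.Random(seed)

    def sample(self):
        return self.rng.uniform(self.lo, self.hi)

    def to_deterministic(self):
        # Freeze one sampled angle so images and keypoints share it.
        angle = self.sample()
        frozen = RandomRotate(self.lo, self.hi)
        frozen.sample = lambda: angle
        return frozen

aug = RandomRotate(seed=0)
assert aug.sample() != aug.sample()  # separate calls draw different rotations

det = aug.to_deterministic()
assert det.sample() == det.sample()  # frozen calls agree, so points line up
```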
Can you please help me with this. | closed | 2017-07-11T13:10:30Z | 2018-12-04T03:49:39Z | https://github.com/aleju/imgaug/issues/46 | [] | araghava92 | 9 |
AUTOMATIC1111/stable-diffusion-webui | pytorch | 16,896 | same issue, fixed? | same issue, fixed?
_Originally posted by Sensanko52123 in https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/16882#issuecomment-2725201568_
It can be opened now, but it’s not completely working. Currently, I can only open it using the following method:
1. Open Command Prompt as Administrator (press Win + R, type cmd, and then press Ctrl + Shift + Enter).
2. Type cd and navigate to the folder where your sd-web-ui is located.
3. Then type webui-user.bat to launch the program.
I can only open it this way at the moment; clicking directly on the file causes an error.
---
This is the error log when trying to start directly:
venv "C:\stable-diffusion-webui\venv\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: v1.10.1
Commit hash: 82a973c04367123ae98bd9abdf80d9eda9b910e2
Couldn't determine Stable Diffusion XL's hash: 45c443b316737a4ab6e40413d7794a7f5657c19f, attempting autofix...
Fetching all contents for Stable Diffusion XL
warning: safe.directory ''C:/Users/jerem/OneDrive/桌面/工具/stable-diffusion-webui/repositories/stable-diffusion-webui-assets'' not absolute
warning: safe.directory ''C:/Users/jerem/OneDrive/桌面/工具/stable-diffusion-webui/repositories/stable-diffusion-webui-assets'' not absolute
warning: safe.directory ''C:/Users/jerem/OneDrive/桌面/工具/stable-diffusion-webui/repositories/stable-diffusion-webui-assets'' not absolute
warning: safe.directory ''C:/Users/jerem/OneDrive/桌面/工具/stable-diffusion-webui/repositories/stable-diffusion-webui-assets'' not absolute
warning: safe.directory ''C:/Users/jerem/OneDrive/桌面/工具/stable-diffusion-webui/repositories/stable-diffusion-webui-assets'' not absolute
fatal: detected dubious ownership in repository at 'C:/stable-diffusion-webui/repositories/generative-models'
'C:/stable-diffusion-webui/repositories/generative-models' is owned by:
BUILTIN/Administrators (S-1-5-32-544)
but the current user is:
DESKTOP-9PK31L6/jerem (S-1-5-21-2914250175-1236065574-3379173521-1001)
To add an exception for this directory, call:
git config --global --add safe.directory C:/stable-diffusion-webui/repositories/generative-models
Traceback (most recent call last):
File "C:\stable-diffusion-webui\launch.py", line 48, in <module>
main()
File "C:\stable-diffusion-webui\launch.py", line 39, in main
prepare_environment()
File "C:\stable-diffusion-webui\modules\launch_utils.py", line 413, in prepare_environment
git_clone(stable_diffusion_xl_repo, repo_dir('generative-models'), "Stable Diffusion XL", stable_diffusion_xl_commit_hash)
File "C:\stable-diffusion-webui\modules\launch_utils.py", line 178, in git_clone
current_hash = run_git(dir, name, 'rev-parse HEAD', None, f"Couldn't determine {name}'s hash: {commithash}", live=False).strip()
File "C:\stable-diffusion-webui\modules\launch_utils.py", line 166, in run_git
git_fix_workspace(dir, name)
File "C:\stable-diffusion-webui\modules\launch_utils.py", line 153, in git_fix_workspace
run(f'"{git}" -C "{dir}" fetch --refetch --no-auto-gc', f"Fetching all contents for {name}", f"Couldn't fetch {name}", live=True)
File "C:\stable-diffusion-webui\modules\launch_utils.py", line 116, in run
raise RuntimeError("\n".join(error_bits))
RuntimeError: Couldn't fetch Stable Diffusion XL.
Command: "git" -C "C:\stable-diffusion-webui\repositories\generative-models" fetch --refetch --no-auto-gc
Error code: 128
Press any key to continue... | open | 2025-03-16T04:20:54Z | 2025-03-18T17:57:45Z | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/16896 | [] | jeremt001 | 1 |
kymatio/kymatio | numpy | 79 | Sphynx gallery | Hi,
These files:
mnist.py|synthetic.py|real_signal.py|scattering3d_qm7.py
do not pass in sphinx-gallery because there is no docstring. I'm also willing to remove the warning. | closed | 2018-10-30T14:44:24Z | 2018-11-01T16:26:13Z | https://github.com/kymatio/kymatio/issues/79 | [
"bug"
] | edouardoyallon | 4 |
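The missing-docstring check above can be automated: sphinx-gallery requires each example script to start with a module-level docstring (a reST title block), and a small helper, not part of the repo, can flag which of the listed files lack one:

```python
import ast

def has_module_docstring(source: str) -> bool:
    # sphinx-gallery warns on (or fails) example scripts whose module has no
    # docstring; parse the source and check for one.
    return ast.get_docstring(ast.parse(source)) is not None

assert has_module_docstring('"""Title\n=====\n\nDescription."""\nx = 1\n')
assert not has_module_docstring("x = 1\n")
```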
paperless-ngx/paperless-ngx | django | 7,516 | Tasks stuck in queue | ### Description
A few days after starting the webserver, it stops processing all incoming tasks; everything gets stuck in the queue. After I restart the webserver container, all queued tasks are processed.
### Steps to reproduce
No idea
### Webserver logs
```bash
no related logs
```
### Browser logs
_No response_
### Paperless-ngx version
2.11.4
### Host OS
Ubuntu 22.04.3 LTS (GNU/Linux 5.15.0-101-generic x86_64)
### Installation method
Docker - official image
### System status
```json
{
"pngx_version": "2.11.4",
"server_os": "Linux-5.15.0-101-generic-x86_64-with-glibc2.36",
"install_type": "docker",
"storage": {
"total": 103705931776,
"available": 65932038144
},
"database": {
"type": "postgresql",
"url": "paperless",
"status": "OK",
"error": null,
"migration_status": {
"latest_migration": "paperless_mail.0011_remove_mailrule_assign_tag_squashed_0024_alter_mailrule_name_and_more",
"unapplied_migrations": []
}
},
"tasks": {
"redis_url": "redis://broker:6379",
"redis_status": "OK",
"redis_error": null,
"celery_status": "OK",
"index_status": "OK",
"index_last_modified": "2024-08-22T22:31:43.030938+02:00",
"index_error": null,
"classifier_status": "OK",
"classifier_last_trained": "2024-08-22T20:15:06.884425Z",
"classifier_error": null
}
}
```
### Browser
_No response_
### Configuration changes
_No response_
### Please confirm the following
- [X] I believe this issue is a bug that affects all users of Paperless-ngx, not something specific to my installation.
- [X] I have already searched for relevant existing issues and discussions before opening this report.
- [X] I have updated the title field above with a concise description. | closed | 2024-08-22T20:19:04Z | 2024-08-22T20:59:22Z | https://github.com/paperless-ngx/paperless-ngx/issues/7516 | [
"not a bug"
] | thumDer | 2 |
scikit-learn-contrib/metric-learn | scikit-learn | 287 | MarginLoss produces incorrect result when both num_pos_pairs and num_neg_pairs are 0 | It may happen, that late in the training process both positive and negative distances in all triplets in some batch fall below thresholds. In such case MarginLoss produces very large loss (like 997187911680.0). This breaks computation of batch statistics, such as a mean loss per batch, which I include in my code.
I looked into MarginLoss code and this is because when pair_count (number of active pairs) becomes zero, total loss is equal to beta_reg_loss divided by 1e-16. Which produces a very large number. This is done by below piece of code:
```
pair_count = self.num_pos_pairs + self.num_neg_pairs
return (torch.sum(pos_loss + neg_loss) + beta_reg_loss) / (pair_count + 1e-16)
```
When pair_count is zero, I think it's more logical to return a loss equal to zero, not some very large number. Maybe the MarginLoss code could be amended to something like:
```
return (torch.sum(pos_loss + neg_loss) + beta_reg_loss) / (max(pair_count,1))
```
This would prevent returning a very large loss when pair_count is zero. Or maybe more appropriate would be:
```
if pair_count >= 1:
return (torch.sum(pos_loss + neg_loss) + beta_reg_loss) / pair_count
else:
return ...grad enabled tensor set to zero...
```
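The proposed guard can be exercised outside PyTorch; a framework-free sketch of the safe division (in the real loss the zero-pair branch must still return a gradient-enabled tensor rather than a Python float):

```python
def mean_pair_loss(total_loss: float, pair_count: int) -> float:
    # Divide by the number of active pairs, but return zero instead of an
    # astronomically large value when no pairs are active.
    if pair_count >= 1:
        return total_loss / pair_count
    return 0.0

assert mean_pair_loss(6.0, 3) == 2.0
assert mean_pair_loss(0.5, 0) == 0.0  # was ~5e15 with the 1e-16 denominator
```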
| closed | 2020-05-14T14:04:14Z | 2020-05-14T15:57:25Z | https://github.com/scikit-learn-contrib/metric-learn/issues/287 | [] | jac99 | 2 |
Josh-XT/AGiXT | automation | 859 | command "google search" not show | ### Description
When I set GOOGLE_API_KEY , "google search" command not show in command setting list
### Steps to Reproduce the Bug
1. Click Agent Setting Page
2. Add new Agent "xxx"
3. set GOOGLE_API_KEY in Setting Page
4. find "google search" command
### Expected Behavior
"google search " in setting command list, can select
### Operating System
- [ ] Linux
- [ ] Microsoft Windows
- [X] Apple MacOS
- [ ] Android
- [ ] iOS
- [ ] Other
### Python Version
- [ ] Python <= 3.9
- [X] Python 3.10
- [ ] Python 3.11
### Environment Type - Connection
- [X] Local - You run AGiXT in your home network
- [ ] Remote - You access AGiXT through the internet
### Runtime environment
- [X] Using docker compose
- [ ] Using local
- [ ] Custom setup (please describe above!)
### Acknowledgements
- [X] I have searched the existing issues to make sure this bug has not been reported yet.
- [X] I am using the latest version of AGiXT.
- [X] I have provided enough information for the maintainers to reproduce and diagnose the issue. | closed | 2023-07-25T10:16:34Z | 2023-08-25T00:55:09Z | https://github.com/Josh-XT/AGiXT/issues/859 | [
"type | report | bug",
"needs triage"
] | ychy00001 | 1 |
ultralytics/ultralytics | machine-learning | 18,739 | How can I add a loss function, other than the bbox and classification loss, to the YOLOv8 model? | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/ultralytics/ultralytics/discussions) and found no similar questions.
### Question
I have redesigned the backbone of YOLOv8, and for this, I need to add a specific loss function to a certain layer of the backbone. How should I handle this?
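Not an Ultralytics-specific answer, but the common pattern is: capture the intermediate feature from the chosen backbone layer (for instance via a PyTorch forward hook), compute the extra term from it, and add it to the existing box/classification losses with a weight; in Ultralytics this typically means extending the criterion the trainer uses, so consult the version you are running. The combination step, sketched free of any framework (names illustrative):

```python
def combine_losses(box_loss: float, cls_loss: float,
                   aux_loss: float, aux_weight: float = 0.5) -> float:
    # Total objective: the standard detection losses plus a weighted
    # auxiliary term computed from an intermediate backbone layer.
    return box_loss + cls_loss + aux_weight * aux_loss

assert combine_losses(1.0, 2.0, 4.0) == 5.0  # 1 + 2 + 0.5 * 4
```

The weight balances the auxiliary objective against detection quality and usually needs tuning.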
### Additional
_No response_ | open | 2025-01-17T14:59:21Z | 2025-03-24T14:42:56Z | https://github.com/ultralytics/ultralytics/issues/18739 | [
"enhancement",
"question"
] | CC-1997 | 27 |
developmentseed/lonboard | data-visualization | 275 | Trouble Displaying PathLayer | I'm trying to display some LineStrings and having an issue where the map appears but there's nothing on it.
```
import geopandas as gpd
import shapely.geometry
from lonboard import Map, PathLayer
gdf = gpd.GeoDataFrame(
{
"geometry": [
shapely.geometry.LineString(shapely.geometry.box(0.0, 0.0, 5.0, 5.0).exterior),
shapely.geometry.LineString(shapely.geometry.box(40., 40., 45., 45.).exterior)
],
"id": [0, 1],
}
)
layer = PathLayer.from_geopandas(
gdf,
get_color=[255, 0, 0],
)
map_ = Map(layers=[layer])
map_
```
will display a map zoomed to the right bounds, but there are no lines displayed. I've tried making simpler lines rather than box exteriors and other color/alpha values but haven't gotten anything to display correctly.
<img width="1241" alt="Screenshot 2023-11-29 at 4 48 51 PM" src="https://github.com/developmentseed/lonboard/assets/1214350/76b9b561-515c-445f-933c-a63adee5fa2b">
If I create polygons it works fine:
```
import geopandas as gpd
import shapely.geometry
from lonboard import Map, SolidPolygonLayer
gdf = gpd.GeoDataFrame(
{
"geometry": [
shapely.geometry.box(0.0, 0.0, 5.0, 5.0),
shapely.geometry.box(40., 40., 45., 45.),
],
"id": [0, 1],
}
)
layer = SolidPolygonLayer.from_geopandas(
gdf,
get_fill_color=[255, 0, 0],
)
map_ = Map(layers=[layer])
map_
```
<img width="1253" alt="Screenshot 2023-11-29 at 4 47 25 PM" src="https://github.com/developmentseed/lonboard/assets/1214350/57cfc11b-f242-4552-9285-781cd29a3705">
```
>>> import lonboard
>>> lonboard.__version__
'0.4.2'
``` | closed | 2023-11-29T21:51:47Z | 2023-11-30T01:46:44Z | https://github.com/developmentseed/lonboard/issues/275 | [] | jwass | 6 |
scrapy/scrapy | python | 6,204 | PyPy tests fail | Some of the `tests/test_feedexport.py::::BatchDeliveriesTest` tests fail on both PyPy envs, with "still running at 120.0 secs". | closed | 2024-01-12T12:41:43Z | 2024-01-12T17:50:42Z | https://github.com/scrapy/scrapy/issues/6204 | [
"bug",
"CI"
] | wRAR | 1 |
nidhaloff/igel | scikit-learn | 77 | Add CNN support |
### Description
We want igel to support CNNs. @Prasanna28Devadiga will work on this ;)
My suggestion would be just to write CNN in the algorithm field in order to use a CNN model. Here is an example to illustrate what I mean:
```
model:
type: classification
algorithm: CNN # here just CNN insteead of NeuralNetClassifier. Everyone will recognize this because sklearn supports no CNN models.
```
Anyone is free to join and share ideas in the comments or even better [here is a separate discussion for this issue](https://github.com/nidhaloff/igel/discussions/76). | closed | 2021-08-22T09:30:11Z | 2021-09-11T19:22:23Z | https://github.com/nidhaloff/igel/issues/77 | [
"enhancement",
"contribution",
"feature",
"discussion"
] | nidhaloff | 2 |
nalepae/pandarallel | pandas | 141 | ValueError: Number of processes must be at least 1 | [6887] Failed to execute script sbp
Traceback (most recent call last):
File "sbp.py", line 565, in <module>
File "pandarallel/pandarallel.py", line 447, in closure
File "multiprocessing/context.py", line 119, in Pool
File "multiprocessing/pool.py", line 169, in __init__
ValueError: Number of processes must be at least 1 | closed | 2021-04-23T07:45:14Z | 2024-02-16T08:07:36Z | https://github.com/nalepae/pandarallel/issues/141 | [] | ztsweet | 11 |
Esri/arcgis-python-api | jupyter | 2,054 | Error in spatial.to_featurelayer: Temp zip file being used by another process | *Describe the bug*
When using spatial.to_featurelayer on a moderately large (62,000 rows, 93 columns) spatially enabled data frame, I'm getting the following error, and the feature layer is never created.
```
C:\Users\aheinlei\work\mi-flora-scripting\to_featurelayer_bug_demo.py:9: DtypeWarning: Columns (16,17,19,20,21,22,23,26,30,31,32,52,54,57,61) have mixed types. Specify dtype option on import or set low_memory=False.
df = pd.read_csv('demo_data_62k.csv')
Traceback (most recent call last):
File "C:\Program Files\JetBrains\PyCharm 2023.3.2\plugins\python\helpers\pydev\pydevd.py", line 1534, in _exec
pydev_imports.execfile(file, globals, locals) # execute the script
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Program Files\JetBrains\PyCharm 2023.3.2\plugins\python\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "C:\Users\aheinlei\work\mi-flora-scripting\to_featurelayer_bug_demo.py", line 109, in <module>
fl = spatial_df.spatial.to_featurelayer(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\aheinlei\AppData\Local\ESRI\conda\envs\miflora_3_3_1\Lib\site-packages\arcgis\features\geo\_accessor.py", line 2912, in to_featurelayer
result = content.import_data(
^^^^^^^^^^^^^^^^^^^^
File "C:\Users\aheinlei\AppData\Local\ESRI\conda\envs\miflora_3_3_1\Lib\site-packages\arcgis\gis\__init__.py", line 8591, in import_data
return _cm_helper.import_as_item(self._gis, df, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\aheinlei\AppData\Local\ESRI\conda\envs\miflora_3_3_1\Lib\site-packages\arcgis\gis\_impl\_content_manager\_import_data.py", line 247, in import_as_item
file_item, new_item = _create_file_item(gis, df, file_type, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\aheinlei\AppData\Local\ESRI\conda\envs\miflora_3_3_1\Lib\site-packages\arcgis\gis\_impl\_content_manager\_import_data.py", line 156, in _create_file_item
os.remove(file)
PermissionError: [WinError 32] The process cannot access the file because it is being used by another process: 'C:\\Users\\aheinlei\\AppData\\Local\\Temp\\a694de98\\a86051.zip'
python-BaseException
```
*To Reproduce*
Run the python script attached below (file extension changed to .txt to allow uploading). I'd rather not post the data file here publicly, so if someone from ESRI takes a look at this, feel free to contact me and I'll email it. I've tried making a dummy file to post here instead of the actual one, but nothing I've generated so far has been able to cause this error. Also note that the file originally had 220k+ lines; I'm able to reproduce the error with a subset of this file that's 62k lines or more, but once I start making files smaller than that, the error doesn't occur. This is independent of which portion of the records I use to create the smaller files.
[to_featurelayer_bug_demo.txt](https://github.com/user-attachments/files/17049597/to_featurelayer_bug_demo.txt)
*Expected behavior*
I expect to_featurelayer() to create my feature layer without throwing an error.
*Platform (please complete the following information):*
- OS: Windows
- Python API Version 2.3.1
*Additional context*
This error started occurring after Python API version 2.0.1. (Until I recently upgraded to ArcGIS Pro 3.3.1, I was specifically installing Python API version 2.0.1 instead of more recent versions because it would successfully run my script without throwing this error.) | closed | 2024-09-18T19:55:31Z | 2025-01-08T07:54:17Z | https://github.com/Esri/arcgis-python-api/issues/2054 | [
"bug"
] | roelofsaj | 2 |
bmoscon/cryptofeed | asyncio | 291 | Finer grained resets in response to sequence number misses on Coinbase | **Is your feature request related to a problem? Please describe.**
Currently, a reset and book snapshot is triggered for *all* pairs in a Coinbase feed if *any* of the pairs encounters a missing sequence number. Instead, what might be better is to only trigger the reset and subsequent REST book snapshot request for the pair(s) that had sequence number misses.
**Describe the solution you'd like**
In coinbase.py, change `__reset()`, `_book_snapshot()`, and the code that acts on missing sequence numbers to work on a per-pair basis.
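Purely to illustrate the per-pair bookkeeping (the class and method names below are made up for this sketch, not cryptofeed APIs; the real change would live in coinbase.py's sequence-number handling):

```python
# Hypothetical sketch of per-pair sequence tracking; SequenceTracker and
# its methods are illustrative names, not part of cryptofeed.
class SequenceTracker:
    def __init__(self):
        self.last_seq = {}        # pair -> last seen sequence number
        self.out_of_sync = set()  # pairs that missed a message

    def observe(self, pair, seq):
        last = self.last_seq.get(pair)
        if last is not None and seq != last + 1:
            # Only this pair needs a reset + REST snapshot,
            # not every pair on the feed.
            self.out_of_sync.add(pair)
        self.last_seq[pair] = seq

    def pairs_needing_snapshot(self):
        stale = self.out_of_sync
        self.out_of_sync = set()
        return stale
```

On a miss, only the pairs returned by `pairs_needing_snapshot()` would get a reset and a REST snapshot request, leaving the other pairs' books untouched.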
**Describe alternatives you've considered**
This could be coded around by adding one Coinbase feed per pair to the feedhandler, or splitting pairs up into many feeds (e.g. 3 pairs per feed). But, this has downsides: it will open some number of websocket connections to Coinbase per pair, which may be too resource intensive if many pairs need to be subscribed to. Nonetheless, it is a valid solution. However, I don't see any downsides to the suggested improvement above.
**Additional context**
One potential consideration might be that sequence number misses are correlated in time, and upon missing one sequence number in one pair, others will be missed in other coins as well; therefore, resetting and requesting snapshots for all pairs in a feed would be the safe decision. This will need to be tested after any coding is done for this improvement.
I'm willing to do testing and coding on this. | closed | 2020-09-01T03:10:39Z | 2020-12-30T02:41:30Z | https://github.com/bmoscon/cryptofeed/issues/291 | [
"help wanted",
"Feature Request"
] | zahnz | 3 |
xinntao/Real-ESRGAN | pytorch | 330 | Problem: output images are entirely black | **1. Runtime environment:**
OS: Windows 11 Pro 21H2
CPU: Intel(R) Core(TM) i5-9400F CPU @ 2.90GHz 2.90 GHz
GPU: NVIDIA GeForce RTX 1660Ti
```
(gfpgan) > nvidia-smi
Sat May 14 16:53:18 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 512.59 Driver Version: 512.59 CUDA Version: 11.6 |
|-------------------------------+----------------------+----------------------+
| GPU Name TCC/WDDM | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 NVIDIA GeForce ... WDDM | 00000000:01:00.0 Off | N/A |
| 34% 30C P8 18W / 130W | 509MiB / 6144MiB | 2% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| 0 N/A N/A 576 C+G ...n1h2txyewy\SearchHost.exe N/A |
| 0 N/A N/A 1552 C+G C:\Windows\System32\dwm.exe N/A |
| 0 N/A N/A 3100 C+G C:\Windows\explorer.exe N/A |
| 0 N/A N/A 3376 C+G ...artMenuExperienceHost.exe N/A |
| 0 N/A N/A 3896 C+G ...8bbwe\WindowsTerminal.exe N/A |
| 0 N/A N/A 4120 C+G C:\Windows\System32\dwm.exe N/A |
| 0 N/A N/A 5608 C+G ...y\ShellExperienceHost.exe N/A |
| 0 N/A N/A 6516 C+G ...dows\System32\LogonUI.exe N/A |
| 0 N/A N/A 6636 C+G ...ge\Application\msedge.exe N/A |
| 0 N/A N/A 7324 C+G ...2txyewy\TextInputHost.exe N/A |
| 0 N/A N/A 8444 C+G ...210.39\msedgewebview2.exe N/A |
| 0 N/A N/A 10088 C+G ...ows\System32\WUDFHost.exe N/A |
| 0 N/A N/A 10160 C+G ...lPanel\SystemSettings.exe N/A |
| 0 N/A N/A 10424 C+G ...ekyb3d8bbwe\YourPhone.exe N/A |
| 0 N/A N/A 12744 C+G ...210.39\msedgewebview2.exe N/A |
+-----------------------------------------------------------------------------+
```
```
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2022 NVIDIA Corporation
Built on Tue_Mar__8_18:36:24_Pacific_Standard_Time_2022
Cuda compilation tools, release 11.6, V11.6.124
Build cuda_11.6.r11.6/compiler.31057947_0
```
**2. Related software**
Anaconda 4.10.3
Python 3.8.13
pytorch 1.11.0
**3. What I did**
At first I used the [GFPGAN](https://github.com/TencentARC/GFPGAN) project. I installed it following the tutorial and tested it with the example command (`python inference_gfpgan.py -i inputs/whole_imgs -o results -v 1.3 -s 2`). Face cropping and enhancement worked, but the output restored_imgs contained only the faces; the background was entirely black.
After disabling `-bg_upsampler` (by specifying a nonexistent upsampler), the images in restored_imgs came out normally with the faces enhanced, but the background remained blurry (as expected, since no background enhancement was used). This confirmed that realesrgan was not running correctly.
I downloaded the portable build of [Real-ESRGAN](https://github.com/xinntao/Real-ESRGAN); it worked fine and never produced black images.
I then cloned this project's code, installed it per the tutorial, and tested with the example command (`python inference_realesrgan.py -n RealESRGAN_x4plus -i inputs --face_enhance`); the output images were entirely black.
To rule out an operating-system issue, I installed Ubuntu 20.04.4 LTS under WSL2 and repeated the test above; the problem was the same.
I searched the issues but found nothing related, and hope to get some help.
Thank you very much; these projects are fantastic!
| closed | 2022-05-14T09:09:34Z | 2023-09-26T03:07:13Z | https://github.com/xinntao/Real-ESRGAN/issues/330 | [] | rayeesoft | 10 |
Johnserf-Seed/TikTokDownload | api | 263 | [BUG] Crashing since today; both old and new versions crash | **Describe the bug**
Today the Tiktoktool main program started crashing frequently: sometimes right after opening, sometimes after parsing the username, and sometimes after downloading a few items.
**Reproducing the bug**
Steps to reproduce the behavior:
1. Paste the user profile link into the ini file
2. Open the main program
3. It crashes frequently at random, with no error message
An error message like this also appears:

**Desktop (please complete the following information):**
- OS: Windows 10 64-bit
- Tried with the VPN proxy both on and off; it crashes either way.
- Version 1.3.0.33
**Additional context**
[2022-12-06_231936.log](https://github.com/Johnserf-Seed/TikTokDownload/files/10167744/2022-12-06_231936.log)
[2022-12-06_231931.log](https://github.com/Johnserf-Seed/TikTokDownload/files/10167745/2022-12-06_231931.log)
| closed | 2022-12-06T14:42:06Z | 2023-01-14T12:14:29Z | https://github.com/Johnserf-Seed/TikTokDownload/issues/263 | [
"故障(bug)",
"额外求助(help wanted)",
"无效(invalid)"
] | zhiben5201 | 4 |
jupyterhub/repo2docker | jupyter | 783 | Utility to print which buildpack would be triggered |
### Proposed change
Following on from https://github.com/jupyter/repo2docker/pull/635#issuecomment-488094383: it would be useful to have a new script called `repo2docker-detect` that takes the same arguments as `repo2docker` does and prints out the name of the picked build pack.
### Alternative options
Users can already parse/grep the output of `repo2docker <somerepo>` for the build pack name. However, it is useful not to have to grep through the output and instead have a dedicated tool.
### Who would use this feature?
People who want to detect if a repo contains configuration files that trigger a build so that they can either build that image or use a default image to run the repo.
### How much effort will adding it take?
Adding a new entry point, some tests and documentation is probably half a day to a day's work.
### Who can do this work?
To work on this you need knowledge of Python. | open | 2019-09-07T12:26:08Z | 2019-09-07T12:26:08Z | https://github.com/jupyterhub/repo2docker/issues/783 | [
"needs: discussion"
] | betatim | 0 |
akfamily/akshare | data-science | 5,346 | Please add an interface for fetching BSE 50 index data |
After reading the documentation and checking the source webpage, I found that only two major indices can be fetched, the SSE Composite Index and the SZSE Component Index; there is no interface for the BSE 50 (北证50).
On the webpage [link](https://quote.eastmoney.com/center/hszs.html) you can see

that the SSE Composite Index, the SZSE Component Index, and the BSE 50 are the first three indices listed.
Please add an interface for fetching the BSE 50. Thank you.
If such an interface already exists, please point it out. | closed | 2024-11-18T16:19:17Z | 2024-11-19T12:26:25Z | https://github.com/akfamily/akshare/issues/5346 | [
"bug"
] | simon-yanghl | 2 |
newpanjing/simpleui | django | 354 | Incorrect CSS background style | **Bug description**
* When using django-ckeditor, the backgrounds of the pop-up panels in the admin are broken, as shown below
* 
**Steps to reproduce**
1. Install the django-ckeditor plugin
2. Use ckeditor_uploader.fields.RichTextUploadingField in a model and register that model in admin
3. Edit an object in the admin, open the editor's image-upload panel, and hover the mouse over it
**Cause of the error**
Lines 49-52 of `static/admin/simpleui-x/theme/green.css`:
https://github.com/newpanjing/simpleui/blob/master/simpleui/static/admin/simpleui-x/theme/green.css#L49-L52
**Fix**
Targeted fix: inject the following CSS:
```css
.cke_reset_all tbody tr:hover td,
.cke_reset_all tbody tr:hover th {
background-color: white !important;
}
```
通用修复方法:css加入类名选择
| closed | 2021-03-30T12:39:01Z | 2021-05-11T08:50:55Z | https://github.com/newpanjing/simpleui/issues/354 | [
"bug"
] | cxgreat2014 | 1 |
saulpw/visidata | pandas | 2,523 | Precision adjustments on date columns | I'm new to the `alt`,`-`/`+` command for the precision of shown/formatted data.
Unfortunately, when I derive a column from `created_utc`, which is a string of epoch seconds, `+/-` does weird things...
1. `=date(created_utc)` seems to work, but `alt+-` turns it back into seconds with ".00" precision. I can't find much documentation on this built-in `date` function; is there a list of them?
2. If I use `=dt.datetime.fromtimestamp(float(created_utc))` the type for the new column is blank in the column sheet. If I `alt+-` I get "TypeError: float() argument must be a string or a real number, not 'datetime.datetime'". If I convert it to date type within vd via `@` I get a column of ".1f"s. (It does work if immediately upon creation I remember to set the new column type to `@`.)
https://asciinema.org/a/xoUKGRryAEotb1BeoklpR6gBM
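For reference, here is the plain-Python equivalent of the conversion being attempted (run outside vd; it assumes `created_utc` holds epoch seconds as strings, as in the sample data):

```python
# Minimal illustration of epoch-seconds-as-string -> datetime; this is
# plain Python, not visidata's internal `date` type.
from datetime import datetime, timezone

def parse_epoch(s):
    # created_utc values are strings of epoch seconds
    return datetime.fromtimestamp(float(s), tz=timezone.utc)

d = parse_epoch("1295471619")  # first row of the sample data
```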
```csv
created_utc,title
1295471619,[SSPSF] says thanks!
1295471580,Introducing: Bad Influence Puppy
1295470777,[TLTNP] couldn't care less
1295470700,[AGOC] red light district
1295469892,[PPP] Relax!
1295469265,hypochondriac hello kitty cupcake
1295468961,[HHKC] saw a movie
1295468543,There's one in every town.
1295468031,[IT] hate it when i do this
1295467821,[DHPF] just got his report card
1295467664,[UAA] whatcha doin' there?
1295465577,[TLTNP] says fuck it
1295465440,unsolicited advice algae
1295465204,analysis at a glance omnipresent cloud
1295460404,Procreation has its benefits
1295453664,[HHKC] pins and needles
1295452894,[TLTNP] is bored
1295452829,shameless self promotion stick figure
1295452795,i guess i'm shameless self promotion stick figure
1295452615,oh wait... you're here already...
1295445352,[KHPDG] watch it buddy!
1295445158,kidding herself pole dancer giraffe
1295438674,[HHKC] should quit smoking
1295438031,hypochondriac hello kitty cupcake
1295415002,bruised thumb... credit to thehalfwit for the pic
1295414667,barely legible blimp
1295413341,peer pressure pachyderm
1295411794,shady drug dealer red panda
1295410970,[TLTNP] is hungry
```
| closed | 2024-09-20T12:07:35Z | 2024-10-02T12:45:25Z | https://github.com/saulpw/visidata/issues/2523 | [
"bug",
"fixed",
"By Design"
] | reagle | 4 |
influxdata/influxdb-client-python | jupyter | 444 | Python access to InfluxDB parameters via JSON file |
__Proposal:__
InfluxDB connection parameters are unnecessarily difficult to determine and end up pasted into code as magic numbers.
InfluxDB could save all those parameters to a JSON file, allowing them to be reused across multiple use cases and datasets.
Then access to the correct parameters for a given dataset comes from dictionary items.
__Current behavior:__
```python
url = "http://localhost:8086"  # must find and then paste these magic numbers into my code here
token = "my-token"  # token generated online and pasted here
bucket = "my-bucket"
org = "my-org"

with InfluxDBClient(url=url, token=token, org=org) as client:
    ...  # (more code here)
```
__Desired behavior:__
```python
import json

jfile = open('stock_data_rslippert.json')
my = json.load(jfile)

with InfluxDBClient(url=my['influx_url'], token=my['influx_token'], org=my['influx_org']) as client:
    ...  # (more code here)
```
__Alternatives considered:__
There would need to be some other way to automate collection of the parameters without magic numbers.
__Use case:__
Rather than searching for and copy-pasting parameters as magic numbers for many different use cases, use a method similar to generating a token, but have all the needed parameters written to an easy-to-use JSON file.
This simplifies access to the correct parameters, i.e. it automates the access for many different use cases.
This is useful because engineers will be working on many different datasets, but the code should stay the same.
So rather than pasting in magic numbers, access to the data is automatic: just provide a JSON filename as a reference.
The file.json would look like this:
```json
{
    "influx_bucket": "rslippert's Bucket",
    "influx_org": "d9ae8eef6f1bd6da",
    "influx_token": "E3zGsMqgMrDYthxjfT921FXhipsOfGOIOxpcfvn61SnBE4D1mdgYKF5aYefaCbroy0UpBIsOEEMZr2XM96dtFg==",
    "influx_url": "https://us-east-1-1.aws.cloud2.influxdata.com",
    "timezone": "EST",
    "influxdb_version": "1.0"
}
```
"enhancement"
] | rslippert | 10 |
OpenBB-finance/OpenBB | machine-learning | 6,923 | [🕹️] Star eyed supporter | ### What side quest or challenge are you solving?
Get five friends to star
### Points
150
### Description
_No response_
### Provide proof that you've completed the task





| closed | 2024-10-31T10:30:33Z | 2024-11-02T07:40:39Z | https://github.com/OpenBB-finance/OpenBB/issues/6923 | [] | Dehelaan | 1 |
Miksus/rocketry | pydantic | 187 | Scheduler doesn't run after_success(func) after some time. | When I start the scheduler, it runs stably for some time, but without any message, the after_success(func) task stops running.
Only the periodic timer functions are executed.
**Screenshots from the DB log**

The Scheduler implementation:
```python
sec_db_engine = create_engine(db_settings.DB_URL, pool_pre_ping=True)

app_scheduler = Rocketry(
    execution="async",
    # logger_repo=MemoryRepo(),
    logger_repo=SQLRepo(engine=sec_db_engine, table="scheduler_log", model=MinimalRunRecord, id_field="id"),
    config={
        "task_execution": "async",
        'silence_task_prerun': False,
        'silence_task_logging': False,
        'silence_cond_check': False,
        "force_status_from_logs": True
    }
)

from .tasks import scheduler
app_scheduler.include_grouper(scheduler)
```
The Task implementation
```python
scheduler = Grouper()


@scheduler.task(
    minutely,
    execution="main",
)
def check_starting_condition():
    if check_first() and check_second():
        logging.info(f"Check starting condition passed!")
        return "OK"
    else:
        raise Exception("Im not allowed to run the Taks")


@scheduler.task(after_fail(check_starting_condition),
                execution="main")
def failed_check():
    logging.info("The report_check failed!")


@scheduler.task(after_success(check_starting_condition),
                execution="async")
async def main_exc_function():
    # an async function call with aiohttp
    return some_stuff
```
I integrated it along with Fastapi
```python
class Server(uvicorn.Server):
    """Customized uvicorn.Server

    Uvicorn server overrides signals and we need to include
    Rocketry to the signals."""

    def handle_exit(self, sig: int, frame) -> None:
        app_scheduler.session.shut_down()
        return super().handle_exit(sig, frame)


async def run():
    server = Server(config=uvicorn.Config(
        app,
        loop="asyncio",
        host=server_settings.SERVER_HOST,
        port=server_settings.SERVER_PORT,
        reload=server_settings.SERVER_RELOAD,
        workers=server_settings.SERVER_WORKER,
        timeout_keep_alive=server_settings.SERVER_TIME_OUT,
        access_log=False))

    api = asyncio.create_task(server.serve())
    sched = asyncio.create_task(app_scheduler.serve())
    await asyncio.wait([sched, api])


if __name__ == "__main__":
    asyncio.run(run())
``` | open | 2023-01-30T13:02:40Z | 2023-02-07T12:22:25Z | https://github.com/Miksus/rocketry/issues/187 | [
"bug"
] | everthought | 2 |
flaskbb/flaskbb | flask | 467 | Banned user pagination goes to normal user page | If there are enough banned users to take up more than one page, the pagination links at the bottom go to the "normal" user page. | closed | 2018-05-12T08:16:07Z | 2018-05-23T20:02:25Z | https://github.com/flaskbb/flaskbb/issues/467 | [
"bug"
] | gordonjcp | 0 |
kennethreitz/responder | flask | 340 | @kenneth-reitz @mmanhertz | @kenneth-reitz @mmanhertz
I'm using APISpec version 1.1.0 and still receive the same error as above. Appreciate your help.
```
[2019-03-29 19:27:37 +1100] [33025] [ERROR] Exception in worker process
Traceback (most recent call last):
File "/anaconda3/envs/proton_py3/lib/python3.7/site-packages/gunicorn/arbiter.py", line 583, in spawn_worker
worker.init_process()
File "/anaconda3/envs/proton_py3/lib/python3.7/site-packages/gunicorn/workers/base.py", line 129, in init_process
self.load_wsgi()
File "/anaconda3/envs/proton_py3/lib/python3.7/site-packages/gunicorn/workers/base.py", line 138, in load_wsgi
self.wsgi = self.app.wsgi()
File "/anaconda3/envs/proton_py3/lib/python3.7/site-packages/gunicorn/app/base.py", line 67, in wsgi
self.callable = self.load()
File "/anaconda3/envs/proton_py3/lib/python3.7/site-packages/gunicorn/app/wsgiapp.py", line 52, in load
return self.load_wsgiapp()
File "/anaconda3/envs/proton_py3/lib/python3.7/site-packages/gunicorn/app/wsgiapp.py", line 41, in load_wsgiapp
return util.import_app(self.app_uri)
File "/anaconda3/envs/proton_py3/lib/python3.7/site-packages/gunicorn/util.py", line 350, in import_app
__import__(module)
File "/Users/bkp2/projects/personal/PROTON/main.py", line 85, in <module>
spec.add_path(resource= rc_Ictrl_get_schema_information)
AttributeError: 'APISpec' object has no attribute 'add_path'
```
_Originally posted by @PruthviKumarBK in https://github.com/kennethreitz/responder/issues/212#issuecomment-477912196_ | closed | 2019-03-29T08:31:06Z | 2019-04-21T02:32:54Z | https://github.com/kennethreitz/responder/issues/340 | [
"question"
] | 1x-eng | 2 |
yinkaisheng/Python-UIAutomation-for-Windows | automation | 144 | Is there a way to access the controls of Electron-packaged applications on Linux? | On Windows, uiautomation can access the controls of Electron-packaged applications. Is there any way to achieve this on Linux? | open | 2020-12-21T07:46:18Z | 2021-05-11T05:06:47Z | https://github.com/yinkaisheng/Python-UIAutomation-for-Windows/issues/144 | [
"question"
] | zhouxihong1 | 1 |
tfranzel/drf-spectacular | rest-api | 604 | Could not resolve request body even though it is defined | **Describe the bug**
We expect the parameters to be well defined in the generated schema; however, a warning is thrown instead and the resulting schema is wrong.
**To Reproduce**
```py
@extend_schema(
    request=[
        OpenApiParameter(name='username', type=str, location=OpenApiParameter.QUERY, required=True),
        OpenApiParameter(name='password', type=str, location=OpenApiParameter.QUERY, required=True),
    ],
    responses={status.HTTP_200_OK: PrivateUserSerializer, status.HTTP_400_BAD_REQUEST: None}
)
@action(detail=False, url_path='login', methods=['post'], permission_classes=[permissions.AllowAny])
def login(self, request):
    # ...
```
**Expected behavior**
I expect no warning and correct schema generation. Instead, this is what is generated, together with the console output:
```sh
Warning #0: UserViewSet: could not resolve request body for POST /api/users/login/. Defaulting to generic free-form object. (Maybe annotate a Serializer class?)
```
```yml
/api/users/login/:
  post:
    operationId: api_users_login_create
    tags:
    - api
    requestBody:
      content:
        application/json:
          schema:
            type: object
            additionalProperties: {}
      description: Unspecified request body
```
| closed | 2021-11-24T07:47:43Z | 2021-11-25T09:33:29Z | https://github.com/tfranzel/drf-spectacular/issues/604 | [] | mourad1081 | 3 |
lucidrains/vit-pytorch | computer-vision | 279 | vit_pytorch -> cross_vit.py(mistake) | 
| closed | 2023-09-10T16:01:53Z | 2023-09-10T16:33:28Z | https://github.com/lucidrains/vit-pytorch/issues/279 | [] | RufusRubin | 1 |
BeanieODM/beanie | asyncio | 731 | [BUG] Can not create dbref without id | **Describe the bug**
Trying out the Link demos but getting an exception: "Can not create dbref without id".
**To Reproduce**
```python
class Door(Document):
    class Settings:
        name = "door"

    height: int = 2
    width: int = 1


class Window(Document):
    class Settings:
        name = "window"

    x: int = 10
    y: int = 10


class House(Document):
    class Settings:
        name = "house"

    name: str
    door: Link[Door]
    windows: List[Link[Window]]


document_models = [
    House,
    Door,
    Window
]

await init_beanie(database=db, document_models=document_models)

door = Door()
window = Window()
house = House(name="MyHouse", door=door, windows=[window])
await house.save()
```
**Expected behavior**
Save a House document with a Window and Door document in separate tables.
**Additional context**
```
Exception has occurred: DocumentWasNotSaved
Can not create dbref without id
  File "/test.py", line 74, in main
    await house.save()
  File "/test.py", line 76, in <module>
    asyncio.run(main())
beanie.exceptions.DocumentWasNotSaved: Can not create dbref without id
```
| closed | 2023-10-04T18:18:13Z | 2023-11-28T01:49:51Z | https://github.com/BeanieODM/beanie/issues/731 | [
"Stale"
] | ID2343242 | 4 |
huggingface/transformers | python | 36,125 | Transformers | ### Model description
Transformer Practice
### Open source status
- [x] The model implementation is available
- [x] The model weights are available
### Provide useful links for the implementation
_No response_ | closed | 2025-02-10T22:18:45Z | 2025-02-11T13:52:00Z | https://github.com/huggingface/transformers/issues/36125 | [
"New model"
] | HemanthVasireddy | 1 |
mwaskom/seaborn | data-visualization | 3,023 | Error during legend creation with mixture of marks | Here's a minimal example; it seems that you need all three layers to trigger the error:
```python
(
    so.Plot(penguins, "bill_length_mm", "bill_depth_mm", color="species")
    .add(so.Dots())
    .add(so.Line(), so.PolyFit(1))
    .add(so.Line(), so.PolyFit(2))
)
```
<details><summary>Traceback</summary>
```python-traceback
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
File ~/miniconda3/envs/seaborn-py39-latest/lib/python3.9/site-packages/IPython/core/formatters.py:343, in BaseFormatter.__call__(self, obj)
341 method = get_real_method(obj, self.print_method)
342 if method is not None:
--> 343 return method()
344 return None
345 else:
File ~/code/seaborn/seaborn/_core/plot.py:275, in Plot._repr_png_(self)
273 def _repr_png_(self) -> tuple[bytes, dict[str, float]]:
--> 275 return self.plot()._repr_png_()
File ~/code/seaborn/seaborn/_core/plot.py:814, in Plot.plot(self, pyplot)
810 """
811 Compile the plot spec and return the Plotter object.
812 """
813 with theme_context(self._theme_with_defaults()):
--> 814 return self._plot(pyplot)
File ~/code/seaborn/seaborn/_core/plot.py:847, in Plot._plot(self, pyplot)
844 plotter._plot_layer(self, layer)
846 # Add various figure decorations
--> 847 plotter._make_legend(self)
848 plotter._finalize_figure(self)
850 return plotter
File ~/code/seaborn/seaborn/_core/plot.py:1608, in Plotter._make_legend(self, p)
1605 for i, artist in enumerate(existing_artists):
1606 # Matplotlib accepts a tuple of artists and will overlay them
1607 if isinstance(artist, tuple):
-> 1608 artist += artist[i],
1609 else:
1610 existing_artists[i] = artist, artists[i]
IndexError: tuple index out of range
<seaborn._core.plot.Plot at 0x144c1f6a0>
```
</details> | closed | 2022-09-13T21:24:39Z | 2022-10-04T23:44:57Z | https://github.com/mwaskom/seaborn/issues/3023 | [
"bug",
"objects-plot"
] | mwaskom | 1 |
strawberry-graphql/strawberry | fastapi | 3,126 | local tests broken because of pydantic |
## Describe the Bug
Currently it isn't possible to test locally according to the contributing guide, because pydantic errors crash the pytest sessions.
## System Information
- Operating system: archlinux
- Strawberry version (if applicable): 0.209.2
## Additional Context
```
______________________________________________________________________________________________ ERROR collecting test session _______________________________________________________________________________________________
.venv/lib/python3.11/site-packages/_pytest/config/__init__.py:641: in _importconftest
mod = import_path(conftestpath, mode=importmode, root=rootpath)
.venv/lib/python3.11/site-packages/_pytest/pathlib.py:567: in import_path
importlib.import_module(module_name)
/usr/lib/python3.11/importlib/__init__.py:126: in import_module
return _bootstrap._gcd_import(name[level:], package, level)
<frozen importlib._bootstrap>:1204: in _gcd_import
???
<frozen importlib._bootstrap>:1176: in _find_and_load
???
<frozen importlib._bootstrap>:1147: in _find_and_load_unlocked
???
<frozen importlib._bootstrap>:690: in _load_unlocked
???
.venv/lib/python3.11/site-packages/ddtrace/internal/module.py:220: in _exec_module
self.loader.exec_module(module)
.venv/lib/python3.11/site-packages/_pytest/assertion/rewrite.py:178: in exec_module
exec(co, module.__dict__)
tests/http/conftest.py:47: in <module>
@pytest.fixture(params=_get_http_client_classes())
.venv/lib/python3.11/site-packages/_pytest/fixtures.py:1312: in fixture
params=tuple(params) if params is not None else None,
tests/http/conftest.py:30: in _get_http_client_classes
importlib.import_module(f".{module}", package="tests.http.clients"),
/usr/lib/python3.11/importlib/__init__.py:126: in import_module
return _bootstrap._gcd_import(name[level:], package, level)
<frozen importlib._bootstrap>:1204: in _gcd_import
???
<frozen importlib._bootstrap>:1176: in _find_and_load
???
<frozen importlib._bootstrap>:1147: in _find_and_load_unlocked
???
<frozen importlib._bootstrap>:690: in _load_unlocked
???
.venv/lib/python3.11/site-packages/ddtrace/internal/module.py:220: in _exec_module
self.loader.exec_module(module)
<frozen importlib._bootstrap_external>:940: in exec_module
???
<frozen importlib._bootstrap>:241: in _call_with_frames_removed
???
tests/http/clients/starlite.py:9: in <module>
from starlite import Request, Starlite
<frozen importlib._bootstrap>:1176: in _find_and_load
???
<frozen importlib._bootstrap>:1147: in _find_and_load_unlocked
???
<frozen importlib._bootstrap>:690: in _load_unlocked
???
.venv/lib/python3.11/site-packages/ddtrace/internal/module.py:220: in _exec_module
self.loader.exec_module(module)
.venv/lib/python3.11/site-packages/starlite/__init__.py:1: in <module>
from starlite.app import Starlite
<frozen importlib._bootstrap>:1176: in _find_and_load
???
<frozen importlib._bootstrap>:1147: in _find_and_load_unlocked
???
<frozen importlib._bootstrap>:690: in _load_unlocked
???
.venv/lib/python3.11/site-packages/ddtrace/internal/module.py:220: in _exec_module
self.loader.exec_module(module)
.venv/lib/python3.11/site-packages/starlite/app.py:6: in <module>
from pydantic_openapi_schema import construct_open_api_with_schema_class
<frozen importlib._bootstrap>:1176: in _find_and_load
???
<frozen importlib._bootstrap>:1147: in _find_and_load_unlocked
???
<frozen importlib._bootstrap>:690: in _load_unlocked
???
.venv/lib/python3.11/site-packages/ddtrace/internal/module.py:220: in _exec_module
self.loader.exec_module(module)
.venv/lib/python3.11/site-packages/pydantic_openapi_schema/__init__.py:1: in <module>
from . import v3_1_0
<frozen importlib._bootstrap>:1176: in _find_and_load
???
<frozen importlib._bootstrap>:1147: in _find_and_load_unlocked
???
<frozen importlib._bootstrap>:690: in _load_unlocked
???
.venv/lib/python3.11/site-packages/ddtrace/internal/module.py:220: in _exec_module
self.loader.exec_module(module)
.venv/lib/python3.11/site-packages/pydantic_openapi_schema/v3_1_0/__init__.py:9: in <module>
from .components import Components
<frozen importlib._bootstrap>:1176: in _find_and_load
???
<frozen importlib._bootstrap>:1147: in _find_and_load_unlocked
???
<frozen importlib._bootstrap>:690: in _load_unlocked
???
.venv/lib/python3.11/site-packages/ddtrace/internal/module.py:220: in _exec_module
self.loader.exec_module(module)
.venv/lib/python3.11/site-packages/pydantic_openapi_schema/v3_1_0/components.py:7: in <module>
from .header import Header
<frozen importlib._bootstrap>:1176: in _find_and_load
???
<frozen importlib._bootstrap>:1147: in _find_and_load_unlocked
???
<frozen importlib._bootstrap>:690: in _load_unlocked
???
.venv/lib/python3.11/site-packages/ddtrace/internal/module.py:220: in _exec_module
self.loader.exec_module(module)
.venv/lib/python3.11/site-packages/pydantic_openapi_schema/v3_1_0/header.py:8: in <module>
class Header(Parameter):
.venv/lib/python3.11/site-packages/pydantic_openapi_schema/v3_1_0/header.py:19: in Header
name: Literal[""] = Field(default="", const=True)
.venv/lib/python3.11/site-packages/pydantic/fields.py:757: in Field
raise PydanticUserError('`const` is removed, use `Literal` instead', code='removed-kwargs')
E pydantic.errors.PydanticUserError: `const` is removed, use `Literal` instead
E
E For further information visit https://errors.pydantic.dev/2.3/u/removed-kwargs
``` | closed | 2023-10-01T04:27:21Z | 2025-03-20T15:56:24Z | https://github.com/strawberry-graphql/strawberry/issues/3126 | [
"bug"
] | devkral | 1 |
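The traceback above fails because `pydantic-openapi-schema` declares `Field(default="", const=True)`, which pydantic v2 rejects. A hedged, stdlib-only sketch of the migration idea (illustrative, not the library's actual code): in v2 the `Literal` annotation alone pins the field to one value, so `const=True` is redundant.

```python
from typing import Literal, get_args

# pydantic v1 declared a constant field as:
#     name: Literal[""] = Field(default="", const=True)
# pydantic v2 removed `const`; the Literal annotation by itself already
# restricts the field to exactly one value:
#     name: Literal[""] = ""
# get_args shows the only value such an annotation permits.
ConstName = Literal[""]
print(get_args(ConstName))  # ('',)
```

Libraries hitting this error typically only need to delete the `const=True` keyword, since the `Literal` type already enforces the constraint.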
InstaPy/InstaPy | automation | 5,969 | After liking post, comment and follow the same user | When running like by hashtags, I've set to comment on 20% and follow 20%.
InstaPy will comment and follow 20% but not necessarily the same person.
So some people will get like + comment and some will get like + follow.
Is there any way to make the 20% it follows to be the same 20% of people it commented so they get like + comment + follow?
Thanks | closed | 2020-12-19T21:54:39Z | 2021-07-21T07:18:30Z | https://github.com/InstaPy/InstaPy/issues/5969 | [
"wontfix"
] | Ardy000 | 1 |
pywinauto/pywinauto | automation | 1,372 | Application hangs at FindAll in uia_element_info.py | ## Expected Behavior
The script completes quickly.
## Actual Behavior
Running FindAll on the IUIA().root object takes more than 60 seconds.
## Steps to Reproduce the Problem
1. download and install RingCentral.exe, and login
2. run a python file with code 'Desktop(backend="uia").windows(title="Taskbar")', it will take more than 60 seconds
## Short Example of Code to Demonstrate the Problem
from pywinauto import Desktop, mouse, Application, keyboard
import time
startTime = time.time()
elem = Desktop(backend="uia").windows(title="Taskbar")
print("Desktop,window function usedTime: ", time.time() - startTime)
## Specifications
- Pywinauto version: 0.6.8
- Python version and bitness: 3.7.7
- Platform and OS: Microsoft Windows 10 Pro
| open | 2024-01-31T07:03:49Z | 2024-01-31T07:03:49Z | https://github.com/pywinauto/pywinauto/issues/1372 | [] | travis-1992 | 0 |
plotly/plotly.py | plotly | 4,366 | Add a function to save animations as mp4 files. | Dear Plotly Express Community,
I hope this message finds you well. I am an enthusiastic user of Plotly Express, and I have a feature suggestion that I believe would greatly enhance the utility of this fantastic library: the ability to save animations as MP4 files.
Currently, Plotly Express provides several functions to export visualizations in various formats, such as JPEG, PNG, and HTML, which are incredibly useful for static plots. However, there is no direct function available to save animations as MP4 files.
I believe that adding this feature would be a significant step forward for Plotly Express for the following reasons:
**Enhanced Presentation:** Saving animations as MP4 files would greatly facilitate their integration into presentations, whether it's for PowerPoint, educational materials, or other media.
**Wider Audience:** Many platforms and tools support MP4 videos, making it easier to share your visualizations with a broader audience, especially those who might not be familiar with Plotly or its interactive features.
**Professional Look:** MP4 animations offer a professional and polished look, which can be crucial in contexts like business presentations and academic lectures.
I propose adding a new function to Plotly Express that enables users to save animations in the following way:
`fig.save_animation("file_path/file.mp4")`
This function would simplify the process of exporting animations and make Plotly Express an even more versatile tool for data visualization.
I would like to encourage the Plotly Express community and developers to consider this feature suggestion seriously. I believe it would be a valuable addition to an already exceptional library and would benefit countless users who rely on Plotly Express for their data visualization needs.
If you have any questions or need further clarification on this feature suggestion, please feel free to reach out. I am more than willing to provide additional details or assist in any way possible to help implement this feature.
Thank you for your time and consideration.
Sincerely,
Robinson Mumo
| closed | 2023-09-20T16:52:43Z | 2024-07-11T17:24:25Z | https://github.com/plotly/plotly.py/issues/4366 | [] | robbymumo | 2 |
streamlit/streamlit | deep-learning | 10,363 | v1.42.0 - st.login() Error: Proof Key for Code Exchange is required for cross-origin authorization code redemption. | ### Checklist
- [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar issues.
- [x] I added a very descriptive title to this issue.
- [x] I have provided sufficient information below to help reproduce this issue.
### Summary
I have SSO setup in Azure (Microsoft EntraID) as a Single Page Application (SPA). When I try to authenticate, it returns the following error:
`AADSTS9002325: Proof Key for Code Exchange is required for cross-origin authorization code redemption.`
The [research ](https://learn.microsoft.com/en-us/answers/questions/1626192/how-to-fix-aadsts9002325-proof-key-for-code-exchan) I've found suggests this is because the Openid auth is not using the PKCE flow.
Is this a bug in my code, in the implementation of the new streamlit feature, or not supported? If not supported, adding SPA support would be very much appreciated.
### Reproducible Code Example
```Python
secrets.toml
[auth]
redirect_uri = "https://server.mydomain.com:8501/oauth2/callback"
cookie_secret = "xxxxREDACTEDxxxx"
client_id = "xxxxREDACTEDxxxx"
client_secret = "xxxxREDACTEDxxxx"
server_metadata_url = "https://login.microsoftonline.com/{myTennantIDHere}/v2.0/.well-known/openid-configuration"
client_kwargs = { "prompt" = "login", "scope" = ""xxxxREDACTEDxxxx"/.default" }
app.py
import streamlit as st
if not st.experimental_user.is_logged_in:
if st.button("Log in with 3M Azure SSO"):
st.login()
st.stop()
if st.button("Log out"):
st.logout()
st.markdown(f"Welcome! {st.experimental_user.name}")
```
### Steps To Reproduce

Setup a "Single Page Application" in MS Azure
- Input redirect_uri
- copy client_id
- copy client_secret
- enter corresponding values into `.streamlit/secrets.toml`
Run included streamlit app.py
- button is rendered to sign in
- click button, redirected to MS Azure SSO
- Enter valid user/pass
- MS spits out error:
```
Sorry, but we’re having trouble signing you in.
AADSTS9002325: Proof Key for Code Exchange is required for cross-origin authorization code redemption.
```
### Expected Behavior
I'd expect SPA registrations to work with this auth tool. SPAs seem to be much easier to setup than Web platforms (at least in my experience).
If you won't be supporting SPA's, then it would be good to note in the documentation for clarification so users setup the right kind of SSO integration.
### Current Behavior
Error message from MS Azure at signin:
```
AADSTS9002325: Proof Key for Code Exchange is required for cross-origin authorization code redemption.
```
### Is this a regression?
- [ ] Yes, this used to work in a previous version.
### Debug info
- Streamlit version: 1.42.0
- Python version: 3.12.8
- Operating System: Linux (RHEL9)
- Browser: Chrome/Edge (latest at time of submission)
No streamlit error, SSO provider is where the error manifests, and does not send back to streamlit app due to error during auth process.
### Additional Information
_No response_ | closed | 2025-02-07T22:18:00Z | 2025-02-08T13:47:34Z | https://github.com/streamlit/streamlit/issues/10363 | [
"type:not-issue"
] | numericOverflow | 2 |
piskvorky/gensim | nlp | 3,173 | Gensim 4.0 loading Phraser trained from Gensim 3.x | <!--
**IMPORTANT**:
- Use the [Gensim mailing list](https://groups.google.com/forum/#!forum/gensim) to ask general or usage questions. Github issues are only for bug reports.
- Check [Recipes&FAQ](https://github.com/RaRe-Technologies/gensim/wiki/Recipes-&-FAQ) first for common answers.
Github bug reports that do not include relevant information and context will be closed without an answer. Thanks!
-->
#### Problem description
Gensim 4.0 cannot load Phraser model from Gensim 3.x
#### Steps/code/corpus to reproduce
```python
from gensim.models.phrases import Phraser

Phraser.load("saved_phraser.pkl")
```
This will fail to load a Phraser trained in gensim 3.x due to a bug in the source code; see below.
#### Versions
This line of code from 4.0.1 (and also current development branch) need to be fixed:
https://github.com/RaRe-Technologies/gensim/blob/4.0.1/gensim/models/phrases.py#L367
```python
model.phrasegrams = {
str(model.delimiter.join(component), encoding='utf8'): score
for key, val in phrasegrams.items()
}
```
The existing code will only keep the first phrase. To upgrade and load all phrases, it should be replaced with:
```python
model.phrasegrams = {
str(model.delimiter.join(key), encoding='utf8'): val
for key, val in phrasegrams.items()
}
```
| closed | 2021-06-15T21:01:10Z | 2021-06-16T08:42:33Z | https://github.com/piskvorky/gensim/issues/3173 | [] | canmingh | 2 |
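The effect of the buggy comprehension can be demonstrated without gensim. In this hedged, stdlib-only sketch, `component` and `score` stand in for stale variables left over from an earlier loop, which is why the 4.0.1 code collapses every phrase into a single entry:

```python
delimiter = b"_"
phrasegrams = {(b"new", b"york"): 100.0, (b"san", b"francisco"): 200.0}

# Stale variables, emulating leftovers from a previous loop in the
# buggy gensim 4.0.1 code path:
component, score = (b"stale",), 0.0

# Buggy shape: ignores the loop variables `key`/`val` entirely, so every
# iteration writes the same dict entry.
buggy = {
    str(delimiter.join(component), encoding="utf8"): score
    for key, val in phrasegrams.items()
}

# Fixed shape: builds each entry from the actual key/value pair.
fixed = {
    str(delimiter.join(key), encoding="utf8"): val
    for key, val in phrasegrams.items()
}

print(len(buggy), len(fixed))  # 1 2
print(fixed)  # {'new_york': 100.0, 'san_francisco': 200.0}
```

The one-entry `buggy` dict matches the reported symptom: only a single phrase survives the 3.x-to-4.0 upgrade.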
postmanlabs/httpbin | api | 724 | Crash when passing an accent in payload (via headers) | If one goes here:
- http://httpbin.org/#/Response_inspection/get_response_headers
- type "éphémère" and you'll get an error 500:
- https://httpbin.org/response-headers?freeform=%C3%A9ph%C3%A9m%C3%A8re
HTTP 1 headers encoding is a legacy subject (latin-1 etc), as mentioned here:
- https://github.com/elixir-mint/mint/issues/301
My hypothesis is that the crash is purely accent-related (and probably linked to latin-1 <-> UTF-8 transcoding).
If one inputs "ephemere", the crash stops.
While a bit exotic, this point is essential for the `content-disposition` header, which gives the filename of the downloaded file (`attachment; filename=xyz`).
| open | 2024-06-18T06:35:48Z | 2024-06-18T06:59:31Z | https://github.com/postmanlabs/httpbin/issues/724 | [] | thbar | 1 |
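A hedged, stdlib-only illustration of the encoding point above (not httpbin's actual code): latin-1 can represent the accents in "éphémère", but the robust way to ship a non-ASCII filename in `content-disposition` is the RFC 6266 / RFC 5987 `filename*` form, which percent-encodes the UTF-8 bytes:

```python
from urllib.parse import quote

filename = "éphémère.txt"

# "éphémère" happens to fit in latin-1, so this round-trips; a filename
# with, say, CJK characters would raise UnicodeEncodeError here instead.
assert filename.encode("latin-1").decode("latin-1") == filename

# RFC 6266 / RFC 5987: percent-encode the UTF-8 bytes into filename*.
disposition = "attachment; filename*=UTF-8''" + quote(filename)
print(disposition)
# attachment; filename*=UTF-8''%C3%A9ph%C3%A9m%C3%A8re.txt
```

Note the percent-encoded form matches the query string in the failing URL above, which supports the latin-1/UTF-8 transcoding hypothesis.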
neuml/txtai | nlp | 343 | Add graph documentation | Update documentation with graph configuration and details. | closed | 2022-09-20T12:12:00Z | 2022-09-20T12:13:32Z | https://github.com/neuml/txtai/issues/343 | [] | davidmezzetti | 0 |
PokemonGoF/PokemonGo-Bot | automation | 5,423 | Error in config | What did I do wrong this time XD??
Docker says:
"/usr/src/app/pokemongo_bot/../configs/config.json:441:0: Error: Unexpected text after end of JSON value
| At line 441, column 0, offset 16194
/usr/src/app/pokemongo_bot/../configs/config.json: has errors
2016-09-13 15:25:09,420 [ cli] [CRITICAL] Error with configuration file"
Config as follows:
{
"websocket_server": false,
"heartbeat_threshold": 10,
"enable_social": true,
"live_config_update": {
"enabled": false,
"tasks_only": false
},
"tasks": [
{
"type": "TelegramTask",
"config": {
"enabled": false,
"master": null,
"// old syntax, still supported: alert_catch": ["all"],
"// new syntax:": {},
"alert_catch": {
"all": {"operator": "and", "cp": 1300, "iv": 0.95},
"Snorlax": {"operator": "or", "cp": 900, "iv": 0.9}
}
}
},
{
"//NOTE: This task MUST be placed on the top of task list": {},
"type": "RandomAlivePause",
"config": {
"enabled": false,
"min_duration": "00:00:10",
"max_duration": "00:10:00",
"min_interval": "00:05:00",
"max_interval": "01:30:00"
}
},
{
"type": "HandleSoftBan"
},
{
"type": "RandomPause",
"config": {
"enabled": false,
"min_duration": "00:00:10",
"max_duration": "00:10:00",
"min_interval": "00:10:00",
"max_interval": "02:00:00"
}
},
{
"type": "CompleteTutorial",
"config": {
"enabled": false,
"// set a name": "",
"nickname": "",
"// 0 = No Team, 1 = Blue, 2 = Red, 3 = Yellow": "",
"team": 0
}
},
{
"type": "CollectLevelUpReward",
"config": {
"collect_reward": true,
"level_limit": -1
}
},
{
"type": "IncubateEggs",
"config": {
"enabled": true,
"infinite_longer_eggs_first": false,
"breakable_longer_eggs_first": true,
"min_interval": 120,
"infinite": [2,5,10],
"breakable": [10]
}
},
{
"type": "UpdateLiveStats",
"config": {
"enabled": false,
"min_interval": 120,
"stats": ["username", "uptime", "stardust_earned", "xp_earned", "xp_per_hour", "stops_visited"],
"terminal_log": true,
"terminal_title": true
}
},
{
"type": "UpdateLiveInventory",
"config": {
"enabled": false,
"min_interval": 120,
"show_all_multiple_lines": false,
"items": ["pokemon_bag", "space_info", "pokeballs", "greatballs", "ultraballs", "razzberries", "luckyegg"]
}
},
{
"type": "ShowBestPokemon",
"config": {
"enabled": true,
"min_interval": 120,
"amount": 5,
"order_by": "cp",
"info_to_show": ["cp", "ivcp", "dps", "hp"]
}
},
{
"type": "TransferPokemon",
"config": {
"enabled": true,
"min_free_slot": 5,
"transfer_wait_min": 3,
"transfer_wait_max": 5
}
},
{
"type": "NicknamePokemon",
"config": {
"enabled": true,
"nickname_above_iv": 0.9,
"nickname_template": "{iv_pct}_{iv_ads}",
"nickname_wait_min": 3,
"nickname_wait_max": 5
}
},
{
"type": "EvolvePokemon",
"config": {
"enabled": false,
```
"// evolve only pidgey and drowzee": "",
"// evolve_list": "pidgey, drowzee",
"// donot_evolve_list": "none",
"// evolve all but pidgey and drowzee": "",
"// evolve_list": "all",
"// donot_evolve_list": "pidgey, drowzee",
"evolve_list": "all",
"donot_evolve_list": "none",
"first_evolve_by": "cp",
"evolve_above_cp": 500,
"evolve_above_iv": 0.8,
"logic": "or",
"min_evolve_speed": 25,
"max_evolve_speed": 30,
"use_lucky_egg": false
}
},
{
"type": "UseIncense",
"config": {
"use_incense": false,
"use_order": [
"ordinary",
"spicy",
"cool",
"floral"
]
}
},
{
"type": "RecycleItems",
"config": {
"enabled": true,
"min_empty_space": 15,
"max_balls_keep": 270,
"max_potions_keep": 70,
"max_berries_keep": 70,
"max_revives_keep": 30,
"item_filter": {
"Pokeball": { "keep" : 100 },
"Potion": { "keep" : 0 },
"Super Potion": { "keep" : 0 },
"Hyper Potion": { "keep" : 20 },
"Revive": { "keep" : 30 },
"Razz Berry": { "keep" : 70 }
},
"recycle_wait_min": 3,
"recycle_wait_max": 5,
"recycle_force": true,
"recycle_force_min": "00:01:00",
"recycle_force_max": "00:05:00"
}
},
{
"type": "CatchPokemon",
"config": {
"enabled": true,
"catch_visible_pokemon": true,
"catch_lured_pokemon": true,
"min_ultraball_to_keep": 5,
"berry_threshold": 0.35,
"vip_berry_threshold": 0.9,
"treat_unseen_as_vip": true,
"daily_catch_limit": 800,
"vanish_settings": {
"consecutive_vanish_limit": 10,
"rest_duration_min": "02:00:00",
"rest_duration_max": "04:00:00"
},
"catch_throw_parameters": {
"excellent_rate": 0.1,
"great_rate": 0.5,
"nice_rate": 0.3,
"normal_rate": 0.1,
"spin_success_rate" : 0.6,
"hit_rate": 0.75
},
"catch_simulation": {
"flee_count": 3,
"flee_duration": 2,
"catch_wait_min": 3,
"catch_wait_max": 6,
"berry_wait_min": 3,
"berry_wait_max": 5,
"changeball_wait_min": 3,
"changeball_wait_max": 5,
"newtodex_wait_min": 20,
"newtodex_wait_max": 30
}
}
},
{
"type": "SpinFort",
"config": {
"enabled": true,
"spin_wait_min": 3,
"spin_wait_max": 5,
"daily_spin_limit": 1900
}
},
{ "type": "UpdateWebInventory",
"config": {
"enabled": true
}
},
{
"type": "MoveToFort",
"config": {
"enabled": true,
"lure_attraction": true,
"lure_max_distance": 2000,
"walker": "StepWalker",
"log_interval": 5
}
},
{
"type": "FollowSpiral",
"config": {
"enabled": true,
"diameter": 4,
"step_size": 70
}
}
],
"map_object_cache_time": 5,
"forts": {
"avoid_circles": true,
"max_circle_size": 50,
"cache_recent_forts": true
},
"pokemon_bag": {
"// if 'show_at_start' is true, it will log all the pokemons in the bag (not eggs) at bot start": {},
"show_at_start": true,
"// if 'show_count' is true, it will show the amount of each pokemon (minimum 1)": {},
"show_count": false,
"// if 'show_candies' is true, it will show the amount of candies for each pokemon": {},
"show_candies": false,
"// 'pokemon_info' parameter define which info to show for each pokemon": {},
"// the available options are": {},
"// ['cp', 'iv_ads', 'iv_pct', 'ivcp', 'ncp', 'level', 'hp', 'moveset', 'dps']": {},
"pokemon_info": ["cp", "iv_pct"]
},
"walk_max": 4.16,
"walk_min": 2.16,
"alt_min": 500,
"alt_max": 1000,
"sleep_schedule": [
{
"time": "12:00",
"duration": "5:30",
"time_random_offset": "00:30",
"duration_random_offset": "00:30",
"wake_up_at_location": ""
},
{
"time": "17:45",
"duration": "3:00",
"time_random_offset": "01:00",
"duration_random_offset": "00:30",
"wake_up_at_location": ""
}
],
"gps_default_altitude": 8.0,
"replicate_gps_xy_noise": false,
"replicate_gps_z_noise": false,
"gps_xy_noise_range": 0.000125,
"gps_z_noise_range": 12.5,
"debug": false,
"test": false,
"walker_limit_output": false,
"health_record": true,
"location_cache": true,
"distance_unit": "km",
"reconnecting_timeout": 15,
"logging": {
"color": true,
"show_datetime": true,
"show_process_name": true,
"show_log_level": true,
"show_thread_name": false
},
"catch": {
"any": {"candy_threshold" : 400 ,"catch_above_cp": 0, "catch_above_iv": 0, "logic": "or"},
"// Example of always catching Rattata:": {},
"// Rattata": { "always_catch" : true }
},
"release": {
"any": {
"release_below_cp": 100,
"release_below_iv": 0.6,
"logic": "or"
},
"Alakazam": {
"keep_best_custom": "cp, iv, hp_max",
"amount": 2
},
"Arbok": {
"keep_best_custom": "cp, iv, hp_max",
"amount": 2
},
"Beedrill": {
"keep_best_custom": "cp, iv, hp_max",
"amount": 2
},
"Butterfree": {
"keep_best_custom": "cp, iv, hp_max",
"amount": 2
},
"Cubone": {
"keep_best_custom": "cp, iv, hp_max",
"amount": 2
},
"Diglett": {
"keep_best_custom": "cp, iv, hp_max",
"amount": 2
},
"Dodrio": {
"keep_best_custom": "cp, iv, hp_max",
"amount": 2
},
"Doduo": {
"keep_best_custom": "cp, iv, hp_max",
"amount": 2
},
"Dugtrio": {
"keep_best_custom": "cp, iv, hp_max",
"amount": 2
},
"Ekans": {
"keep_best_custom": "cp, iv, hp_max",
"amount": 2
},
"Electrode": {
"keep_best_custom": "cp, iv, hp_max",
"amount": 2
},
"Fearow": {
"keep_best_custom": "cp, iv, hp_max",
"amount": 2
},
"Golbat": {
"keep_best_custom": "cp, iv, hp_max",
"amount": 2
},
"Horsea": {
"keep_best_custom": "cp, iv, hp_max",
"amount": 2
},
"Kadabra": {
"keep_best_custom": "cp, iv, hp_max",
"amount": 2
},
"Kakuna": {
"keep_best_custom": "cp, iv, hp_max",
"amount": 2
},
"Kingler": {
"keep_best_custom": "cp, iv, hp_max",
"amount": 2
},
"Krabby": {
"keep_best_custom": "cp, iv, hp_max",
"amount": 2
},
"Magnemite": {
"keep_best_custom": "cp, iv, hp_max",
"amount": 2
},
"Magneton": {
"keep_best_custom": "cp, iv, hp_max",
"amount": 2
},
"Mankey": {
"keep_best_custom": "cp, iv, hp_max",
"amount": 2
},
"Marowak": {
"keep_best_custom": "cp, iv, hp_max",
"amount": 2
},
"Meowth": {
"keep_best_custom": "cp, iv, hp_max",
"amount": 2
},
"Metapod": {
"keep_best_custom": "cp, iv, hp_max",
"amount": 2
},
"Paras": {
"keep_best_custom": "cp, iv, hp_max",
"amount": 2
},
"Parasect": {
"keep_best_custom": "cp, iv, hp_max",
"amount": 2
},
"Persian": {
"keep_best_custom": "cp, iv, hp_max",
"amount": 2
},
"Primeape": {
"keep_best_custom": "cp, iv, hp_max",
"amount": 2
},
"Raticate": {
"keep_best_custom": "cp, iv, hp_max",
"amount": 2
},
"Sandshrew": {
"keep_best_custom": "cp, iv, hp_max",
"amount": 2
},
"Sandslash": {
"keep_best_custom": "cp, iv, hp_max",
"amount": 2
},
"Seadra": {
"keep_best_custom": "cp, iv, hp_max",
"amount": 2
},
"Spearow": {
"keep_best_custom": "cp, iv, hp_max",
"amount": 2
},
"Venomoth": {
"keep_best_custom": "cp, iv, hp_max",
"amount": 2
},
"Venonat": {
"keep_best_custom": "cp, iv, hp_max",
"amount": 2
},
"Voltorb": {
"keep_best_custom": "cp, iv, hp_max",
"amount": 2
},
"Zubat": {
"keep_best_custom": "cp, iv, hp_max",
"amount": 2
},
"Aerodactyl": {
"keep_best_cp": 2,
"keep_best_iv": 2
},
"Clefable": {
"keep_best_cp": 2,
"keep_best_iv": 2
},
"Clefairy": {
"keep_best_cp": 2,
"keep_best_iv": 2
},
"Cloyster": {
"keep_best_cp": 2,
"keep_best_iv": 2
},
"Dewgong": {
"keep_best_cp": 2,
"keep_best_iv": 2
},
"Drowzee": {
"keep_best_cp": 2,
"keep_best_iv": 2
},
"Electabuzz": {
"keep_best_cp": 2,
"keep_best_iv": 2
},
"Gastly": {
"keep_best_cp": 2,
"keep_best_iv": 2
},
"Gengar": {
"keep_best_cp": 2,
"keep_best_iv": 2
},
"Geodude": {
"keep_best_cp": 2,
"keep_best_iv": 2
},
"Goldeen": {
"keep_best_cp": 2,
"keep_best_iv": 2
},
"Golduck": {
"keep_best_cp": 2,
"keep_best_iv": 2
},
"Golem": {
"keep_best_cp": 2,
"keep_best_iv": 2
},
"Graveler": {
"keep_best_cp": 2,
"keep_best_iv": 2
},
"Haunter": {
"keep_best_cp": 2,
"keep_best_iv": 2
},
"Hypno": {
"keep_best_cp": 2,
"keep_best_iv": 2
},
"Jigglypuff": {
"keep_best_cp": 2,
"keep_best_iv": 2
},
"Jolteon": {
"keep_best_cp": 2,
"keep_best_iv": 2
},
"Kabuto": {
"keep_best_cp": 2,
"keep_best_iv": 2
},
"Kabutops": {
"keep_best_cp": 2,
"keep_best_iv": 2
},
"Kangaskhan": {
"keep_best_cp": 2,
"keep_best_iv": 2
},
"Koffing": {
"keep_best_cp": 2,
"keep_best_iv": 2
},
"Magmar": {
"keep_best_cp": 2,
"keep_best_iv": 2
},
"Ninetales": {
"keep_best_cp": 2,
"keep_best_iv": 2
},
"Omanyte": {
"keep_best_cp": 2,
"keep_best_iv": 2
},
"Omastar": {
"keep_best_cp": 2,
"keep_best_iv": 2
},
"Pidgeot": {
"keep_best_cp": 2,
"keep_best_iv": 2
},
"Pidgeotto": {
"keep_best_cp": 2,
"keep_best_iv": 2
},
"Pikachu": {
"keep_best_cp": 2,
"keep_best_iv": 2
},
"Pinsir": {
"keep_best_cp": 2,
"keep_best_iv": 2
},
"Ponyta": {
"keep_best_cp": 2,
"keep_best_iv": 2
},
"Psyduck": {
"keep_best_cp": 2,
"keep_best_iv": 2
},
"Raichu": {
"keep_best_cp": 2,
"keep_best_iv": 2
},
"Rapidash": {
"keep_best_cp": 2,
"keep_best_iv": 2
},
"Rhydon": {
"keep_best_cp": 2,
"keep_best_iv": 2
},
"Rhyhorn": {
"keep_best_cp": 2,
"keep_best_iv": 2
},
"Scyther": {
"keep_best_cp": 2,
"keep_best_iv": 2
},
"Seaking": {
"keep_best_cp": 2,
"keep_best_iv": 2
},
"Seel": {
"keep_best_cp": 2,
"keep_best_iv": 2
},
"Shellder": {
"keep_best_cp": 2,
"keep_best_iv": 2
},
"Starmie": {
"keep_best_cp": 2,
"keep_best_iv": 2
},
"Staryu": {
"keep_best_cp": 2,
"keep_best_iv": 2
},
"Tentacool": {
"keep_best_cp": 2,
"keep_best_iv": 2
},
"Tentacruel": {
"keep_best_cp": 2,
"keep_best_iv": 2
},
"Vulpix": {
"keep_best_cp": 2,
"keep_best_iv": 2
},
"Weezing": {
"keep_best_cp": 2,
"keep_best_iv": 2
},
"Wigglytuff": {
"keep_best_cp": 2,
"keep_best_iv": 2
},
"// Example of always releasing Rattata:": {},
"// Rattata": {
"always_release": true
},
"// Example of keeping 3 stronger (based on CP) Pidgey:": {},
"// Pidgey": {
"keep_best_cp": 3
},
"// Example of keeping 2 best (based on IV) Zubat:": {},
"// Zubat": {
"keep_best_iv": 2
},
"// Keep no more than 3 best IV pokemon for every pokemon type": {},
"// any": {
"keep_best_iv": 3
},
"// Discard all pokemon in bag except 100 pokemon with best CP": {},
"// all": {
"keep_best_cp": 100
},
"// Example of keeping the 2 strongest (based on CP) and 3 best (based on IV) Zubat:": {},
"// Voltorb": {
"keep_best_cp": 2,
"keep_best_iv": 3
},
"// Example of custom order of static criterion": {},
"// Venonat": {
"keep_best_custom": "iv, cp, hp_max",
"amount": 2
}
```
},
"vips" : {
"Any pokemon put here directly force to use Berry & Best Ball to capture, to secure the capture rate": {},
"any": {"catch_above_cp": 1200, "catch_above_iv": 0.9, "logic": "or" },
"Lapras": {},
"Moltres": {},
"Zapdos": {},
"Articuno": {},
```
"// S-Tier pokemons (if pokemon can be evolved into tier, list the representative)": {},
"Mewtwo": {},
"Dragonite": {},
"Snorlax": {},
"// Mew evolves to Mewtwo": {},
"Mew": {},
"Arcanine": {},
"Vaporeon": {},
"Gyarados": {},
"Exeggutor": {},
"Muk": {},
"Weezing": {},
"Flareon": {}
},
"websocket": {
"start_embedded_server": true,
"server_url": "127.0.0.1:4000"
}
```
}
| closed | 2016-09-13T15:29:57Z | 2016-09-13T16:36:38Z | https://github.com/PokemonGoF/PokemonGo-Bot/issues/5423 | [] | jh0s3ph | 0 |
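One way to pin down "Unexpected text after end of JSON value" errors like the one above is to run the config through Python's own JSON parser, which reports the exact line and column of the first offending character (the stray backtick fences visible in the pasted config are exactly the kind of trailing text that triggers it). A minimal hedged sketch:

```python
import json

def check_json(text: str) -> str:
    """Return 'ok' or the location of the first JSON syntax error."""
    try:
        json.loads(text)
        return "ok"
    except json.JSONDecodeError as e:
        return f"line {e.lineno}, col {e.colno}: {e.msg}"

print(check_json('{"a": 1}'))         # ok
print(check_json('{"a": 1}\nextra'))  # line 2, col 1: Extra data
```

Running `python -m json.tool configs/config.json` gives the same diagnostic from the command line without writing any code.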
google-research/bert | tensorflow | 880 | some questions about the script run_classifier_with_tfhub | 
In this script, what is the meaning of the `trainable` tag? When I set `trainable` to True, do the word embeddings change during fine-tuning?
| open | 2019-10-16T03:00:12Z | 2019-10-16T03:00:12Z | https://github.com/google-research/bert/issues/880 | [] | Cumberbatch08 | 0 |
iperov/DeepFaceLab | machine-learning | 5,499 | 3090 faceset extract is very slow, is it possible to use chunking and multiple processes for extraction operations? | ## Expected behavior
Very fast face extraction.
## Actual behavior
The fastest face extraction speed achieved is only 2.34 it/s.
## Steps to reproduce
data_xxx faceset extract.bat
## Other relevant information
- RTX3090
Is it possible to use chunking and multiple processes for extraction operations? | open | 2022-03-22T06:36:30Z | 2023-06-08T23:18:46Z | https://github.com/iperov/DeepFaceLab/issues/5499 | [] | deviljoker5200 | 1 |
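Chunked parallel extraction of the kind asked about can be sketched with the standard library. This is a hedged illustration, not DeepFaceLab's actual code: `extract_faces` is a placeholder, and real CPU/GPU-bound extraction would use `ProcessPoolExecutor` (or one worker per GPU) rather than the threads used here to keep the sketch self-contained:

```python
from concurrent.futures import ThreadPoolExecutor

def chunked(items, n_chunks):
    """Split `items` into at most `n_chunks` roughly equal contiguous chunks."""
    size = -(-len(items) // n_chunks)  # ceiling division
    return [items[i:i + size] for i in range(0, len(items), size)]

def extract_faces(frames):
    """Placeholder for per-frame face extraction."""
    return [f"face_from_{f}" for f in frames]

frames = [f"frame_{i:04d}.png" for i in range(10)]
with ThreadPoolExecutor(max_workers=4) as pool:
    # map() preserves input order, so the extracted faces come back in
    # the same order as the frames.
    results = [face
               for batch in pool.map(extract_faces, chunked(frames, 4))
               for face in batch]
print(len(results))  # 10
```

The same chunking pattern applies whether the workers are threads, processes, or separate GPU contexts.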
roboflow/supervision | machine-learning | 1,144 | Multi-cam tracking | ### Search before asking
- [X] I have searched the Supervision [issues](https://github.com/roboflow/supervision/issues) and found no similar feature requests.
### Question
I have 6 cams connected in a hallway and my task is to track and count people walking in it (there are always many people there), yet I do not understand how I can run inference on multiple cameras AND have the same IDs of people from cam1 to cam2,3...6. I use ultralytics for detection and tried their multi-streaming guide, yet if one camera catches a frame without objects, it shuts down. Is there any other way to run inference on multiple cameras, or am I missing something? Please help.
### Additional
_No response_ | closed | 2024-04-26T11:54:47Z | 2024-04-26T12:09:35Z | https://github.com/roboflow/supervision/issues/1144 | [
"question"
] | Vdol22 | 1 |
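Cross-camera ID consistency usually comes from re-identification rather than from each camera's tracker. A hedged conceptual sketch (not a supervision or ultralytics API): keep one global registry that maps whatever identity signature re-ID produces to a single global ID shared by all cameras. Here `signature` is a plain string standing in for a re-ID embedding match:

```python
class GlobalIDRegistry:
    """Map identity signatures to stable global IDs across cameras."""

    def __init__(self):
        self._ids = {}

    def resolve(self, signature: str) -> int:
        # First sighting allocates a new ID; later sightings (on any
        # camera) reuse it.
        return self._ids.setdefault(signature, len(self._ids) + 1)

reg = GlobalIDRegistry()
cam1_id = reg.resolve("person-A")        # seen on camera 1
cam2_id = reg.resolve("person-A")        # same person on camera 2
print(cam1_id == cam2_id)                # True
print(reg.resolve("person-B"))           # 2
```

In a real pipeline the signature would come from matching appearance embeddings (or overlapping fields of view), which is the hard part; the registry itself stays this simple.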
littlecodersh/ItChat | api | 130 | sync check fails with {retcode:"1100",selector:"0"} | Device OS: Android 5.1.1
WeChat version: 6.3.27
Scanning the QR code to log in works fine, but right after logging in the command line prints LOG OUT.
I read itchat's source code and added some debug output. I found that during sync check, the retcode and selector are abnormal:
sync check msg: window.synccheck={retcode:"1100",selector:"0"}
Logging into itchat with an iPhone works fine; there the retcode and selector are:
sync check msg: window.synccheck={retcode:"0",selector:"2"}.
I'd like to ask where the problem lies and whether there is any way to fix it.
| closed | 2016-10-30T09:31:22Z | 2016-11-01T15:19:33Z | https://github.com/littlecodersh/ItChat/issues/130 | [
"bug"
] | hackstoic | 3 |
xlwings/xlwings | automation | 1,941 | Necessity for 32bit Windows numpy/scipy binaries in future | This is not an issue, but a RFC especially for @fzumstein and users of xlwings.
_How important is the availability of the 32-bit numpy/scipy stack in future?_
I'm asking here, because MS Office is AFAIK often installed as 32-bit due to compatibility to old Addins. I know the default bitness during install is 64-bit now, but many companies seem to install 32-bit due to the aforementioned reasons. In this case 32-bit Python version is still mandantory for xlwings, right?
This questions is also discussed here [Releasing (or not) 32-bit Windows wheels](https://discuss.scientific-python.org/t/releasing-or-not-32-bit-windows-wheels/282) and here https://github.com/scipy/scipy/issues/16286.
Any feedback to this question would be helpful. | closed | 2022-06-21T15:46:35Z | 2022-06-21T17:57:01Z | https://github.com/xlwings/xlwings/issues/1941 | [] | carlkl | 2 |
piskvorky/gensim | data-science | 2,880 | Really remove the 10000-token limit in [Word2Vec, FastText, Doc2Vec] | The *2Vec models have an underdocumented implementation limit in their Cython paths: any single text passed to training that's more than 10000 tokens is silently truncated to 10000 tokens, discarding the rest. This may surprise users with larger texts - as much of the text, including words discovered during the vocabulary-survey (which doesn't truncate texts), can thus be skipped.
Fixing this would make a warning like that I objected to in PR #2861 irrelevant.
Fixing this would also fix #2583, the limit with respect to Doc2Vec inference.
As mentioned in #2583, one possible fix would be to auto-break user texts into smaller chunks. Possible fixes thus include:
* auto-breaking user texts into <10k token internal texts
* using malloc, rather than a stack-allocated array of constant length, inside the Cython routines (might add allocate/free overhead & achieve less cache-locality than the current approach)
* use alloca - not an official part of the relevant C standard but likely available everywhere relevant (MacOS, Windows, Linux, BSDs, other Unixes) - instead of the constant stack-allocated array (some risk of overflowing if users provide gigantic texts)
* doing some one-time allocation per thread in Python-land that's usually reused for in Cython land small-sized texts, but when oversized texts are encountered replacing that with a larger allocation.
Each of these may need to be done slightly differently in the `corpus_file` high-thread-parallelism codepaths. | open | 2020-07-12T04:39:25Z | 2022-04-12T21:37:11Z | https://github.com/piskvorky/gensim/issues/2880 | [
"feature",
"difficulty hard",
"impact MEDIUM",
"reach LOW"
] | gojomo | 7 |
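The first proposed fix (auto-breaking user texts into sub-limit internal texts) can be sketched in a few lines. This is a hedged illustration, not gensim's implementation; `MAX_WORDS_IN_BATCH` mirrors gensim's real 10000-token constant, and the known caveat is that context windows can no longer span chunk boundaries:

```python
MAX_WORDS_IN_BATCH = 10000  # gensim's internal per-text limit

def split_oversized(corpus, limit=MAX_WORDS_IN_BATCH):
    """Yield each text as-is if short enough, else as <=limit-token pieces,
    so no tokens are silently discarded during training."""
    for text in corpus:
        for start in range(0, len(text), limit):
            yield text[start:start + limit]

long_text = ["tok"] * 25000
pieces = list(split_oversized([long_text]))
print([len(p) for p in pieces])  # [10000, 10000, 5000]
```

All 25000 tokens survive instead of the 15000 that silent truncation would drop, which is the behavior the issue asks for.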
ranaroussi/yfinance | pandas | 2,177 | NaN on the last Close for some tickers if using Tickers function with more than one ticker | ### Describe bug
If I call
`tk = yfinance.Tickers('1ABBV.MI').history(start=start_date, end=end_date)` the last Close for 1ABBV.MI is always filled.
If I call
`tk = yfinance.Tickers('1ABBV.MI 1AAPL.MI').history(start=start_date, end=end_date)` I always get the last Close as NaN for 1ABBV.MI.
If I swap the order to '1AAPL.MI 1ABBV.MI', the result is the same: NaN on the last Close for 1ABBV.MI.
In short, 1ABBV.MI (and some other tickers) show this strange behavior only when requested together with other tickers.
### Simple code that reproduces your problem
Works:
`tk = yfinance.Tickers('1ABBV.MI').history(start=start_date, end=end_date)`
Not Works:
`tk = yfinance.Tickers('1ABBV.MI 1AAPL.MI').history(start=start_date, end=end_date)`
### Debug log
DEBUG Entering download()
DEBUG Disabling multithreading because DEBUG logging enabled
DEBUG Entering history()
DEBUG Entering history()
DEBUG 1AAPL.MI: Yahoo GET parameters: {'period1': '2016-09-26 00:00:00+02:00', 'period2': '2024-12-14 00:00:00+01:00', 'interval': '1d', 'includePrePost': False, 'events': 'div,splits,capitalGains'}
DEBUG Entering get()
DEBUG Entering _make_request()
DEBUG url=https://query2.finance.yahoo.com/v8/finance/chart/1AAPL.MI
DEBUG params={'period1': 1474840800, 'period2': 1734130800, 'interval': '1d', 'includePrePost': False, 'events': 'div,splits,capitalGains'}
DEBUG Entering _get_cookie_and_crumb()
DEBUG cookie_mode = 'basic'
DEBUG Entering _get_cookie_and_crumb_basic()
DEBUG loaded persistent cookie
DEBUG reusing cookie
DEBUG crumb = 'L4bxwRUg0Qo'
DEBUG Exiting _get_cookie_and_crumb_basic()
DEBUG Exiting _get_cookie_and_crumb()
DEBUG response code=200
DEBUG Exiting _make_request()
DEBUG Exiting get()
DEBUG 1AAPL.MI: yfinance received OHLC data: 2017-10-26 07:00:00 -> 2024-12-13 14:59:05
DEBUG 1AAPL.MI: OHLC after cleaning: 2017-10-26 09:00:00+02:00 -> 2024-12-13 15:59:05+01:00
DEBUG 1AAPL.MI: OHLC after combining events: 2017-10-26 00:00:00+02:00 -> 2024-12-13 00:00:00+01:00
DEBUG 1AAPL.MI: yfinance returning OHLC: 2017-10-26 00:00:00+02:00 -> 2024-12-13 00:00:00+01:00
DEBUG Exiting history()
DEBUG Exiting history()
DEBUG Entering history()
DEBUG Entering history()
DEBUG 1ABBV.MI: Yahoo GET parameters: {'period1': '2016-09-26 00:00:00+02:00', 'period2': '2024-12-14 00:00:00+01:00', 'interval': '1d', 'includePrePost': False, 'events': 'div,splits,capitalGains'}
DEBUG Entering get()
DEBUG Entering _make_request()
DEBUG url=https://query2.finance.yahoo.com/v8/finance/chart/1ABBV.MI
DEBUG params={'period1': 1474840800, 'period2': 1734130800, 'interval': '1d', 'includePrePost': False, 'events': 'div,splits,capitalGains'}
DEBUG Entering _get_cookie_and_crumb()
DEBUG cookie_mode = 'basic'
DEBUG Entering _get_cookie_and_crumb_basic()
DEBUG reusing cookie
DEBUG reusing crumb
DEBUG Exiting _get_cookie_and_crumb_basic()
DEBUG Exiting _get_cookie_and_crumb()
DEBUG response code=200
DEBUG Exiting _make_request()
DEBUG Exiting get()
DEBUG 1ABBV.MI: yfinance received OHLC data: 2023-11-24 08:00:00 -> 2024-12-12 08:00:00
DEBUG 1ABBV.MI: OHLC after cleaning: 2023-11-24 09:00:00+01:00 -> 2024-12-12 09:00:00+01:00
DEBUG 1ABBV.MI: OHLC after combining events: 2023-11-24 00:00:00+01:00 -> 2024-12-12 00:00:00+01:00
DEBUG 1ABBV.MI: yfinance returning OHLC: 2023-11-24 00:00:00+01:00 -> 2024-12-12 00:00:00+01:00
DEBUG Exiting history()
DEBUG Exiting history()
DEBUG Exiting download()
### Bad data proof
_No response_
### `yfinance` version
yfinance==0.2.50
### Python version
python 3.13.0
### Operating system
Mac OS | closed | 2024-12-13T15:16:29Z | 2025-02-16T20:14:01Z | https://github.com/ranaroussi/yfinance/issues/2177 | [] | MarcoOvidi | 3 |
sammchardy/python-binance | api | 564 | OpenPose tensorflow Microsoft Visual Studio 2017 cl.exe' failed with exit status 2 error | Hello, while trying to run OpenPose with TensorFlow GPU following https://github.com/ildoonet/tf-pose-estimation
I ran into this error in Git Bash:
ERROR: Command errored out with exit status 1:
command: 'C:\Users\HP\anaconda3\envs\AIMachine\python.exe' -u -c 'import sys,
setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\Public\\Documents\\Wondersh
are\\CreatorTemp\\pip-install-onrd94b2\\pycocotools\\setup.py'"'"'; __file__='"'
"'C:\\Users\\Public\\Documents\\Wondershare\\CreatorTemp\\pip-install-onrd94b2\\
pycocotools\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);c
ode=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code,
__file__, '"'"'exec'"'"'))' bdist_wheel -d 'C:\Users\Public\Documents\Wondershar
e\CreatorTemp\pip-wheel-4edsdv6b'
cwd: C:\Users\Public\Documents\Wondershare\CreatorTemp\pip-install-onrd94
b2\pycocotools\
Complete output (22 lines):
running bdist_wheel
running build
running build_py
creating build
creating build\lib.win-amd64-3.7
creating build\lib.win-amd64-3.7\pycocotools
copying pycocotools\coco.py -> build\lib.win-amd64-3.7\pycocotools
copying pycocotools\cocoeval.py -> build\lib.win-amd64-3.7\pycocotools
copying pycocotools\mask.py -> build\lib.win-amd64-3.7\pycocotools
copying pycocotools\__init__.py -> build\lib.win-amd64-3.7\pycocotools
running build_ext
cythoning pycocotools/_mask.pyx to pycocotools\_mask.c
c:\users\public\documents\wondershare\creatortemp\pip-install-onrd94b2\pycocot
ools\.eggs\cython-0.29.21-py3.7-win-amd64.egg\Cython\Compiler\Main.py:369: Futur
eWarning: Cython directive 'language_level' not set, using 2 for now (Py2). This
will change in a later release! File: C:\Users\Public\Documents\Wondershare\Cre
atorTemp\pip-install-onrd94b2\pycocotools\pycocotools\_mask.pyx
tree = Parsing.p_module(s, pxd, full_module_name)
building 'pycocotools._mask' extension
creating build\temp.win-amd64-3.7
creating build\temp.win-amd64-3.7\Release
creating build\temp.win-amd64-3.7\Release\common
creating build\temp.win-amd64-3.7\Release\pycocotools
C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14
.16.27023\bin\HostX86\x64\cl.exe /c /nologo /Ox /W3 /GL /DNDEBUG /MD -IC:\Users\
HP\anaconda3\envs\AIMachine\lib\site-packages\numpy\core\include -I./common -IC:
\Users\HP\anaconda3\envs\AIMachine\include -IC:\Users\HP\anaconda3\envs\AIMachin
e\include "-IC:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\To
ols\MSVC\14.16.27023\ATLMFC\include" "-IC:\Program Files (x86)\Microsoft Visual
Studio\2017\Community\VC\Tools\MSVC\14.16.27023\include" "-IC:\Program Files (x8
6)\Windows Kits\NETFXSDK\4.6.1\include\um" "-IC:\Program Files (x86)\Windows Kit
s\10\include\10.0.17763.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\includ
e\10.0.17763.0\shared" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.17
763.0\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\winrt"
"-IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\cppwinrt" /Tc./co
mmon/maskApi.c /Fobuild\temp.win-amd64-3.7\Release\./common/maskApi.obj -Wno-cpp
-Wno-unused-function -std=c99
cl : Command line error D8021 : invalid numeric argument '/Wno-cpp'
error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio\\2017\\Commun
ity\\VC\\Tools\\MSVC\\14.16.27023\\bin\\HostX86\\x64\\cl.exe' failed with exit s
tatus 2
----------------------------------------
ERROR: Failed building wheel for pycocotools
Running setup.py clean for pycocotools
Building wheel for tensorpack (setup.py) ... done
Created wheel for tensorpack: filename=tensorpack-0.10.1-py2.py3-none-any.whl
size=293106 sha256=204a5b156ba8aa07fa34c5ab8e8c66f712d643369c087ba8f4859e1883c05
276
Stored in directory: C:\Users\Public\Documents\Wondershare\CreatorTemp\pip-eph
em-wheel-cache-upbjbq8m\wheels\8f\c4\7d\b7ca213c76a0b78c772c6d3173364b8102d262ac
da1ec45207
Successfully built tensorpack
Failed to build pycocotools
Installing collected packages: argparse, dill, fire, kiwisolver, pyparsing, cycl
er, pillow, python-dateutil, matplotlib, llvmlite, numba, psutil, cython, pycoco
tools, idna, urllib3, chardet, requests, imageio, decorator, networkx, tifffile,
PyWavelets, scipy, scikit-image, slidingwindow, tqdm, tabulate, msgpack, msgpac
k-numpy, pyzmq, tensorpack
Running setup.py install for pycocotools ... error
ERROR: Command errored out with exit status 1:
command: 'C:\Users\HP\anaconda3\envs\AIMachine\python.exe' -u -c 'import sy
s, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\Public\\Documents\\Wonder
share\\CreatorTemp\\pip-install-onrd94b2\\pycocotools\\setup.py'"'"'; __file__='
"'"'C:\\Users\\Public\\Documents\\Wondershare\\CreatorTemp\\pip-install-onrd94b2
\\pycocotools\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__)
;code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code
, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\Public\Documents\Wonder
share\CreatorTemp\pip-record-fz1epdn2\install-record.txt' --single-version-exter
nally-managed --compile --install-headers 'C:\Users\HP\anaconda3\envs\AIMachine\
Include\pycocotools'
cwd: C:\Users\Public\Documents\Wondershare\CreatorTemp\pip-install-onrd
94b2\pycocotools\
Complete output (20 lines):
running install
running build
running build_py
creating build
creating build\lib.win-amd64-3.7
creating build\lib.win-amd64-3.7\pycocotools
copying pycocotools\coco.py -> build\lib.win-amd64-3.7\pycocotools
copying pycocotools\cocoeval.py -> build\lib.win-amd64-3.7\pycocotools
copying pycocotools\mask.py -> build\lib.win-amd64-3.7\pycocotools
copying pycocotools\__init__.py -> build\lib.win-amd64-3.7\pycocotools
running build_ext
skipping 'pycocotools\_mask.c' Cython extension (up-to-date)
building 'pycocotools._mask' extension
creating build\temp.win-amd64-3.7
creating build\temp.win-amd64-3.7\Release
creating build\temp.win-amd64-3.7\Release\common
creating build\temp.win-amd64-3.7\Release\pycocotools
C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\
14.16.27023\bin\HostX86\x64\cl.exe /c /nologo /Ox /W3 /GL /DNDEBUG /MD -IC:\User
s\HP\anaconda3\envs\AIMachine\lib\site-packages\numpy\core\include -I./common -I
C:\Users\HP\anaconda3\envs\AIMachine\include -IC:\Users\HP\anaconda3\envs\AIMach
ine\include "-IC:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\
Tools\MSVC\14.16.27023\ATLMFC\include" "-IC:\Program Files (x86)\Microsoft Visua
l Studio\2017\Community\VC\Tools\MSVC\14.16.27023\include" "-IC:\Program Files (
x86)\Windows Kits\NETFXSDK\4.6.1\include\um" "-IC:\Program Files (x86)\Windows K
its\10\include\10.0.17763.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\incl
ude\10.0.17763.0\shared" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.
17763.0\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\winrt
" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\cppwinrt" /Tc./
common/maskApi.c /Fobuild\temp.win-amd64-3.7\Release\./common/maskApi.obj -Wno-c
pp -Wno-unused-function -std=c99
cl : Command line error D8021 : invalid numeric argument '/Wno-cpp'
error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio\\2017\\Comm
unity\\VC\\Tools\\MSVC\\14.16.27023\\bin\\HostX86\\x64\\cl.exe' failed with exit
status 2
----------------------------------------
ERROR: Command errored out with exit status 1: 'C:\Users\HP\anaconda3\envs\AIMac
hine\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\
\Users\\Public\\Documents\\Wondershare\\CreatorTemp\\pip-install-onrd94b2\\pycoc
otools\\setup.py'"'"'; __file__='"'"'C:\\Users\\Public\\Documents\\Wondershare\\
CreatorTemp\\pip-install-onrd94b2\\pycocotools\\setup.py'"'"';f=getattr(tokenize
, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'
"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record
'C:\Users\Public\Documents\Wondershare\CreatorTemp\pip-record-fz1epdn2\install-r
ecord.txt' --single-version-externally-managed --compile --install-headers 'C:\U
sers\HP\anaconda3\envs\AIMachine\Include\pycocotools' Check the logs for full co
mmand output.
| closed | 2020-07-29T15:57:20Z | 2020-07-29T22:09:52Z | https://github.com/sammchardy/python-binance/issues/564 | [] | NusaibaNizam | 0 |
NullArray/AutoSploit | automation | 1,299 | traceback issue | # Running information
<!-- Running detail, OS, arch, did you clone, etc -->
- What branch did you download?
- 4.0
- Clone, or docker run?
- Clone
- What OS are you running?
Parrot OS 4.11.2
# Program information
<!-- Basic python information we will need -->
- Python version number?
Python 3.9.2
- AutoSploit version number?
- 4.0
- Any console output that is relevant to the issue:
Traceback (most recent call last):
File "/home/user/AutoSploit/autosploit.py", line 1, in <module>
from autosploit.main import main
File "/home/user/AutoSploit/autosploit/main.py", line 135
print error_traceback
^
SyntaxError: Missing parentheses in call to 'print'. Did you mean print(error_traceback)?
- Traceback (error) if any:

| open | 2021-07-15T08:45:55Z | 2021-07-15T08:45:55Z | https://github.com/NullArray/AutoSploit/issues/1299 | [] | Jinnku | 0 |
PrefectHQ/prefect | automation | 17,414 | DaskTaskRunner does not properly wait for mapped futures | ### Bug summary
When passing a list of mapped futures to a downstream task via `wait_for`, the downstream tasks runs immediately and does not wait:
(EDIT: fixed typo)
```python
from prefect import task, flow
from prefect_dask import DaskTaskRunner
from time import sleep
@task
def list_nums():
return [1, 5, 10]
@task(log_prints=True)
def wait(n: int):
print(f"Sleeping for {n=}")
sleep(n)
return n
@task(log_prints=True)
def should_wait():
return "I'm running!"
@flow(task_runner=DaskTaskRunner(address="localhost:8786"))
def my_flow():
nums = list_nums.submit()
futures = wait.map(nums) # does not wait
# futures = [wait.submit(1), wait.submit(5), wait.submit(10)] # waits as expected
result = should_wait.submit(wait_for=futures)
return result
if __name__ == "__main__":
my_flow()
```
output:
```
12:22:51.375 | INFO | Task run 'wait-dee' - Sleeping for n=1
12:22:51.408 | INFO | Task run 'wait-1e1' - Sleeping for n=10
12:22:51.409 | INFO | Task run 'wait-737' - Sleeping for n=5
12:22:51.409 | INFO | Task run 'block-48b' - Finished in state Completed()
12:22:52.382 | INFO | Task run 'wait-dee' - Finished in state Completed()
12:22:56.412 | INFO | Task run 'wait-737' - Finished in state Completed()
12:23:01.435 | INFO | Task run 'wait-1e1' - Finished in state Completed()
```
When passing a list of `.submit` futures, the downstream task does wait as expected. Removing the `DaskTaskRunner` also fixes the issue for both `[.submit()]` and `.map`
### Version info
```Text
Version: 3.2.11
API version: 0.8.4
Python version: 3.10.16
Git commit: 9481694f
Built: Wed, Mar 5, 2025 10:00 PM
OS/Arch: darwin/arm64
Profile: default
Server type: cloud
Pydantic version: 2.9.2
Integrations:
prefect-gcp: 0.6.2
prefect-dask: 0.3.3
prefect-kubernetes: 0.5.3
```
### Additional context
_No response_ | closed | 2025-03-07T17:24:28Z | 2025-03-07T21:43:23Z | https://github.com/PrefectHQ/prefect/issues/17414 | [
"bug",
"integrations"
] | bnaul | 5 |
hyperspy/hyperspy | data-visualization | 3,324 | Producing sum-spectra of non rectangular ROIs from Bruker M4 Tornado X-ray fluorescence BCF files | Hi there,
I cannot find a solution to the following in the documentation:
I am trying to use hyperspy on Bruker M4 Tornado X-ray fluorescence BCF files in order to extract sum-spectra of regions of interest (ROIs). My ROIs are exported from ImageJ as text files containing a list of x and y coordinates. The shapes of the ROIs are arbitrary and generally not rectangular.
Can I tell hyperspy to give me the sum of all spectra from the pixels of the ROI and export this, e.g., as an EMSA or MSA file?
I have only recently started working with hyperspy, so if there is more context needed, I can try to specify.
Thanks! | closed | 2024-02-26T20:35:27Z | 2024-03-23T10:57:33Z | https://github.com/hyperspy/hyperspy/issues/3324 | [] | AbaPanu | 7 |
Sanster/IOPaint | pytorch | 217 | May you upgrade cuda as torch to Found existing installation: torch 1.13.1+cu117 | As other softs are using this one, the torch1.12+cu113 is just obsolete and make a fuss with the premium installs..
With no compatibility, Lama has to be down to keep other functionning.
is it possible ? | closed | 2023-02-15T17:50:13Z | 2023-03-05T23:59:32Z | https://github.com/Sanster/IOPaint/issues/217 | [] | AkoZ | 4 |
litestar-org/litestar | api | 3,373 | bug(typing): Internal type exposed on public interface | We expose `SchemaCreator` on DTOFactory.create_openapi_schema()`, but its an undocumented type.
We should replace it with a protocol.
This weird wording is because `SchemaFactory` is an internal type, but its exposed on a public interface here. We should probably replace the annotation with a Protocol.
_Originally posted by @peterschutt in https://github.com/litestar-org/litestar/pull/3371#discussion_r1560182227_ | open | 2024-04-11T00:00:48Z | 2025-03-20T15:54:35Z | https://github.com/litestar-org/litestar/issues/3373 | [
"Bug :bug:"
] | peterschutt | 0 |
ansible/awx | automation | 15,082 | Are `ghcr.io/ansible/awx_devel` Docker images supported? | Hi,
Sorry for not following the issue template. I'm trying to install AWX locally to play with it. The best solution I found is to follow the https://github.com/ansible/awx/blob/devel/tools/docker-compose/README.md which instructs to:
- build the `awx_devel` Docker image via `make docker-compose-build`
- run the AWX via `make docker-compose`
I noticed that there are existing https://github.com/ansible/awx/pkgs/container/awx_devel images which are frequently updated. The questions I have are:
- Are these images supported? May we use them and skip the `make docker-compose-build`?
- If so, may I update the docs? If not, may we start publishing and supporting them? | closed | 2024-04-09T14:00:53Z | 2024-06-14T15:32:47Z | https://github.com/ansible/awx/issues/15082 | [
"needs_triage",
"community"
] | AlexPykavy | 1 |
httpie/cli | python | 581 | Many tests fail | Ubuntu 16.10
tag: 0.9.8
Building & testing with:
```
echo --------
echo Cleaning
echo --------
cd git-httpie
sudo -u actionmystique -H git-reset-clean-pull-checkout.sh $branch $tag
echo -----------------------
echo Installing Dependencies
echo -----------------------
pip install -r requirements-dev.txt -U
echo --------
echo Building
echo --------
sudo -u actionmystique -H python setup.py build
echo ---------
echo Installing
echo ---------
python setup.py install
echo --------
echo Testing
echo --------
python setup.py test
```
leads to:
```
--------
Testing
--------
running test
running egg_info
writing requirements to httpie.egg-info/requires.txt
writing httpie.egg-info/PKG-INFO
writing top-level names to httpie.egg-info/top_level.txt
writing dependency_links to httpie.egg-info/dependency_links.txt
writing entry points to httpie.egg-info/entry_points.txt
reading manifest file 'httpie.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
writing manifest file 'httpie.egg-info/SOURCES.txt'
running build_ext
============================================================================ test session starts ============================================================================
platform linux2 -- Python 2.7.12+, pytest-3.0.7, py-1.4.33, pluggy-0.4.0 -- /usr/bin/python
cachedir: .cache
rootdir: /home/actionmystique/src/HTTPie/git-httpie, inifile: pytest.ini
plugins: xdist-1.15.0, timeout-1.2.0, httpbin-0.2.3, cov-2.4.0, catchlog-1.2.2
collected 236 items
httpie/utils.py::httpie.utils.humanize_bytes PASSED
tests/test_auth.py::test_basic_auth[http] PASSED
tests/test_auth.py::test_digest_auth[http---auth-type] PASSED
tests/test_auth.py::test_digest_auth[http--A] PASSED
tests/test_auth.py::test_credentials_in_url[http] PASSED
tests/test_auth.py::test_credentials_in_url_auth_flag_has_priority[http] PASSED
tests/test_downloads.py::TestDownloads::test_actual_download[http] PASSED
tests/test_downloads.py::TestDownloads::test_download_with_Content_Length[http] PASSED
tests/test_downloads.py::TestDownloads::test_download_no_Content_Length[http] PASSED
tests/test_downloads.py::TestDownloads::test_download_interrupted[http] PASSED
tests/test_httpie.py::test_GET[http] PASSED
tests/test_httpie.py::test_DELETE[http] PASSED
tests/test_httpie.py::test_PUT[http] PASSED
tests/test_httpie.py::test_POST_JSON_data[http] PASSED
tests/test_httpie.py::test_POST_form[http] PASSED
tests/test_httpie.py::test_POST_form_multiple_values[http] PASSED
tests/test_httpie.py::test_POST_stdin[http] PASSED
tests/test_httpie.py::test_headers[http] PASSED
tests/test_httpie.py::test_headers_unset[http] PASSED
tests/test_httpie.py::test_unset_host_header[http] SKIPPED
tests/test_httpie.py::test_headers_empty_value[http] PASSED
tests/test_httpie.py::test_json_input_preserve_order[http] PASSED
tests/test_auth.py::test_basic_auth[https] PASSED
tests/test_auth.py::test_digest_auth[https---auth-type] PASSED
tests/test_auth.py::test_digest_auth[https--A] PASSED
tests/test_auth.py::test_credentials_in_url[https] PASSED
tests/test_auth.py::test_credentials_in_url_auth_flag_has_priority[https] PASSED
tests/test_downloads.py::TestDownloads::test_actual_download[https] PASSED
tests/test_downloads.py::TestDownloads::test_download_with_Content_Length[https] PASSED
tests/test_downloads.py::TestDownloads::test_download_no_Content_Length[https] PASSED
tests/test_downloads.py::TestDownloads::test_download_interrupted[https] PASSED
tests/test_httpie.py::test_GET[https] PASSED
tests/test_httpie.py::test_DELETE[https] PASSED
tests/test_httpie.py::test_PUT[https] PASSED
tests/test_httpie.py::test_POST_JSON_data[https] PASSED
tests/test_httpie.py::test_POST_form[https] PASSED
tests/test_httpie.py::test_POST_form_multiple_values[https] PASSED
tests/test_httpie.py::test_POST_stdin[https] PASSED
tests/test_httpie.py::test_headers[https] PASSED
tests/test_httpie.py::test_headers_unset[https] PASSED
tests/test_httpie.py::test_unset_host_header[https] SKIPPED
tests/test_httpie.py::test_headers_empty_value[https] PASSED
tests/test_httpie.py::test_json_input_preserve_order[https] PASSED
tests/test_auth.py::test_password_prompt PASSED
tests/test_auth.py::test_only_username_in_url[username@example.org] PASSED
tests/test_auth.py::test_only_username_in_url[username:@example.org] PASSED
tests/test_auth.py::test_missing_auth PASSED
tests/test_auth_plugins.py::test_auth_plugin_parse_auth_false PASSED
tests/test_auth_plugins.py::test_auth_plugin_require_auth_false PASSED
tests/test_auth_plugins.py::test_auth_plugin_require_auth_false_and_auth_provided PASSED
tests/test_auth_plugins.py::test_auth_plugin_prompt_password_false PASSED
tests/test_binary.py::TestBinaryRequestData::test_binary_stdin PASSED
tests/test_binary.py::TestBinaryRequestData::test_binary_file_path PASSED
tests/test_binary.py::TestBinaryRequestData::test_binary_file_form PASSED
tests/test_binary.py::TestBinaryResponseData::test_binary_suppresses_when_terminal PASSED
tests/test_binary.py::TestBinaryResponseData::test_binary_suppresses_when_not_terminal_but_pretty PASSED
tests/test_binary.py::TestBinaryResponseData::test_binary_included_and_correct_when_suitable PASSED
tests/test_cli.py::TestItemParsing::test_invalid_items PASSED
tests/test_cli.py::TestItemParsing::test_escape_separator PASSED
tests/test_cli.py::TestItemParsing::test_backslash_before_non_special_character_does_not_escape[path=c:\windows-path-=-c:\windows] PASSED
tests/test_cli.py::TestItemParsing::test_backslash_before_non_special_character_does_not_escape[path=c:\windows\-path-=-c:\windows\] PASSED
tests/test_cli.py::TestItemParsing::test_backslash_before_non_special_character_does_not_escape[path\==c:\windows-path=-=-c:\windows] PASSED
tests/test_cli.py::TestItemParsing::test_escape_longsep PASSED
tests/test_cli.py::TestItemParsing::test_valid_items PASSED
tests/test_cli.py::TestItemParsing::test_multiple_file_fields_with_same_field_name PASSED
tests/test_cli.py::TestItemParsing::test_multiple_text_fields_with_same_field_name PASSED
tests/test_cli.py::TestQuerystring::test_query_string_params_in_url PASSED
tests/test_cli.py::TestQuerystring::test_query_string_params_items PASSED
tests/test_cli.py::TestQuerystring::test_query_string_params_in_url_and_items_with_duplicates PASSED
tests/test_cli.py::TestLocalhostShorthand::test_expand_localhost_shorthand PASSED
tests/test_cli.py::TestLocalhostShorthand::test_expand_localhost_shorthand_with_slash PASSED
tests/test_cli.py::TestLocalhostShorthand::test_expand_localhost_shorthand_with_port PASSED
tests/test_cli.py::TestLocalhostShorthand::test_expand_localhost_shorthand_with_path PASSED
tests/test_cli.py::TestLocalhostShorthand::test_expand_localhost_shorthand_with_port_and_slash PASSED
tests/test_cli.py::TestLocalhostShorthand::test_expand_localhost_shorthand_with_port_and_path PASSED
tests/test_cli.py::TestLocalhostShorthand::test_dont_expand_shorthand_ipv6_as_shorthand PASSED
tests/test_cli.py::TestLocalhostShorthand::test_dont_expand_longer_ipv6_as_shorthand PASSED
tests/test_cli.py::TestLocalhostShorthand::test_dont_expand_full_ipv6_as_shorthand PASSED
tests/test_cli.py::TestArgumentParser::test_guess_when_method_set_and_valid PASSED
tests/test_cli.py::TestArgumentParser::test_guess_when_method_not_set PASSED
tests/test_cli.py::TestArgumentParser::test_guess_when_method_set_but_invalid_and_data_field PASSED
tests/test_cli.py::TestArgumentParser::test_guess_when_method_set_but_invalid_and_header_field PASSED
tests/test_cli.py::TestArgumentParser::test_guess_when_method_set_but_invalid_and_item_exists PASSED
tests/test_cli.py::TestNoOptions::test_valid_no_options PASSED
tests/test_cli.py::TestNoOptions::test_invalid_no_options PASSED
tests/test_cli.py::TestIgnoreStdin::test_ignore_stdin PASSED
tests/test_cli.py::TestIgnoreStdin::test_ignore_stdin_cannot_prompt_password PASSED
tests/test_cli.py::TestSchemes::test_invalid_custom_scheme PASSED
tests/test_cli.py::TestSchemes::test_invalid_scheme_via_via_default_scheme PASSED
tests/test_cli.py::TestSchemes::test_default_scheme PASSED
tests/test_config.py::test_default_options PASSED
tests/test_config.py::test_default_options_overwrite PASSED
tests/test_config.py::test_migrate_implicit_content_type PASSED
tests/test_defaults.py::TestImplicitHTTPMethod::test_implicit_GET PASSED
tests/test_defaults.py::TestImplicitHTTPMethod::test_implicit_GET_with_headers PASSED
tests/test_defaults.py::TestImplicitHTTPMethod::test_implicit_POST_json PASSED
tests/test_defaults.py::TestImplicitHTTPMethod::test_implicit_POST_form PASSED
tests/test_defaults.py::TestImplicitHTTPMethod::test_implicit_POST_stdin PASSED
tests/test_defaults.py::TestAutoContentTypeAndAcceptHeaders::test_GET_no_data_no_auto_headers PASSED
tests/test_defaults.py::TestAutoContentTypeAndAcceptHeaders::test_POST_no_data_no_auto_headers PASSED
tests/test_defaults.py::TestAutoContentTypeAndAcceptHeaders::test_POST_with_data_auto_JSON_headers PASSED
tests/test_defaults.py::TestAutoContentTypeAndAcceptHeaders::test_GET_with_data_auto_JSON_headers PASSED
tests/test_defaults.py::TestAutoContentTypeAndAcceptHeaders::test_POST_explicit_JSON_auto_JSON_accept PASSED
tests/test_defaults.py::TestAutoContentTypeAndAcceptHeaders::test_GET_explicit_JSON_explicit_headers PASSED
tests/test_defaults.py::TestAutoContentTypeAndAcceptHeaders::test_POST_form_auto_Content_Type PASSED
tests/test_defaults.py::TestAutoContentTypeAndAcceptHeaders::test_POST_form_Content_Type_override PASSED
tests/test_defaults.py::TestAutoContentTypeAndAcceptHeaders::test_print_only_body_when_stdout_redirected_by_default PASSED
tests/test_defaults.py::TestAutoContentTypeAndAcceptHeaders::test_print_overridable_when_stdout_redirected PASSED
tests/test_docs.py::test_rst_file_syntax[/home/actionmystique/src/HTTPie/git-httpie/AUTHORS.rst] FAILED
tests/test_docs.py::test_rst_file_syntax[/home/actionmystique/src/HTTPie/git-httpie/CONTRIBUTING.rst] FAILED
tests/test_docs.py::test_rst_file_syntax[/home/actionmystique/src/HTTPie/git-httpie/CHANGELOG.rst] FAILED
tests/test_docs.py::test_rst_file_syntax[/home/actionmystique/src/HTTPie/git-httpie/README.rst] FAILED
tests/test_docs.py::test_rst_file_syntax[/home/actionmystique/src/HTTPie/git-httpie/tests/README.rst] FAILED
tests/test_downloads.py::TestDownloadUtils::test_Content_Range_parsing PASSED
tests/test_downloads.py::TestDownloadUtils::test_Content_Disposition_parsing[attachment; filename=hello-WORLD_123.txt-hello-WORLD_123.txt] PASSED
tests/test_downloads.py::TestDownloadUtils::test_Content_Disposition_parsing[attachment; filename=".hello-WORLD_123.txt"-hello-WORLD_123.txt] PASSED
tests/test_downloads.py::TestDownloadUtils::test_Content_Disposition_parsing[attachment; filename="white space.txt"-white space.txt] PASSED
tests/test_downloads.py::TestDownloadUtils::test_Content_Disposition_parsing[attachment; filename="\"quotes\".txt"-"quotes".txt] PASSED
tests/test_downloads.py::TestDownloadUtils::test_Content_Disposition_parsing[attachment; filename=/etc/hosts-hosts] PASSED
tests/test_downloads.py::TestDownloadUtils::test_Content_Disposition_parsing[attachment; filename=-None] PASSED
tests/test_downloads.py::TestDownloadUtils::test_filename_from_url PASSED
tests/test_downloads.py::TestDownloadUtils::test_unique_filename[foo.bar-0-foo.bar] PASSED
tests/test_downloads.py::TestDownloadUtils::test_unique_filename[foo.bar-1-foo.bar-1] PASSED
tests/test_downloads.py::TestDownloadUtils::test_unique_filename[foo.bar-10-foo.bar-10] PASSED
tests/test_downloads.py::TestDownloadUtils::test_unique_filename[AAAAAAAAAAAAAAAAAAAA-0-AAAAAAAAAA] PASSED
tests/test_downloads.py::TestDownloadUtils::test_unique_filename[AAAAAAAAAAAAAAAAAAAA-1-AAAAAAAA-1] PASSED
tests/test_downloads.py::TestDownloadUtils::test_unique_filename[AAAAAAAAAAAAAAAAAAAA-10-AAAAAAA-10] PASSED
tests/test_downloads.py::TestDownloadUtils::test_unique_filename[AAAAAAAAAAAAAAAAAAAA.txt-0-AAAAAA.txt] PASSED
tests/test_downloads.py::TestDownloadUtils::test_unique_filename[AAAAAAAAAAAAAAAAAAAA.txt-1-AAAA.txt-1] PASSED
tests/test_downloads.py::TestDownloadUtils::test_unique_filename[foo.AAAAAAAAAAAAAAAAAAAA-0-foo.AAAAAA] PASSED
tests/test_downloads.py::TestDownloadUtils::test_unique_filename[foo.AAAAAAAAAAAAAAAAAAAA-1-foo.AAAA-1] PASSED
tests/test_downloads.py::TestDownloadUtils::test_unique_filename[foo.AAAAAAAAAAAAAAAAAAAA-10-foo.AAA-10] PASSED
tests/test_errors.py::test_error PASSED
tests/test_errors.py::test_error_traceback PASSED
tests/test_errors.py::test_timeout PASSED
tests/test_exit_status.py::test_keyboard_interrupt_during_arg_parsing_exit_status PASSED
tests/test_exit_status.py::test_keyboard_interrupt_in_program_exit_status PASSED
tests/test_exit_status.py::test_ok_response_exits_0 PASSED
tests/test_exit_status.py::test_error_response_exits_0_without_check_status PASSED
tests/test_exit_status.py::test_timeout_exit_status PASSED
tests/test_exit_status.py::test_3xx_check_status_exits_3_and_stderr_when_stdout_redirected PASSED
tests/test_exit_status.py::test_3xx_check_status_redirects_allowed_exits_0 PASSED
tests/test_exit_status.py::test_4xx_check_status_exits_4 PASSED
tests/test_exit_status.py::test_5xx_check_status_exits_5 PASSED
tests/test_httpie.py::test_debug PASSED
tests/test_httpie.py::test_help PASSED
tests/test_httpie.py::test_version PASSED
tests/test_httpie.py::test_headers_empty_value_with_value_gives_error PASSED
tests/test_output.py::test_output_option[True] PASSED
tests/test_output.py::test_output_option[False] PASSED
tests/test_output.py::TestVerboseFlag::test_verbose PASSED
tests/test_output.py::TestVerboseFlag::test_verbose_form PASSED
tests/test_output.py::TestVerboseFlag::test_verbose_json PASSED
tests/test_output.py::TestVerboseFlag::test_verbose_implies_all PASSED
tests/test_output.py::TestColors::test_get_lexer[application/json-False-None-JSON] PASSED
tests/test_output.py::TestColors::test_get_lexer[application/json+foo-False-None-JSON] PASSED
tests/test_output.py::TestColors::test_get_lexer[application/foo+json-False-None-JSON] PASSED
tests/test_output.py::TestColors::test_get_lexer[application/json-foo-False-None-JSON] PASSED
tests/test_output.py::TestColors::test_get_lexer[application/x-json-False-None-JSON] PASSED
tests/test_output.py::TestColors::test_get_lexer[foo/json-False-None-JSON] PASSED
tests/test_output.py::TestColors::test_get_lexer[foo/json+bar-False-None-JSON] PASSED
tests/test_output.py::TestColors::test_get_lexer[foo/bar+json-False-None-JSON] PASSED
tests/test_output.py::TestColors::test_get_lexer[foo/json-foo-False-None-JSON] PASSED
tests/test_output.py::TestColors::test_get_lexer[foo/x-json-False-None-JSON] PASSED
tests/test_output.py::TestColors::test_get_lexer[application/vnd.comverge.grid+hal+json-False-None-JSON] PASSED
tests/test_output.py::TestColors::test_get_lexer[text/plain-True-{}-JSON] PASSED
tests/test_output.py::TestColors::test_get_lexer[text/plain-True-foo-Text only] PASSED
tests/test_output.py::TestColors::test_get_lexer_not_found PASSED
tests/test_output.py::TestPrettyOptions::test_pretty_enabled_by_default PASSED
tests/test_output.py::TestPrettyOptions::test_pretty_enabled_by_default_unless_stdout_redirected PASSED
tests/test_output.py::TestPrettyOptions::test_force_pretty PASSED
tests/test_output.py::TestPrettyOptions::test_force_ugly PASSED
tests/test_output.py::TestPrettyOptions::test_subtype_based_pygments_lexer_match PASSED
tests/test_output.py::TestPrettyOptions::test_colors_option PASSED
tests/test_output.py::TestPrettyOptions::test_format_option PASSED
tests/test_output.py::TestLineEndings::test_CRLF_headers_only PASSED
tests/test_output.py::TestLineEndings::test_CRLF_ugly_response PASSED
tests/test_output.py::TestLineEndings::test_CRLF_formatted_response PASSED
tests/test_output.py::TestLineEndings::test_CRLF_ugly_request PASSED
tests/test_output.py::TestLineEndings::test_CRLF_formatted_request PASSED
tests/test_redirects.py::test_follow_all_redirects_shown PASSED
tests/test_redirects.py::test_follow_without_all_redirects_hidden[--follow] PASSED
tests/test_redirects.py::test_follow_without_all_redirects_hidden[-F] PASSED
tests/test_redirects.py::test_follow_all_output_options_used_for_redirects PASSED
tests/test_redirects.py::test_follow_redirect_output_options PASSED
tests/test_redirects.py::test_max_redirects PASSED
tests/test_regressions.py::test_Host_header_overwrite PASSED
tests/test_regressions.py::test_output_devnull PASSED
tests/test_sessions.py::TestSessionFlow::test_session_created_and_reused PASSED
tests/test_sessions.py::TestSessionFlow::test_session_update PASSED
tests/test_sessions.py::TestSessionFlow::test_session_read_only PASSED
tests/test_sessions.py::TestSession::test_session_ignored_header_prefixes PASSED
tests/test_sessions.py::TestSession::test_session_by_path PASSED
tests/test_sessions.py::TestSession::test_session_unicode PASSED
tests/test_sessions.py::TestSession::test_session_default_header_value_overwritten PASSED
tests/test_sessions.py::TestSession::test_download_in_session PASSED
tests/test_ssl.py::test_ssl_version[ssl2.3] PASSED
tests/test_ssl.py::test_ssl_version[tls1] PASSED
tests/test_ssl.py::test_ssl_version[tls1.1] PASSED
tests/test_ssl.py::test_ssl_version[tls1.2] PASSED
tests/test_ssl.py::TestClientCert::test_cert_and_key PASSED
tests/test_ssl.py::TestClientCert::test_cert_pem PASSED
tests/test_ssl.py::TestClientCert::test_cert_file_not_found PASSED
tests/test_ssl.py::TestClientCert::test_cert_file_invalid FAILED
tests/test_ssl.py::TestClientCert::test_cert_ok_but_missing_key FAILED
tests/test_ssl.py::TestServerCert::test_verify_no_OK PASSED
tests/test_ssl.py::TestServerCert::test_verify_custom_ca_bundle_path PASSED
tests/test_ssl.py::TestServerCert::test_self_signed_server_cert_by_default_raises_ssl_error PASSED
tests/test_ssl.py::TestServerCert::test_verify_custom_ca_bundle_invalid_path FAILED
tests/test_ssl.py::TestServerCert::test_verify_custom_ca_bundle_invalid_bundle pytest-httpbin server hit an exception serving request: EOF occurred in violation of protocol (_ssl.c:590)
attempting to ignore so the rest of the tests can run
FAILED
tests/test_stream.py::test_pretty_redirected_stream PASSED
tests/test_stream.py::test_encoded_stream PASSED
tests/test_stream.py::test_redirected_stream PASSED
tests/test_unicode.py::test_unicode_headers PASSED
tests/test_unicode.py::test_unicode_headers_verbose PASSED
tests/test_unicode.py::test_unicode_form_item PASSED
tests/test_unicode.py::test_unicode_form_item_verbose PASSED
tests/test_unicode.py::test_unicode_json_item PASSED
tests/test_unicode.py::test_unicode_json_item_verbose PASSED
tests/test_unicode.py::test_unicode_raw_json_item PASSED
tests/test_unicode.py::test_unicode_raw_json_item_verbose PASSED
tests/test_unicode.py::test_unicode_url_query_arg_item PASSED
tests/test_unicode.py::test_unicode_url_query_arg_item_verbose PASSED
tests/test_unicode.py::test_unicode_url PASSED
tests/test_unicode.py::test_unicode_basic_auth PASSED
tests/test_unicode.py::test_unicode_digest_auth PASSED
tests/test_uploads.py::TestMultipartFormDataFileUpload::test_non_existent_file_raises_parse_error PASSED
tests/test_uploads.py::TestMultipartFormDataFileUpload::test_upload_ok PASSED
tests/test_uploads.py::TestMultipartFormDataFileUpload::test_upload_multiple_fields_with_the_same_name PASSED
tests/test_uploads.py::TestRequestBodyFromFilePath::test_request_body_from_file_by_path PASSED
tests/test_uploads.py::TestRequestBodyFromFilePath::test_request_body_from_file_by_path_with_explicit_content_type PASSED
tests/test_uploads.py::TestRequestBodyFromFilePath::test_request_body_from_file_by_path_no_field_name_allowed PASSED
tests/test_uploads.py::TestRequestBodyFromFilePath::test_request_body_from_file_by_path_no_data_items_allowed PASSED
tests/test_windows.py::TestWindowsOnly::test_windows_colorized_output SKIPPED
tests/test_windows.py::TestFakeWindows::test_output_file_pretty_not_allowed_on_windows PASSED
tests/utils.py::utils.http PASSED
================================================================================= FAILURES ==================================================================================
_______________________________________________ test_rst_file_syntax[/home/actionmystique/src/HTTPie/git-httpie/AUTHORS.rst] ________________________________________________
filename = '/home/actionmystique/src/HTTPie/git-httpie/AUTHORS.rst'
@pytest.mark.skipif(not has_docutils(), reason='docutils not installed')
@pytest.mark.parametrize('filename', filenames)
def test_rst_file_syntax(filename):
p = subprocess.Popen(
['rst2pseudoxml.py', '--report=1', '--exit-status=1', filename],
stderr=subprocess.PIPE,
stdout=subprocess.PIPE
)
err = p.communicate()[1]
> assert p.returncode == 0, err.decode('utf8')
E AssertionError: Traceback (most recent call last):
E File "/usr/local/bin/rst2pseudoxml.py", line 17, in <module>
E from docutils.core import publish_cmdline, default_description
E File "/usr/local/lib/python2.7/dist-packages/docutils/__init__.py", line 68, in <module>
E class ApplicationError(StandardError):
E NameError: name 'StandardError' is not defined
E
E assert 1 == 0
E + where 1 = <subprocess.Popen object at 0x7f6c45daf8d0>.returncode
tests/test_docs.py:39: AssertionError
_____________________________________________ test_rst_file_syntax[/home/actionmystique/src/HTTPie/git-httpie/CONTRIBUTING.rst] _____________________________________________
filename = '/home/actionmystique/src/HTTPie/git-httpie/CONTRIBUTING.rst'
@pytest.mark.skipif(not has_docutils(), reason='docutils not installed')
@pytest.mark.parametrize('filename', filenames)
def test_rst_file_syntax(filename):
p = subprocess.Popen(
['rst2pseudoxml.py', '--report=1', '--exit-status=1', filename],
stderr=subprocess.PIPE,
stdout=subprocess.PIPE
)
err = p.communicate()[1]
> assert p.returncode == 0, err.decode('utf8')
E AssertionError: Traceback (most recent call last):
E File "/usr/local/bin/rst2pseudoxml.py", line 17, in <module>
E from docutils.core import publish_cmdline, default_description
E File "/usr/local/lib/python2.7/dist-packages/docutils/__init__.py", line 68, in <module>
E class ApplicationError(StandardError):
E NameError: name 'StandardError' is not defined
E
E assert 1 == 0
E + where 1 = <subprocess.Popen object at 0x7f6c44de4950>.returncode
tests/test_docs.py:39: AssertionError
______________________________________________ test_rst_file_syntax[/home/actionmystique/src/HTTPie/git-httpie/CHANGELOG.rst] _______________________________________________
filename = '/home/actionmystique/src/HTTPie/git-httpie/CHANGELOG.rst'
@pytest.mark.skipif(not has_docutils(), reason='docutils not installed')
@pytest.mark.parametrize('filename', filenames)
def test_rst_file_syntax(filename):
p = subprocess.Popen(
['rst2pseudoxml.py', '--report=1', '--exit-status=1', filename],
stderr=subprocess.PIPE,
stdout=subprocess.PIPE
)
err = p.communicate()[1]
> assert p.returncode == 0, err.decode('utf8')
E AssertionError: Traceback (most recent call last):
E File "/usr/local/bin/rst2pseudoxml.py", line 17, in <module>
E from docutils.core import publish_cmdline, default_description
E File "/usr/local/lib/python2.7/dist-packages/docutils/__init__.py", line 68, in <module>
E class ApplicationError(StandardError):
E NameError: name 'StandardError' is not defined
E
E assert 1 == 0
E + where 1 = <subprocess.Popen object at 0x7f6c44e22c90>.returncode
tests/test_docs.py:39: AssertionError
________________________________________________ test_rst_file_syntax[/home/actionmystique/src/HTTPie/git-httpie/README.rst] ________________________________________________
filename = '/home/actionmystique/src/HTTPie/git-httpie/README.rst'
@pytest.mark.skipif(not has_docutils(), reason='docutils not installed')
@pytest.mark.parametrize('filename', filenames)
def test_rst_file_syntax(filename):
p = subprocess.Popen(
['rst2pseudoxml.py', '--report=1', '--exit-status=1', filename],
stderr=subprocess.PIPE,
stdout=subprocess.PIPE
)
err = p.communicate()[1]
> assert p.returncode == 0, err.decode('utf8')
E AssertionError: Traceback (most recent call last):
E File "/usr/local/bin/rst2pseudoxml.py", line 17, in <module>
E from docutils.core import publish_cmdline, default_description
E File "/usr/local/lib/python2.7/dist-packages/docutils/__init__.py", line 68, in <module>
E class ApplicationError(StandardError):
E NameError: name 'StandardError' is not defined
E
E assert 1 == 0
E + where 1 = <subprocess.Popen object at 0x7f6c44de4e10>.returncode
tests/test_docs.py:39: AssertionError
_____________________________________________ test_rst_file_syntax[/home/actionmystique/src/HTTPie/git-httpie/tests/README.rst] _____________________________________________
filename = '/home/actionmystique/src/HTTPie/git-httpie/tests/README.rst'
@pytest.mark.skipif(not has_docutils(), reason='docutils not installed')
@pytest.mark.parametrize('filename', filenames)
def test_rst_file_syntax(filename):
p = subprocess.Popen(
['rst2pseudoxml.py', '--report=1', '--exit-status=1', filename],
stderr=subprocess.PIPE,
stdout=subprocess.PIPE
)
err = p.communicate()[1]
> assert p.returncode == 0, err.decode('utf8')
E AssertionError: Traceback (most recent call last):
E File "/usr/local/bin/rst2pseudoxml.py", line 17, in <module>
E from docutils.core import publish_cmdline, default_description
E File "/usr/local/lib/python2.7/dist-packages/docutils/__init__.py", line 68, in <module>
E class ApplicationError(StandardError):
E NameError: name 'StandardError' is not defined
E
E assert 1 == 0
E + where 1 = <subprocess.Popen object at 0x7f6c44e1cfd0>.returncode
tests/test_docs.py:39: AssertionError
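The `NameError: name 'StandardError' is not defined` seen in all five docutils failures is the classic symptom of Python-2-only code being executed under Python 3 — `StandardError` was removed in Python 3 (an inference from the traceback, not something the log states directly). A minimal check:

```python
import builtins

# StandardError existed only in Python 2; Python 3 removed it, so importing
# Py2-era code such as `class ApplicationError(StandardError)` raises NameError.
print(hasattr(builtins, "StandardError"))  # → False on any Python 3
```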
___________________________________________________________________ TestClientCert.test_cert_file_invalid ___________________________________________________________________
self = <test_ssl.TestClientCert instance at 0x7f6c3d403998>, httpbin_secure = <SecureServer(<class 'pytest_httpbin.serve.SecureServer'>, started 140102906935040)>
def test_cert_file_invalid(self, httpbin_secure):
with pytest.raises(SSLError):
http(httpbin_secure + '/get',
> '--cert', __file__)
tests/test_ssl.py:62:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
tests/utils.py:207: in http
exit_status = main(args=args, **kwargs)
httpie/core.py:227: in main
log_error=log_error,
httpie/core.py:99: in program
final_response = get_response(args, config_dir=env.config.directory)
httpie/client.py:70: in get_response
response = requests_session.request(**kwargs)
/usr/local/lib/python2.7/dist-packages/requests/sessions.py:488: in request
resp = self.send(prep, **send_kwargs)
/usr/local/lib/python2.7/dist-packages/requests/sessions.py:609: in send
r = adapter.send(request, **kwargs)
/usr/local/lib/python2.7/dist-packages/requests/adapters.py:423: in send
timeout=timeout
/usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/connectionpool.py:600: in urlopen
chunked=chunked)
/usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/connectionpool.py:345: in _make_request
self._validate_conn(conn)
/usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/connectionpool.py:844: in _validate_conn
conn.connect()
/usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/connection.py:326: in connect
ssl_context=context)
/usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/util/ssl_.py:322: in ssl_wrap_socket
context.load_cert_chain(certfile, keyfile)
/usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/contrib/pyopenssl.py:416: in load_cert_chain
self._ctx.use_certificate_file(certfile)
/usr/local/lib/python2.7/dist-packages/OpenSSL/SSL.py:740: in use_certificate_file
_raise_current_error()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
exception_type = <class 'OpenSSL.SSL.Error'>
def exception_from_error_queue(exception_type):
"""
Convert an OpenSSL library failure into a Python exception.
When a call to the native OpenSSL library fails, this is usually signalled
by the return value, and an error code is stored in an error queue
associated with the current thread. The err library provides functions to
obtain these error codes and textual error messages.
"""
errors = []
while True:
error = lib.ERR_get_error()
if error == 0:
break
errors.append((
text(lib.ERR_lib_error_string(error)),
text(lib.ERR_func_error_string(error)),
text(lib.ERR_reason_error_string(error))))
> raise exception_type(errors)
E Error: [('PEM routines', 'PEM_read_bio', 'no start line'), ('SSL routines', 'SSL_CTX_use_certificate_file', 'PEM lib')]
/usr/local/lib/python2.7/dist-packages/OpenSSL/_util.py:54: Error
--------------------------------------------------------------------------- Captured stderr call ----------------------------------------------------------------------------
http: error: Error: [('PEM routines', 'PEM_read_bio', 'no start line'), ('SSL routines', 'SSL_CTX_use_certificate_file', 'PEM lib')]
----------------------------------------------------------------------------- Captured log call -----------------------------------------------------------------------------
connectionpool.py 818 DEBUG Starting new HTTPS connection (1): 127.0.0.1
________________________________________________________________ TestClientCert.test_cert_ok_but_missing_key ________________________________________________________________
self = <test_ssl.TestClientCert instance at 0x7f6c3d2b9b00>, httpbin_secure = <SecureServer(<class 'pytest_httpbin.serve.SecureServer'>, started 140102906935040)>
def test_cert_ok_but_missing_key(self, httpbin_secure):
with pytest.raises(SSLError):
http(httpbin_secure + '/get',
> '--cert', CLIENT_CERT)
tests/test_ssl.py:67:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
tests/utils.py:207: in http
exit_status = main(args=args, **kwargs)
httpie/core.py:227: in main
log_error=log_error,
httpie/core.py:99: in program
final_response = get_response(args, config_dir=env.config.directory)
httpie/client.py:70: in get_response
response = requests_session.request(**kwargs)
/usr/local/lib/python2.7/dist-packages/requests/sessions.py:488: in request
resp = self.send(prep, **send_kwargs)
/usr/local/lib/python2.7/dist-packages/requests/sessions.py:609: in send
r = adapter.send(request, **kwargs)
/usr/local/lib/python2.7/dist-packages/requests/adapters.py:423: in send
timeout=timeout
/usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/connectionpool.py:600: in urlopen
chunked=chunked)
/usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/connectionpool.py:345: in _make_request
self._validate_conn(conn)
/usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/connectionpool.py:844: in _validate_conn
conn.connect()
/usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/connection.py:326: in connect
ssl_context=context)
/usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/util/ssl_.py:322: in ssl_wrap_socket
context.load_cert_chain(certfile, keyfile)
/usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/contrib/pyopenssl.py:419: in load_cert_chain
self._ctx.use_privatekey_file(keyfile or certfile)
/usr/local/lib/python2.7/dist-packages/OpenSSL/SSL.py:798: in use_privatekey_file
self._raise_passphrase_exception()
/usr/local/lib/python2.7/dist-packages/OpenSSL/SSL.py:777: in _raise_passphrase_exception
_raise_current_error()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
exception_type = <class 'OpenSSL.SSL.Error'>
def exception_from_error_queue(exception_type):
"""
Convert an OpenSSL library failure into a Python exception.
When a call to the native OpenSSL library fails, this is usually signalled
by the return value, and an error code is stored in an error queue
associated with the current thread. The err library provides functions to
obtain these error codes and textual error messages.
"""
errors = []
while True:
error = lib.ERR_get_error()
if error == 0:
break
errors.append((
text(lib.ERR_lib_error_string(error)),
text(lib.ERR_func_error_string(error)),
text(lib.ERR_reason_error_string(error))))
> raise exception_type(errors)
E Error: [('PEM routines', 'PEM_read_bio', 'no start line'), ('SSL routines', 'SSL_CTX_use_PrivateKey_file', 'PEM lib')]
/usr/local/lib/python2.7/dist-packages/OpenSSL/_util.py:54: Error
--------------------------------------------------------------------------- Captured stderr call ----------------------------------------------------------------------------
http: error: Error: [('PEM routines', 'PEM_read_bio', 'no start line'), ('SSL routines', 'SSL_CTX_use_PrivateKey_file', 'PEM lib')]
----------------------------------------------------------------------------- Captured log call -----------------------------------------------------------------------------
connectionpool.py 818 DEBUG Starting new HTTPS connection (1): 127.0.0.1
_________________________________________________________ TestServerCert.test_verify_custom_ca_bundle_invalid_path __________________________________________________________
self = <test_ssl.TestServerCert instance at 0x7f6c45eab638>, httpbin_secure = <SecureServer(<class 'pytest_httpbin.serve.SecureServer'>, started 140102906935040)>
def test_verify_custom_ca_bundle_invalid_path(self, httpbin_secure):
with pytest.raises(SSLError):
> http(httpbin_secure.url + '/get', '--verify', '/__not_found__')
tests/test_ssl.py:89:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
tests/utils.py:207: in http
exit_status = main(args=args, **kwargs)
httpie/core.py:227: in main
log_error=log_error,
httpie/core.py:99: in program
final_response = get_response(args, config_dir=env.config.directory)
httpie/client.py:70: in get_response
response = requests_session.request(**kwargs)
/usr/local/lib/python2.7/dist-packages/requests/sessions.py:488: in request
resp = self.send(prep, **send_kwargs)
/usr/local/lib/python2.7/dist-packages/requests/sessions.py:609: in send
r = adapter.send(request, **kwargs)
/usr/local/lib/python2.7/dist-packages/requests/adapters.py:423: in send
timeout=timeout
/usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/connectionpool.py:600: in urlopen
chunked=chunked)
/usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/connectionpool.py:345: in _make_request
self._validate_conn(conn)
/usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/connectionpool.py:844: in _validate_conn
conn.connect()
/usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/connection.py:326: in connect
ssl_context=context)
/usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/util/ssl_.py:308: in ssl_wrap_socket
context.load_verify_locations(ca_certs, ca_cert_dir)
/usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/contrib/pyopenssl.py:411: in load_verify_locations
self._ctx.load_verify_locations(cafile, capath)
/usr/local/lib/python2.7/dist-packages/OpenSSL/SSL.py:669: in load_verify_locations
_raise_current_error()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
exception_type = <class 'OpenSSL.SSL.Error'>
def exception_from_error_queue(exception_type):
"""
Convert an OpenSSL library failure into a Python exception.
When a call to the native OpenSSL library fails, this is usually signalled
by the return value, and an error code is stored in an error queue
associated with the current thread. The err library provides functions to
obtain these error codes and textual error messages.
"""
errors = []
while True:
error = lib.ERR_get_error()
if error == 0:
break
errors.append((
text(lib.ERR_lib_error_string(error)),
text(lib.ERR_func_error_string(error)),
text(lib.ERR_reason_error_string(error))))
> raise exception_type(errors)
E Error: [('system library', 'fopen', 'No such file or directory'), ('BIO routines', 'BIO_new_file', 'no such file'), ('x509 certificate routines', 'X509_load_cert_crl_file', 'system lib')]
/usr/local/lib/python2.7/dist-packages/OpenSSL/_util.py:54: Error
--------------------------------------------------------------------------- Captured stderr call ----------------------------------------------------------------------------
http: error: Error: [('system library', 'fopen', 'No such file or directory'), ('BIO routines', 'BIO_new_file', 'no such file'), ('x509 certificate routines', 'X509_load_cert_crl_file', 'system lib')]
----------------------------------------------------------------------------- Captured log call -----------------------------------------------------------------------------
connectionpool.py 818 DEBUG Starting new HTTPS connection (1): 127.0.0.1
________________________________________________________ TestServerCert.test_verify_custom_ca_bundle_invalid_bundle _________________________________________________________
self = <test_ssl.TestServerCert instance at 0x7f6c3d2e27e8>, httpbin_secure = <SecureServer(<class 'pytest_httpbin.serve.SecureServer'>, started 140102906935040)>
def test_verify_custom_ca_bundle_invalid_bundle(self, httpbin_secure):
with pytest.raises(SSLError):
> http(httpbin_secure.url + '/get', '--verify', __file__)
tests/test_ssl.py:93:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
tests/utils.py:207: in http
exit_status = main(args=args, **kwargs)
httpie/core.py:227: in main
log_error=log_error,
httpie/core.py:99: in program
final_response = get_response(args, config_dir=env.config.directory)
httpie/client.py:70: in get_response
response = requests_session.request(**kwargs)
/usr/local/lib/python2.7/dist-packages/requests/sessions.py:488: in request
resp = self.send(prep, **send_kwargs)
/usr/local/lib/python2.7/dist-packages/requests/sessions.py:609: in send
r = adapter.send(request, **kwargs)
/usr/local/lib/python2.7/dist-packages/requests/adapters.py:423: in send
timeout=timeout
/usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/connectionpool.py:600: in urlopen
chunked=chunked)
/usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/connectionpool.py:345: in _make_request
self._validate_conn(conn)
/usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/connectionpool.py:844: in _validate_conn
conn.connect()
/usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/connection.py:326: in connect
ssl_context=context)
/usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/util/ssl_.py:308: in ssl_wrap_socket
context.load_verify_locations(ca_certs, ca_cert_dir)
/usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/contrib/pyopenssl.py:411: in load_verify_locations
self._ctx.load_verify_locations(cafile, capath)
/usr/local/lib/python2.7/dist-packages/OpenSSL/SSL.py:669: in load_verify_locations
_raise_current_error()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
exception_type = <class 'OpenSSL.SSL.Error'>
def exception_from_error_queue(exception_type):
"""
Convert an OpenSSL library failure into a Python exception.
When a call to the native OpenSSL library fails, this is usually signalled
by the return value, and an error code is stored in an error queue
associated with the current thread. The err library provides functions to
obtain these error codes and textual error messages.
"""
errors = []
while True:
error = lib.ERR_get_error()
if error == 0:
break
errors.append((
text(lib.ERR_lib_error_string(error)),
text(lib.ERR_func_error_string(error)),
text(lib.ERR_reason_error_string(error))))
> raise exception_type(errors)
E Error: []
/usr/local/lib/python2.7/dist-packages/OpenSSL/_util.py:54: Error
--------------------------------------------------------------------------- Captured stderr call ----------------------------------------------------------------------------
http: error: Error: []
----------------------------------------------------------------------------- Captured log call -----------------------------------------------------------------------------
connectionpool.py 818 DEBUG Starting new HTTPS connection (1): 127.0.0.1
========================================================================== pytest-warning summary ===========================================================================
WC1 /home/actionmystique/src/HTTPie/git-httpie/tests/test_auth.py cannot collect test class 'TestEnvironment' because it has a __init__ constructor
WC1 /home/actionmystique/src/HTTPie/git-httpie/tests/test_binary.py cannot collect test class 'TestEnvironment' because it has a __init__ constructor
WC1 /home/actionmystique/src/HTTPie/git-httpie/tests/test_cli.py cannot collect test class 'TestEnvironment' because it has a __init__ constructor
WC1 /home/actionmystique/src/HTTPie/git-httpie/tests/test_config.py cannot collect test class 'TestEnvironment' because it has a __init__ constructor
WC1 /home/actionmystique/src/HTTPie/git-httpie/tests/test_defaults.py cannot collect test class 'TestEnvironment' because it has a __init__ constructor
WC1 /home/actionmystique/src/HTTPie/git-httpie/tests/test_downloads.py cannot collect test class 'TestEnvironment' because it has a __init__ constructor
WC1 /home/actionmystique/src/HTTPie/git-httpie/tests/test_exit_status.py cannot collect test class 'TestEnvironment' because it has a __init__ constructor
WC1 /home/actionmystique/src/HTTPie/git-httpie/tests/test_httpie.py cannot collect test class 'TestEnvironment' because it has a __init__ constructor
WC1 /home/actionmystique/src/HTTPie/git-httpie/tests/test_output.py cannot collect test class 'TestEnvironment' because it has a __init__ constructor
WC1 /home/actionmystique/src/HTTPie/git-httpie/tests/test_sessions.py cannot collect test class 'TestEnvironment' because it has a __init__ constructor
WC1 /home/actionmystique/src/HTTPie/git-httpie/tests/test_stream.py cannot collect test class 'TestEnvironment' because it has a __init__ constructor
WC1 /home/actionmystique/src/HTTPie/git-httpie/tests/test_uploads.py cannot collect test class 'TestEnvironment' because it has a __init__ constructor
WC1 /home/actionmystique/src/HTTPie/git-httpie/tests/test_windows.py cannot collect test class 'TestEnvironment' because it has a __init__ constructor
=================================================== 9 failed, 224 passed, 3 skipped, 13 pytest-warnings in 14.51 seconds ====================================================
``` | closed | 2017-05-01T18:14:35Z | 2019-09-03T15:23:17Z | https://github.com/httpie/cli/issues/581 | [] | jean-christophe-manciot | 6 |
giotto-ai/giotto-tda | scikit-learn | 42 | Linting | The Azure Pipelines run the `flake8` command to check code linting. There are some linting problems:
```
./giotto/__init__.py:1:48: W291 trailing whitespace
./giotto/base.py:1:80: E501 line too long (90 > 79 characters)
./giotto/homology/point_clouds.py:195:13: E128 continuation line under-indented for visual indent
./giotto/utils/__init__.py:1:80: E501 line too long (81 > 79 characters)
```
Once these linting problems are corrected, we can remove the `--exit-zero` flag from the `flake8` command and make the pipelines fail if the linting does not pass. Proper linting has to be a mandatory check for all PRs.
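To illustrate what dropping `--exit-zero` changes (a sketch only — the real pipeline invokes `flake8` directly, and its YAML is not shown here), the gate simply stops masking the linter's exit status:

```python
import subprocess
import sys

def lint_gate(cmd, exit_zero=False):
    """Run a lint command; with exit_zero the pipeline never fails on findings."""
    result = subprocess.run(cmd)
    return 0 if exit_zero else result.returncode

# Stand-in for `flake8` reporting problems: a subprocess that exits 1.
failing_linter = [sys.executable, "-c", "import sys; sys.exit(1)"]
print(lint_gate(failing_linter, exit_zero=True))   # → 0, pipeline keeps going
print(lint_gate(failing_linter, exit_zero=False))  # → 1, pipeline stops
```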
| closed | 2019-10-20T21:32:24Z | 2019-12-06T06:56:38Z | https://github.com/giotto-ai/giotto-tda/issues/42 | [] | matteocao | 0 |
sammchardy/python-binance | api | 646 | BinanceAPIException: APIError(code=-1022): Signature for this request is not valid. - when trying to cancel order | Hi, I'm trying to cancel an active order. I get the list of orders with
`futures_get_open_orders()`
and I try to cancel the order like this:
`futures_cancel_order(symbol='EGLDUSDT', origClientOrderIdList=["asdasda"], timestamp=client.get_server_time())`
or:
`futures_cancel_order(symbol='EGLDUSDT', orderIdList=[1231321], timestamp=client.get_server_time())`
but every time I get this error:
Traceback (most recent call last):
File "cancel_order.py", line 69, in <module>
close_order = client.futures_cancel_order(symbol=simbol, origClientOrderIdList=["asdasda"], timestamp=client.get_server_time())
File "/home/aaa/.local/lib/python3.6/site-packages/binance/client.py", line 4998, in futures_cancel_order
return self._request_futures_api('delete', 'order', True, data=params)
File "/home/aaa/.local/lib/python3.6/site-packages/binance/client.py", line 222, in _request_futures_api
return self._request(method, uri, signed, True, **kwargs)
File "/home/aaa/.local/lib/python3.6/site-packages/binance/client.py", line 197, in _request
return self._handle_response()
File "/home/aaa/.local/lib/python3.6/site-packages/binance/client.py", line 230, in _handle_response
raise BinanceAPIException(self.response)
binance.exceptions.BinanceAPIException: APIError(code=-1022): Signature for this request is not valid.
I have to mention that buy/get/userdata etc. work.
Any suggestions?
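One plausible cause, offered as an assumption rather than a confirmed diagnosis: `get_server_time()` returns a dict of the form `{'serverTime': ...}`, so passing its result directly as `timestamp` embeds a stringified dict in the query string that gets signed — which would yield exactly this `-1022` error. A self-contained illustration of how the signed payload differs:

```python
from urllib.parse import urlencode

server_time = {"serverTime": 1610755200000}  # shape of get_server_time()'s result

# Passing the whole dict as `timestamp` stringifies and percent-encodes it:
bad = urlencode({"symbol": "EGLDUSDT", "timestamp": server_time})
# Passing the integer keeps the payload in the form the server expects:
good = urlencode({"symbol": "EGLDUSDT", "timestamp": server_time["serverTime"]})

print(bad)
print(good)
```

It is also worth double-checking the parameter names against the python-binance docs: the `*List` parameters belong to the batch-cancel endpoint, while the single-order cancel normally takes `orderId` or `origClientOrderId` (again an assumption from memory, not verified here).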
thanks | open | 2021-01-16T00:18:33Z | 2023-05-24T14:43:05Z | https://github.com/sammchardy/python-binance/issues/646 | [] | liviuancas | 7 |
nltk/nltk | nlp | 2,549 | CoreNLP server connection issue | For some reason, I can't connect to the server. Can anyone help me out? Here's the code I'm running.
from nltk.parse.corenlp import CoreNLPServer
server = CoreNLPServer("stanford-corenlp-4.0.0.jar","stanford-corenlp-4.0.0-models.jar",verbose=True)
server.start()
from nltk.parse.corenlp import CoreNLPParser
parser = CoreNLPParser()
parse = next(parser.raw_parse("I put the book in the box on the table."))
server.stop()
I am running it in a file inside the folder that I downloaded for CoreNLP so that I don't have to worry about absolute paths or anything.
Here's what it outputs:
[Found java: /usr/bin/java]
[Found java: /usr/bin/java]
Traceback (most recent call last):
File "/Users/benstevens/Desktop/Bernstein2/Ben/stanford-corenlp-4.0.0/BenTest.py", line 4, in <module>
server.start()
File "/Users/benstevens/Library/Python/3.8/lib/python/site-packages/nltk/parse/corenlp.py", line 153, in start
raise CoreNLPServerError("Could not connect to the server.")
nltk.parse.corenlp.CoreNLPServerError: Could not connect to the server.
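"Could not connect to the server" generally means nothing was listening on the expected port when the client probed it (CoreNLP's default is 9000; the port actually tried is not shown in the log, so this is an assumption). The hypothetical `port_open` helper below shows one way to probe, demonstrated against a throwaway local listener rather than a real CoreNLP server:

```python
import socket

def port_open(host, port, timeout=1.0):
    """Return True if a TCP connection to (host, port) succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Demo against a throwaway listener instead of a real CoreNLP server.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))      # let the OS pick a free port
listener.listen(1)
port = listener.getsockname()[1]
print(port_open("127.0.0.1", port))  # → True: something is listening
listener.close()
print(port_open("127.0.0.1", port))  # → False: nothing is listening now
```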
I would greatly appreciate any help. | open | 2020-05-29T03:42:58Z | 2020-05-29T03:42:58Z | https://github.com/nltk/nltk/issues/2549 | [] | BS211561 | 0 |
CorentinJ/Real-Time-Voice-Cloning | python | 1,273 | AttributeError: module 'umap' has no attribute 'UMAP' on macOS Big Sur | I tried a clean installation of the repo on macOS with Python3.8:
```
brew install python@3.8
mkdir venv
python3.8 -m venv ./venv
source venv/bin/activate
python3 -m pip install --upgrade pip
# packages missing from requirements.txt
python3 -m pip install torch torchvision torchaudio unidecode librosa inflect sounddevice umap PyQt5
python3 -m pip install -r requirements.txt
#python3 demo_cli.py
python3 demo_toolbox.py
```
I upload 4 utterances (30-45s long), and after the last one, I get this error:
```
Traceback (most recent call last):
File "~/Real-Time-Voice-Cloning/toolbox/__init__.py", line 114, in <lambda>
func = lambda: self.synthesize() or self.vocode()
File "~/Real-Time-Voice-Cloning/toolbox/__init__.py", line 311, in vocode
self.ui.draw_umap_projections(self.utterances)
File "~/Real-Time-Voice-Cloning/toolbox/ui.py", line 118, in draw_umap_projections
reducer = umap.UMAP(int(np.ceil(np.sqrt(len(embeds)))), metric="cosine")
AttributeError: module 'umap' has no attribute 'UMAP'
```
I try to generate speech, but the result is completely garbled and I get another of the errors above.
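The `AttributeError: module 'umap' has no attribute 'UMAP'` is consistent with the wrong PyPI package being installed: as far as I know the toolbox needs `umap-learn`, which provides a `umap` module exposing the `UMAP` class, while the plain `umap` package listed in the install commands above is unrelated. The hypothetical `has_umap_class` guard below reproduces the symptom with a stub module:

```python
import sys
import types

def has_umap_class():
    """True only if the importable `umap` module exposes the UMAP class."""
    try:
        import umap
    except ImportError:
        return False
    return hasattr(umap, "UMAP")

# Simulate the broken install: a module named "umap" with no UMAP class.
sys.modules["umap"] = types.ModuleType("umap")
print(has_umap_class())            # → False: wrong package
sys.modules["umap"].UMAP = object  # what umap-learn actually provides
print(has_umap_class())            # → True
```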
I try the CLI with `python3 demo_cli.py` and all tests pass. I pass a single voice file to clone a voice and I get no speech either; instead, it sounds like the recording of someone running.
I use Python 3.8 because I am sure that the program installs and runs on that version. I tried Python 3.12 and got this error:
```
python3 -m pip install torch torchvision torchaudio unidecode librosa inflect sounddevice umap PyQt5
ERROR: Could not find a version that satisfies the requirement torch (from versions: none)
```
Since this is a clean install in a new environment, I'm not sure what I'm doing wrong, so I submit an issue. | open | 2023-11-19T17:45:53Z | 2024-07-25T20:39:17Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1273 | [] | mm3509 | 2 |