| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
mwaskom/seaborn | matplotlib | 3,749 | Cannot plot certain data with kdeplot, warning that dataset has 0 variance which is not. | Cannot plot certain data with kdeplot; it warns that the dataset has 0 variance, which it does not.
```
/gpfs/share/code/pku_env/micromamba/envs/pytorch_cpu/lib/python3.12/site-packages/pandas/core/nanops.py:1016: RuntimeWarning: invalid value encountered in subtract
sqr = _ensure_numeric((avg - values) ** 2)
<ipython-input-20-d1f90ecbf4e4>:1: UserWarning: Dataset has 0 variance; skipping density estimate. Pass `warn_singular=False` to disable this warning.
sns.kdeplot(s, bw_adjust=.25, log_scale=True)
```
Checking dataset
```
In [32]: print(s.mean(), s.std(), s.max(), s.min())
7.094745e-11 2.368824e-10 1.5460646e-08 0.0
```
Disabling `log_scale` or using a fraction of the sample such as `s[10000:]` does not help.
dataset npy zipped file [exp2.zip](https://github.com/user-attachments/files/16688291/exp2.zip)
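For what it's worth, the stats above show a minimum of exactly 0.0. With `log_scale=True`, one plausible mechanism (an assumption from the numbers above, not a confirmed reading of the seaborn source) is that the log transform turns that zero into `-inf`, making the variance non-finite, so the density estimate gets skipped. A minimal numpy sketch of that effect:

```python
import numpy as np

# values shaped like the reported stats: tiny positives plus an exact 0.0
s = np.array([0.0, 7.1e-11, 2.4e-10, 1.5e-08])

with np.errstate(divide="ignore", invalid="ignore"):
    logged = np.log10(s)   # the 0.0 sample becomes -inf
    var = np.var(logged)   # (-inf) - (-inf) gives nan, so the variance is not finite

print(logged[0], var)  # -inf nan
```

If that is what's happening, dropping or clipping the zero samples before calling `kdeplot` would be a way to test it.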
| closed | 2024-08-21T08:11:23Z | 2024-08-21T12:06:46Z | https://github.com/mwaskom/seaborn/issues/3749 | [] | Wongboo | 1 |
ARM-DOE/pyart | data-visualization | 861 | Issue with altitude and gridding when reading in Level3 Files | First, MANY THANKS to all the contributors to this project, what an amazing tool you've created.
Wanted to let you know I found an issue with radar site altitudes when reading in Level3 files. It appears that at least sometimes the site altitudes imported from the Level3 files are in feet, but pyart is interpreting them as meters. This tripped me up for a while, because when gridding the data the radius of influence can miss the grid in the z dimension, leading to what appears to be "missing data" and "holes in the data" for no obvious reason.
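A sketch of the kind of unit correction that can work around this, assuming the imported site altitude really is in feet (the `radar.altitude['data']` attribute mentioned in the comment follows Py-ART's convention, but treat it as an assumption here):

```python
FEET_TO_METERS = 0.3048  # exact definition of the international foot

def feet_to_meters(altitude_ft):
    """Convert a site altitude reported in feet to meters."""
    return altitude_ft * FEET_TO_METERS

# e.g. before gridding (attribute name is Py-ART's convention, treat as an assumption):
# radar.altitude['data'] = feet_to_meters(radar.altitude['data'])
print(feet_to_meters(1000.0))  # about 304.8
```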
I already have a workaround that suits my needs and am assuming this probably isn't a high priority fix. Just wanted to make the community aware in case someone else comes across this problem or if the devs want to fix it. | closed | 2019-08-02T04:29:45Z | 2020-05-19T21:06:32Z | https://github.com/ARM-DOE/pyart/issues/861 | [] | guidodev | 4 |
marshmallow-code/marshmallow-sqlalchemy | sqlalchemy | 15 | Missing author id serialization from the example | This is basically the example from here: http://marshmallow-sqlalchemy.readthedocs.org/en/latest/
``` python
import sqlalchemy as sa
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import scoped_session, sessionmaker, relationship
from sqlalchemy import event
from sqlalchemy.orm import mapper

engine = sa.create_engine('sqlite:///:memory:')
session = scoped_session(sessionmaker(bind=engine))
Base = declarative_base()

class Author(Base):
    __tablename__ = 'authors'
    id = sa.Column(sa.Integer, primary_key=True)
    name = sa.Column(sa.String)

    def __repr__(self):
        return '<Author(name={self.name!r})>'.format(self=self)

class Book(Base):
    __tablename__ = 'books'
    id = sa.Column(sa.Integer, primary_key=True)
    title = sa.Column(sa.String)
    author_id = sa.Column(sa.Integer, sa.ForeignKey('authors.id'))
    author = relationship("Author", backref='books')

    def __repr__(self):
        return '<Book(title={self.title!r})>'.format(self=self)

Base.metadata.create_all(engine)

author = Author(name='Chuck Paluhniuk')
session.add(author)
book = Book(title='Fight Club', author=author)
session.add(book)
session.commit()

from marshmallow_sqlalchemy import ModelSchema

class BookSchema(ModelSchema):
    class Meta:
        model = Book
        sqla_session = session

class AuthorSchema(ModelSchema):
    class Meta:
        model = Author
        sqla_session = session

author_schema = AuthorSchema()
book_schema = BookSchema()

# Print the author to show that it's definitely there
print book.author

dump_data = author_schema.dump(author).data
print dump_data
# {'books': [123], 'id': 321, 'name': 'Chuck Paluhniuk'}
print author_schema.load(dump_data).data

# Everything seems fine until:
print book_schema.dump(book).data
#{'title': u'Fight Club', 'id': 1, 'author': 1}
```
Result of the last print:
```
{'title': u'Fight Club', 'id': 1, 'author': None}
```
The author should not be `None`; it should be `1`.
| closed | 2015-08-24T23:54:23Z | 2015-09-03T14:15:39Z | https://github.com/marshmallow-code/marshmallow-sqlalchemy/issues/15 | [] | dpwrussell | 4 |
CorentinJ/Real-Time-Voice-Cloning | pytorch | 869 | How long should the dataset entries be for the encoder? | closed | 2021-10-07T15:46:37Z | 2021-10-07T15:48:35Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/869 | [] | fancat-programer | 0 | |
CorentinJ/Real-Time-Voice-Cloning | tensorflow | 714 | how to change demo_cli.py vocoder pretrained to griffinlim | how to change demo_cli.py vocoder pretrained to griffinlim
im using CPU and its Slow on pretrained so how can i use griffinlim vocoder? | closed | 2021-03-28T18:09:30Z | 2021-04-01T00:16:46Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/714 | [] | CrazyPlaysHD | 1 |
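Griffin-Lim needs no trained weights, so it runs fine on a CPU. The sketch below is not Real-Time-Voice-Cloning's API (swapping it into `demo_cli.py` would mean replacing the vocoder call with something along these lines); it is a generic Griffin-Lim phase-recovery loop built on `scipy.signal`:

```python
import numpy as np
from scipy.signal import stft, istft

def griffin_lim(mag, n_iter=30, nperseg=256):
    """Recover a waveform from a magnitude spectrogram by
    iteratively re-estimating phase (Griffin & Lim, 1984)."""
    rng = np.random.default_rng(0)
    angles = np.exp(2j * np.pi * rng.random(mag.shape))   # random initial phase
    for _ in range(n_iter):
        _, x = istft(mag * angles, nperseg=nperseg)       # invert current estimate
        _, _, rebuilt = stft(x, nperseg=nperseg)          # re-analyze the waveform
        rebuilt = rebuilt[:, :mag.shape[1]]               # guard against frame-count drift
        angles[:, :rebuilt.shape[1]] = np.exp(1j * np.angle(rebuilt))
    _, x = istft(mag * angles, nperseg=nperseg)
    return x

# demo: rebuild a 440 Hz tone from its magnitude spectrogram
t = np.linspace(0, 1, 8000, endpoint=False)
sig = np.sin(2 * np.pi * 440 * t)
_, _, Z = stft(sig, nperseg=256)
rec = griffin_lim(np.abs(Z))
print(rec.shape, np.isfinite(rec).all())
```

In practice you would feed it the mel/linear spectrogram the synthesizer produces (after undoing any mel scaling), accepting lower audio quality than the neural vocoder in exchange for speed.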
pallets/flask | flask | 4,410 | TypeError: redirect() takes 0 positional arguments but 1 was given | Traceback (most recent call last):
File "/usr/local/lib/python3.8/dist-packages/flask/app.py", line 2073, in wsgi_app
response = self.full_dispatch_request()
File "/usr/local/lib/python3.8/dist-packages/flask/app.py", line 1518, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/usr/local/lib/python3.8/dist-packages/flask/app.py", line 1516, in full_dispatch_request
rv = self.dispatch_request()
File "/usr/local/lib/python3.8/dist-packages/flask/app.py", line 1502, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**req.view_args)
File "/binance/main.py", line 51, in index
return redirect("/register")
TypeError: redirect() takes 0 positional arguments but 1 was given
[2022-01-10 06:19:43,514] ERROR in app: Exception on / [GET]
Traceback (most recent call last):
File "/usr/local/lib/python3.8/dist-packages/flask/app.py", line 2073, in wsgi_app
response = self.full_dispatch_request()
File "/usr/local/lib/python3.8/dist-packages/flask/app.py", line 1518, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/usr/local/lib/python3.8/dist-packages/flask/app.py", line 1516, in full_dispatch_request
rv = self.dispatch_request()
File "/usr/local/lib/python3.8/dist-packages/flask/app.py", line 1502, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**req.view_args)
File "/binance/main.py", line 51, in index
return redirect("/register")
TypeError: redirect() takes 0 positional arguments but 1 was given
Environment: nginx
- Python version: 3.8.10
- Flask version: latest
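This usually isn't a Flask bug: Flask's own `redirect(location)` does take a positional argument. Since `/binance/main.py` isn't shown, this is a guess rather than a confirmed diagnosis, but a common cause of exactly this message is a view function named `redirect` defined in the same module, shadowing the import. A Flask-free sketch of the shadowing:

```python
def redirect(location):            # stands in for flask.redirect(location)
    return f"302 -> {location}"

def redirect():                    # a view accidentally reusing the name...
    return "register page"         # ...shadows the first definition

try:
    redirect("/register")
except TypeError as exc:
    print(exc)  # redirect() takes 0 positional arguments but 1 was given
```

Renaming the view (for example to `register_redirect`) restores the imported `redirect`.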
| closed | 2022-01-10T06:20:38Z | 2022-01-26T00:03:42Z | https://github.com/pallets/flask/issues/4410 | [] | MoonDevevloper | 3 |
httpie/cli | rest-api | 632 | HTTPie always uses full timeout period before printing results | Hi all!
First off, keep up the good work with HTTPie. It's an amazing tool!
Unfortunately, I just installed HTTPie on a new machine and it's showing some behavior I'm not used to. When doing requests to the API we're developing it always takes the full 30 second timeout before responding in the CLI.
At first I thought something in our API was causing the slow down as we're in the middle of development. This turned out not to be the case however as it's fast when using cURL. When looking in the app logs of our API it seems that the actual request isn't sent until the timeout expires.
This is the request we're doing (obfuscated the URL for our client's sake).
```
$ http --debug --timeout=5 PUT https://rest-api-acc.example.com/v1/just/some/resource interval=900
HTTPie 0.9.9
Requests 2.12.3
Pygments 2.1.3
Python 3.6.3 (default, Oct 4 2017, 06:09:15)
[GCC 4.2.1 Compatible Apple LLVM 9.0.0 (clang-900.0.37)]
/usr/local/Cellar/httpie/0.9.9/libexec/bin/python3.6
Darwin 16.7.0
<Environment {
"colors": 256,
"config": {
"__meta__": {
"about": "HTTPie configuration file",
"help": "https://httpie.org/docs#config",
"httpie": "0.9.9"
},
"default_options": "[]"
},
"config_dir": "/Users/dennislaumen/.httpie",
"is_windows": false,
"stderr": "<_io.TextIOWrapper name='<stderr>' mode='w' encoding='UTF-8'>",
"stderr_isatty": true,
"stdin": "<_io.TextIOWrapper name='<stdin>' mode='r' encoding='UTF-8'>",
"stdin_encoding": "UTF-8",
"stdin_isatty": true,
"stdout": "<_io.TextIOWrapper name='<stdout>' mode='w' encoding='UTF-8'>",
"stdout_encoding": "UTF-8",
"stdout_isatty": true
}>
>>> requests.request(**{
"allow_redirects": false,
"auth": "None",
"cert": "None",
"data": "{\"interval\": \"900\"}",
"files": {},
"headers": {
"Accept": "application/json, */*",
"Content-Type": "application/json",
"User-Agent": "HTTPie/0.9.9"
},
"method": "put",
"params": {},
"proxies": {},
"stream": true,
"timeout": "5.0",
"url": "https://rest-api-acc.example.com/v1/just/some/resource",
"verify": true
})
HTTP/1.1 204 No Content
Cache-Control: no-cache, no-store, max-age=0, must-revalidate
Content-Type: application/json
Date: Tue, 14 Nov 2017 17:05:09 GMT
Expires: 0
Pragma: no-cache
X-Content-Type-Options: nosniff
X-Frame-Options: DENY
X-Frame-Options: SAMEORIGIN
X-XSS-Protection: 1; mode=block
```
When sending this request with `--debug` added, the initial debug output is displayed immediately; the response takes the whole timeout.
Any help would be appreciated.
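One way to isolate client versus server (a hypothetical debugging aid, not from the HTTPie docs): time the same PUT against a local server that replies instantly. If HTTPie still waits for the timeout while a plain client does not, the delay is on the client side. A stdlib-only sketch of the fast baseline (the path and body mirror the obfuscated request above):

```python
import http.server
import threading
import time
import urllib.request

class InstantHandler(http.server.BaseHTTPRequestHandler):
    def do_PUT(self):
        # consume the request body, then answer immediately with 204 No Content
        self.rfile.read(int(self.headers.get("Content-Length", 0)))
        self.send_response(204)
        self.end_headers()

    def log_message(self, *args):  # keep the demo quiet
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), InstantHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

start = time.monotonic()
req = urllib.request.Request(
    f"http://127.0.0.1:{server.server_address[1]}/v1/just/some/resource",
    data=b'{"interval": "900"}',
    method="PUT",
)
resp = urllib.request.urlopen(req, timeout=5)
elapsed = time.monotonic() - start
print(resp.status, f"{elapsed:.3f}s")  # 204 and well under the timeout
server.shutdown()
```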
Some details:
* Running on macOS.
* Installed with Homebrew.
* Debug output:
```
$ http --debug
HTTPie 0.9.9
Requests 2.12.3
Pygments 2.1.3
Python 3.6.3 (default, Oct 4 2017, 06:09:15)
[GCC 4.2.1 Compatible Apple LLVM 9.0.0 (clang-900.0.37)]
/usr/local/Cellar/httpie/0.9.9/libexec/bin/python3.6
Darwin 16.7.0
<Environment {
"colors": 256,
"config": {
"__meta__": {
"about": "HTTPie configuration file",
"help": "https://httpie.org/docs#config",
"httpie": "0.9.9"
},
"default_options": "[]"
},
"config_dir": "/Users/dennislaumen/.httpie",
"is_windows": false,
"stderr": "<_io.TextIOWrapper name='<stderr>' mode='w' encoding='UTF-8'>",
"stderr_isatty": true,
"stdin": "<_io.TextIOWrapper name='<stdin>' mode='r' encoding='UTF-8'>",
"stdin_encoding": "UTF-8",
"stdin_isatty": true,
"stdout": "<_io.TextIOWrapper name='<stdout>' mode='w' encoding='UTF-8'>",
"stdout_encoding": "UTF-8",
"stdout_isatty": true
}>
``` | closed | 2017-11-14T17:08:52Z | 2020-12-20T22:23:21Z | https://github.com/httpie/cli/issues/632 | [
"blocked by upstream"
] | dennislaumen | 2 |
ultralytics/ultralytics | deep-learning | 19,072 | Pytorch WebDataset Dataloader | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
Hi! I have been using YOLO (primarily YOLOv5) on various personal projects for years (since 2021) and have continuously been impressed by your work. I am currently seeking to train YOLOv11 on AWS Sagemaker - my dataset is quite small (only about 12GB) but consists of ~200,000 files (100k images, 100k labels). As a result of this imbalance between size and # of files, it is quite slow to download the images onto the Sagemaker EBS. Instead, I am trying to use Pytorch's webdataset to just download the tar files directly from S3. As such, I was wondering, does YOLOv11 support a webdataset dataloader? If not, how could I go about adapting the existing dataloader to do so?
Thanks so much! Sorry if this is a nonsensical question - I am somewhat new to Sagemaker and to training YOLO with such large numbers of images (the most I have used in the past was 25K in which I was able to wait for it to download onto the EBS instance).
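I can't confirm an official WebDataset loader in Ultralytics, but the core idea, streaming samples grouped by file stem out of tar shards instead of touching 200k individual files, can be sketched with the stdlib alone (the real `webdataset` package layers streaming, shuffling, and decoding on top of this):

```python
import io
import os
import tarfile
import tempfile
from collections import defaultdict

def iter_tar_samples(tar_path):
    """Yield (stem, {ext: bytes}) samples from a WebDataset-style tar,
    where files sharing a basename (0001.jpg + 0001.txt) form one sample.
    Buffers groups in memory for simplicity; the real webdataset library
    streams samples sequentially instead."""
    groups = defaultdict(dict)
    with tarfile.open(tar_path) as tar:
        for member in tar:
            if not member.isfile():
                continue
            stem, _, ext = member.name.partition(".")
            groups[stem][ext] = tar.extractfile(member).read()
    yield from sorted(groups.items())

# tiny demo shard with one image/label pair
tmp = tempfile.NamedTemporaryFile(suffix=".tar", delete=False)
tmp.close()
with tarfile.open(tmp.name, "w") as tar:
    for name, payload in [("0001.jpg", b"\xff\xd8fake-jpeg"),
                          ("0001.txt", b"0 0.5 0.5 0.2 0.2")]:
        info = tarfile.TarInfo(name)
        info.size = len(payload)
        tar.addfile(info, io.BytesIO(payload))

samples = list(iter_tar_samples(tmp.name))
print(samples[0][0], sorted(samples[0][1]))  # 0001 ['jpg', 'txt']
os.remove(tmp.name)
```

Wiring such an iterator into a `torch.utils.data.IterableDataset` would give a drop-in style loader, though adapting Ultralytics' trainer to consume it is a separate question.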
### Additional
_No response_ | open | 2025-02-04T20:04:03Z | 2025-02-05T04:29:57Z | https://github.com/ultralytics/ultralytics/issues/19072 | [
"question",
"detect"
] | AndrewNoviello | 2 |
plotly/dash | data-science | 2,754 | [BUG] Dropdown options not rendering on the UI even though it is generated | **Describe your context**
Python Version -> `3.8.18`
`poetry show | grep dash` gives the below packages:
```
dash 2.7.0 A Python framework for building reac...
dash-bootstrap-components 1.5.0 Bootstrap themed components for use ...
dash-core-components 2.0.0 Core component suite for Dash
dash-html-components 2.0.0 Vanilla HTML components for Dash
dash-prefix 0.0.4 Dash library for managing component IDs
dash-table 5.0.0 Dash table
```
- OS: MacOSx (Sonoma 14.3)
- Browser: Chrome (also tried on Firefox and Safari)
- Version: 121.0.6167.160 (Official Build) (x86_64)
**Describe the bug**
I have a multi-dropdown that syncs up with the input from a separate tab to pull in the list of regions associated with a country. A particular country, GB, does not seem to populate the dropdown options when selected. The created UI element was written to stdout, which lists the options correctly, but it does not render in the UI itself.
stdout printout is as follows:
```
Div([P(children='Group A - (Control)', style={'marginBottom': 5}),
Dropdown(options=[
{'label': 'Cheshire', 'value': 'Cheshire'},
{'label': 'Leicestershire', 'value': 'Leicestershire'},
{'label': 'Hertfordshire', 'value': 'Hertfordshire'},
{'label': 'Surrey', 'value': 'Surrey'},
{'label': 'Lancashire', 'value': 'Lancashire'},
{'label': 'Warwickshire', 'value': 'Warwickshire'},
{'label': 'Cumbria', 'value': 'Cumbria'},
{'label': 'Northamptonshire', 'value': 'Northamptonshire'},
{'label': 'Dorset', 'value': 'Dorset'},
{'label': 'Isle of Wight', 'value': 'Isle of Wight'},
{'label': 'Kent', 'value': 'Kent'},
{'label': 'Lincolnshire', 'value': 'Lincolnshire'},
{'label': 'Hampshire', 'value': 'Hampshire'},
{'label': 'Cornwall', 'value': 'Cornwall'},
{'label': 'Scotland', 'value': 'Scotland'},
{'label': 'Berkshire', 'value': 'Berkshire'},
{'label': 'Gloucestershire, Wiltshire & Bristol', 'value': 'Gloucestershire, Wiltshire & Bristol'},
{'label': 'Durham', 'value': 'Durham'},
{'label': 'Rutland', 'value': 'Rutland'},
{'label': 'Northumberland', 'value': 'Northumberland'},
{'label': 'West Midlands', 'value': 'West Midlands'},
{'label': 'Derbyshire', 'value': 'Derbyshire'},
{'label': 'Merseyside', 'value': 'Merseyside'},
{'label': 'East Sussex', 'value': 'East Sussex'},
{'label': 'Northern Ireland', 'value': 'Northern Ireland'},
{'label': 'Oxfordshire', 'value': 'Oxfordshire'},
{'label': 'Herefordshire', 'value': 'Herefordshire'},
{'label': 'Staffordshire', 'value': 'Staffordshire'},
{'label': 'East Riding of Yorkshire', 'value': 'East Riding of Yorkshire'},
{'label': 'South Yorkshire', 'value': 'South Yorkshire'},
{'label': 'West Sussex', 'value': 'West Sussex'},
{'label': 'Tyne and Wear', 'value': 'Tyne and Wear'},
{'label': 'Buckinghamshire', 'value': 'Buckinghamshire'},
{'label': 'West Yorkshire', 'value': 'West Yorkshire'},
{'label': 'Wales', 'value': 'Wales'},
{'label': 'Somerset', 'value': 'Somerset'},
{'label': 'Worcestershire', 'value': 'Worcestershire'},
{'label': 'North Yorkshire', 'value': 'North Yorkshire'},
{'label': 'Shropshire', 'value': 'Shropshire'},
{'label': 'Nottinghamshire', 'value': 'Nottinghamshire'},
{'label': 'Essex', 'value': 'Essex'},
{'label': 'Greater London & City of London', 'value': 'Greater London & City of London'},
{'label': 'Cambridgeshire', 'value': 'Cambridgeshire'},
{'label': 'Greater Manchester', 'value': 'Greater Manchester'},
{'label': 'Suffolk', 'value': 'Suffolk'},
{'label': 'Norfolk', 'value': 'Norfolk'},
{'label': 'Devon', 'value': 'Devon'},
{'label': 'Bedfordshire', 'value': 'Bedfordshire'}],
value=[],
multi=True,
id={'role': 'experiment-design-geoassignment-manual-geodropdown', 'group_id': 'Group-ID1234'})])
```
**Expected behavior**
When the country GB is selected, I expect the relevant options to be populated in the dropdown that can be selected. The code below:
``` python
def get_geos(self, all_geos):
    element = html.Div(
        [
            html.P("TEST", style={"marginBottom": 5}),
            dcc.Dropdown(
                id={"role": self.prefix("dropdown"), "group_id": "1234"},
                multi=True,
                value=[],
                searchable=True,
                options=[{"label": g, "value": g} for g in all_geos],
            ),
        ]
    )
    print(element)  # Print output is posted above showing that the callback is working fine. But it is not rendering correctly on the front end
    return element
```
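One thing worth ruling out (a generic hypothesis, not a confirmed diagnosis of this bug): Dash props travel to the browser as JSON, so an option `label` or `value` that is not JSON-serializable can leave a component that prints fine server-side but never renders. A stdlib check:

```python
import json

def assert_json_safe(options):
    """Raise TypeError if any dropdown option can't be serialized to JSON."""
    for opt in options:
        json.dumps(opt)

good = [{"label": "Cheshire", "value": "Cheshire"}]
assert_json_safe(good)                          # fine

bad = [{"label": "Kent", "value": {"Kent"}}]    # a set sneaks in from upstream data
try:
    assert_json_safe(bad)
except TypeError as exc:
    print("not JSON-safe:", exc)
```

Running the real `all_geos` list for GB through such a check would quickly confirm or eliminate this as the cause.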
**Screen Recording**
https://github.com/plotly/dash/assets/94958897/13909683-244c-4cbe-853a-be148f3aae1c
| closed | 2024-02-08T13:47:01Z | 2024-05-31T20:12:51Z | https://github.com/plotly/dash/issues/2754 | [] | malavika-menon | 2 |
autogluon/autogluon | computer-vision | 4,705 | ValueError when calling TimeSeriesPredictor.predict with known_covariates : known "weekend" Time-varying covariates | ### Description:
I am encountering an issue when using the `TimeSeriesPredictor` in AutoGluon. The problem arises when I call the predict method with the `known_covariates` argument.
### Code:
Here is my code:
```
# data has a MultiIndex ('item_id', 'timestamp') and a column 'target'
predictor = TimeSeriesPredictor(
    prediction_length=24,
    target="target",
    known_covariates_names=["weekend"],
    freq="h",
).fit(data)

from autogluon.timeseries.utils.forecast import get_forecast_horizon_index_ts_dataframe

future_index = get_forecast_horizon_index_ts_dataframe(data, prediction_length=24, freq="h")
future_timestamps = future_index.get_level_values("timestamp")
known_covariates = pd.DataFrame(index=future_index)
known_covariates["weekend"] = future_timestamps.weekday.isin(WEEKEND_INDICES).astype(float)

# The column "weekend" contains a covariate that will be known at prediction time
predictor.predict(data, known_covariates=known_covariates)
```
### Error:
When running this, I encounter the following error:
```
KeyError Traceback (most recent call last)
File /anaconda/envs/azureml_py38/lib/python3.10/site-packages/autogluon/timeseries/learner.py:165, in TimeSeriesLearner._align_covariates_with_forecast_index(self, known_covariates, data)
    164 try:
--> 165     known_covariates = known_covariates.loc[forecast_index]
    166 except KeyError:
File /anaconda/envs/azureml_py38/lib/python3.10/site-packages/pandas/core/indexing.py:1191, in _LocationIndexer.__getitem__(self, key)
    1190 maybe_callable = self._check_deprecated_callable_usage(key, maybe_callable)
-> 1191 return self._getitem_axis(maybe_callable, axis=axis)
File /anaconda/envs/azureml_py38/lib/python3.10/site-packages/pandas/core/indexing.py:1420, in _LocIndexer._getitem_axis(self, key, axis)
    1418     raise ValueError("Cannot index with multidimensional key")
-> 1420 return self._getitem_iterable(key, axis=axis)
    1422 # nested tuple slicing
File /anaconda/envs/azureml_py38/lib/python3.10/site-packages/pandas/core/indexing.py:1360, in _LocIndexer._getitem_iterable(self, key, axis)
    1359 # A collection of keys
-> 1360 keyarr, indexer = self._get_listlike_indexer(key, axis)
    1361 return self.obj._reindex_with_indexers(
    1362     {axis: [keyarr, indexer]}, copy=True, allow_dups=True
    1363 )
File /anaconda/envs/azureml_py38/lib/python3.10/site-packages/pandas/core/indexing.py:1558, in _LocIndexer._get_listlike_indexer(self, key, axis)
    1556 axis_name = self.obj._get_axis_name(axis)
-> 1558 keyarr, indexer = ax._get_indexer_strict(key, axis_name)
    1560 return keyarr, indexer
File /anaconda/envs/azureml_py38/lib/python3.10/site-packages/pandas/core/indexes/multi.py:2766, in MultiIndex._get_indexer_strict(self, key, axis_name)
    2764     return self[indexer], indexer
-> 2766 return super()._get_indexer_strict(key, axis_name)
File /anaconda/envs/azureml_py38/lib/python3.10/site-packages/pandas/core/indexes/base.py:6200, in Index._get_indexer_strict(self, key, axis_name)
    6198 keyarr, indexer, new_indexer = self._reindex_non_unique(keyarr)
-> 6200 self._raise_if_missing(keyarr, indexer, axis_name)
    6202 keyarr = self.take(indexer)
File /anaconda/envs/azureml_py38/lib/python3.10/site-packages/pandas/core/indexes/multi.py:2786, in MultiIndex._raise_if_missing(self, key, indexer, axis_name)
    2785 else:
-> 2786     return super()._raise_if_missing(key, indexer, axis_name)
File /anaconda/envs/azureml_py38/lib/python3.10/site-packages/pandas/core/indexes/base.py:6249, in Index._raise_if_missing(self, key, indexer, axis_name)
    6248 if nmissing == len(indexer):
-> 6249     raise KeyError(f"None of [{key}] are in the [{axis_name}]")
    6251 not_found = list(ensure_index(key)[missing_mask.nonzero()[0]].unique())
KeyError: "None of [MultiIndex([('1', '2017-10-22 00:00:00'),\n ('1', '2017-10-22 01:00:00'),\n ('1', '2017-10-22 02:00:00'),\n ('1', '2017-10-22 03:00:00'),\n ('1', '2017-10-22 04:00:00'),\n ('1', '2017-10-22 05:00:00'),\n ('1', '2017-10-22 06:00:00'),\n ('1', '2017-10-22 07:00:00'),\n ('1', '2017-10-22 08:00:00'),\n ('1', '2017-10-22 09:00:00'),\n ('1', '2017-10-22 10:00:00'),\n ('1', '2017-10-22 11:00:00'),\n ('1', '2017-10-22 12:00:00'),\n ('1', '2017-10-22 13:00:00'),\n ('1', '2017-10-22 14:00:00'),\n ('1', '2017-10-22 15:00:00'),\n ('1', '2017-10-22 16:00:00'),\n ('1', '2017-10-22 17:00:00'),\n ('1', '2017-10-22 18:00:00'),\n ('1', '2017-10-22 19:00:00'),\n ('1', '2017-10-22 20:00:00'),\n ('1', '2017-10-22 21:00:00'),\n ('1', '2017-10-22 22:00:00'),\n ('1', '2017-10-22 23:00:00')],\n names=['item_id', 'timestamp'])] are in the [index]"

During handling of the above exception, another exception occurred:

ValueError Traceback (most recent call last)
Cell In[19], line 3
      1 # Add the column "weekend" contains a covariate that will be known at prediction time
      2 # 24 * 5 seperately
----> 3 predictor.predict(data, known_covariates=known_covariates)
File /anaconda/envs/azureml_py38/lib/python3.10/site-packages/autogluon/timeseries/predictor.py:845, in TimeSeriesPredictor.predict(self, data, known_covariates, model, use_cache, random_seed)
    843 if known_covariates is not None:
    844     known_covariates = self._to_data_frame(known_covariates)
--> 845 predictions = self._learner.predict(
    846     data,
    847     known_covariates=known_covariates,
    848     model=model,
    849     use_cache=use_cache,
    850     random_seed=random_seed,
    851 )
    852 return predictions.reindex(original_item_id_order, level=ITEMID)
File /anaconda/envs/azureml_py38/lib/python3.10/site-packages/autogluon/timeseries/learner.py:184, in TimeSeriesLearner.predict(self, data, known_covariates, model, use_cache, random_seed, **kwargs)
    182 data = self.feature_generator.transform(data)
    183 known_covariates = self.feature_generator.transform_future_known_covariates(known_covariates)
--> 184 known_covariates = self._align_covariates_with_forecast_index(known_covariates=known_covariates, data=data)
    185 return self.load_trainer().predict(
    186     data=data,
    187     known_covariates=known_covariates,
    (...)
    191     **kwargs,
    192 )
File /anaconda/envs/azureml_py38/lib/python3.10/site-packages/autogluon/timeseries/learner.py:167, in TimeSeriesLearner._align_covariates_with_forecast_index(self, known_covariates, data)
    165     known_covariates = known_covariates.loc[forecast_index]
    166 except KeyError:
--> 167     raise ValueError(
    168         f"known_covariates should include the values for prediction_length={self.prediction_length} "
    169         "many time steps into the future."
    170     )
    171 return known_covariates
ValueError: `known_covariates` should include the values for `prediction_length=24` many time steps into the future.
```
### Attempts to resolve:
I've tried adjusting the structure of my DataFrames and renaming columns, then tried different input data, but I am still encountering this error. This makes me believe that the issue may be internal to the predict method or the `_align_covariates_with_forecast_index` method.
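One way to narrow this down is to compute, outside AutoGluon, which forecast-horizon index entries are absent from `known_covariates`, since the `KeyError` above is exactly such a `.loc` miss. A pandas sketch (the helper name and arguments are mine, not AutoGluon API):

```python
import pandas as pd

def missing_covariate_rows(known_covariates, last_timestamps, prediction_length, freq):
    """Return the forecast-horizon (item_id, timestamp) pairs that are
    absent from known_covariates.index.

    last_timestamps: dict of item_id -> last observed timestamp in the data.
    """
    expected = [
        (item_id, ts)
        for item_id, last in last_timestamps.items()
        for ts in pd.date_range(last, periods=prediction_length + 1, freq=freq)[1:]
    ]
    expected_index = pd.MultiIndex.from_tuples(expected, names=["item_id", "timestamp"])
    return expected_index.difference(known_covariates.index)

# demo: a 3-step horizon where one covariate row is missing
last = {"1": pd.Timestamp("2017-10-21 23:00:00")}
idx = pd.MultiIndex.from_tuples(
    [("1", pd.Timestamp("2017-10-22 00:00:00")),
     ("1", pd.Timestamp("2017-10-22 01:00:00"))],
    names=["item_id", "timestamp"],
)
covs = pd.DataFrame({"weekend": [1.0, 1.0]}, index=idx)
print(missing_covariate_rows(covs, last, prediction_length=3, freq="h"))
```

An empty difference means the index itself is aligned and the problem lies elsewhere (for example dtype mismatches between the timestamp levels).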
Or there might be some parameters needed for `known_covariates` when predicting that I am not aware of? | closed | 2024-12-02T15:07:29Z | 2024-12-03T10:43:08Z | https://github.com/autogluon/autogluon/issues/4705 | [
"module: timeseries"
] | Ceciile | 2 |
OthersideAI/self-operating-computer | automation | 158 | How to integrate third-party APIs? For instance, this project: https://github.com/songquanpeng/one-api | How to integrate third-party APIs? For instance, this project: https://github.com/songquanpeng/one-api | open | 2024-02-08T09:21:57Z | 2024-02-08T09:21:57Z | https://github.com/OthersideAI/self-operating-computer/issues/158 | [
"enhancement"
] | lueluelue2006 | 0 |
strawberry-graphql/strawberry | fastapi | 3,142 | Aborting Querys |
In our project we use [Apollo-Client for React](https://www.apollographql.com/docs/react/) and Strawberry.
We want to use abort signals to abort the execution of queries to save resources.
While the simple abort approach from Apollo seems to work on the client side, it does nothing on the server side.
We assume we would need to use Apollo-Server for their approach to work.
Since we could not find anything about aborting in the strawberry docs, we wanted to ask here how it would be possible to implement aborting the execution of a query with strawberry.
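I don't know of a documented abort hook in strawberry either, so treat this as background rather than strawberry API: on an ASGI server each query resolves inside an asyncio task, and "aborting" amounts to cancelling that task when the client disconnects. A generic asyncio sketch of the mechanism one would build on:

```python
import asyncio

async def expensive_resolver():
    try:
        await asyncio.sleep(30)          # stands in for slow query work
        return "done"
    except asyncio.CancelledError:
        # release DB cursors / buffers here before re-raising
        raise

async def handle_request():
    task = asyncio.create_task(expensive_resolver())
    await asyncio.sleep(0.01)            # the client's abort signal arrives here
    task.cancel()                        # server-side analogue of the abort
    try:
        await task
    except asyncio.CancelledError:
        return "aborted, resources freed"

print(asyncio.run(handle_request()))     # aborted, resources freed
```

The missing piece on the strawberry side is a hook that maps a client disconnect to that `task.cancel()` call, which is presumably what this question is asking for.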
| closed | 2023-10-09T12:31:36Z | 2025-03-20T15:56:25Z | https://github.com/strawberry-graphql/strawberry/issues/3142 | [
"info-needed"
] | Stainless2k | 4 |
ipython/ipython | data-science | 14,230 | Using autoreload magic fails with the latest version |
* Create a notebook with two cells
* First cell with the following code
```
%load_ext autoreload
%autoreload 2
```
* Second cell with
```
print(1234)
```
Upon running the second cell, the following error is displayed
```
Error in callback <bound method AutoreloadMagics.pre_run_cell of <IPython.extensions.autoreload.AutoreloadMagics object at 0x106796e90>> (for pre_run_cell):
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
TypeError: AutoreloadMagics.pre_run_cell() takes 1 positional argument but 2 were given
```
Downgrading to 8.16.1 fixes this | closed | 2023-10-30T16:26:01Z | 2023-11-01T13:40:03Z | https://github.com/ipython/ipython/issues/14230 | [] | DonJayamanne | 10 |
deeppavlov/DeepPavlov | nlp | 1,468 | train the model on a new data | Hi i was trying to retrain the pretrained ner_ru_bert model on my data but when I run the following command
```
with configs.ner.ner_rus_bert.open(encoding='utf8') as f:
    ner_config = json.load(f)

ner_config['chainer'] = {
    'in': ['x'],
    'in_y': ['y'],
    'pipe': [{'class_name': 'bert_ner_preprocessor',
              'vocab_file': '{BERT_PATH}/vocab.txt',
              'do_lower_case': False,
              'max_seq_length': 512,
              'max_subword_length': 15,
              'token_masking_prob': 0.0,
              'in': ['x'],
              'out': ['x_tokens',
                      'x_subword_tokens',
                      'x_subword_tok_ids',
                      'startofword_markers',
                      'attention_mask']},
             {'id': 'tag_vocab',
              'class_name': 'simple_vocab',
              'unk_token': ['O'],
              'pad_with_zeros': True,
              'save_path': '',
              'load_path': '',
              'in': ['y'],
              'out': ['y_ind']},
             {'class_name': 'bert_sequence_tagger',
              'n_tags': '#tag_vocab.len',
              'keep_prob': 0.1,
              'bert_config_file': '{BERT_PATH}/bert_config.json',
              'pretrained_bert': '{BERT_PATH}/bert_model.ckpt',
              'attention_probs_keep_prob': 0.5,
              'use_crf': True,
              'ema_decay': 0.9,
              'return_probas': False,
              'encoder_layer_ids': [-1],
              'optimizer': 'tf.train:AdamOptimizer',
              'learning_rate': 0.001,
              'bert_learning_rate': 2e-05,
              'min_learning_rate': 1e-07,
              'learning_rate_drop_patience': 30,
              'learning_rate_drop_div': 1.5,
              'load_before_drop': True,
              'clip_norm': 'NULL',
              'save_path': '',
              'load_path': '',
              'in': ['x_subword_tok_ids', 'attention_mask', 'startofword_markers'],
              'in_y': ['y_ind'],
              'out': ['y_pred_ind']},
             {'ref': 'tag_vocab', 'in': ['y_pred_ind'], 'out': ['y_pred']}],
    'out': ['x_tokens', 'y_pred']}

ner_config['dataset_reader']['data_path'] = path
ner_model = build_model(configs.ner.ner_rus_bert, download=True)
ner_model = train_model(ner_config, download=True)
```
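One config value above stands out: `'clip_norm': 'NULL'` passes the literal string `'NULL'`, whereas a JSON config would express "no clipping" as `null`, which loads as Python `None`. The traceback below ends in `tf.clip_by_norm` complaining about a string dtype, which points at exactly this value; that diagnosis is my reading of the traceback, not a confirmed fix. A quick stdlib illustration of the difference:

```python
import json

cfg = json.loads('{"clip_norm": null}')   # JSON null, the usual way to disable a value
bad = 'NULL'                              # the literal string used in the snippet above

print(cfg["clip_norm"], type(cfg["clip_norm"]).__name__)  # None NoneType
print(bad, type(bad).__name__)                            # NULL str
```

Replacing `'NULL'` with `None` (or a float norm such as `1.0`) in the Python dict would be the first thing I'd try.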
I'm getting this error:
```
2021-07-22 17:03:57.562 WARNING in 'deeppavlov.core.models.serializable'['serializable'] at line 52: No load path is set for SimpleVocabulary!
2021-07-22 17:03:57.567 WARNING in 'deeppavlov.core.models.serializable'['serializable'] at line 52: No load path is set for BertSequenceTagger!
2021-07-22 17:04:03.4 ERROR in 'deeppavlov.core.common.params'['params'] at line 112: Exception in <class 'deeppavlov.models.bert.bert_sequence_tagger.BertSequenceTagger'>
Traceback (most recent call last):
File "/Users/oumaimalahiani/.virtualenvs/labelling-api/lib/python3.7/site-packages/tensorflow_core/python/framework/op_def_library.py", line 528, in _apply_op_helper
preferred_dtype=default_dtype)
File "/Users/oumaimalahiani/.virtualenvs/labelling-api/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py", line 1273, in internal_convert_to_tensor
(dtype.name, value.dtype.name, value))
ValueError: Tensor conversion requested dtype float32 for Tensor with dtype string: <tf.Tensor 'Optimizer_1/clip_by_norm/mul_1/y:0' shape=() dtype=string>
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/oumaimalahiani/.virtualenvs/labelling-api/lib/python3.7/site-packages/deeppavlov/core/common/params.py", line 106, in from_params
component = obj(**dict(config_params, **kwargs))
File "/Users/oumaimalahiani/.virtualenvs/labelling-api/lib/python3.7/site-packages/deeppavlov/core/models/tf_backend.py", line 76, in __call__
obj.__init__(*args, **kwargs)
File "/Users/oumaimalahiani/.virtualenvs/labelling-api/lib/python3.7/site-packages/deeppavlov/core/models/tf_backend.py", line 28, in _wrapped
return func(*args, **kwargs)
File "/Users/oumaimalahiani/.virtualenvs/labelling-api/lib/python3.7/site-packages/deeppavlov/models/bert/bert_sequence_tagger.py", line 529, in __init__
**kwargs)
File "/Users/oumaimalahiani/.virtualenvs/labelling-api/lib/python3.7/site-packages/deeppavlov/models/bert/bert_sequence_tagger.py", line 242, in __init__
self._init_optimizer()
File "/Users/oumaimalahiani/.virtualenvs/labelling-api/lib/python3.7/site-packages/deeppavlov/core/models/tf_backend.py", line 28, in _wrapped
return func(*args, **kwargs)
File "/Users/oumaimalahiani/.virtualenvs/labelling-api/lib/python3.7/site-packages/deeppavlov/models/bert/bert_sequence_tagger.py", line 348, in _init_optimizer
optimizer_scope_name='Optimizer')
File "/Users/oumaimalahiani/.virtualenvs/labelling-api/lib/python3.7/site-packages/deeppavlov/core/models/tf_backend.py", line 28, in _wrapped
return func(*args, **kwargs)
File "/Users/oumaimalahiani/.virtualenvs/labelling-api/lib/python3.7/site-packages/deeppavlov/models/bert/bert_sequence_tagger.py", line 378, in get_train_op
**kwargs)
File "/Users/oumaimalahiani/.virtualenvs/labelling-api/lib/python3.7/site-packages/deeppavlov/core/models/tf_model.py", line 241, in get_train_op
return TFModel.get_train_op(self, loss, **kwargs)
File "/Users/oumaimalahiani/.virtualenvs/labelling-api/lib/python3.7/site-packages/deeppavlov/core/models/tf_model.py", line 148, in get_train_op
for grad, var in grads_and_vars]
File "/Users/oumaimalahiani/.virtualenvs/labelling-api/lib/python3.7/site-packages/deeppavlov/core/models/tf_model.py", line 148, in <listcomp>
for grad, var in grads_and_vars]
File "/Users/oumaimalahiani/.virtualenvs/labelling-api/lib/python3.7/site-packages/deeppavlov/core/models/tf_model.py", line 142, in clip_if_not_none
return tf.clip_by_norm(grad, clip_norm)
File "/Users/oumaimalahiani/.virtualenvs/labelling-api/lib/python3.7/site-packages/tensorflow_core/python/ops/clip_ops.py", line 174, in clip_by_norm
intermediate = values * clip_norm
File "/Users/oumaimalahiani/.virtualenvs/labelling-api/lib/python3.7/site-packages/tensorflow_core/python/ops/math_ops.py", line 912, in binary_op_wrapper
return func(x, y, name=name)
File "/Users/oumaimalahiani/.virtualenvs/labelling-api/lib/python3.7/site-packages/tensorflow_core/python/ops/math_ops.py", line 1206, in _mul_dispatch
return gen_math_ops.mul(x, y, name=name)
File "/Users/oumaimalahiani/.virtualenvs/labelling-api/lib/python3.7/site-packages/tensorflow_core/python/ops/gen_math_ops.py", line 6701, in mul
"Mul", x=x, y=y, name=name)
File "/Users/oumaimalahiani/.virtualenvs/labelling-api/lib/python3.7/site-packages/tensorflow_core/python/framework/op_def_library.py", line 564, in _apply_op_helper
inferred_from[input_arg.type_attr]))
TypeError: Input 'y' of 'Mul' Op has type string that does not match type float32 of argument 'x'.
Traceback (most recent call last):
File "/Users/oumaimalahiani/.virtualenvs/labelling-api/lib/python3.7/site-packages/tensorflow_core/python/framework/op_def_library.py", line 528, in _apply_op_helper
preferred_dtype=default_dtype)
File "/Users/oumaimalahiani/.virtualenvs/labelling-api/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py", line 1273, in internal_convert_to_tensor
(dtype.name, value.dtype.name, value))
ValueError: Tensor conversion requested dtype float32 for Tensor with dtype string: <tf.Tensor 'Optimizer_1/clip_by_norm/mul_1/y:0' shape=() dtype=string>
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/Users/oumaimalahiani/Documents/labelling-api/labelling/russian_train.py", line 92, in <module>
cli()
File "/Users/oumaimalahiani/.virtualenvs/labelling-api/lib/python3.7/site-packages/click/core.py", line 829, in __call__
return self.main(*args, **kwargs)
File "/Users/oumaimalahiani/.virtualenvs/labelling-api/lib/python3.7/site-packages/click/core.py", line 782, in main
rv = self.invoke(ctx)
File "/Users/oumaimalahiani/.virtualenvs/labelling-api/lib/python3.7/site-packages/click/core.py", line 1259, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/Users/oumaimalahiani/.virtualenvs/labelling-api/lib/python3.7/site-packages/click/core.py", line 1066, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/Users/oumaimalahiani/.virtualenvs/labelling-api/lib/python3.7/site-packages/click/core.py", line 610, in invoke
return callback(*args, **kwargs)
File "/Users/oumaimalahiani/Documents/labelling-api/labelling/russian_train.py", line 81, in train_rubert
ner_model = train_model(ner_config,download=True)
File "/Users/oumaimalahiani/.virtualenvs/labelling-api/lib/python3.7/site-packages/deeppavlov/__init__.py", line 29, in train_model
train_evaluate_model_from_config(config, download=download, recursive=recursive)
File "/Users/oumaimalahiani/.virtualenvs/labelling-api/lib/python3.7/site-packages/deeppavlov/core/commands/train.py", line 121, in train_evaluate_model_from_config
trainer.train(iterator)
File "/Users/oumaimalahiani/.virtualenvs/labelling-api/lib/python3.7/site-packages/deeppavlov/core/trainers/nn_trainer.py", line 334, in train
self.fit_chainer(iterator)
File "/Users/oumaimalahiani/.virtualenvs/labelling-api/lib/python3.7/site-packages/deeppavlov/core/trainers/fit_trainer.py", line 104, in fit_chainer
component = from_params(component_config, mode='train')
File "/Users/oumaimalahiani/.virtualenvs/labelling-api/lib/python3.7/site-packages/deeppavlov/core/common/params.py", line 106, in from_params
component = obj(**dict(config_params, **kwargs))
File "/Users/oumaimalahiani/.virtualenvs/labelling-api/lib/python3.7/site-packages/deeppavlov/core/models/tf_backend.py", line 76, in __call__
obj.__init__(*args, **kwargs)
File "/Users/oumaimalahiani/.virtualenvs/labelling-api/lib/python3.7/site-packages/deeppavlov/core/models/tf_backend.py", line 28, in _wrapped
return func(*args, **kwargs)
File "/Users/oumaimalahiani/.virtualenvs/labelling-api/lib/python3.7/site-packages/deeppavlov/models/bert/bert_sequence_tagger.py", line 529, in __init__
**kwargs)
File "/Users/oumaimalahiani/.virtualenvs/labelling-api/lib/python3.7/site-packages/deeppavlov/models/bert/bert_sequence_tagger.py", line 242, in __init__
self._init_optimizer()
File "/Users/oumaimalahiani/.virtualenvs/labelling-api/lib/python3.7/site-packages/deeppavlov/core/models/tf_backend.py", line 28, in _wrapped
return func(*args, **kwargs)
File "/Users/oumaimalahiani/.virtualenvs/labelling-api/lib/python3.7/site-packages/deeppavlov/models/bert/bert_sequence_tagger.py", line 348, in _init_optimizer
optimizer_scope_name='Optimizer')
File "/Users/oumaimalahiani/.virtualenvs/labelling-api/lib/python3.7/site-packages/deeppavlov/core/models/tf_backend.py", line 28, in _wrapped
return func(*args, **kwargs)
File "/Users/oumaimalahiani/.virtualenvs/labelling-api/lib/python3.7/site-packages/deeppavlov/models/bert/bert_sequence_tagger.py", line 378, in get_train_op
**kwargs)
File "/Users/oumaimalahiani/.virtualenvs/labelling-api/lib/python3.7/site-packages/deeppavlov/core/models/tf_model.py", line 241, in get_train_op
return TFModel.get_train_op(self, loss, **kwargs)
File "/Users/oumaimalahiani/.virtualenvs/labelling-api/lib/python3.7/site-packages/deeppavlov/core/models/tf_model.py", line 148, in get_train_op
for grad, var in grads_and_vars]
File "/Users/oumaimalahiani/.virtualenvs/labelling-api/lib/python3.7/site-packages/deeppavlov/core/models/tf_model.py", line 148, in <listcomp>
for grad, var in grads_and_vars]
File "/Users/oumaimalahiani/.virtualenvs/labelling-api/lib/python3.7/site-packages/deeppavlov/core/models/tf_model.py", line 142, in clip_if_not_none
return tf.clip_by_norm(grad, clip_norm)
File "/Users/oumaimalahiani/.virtualenvs/labelling-api/lib/python3.7/site-packages/tensorflow_core/python/ops/clip_ops.py", line 174, in clip_by_norm
intermediate = values * clip_norm
File "/Users/oumaimalahiani/.virtualenvs/labelling-api/lib/python3.7/site-packages/tensorflow_core/python/ops/math_ops.py", line 912, in binary_op_wrapper
return func(x, y, name=name)
File "/Users/oumaimalahiani/.virtualenvs/labelling-api/lib/python3.7/site-packages/tensorflow_core/python/ops/math_ops.py", line 1206, in _mul_dispatch
return gen_math_ops.mul(x, y, name=name)
File "/Users/oumaimalahiani/.virtualenvs/labelling-api/lib/python3.7/site-packages/tensorflow_core/python/ops/gen_math_ops.py", line 6701, in mul
"Mul", x=x, y=y, name=name)
File "/Users/oumaimalahiani/.virtualenvs/labelling-api/lib/python3.7/site-packages/tensorflow_core/python/framework/op_def_library.py", line 564, in _apply_op_helper
inferred_from[input_arg.type_attr]))
TypeError: Input 'y' of 'Mul' Op has type string that does not match type float32 of argument 'x'.
```
```
| closed | 2021-07-22T15:10:54Z | 2022-04-06T20:41:21Z | https://github.com/deeppavlov/DeepPavlov/issues/1468 | [
"bug"
] | Oumaimalh | 1 |
trevismd/statannotations | seaborn | 57 | Annotation position misalignment | Hi!
First of all, I really appreciate your efforts in making this amazing Python library for plotting.
I tried to use this tool to make some plots with annotations, but I encountered an issue that I couldn't fix.
Annotations do not align for multiple annotations.
Here is the image.

I tried to adjust the ylim, but it doesn't work.

Below is my code.
```python
plt.figure(figsize=(15, 8))
ax = sns.boxplot(x=x, y=y, hue=hue, data=df, fliersize=0, saturation=0.7)
plt.legend(bbox_to_anchor=(1.02, 1), loc='upper left', borderaxespad=0)
annotator = Annotator(ax, pairs_all, x=x, y=y, hue=hue, data=df, verbose=False)
annotator.configure(text_format="star", loc='inside', verbose=False)
annotator.set_pvalues(p_values_all)
annotator.annotate()
ax.set_ylim(-950, -550)
plt.show()
```
I tried several different hyperparameters, such as `line_offset_to_group`. The only way to make it work is to put the annotations outside.

I would greatly appreciate it if anyone could guide me.
Thank you!
| open | 2022-05-01T00:24:29Z | 2022-05-02T17:13:00Z | https://github.com/trevismd/statannotations/issues/57 | [] | inqlee0704 | 4 |
docarray/docarray | pydantic | 919 | docs: DocumentArray: no explanation of minibatch | I'm reading through every doc now and just came to the [parallelization](https://docarray.jina.ai/fundamentals/documentarray/parallelization/#parallelization) page. It mentions minibatch, but gives no context. I'm pretty sure it hasn't been mentioned in previous docs (I'm reading through in a linear fashion).
Note: please leave explanation in comment. Don't make a fresh PR. I'm working on ALL docs right now
| closed | 2022-12-08T14:08:31Z | 2023-04-22T09:38:41Z | https://github.com/docarray/docarray/issues/919 | [] | alexcg1 | 0 |
autogluon/autogluon | data-science | 3,949 | [BUG] AutoMM HPO tests crash on scikit-learn upgrade. | `scikit-learn` released a new version `1.4.1post1` which causes all tests under AutoMM `test_hpo.py` to fail.
The reason is that the new version interferes with Ray; a sample run using `1.4.1post1` is shown here: https://github.com/autogluon/autogluon/actions/runs/8011831095/job/21885981252
The temporary fix that we have for now is to cap the version of `scikit-learn`: https://github.com/autogluon/autogluon/pull/3947 | closed | 2024-02-23T19:20:19Z | 2024-11-07T18:38:57Z | https://github.com/autogluon/autogluon/issues/3949 | [
"bug",
"feature: hpo",
"module: multimodal"
] | prateekdesai04 | 1 |
plotly/dash | plotly | 2,643 | Filtering issue in dash table with numeric format | I just upgraded my libraries to:
```
dash                  2.13.0
dash-core-components  2.0.0
dash-html-components  2.0.0
dash-table            5.0.0
```
Since dash 2.13.0 I cannot use numeric filtering on `dash_table.DataTable` with the expressions `>`, `<`, `>=`, `<=` together with my numeric data format:
```python
from dash.dash_table.Format import (Format, Group)
Format().decimal_delimiter(',').group(True).group_delimiter('.')
```
but with dash 2.8.1 I can still filter with this format.
please help | open | 2023-09-14T05:43:42Z | 2024-08-13T19:37:32Z | https://github.com/plotly/dash/issues/2643 | [
"bug",
"P3"
] | kyoshizzz | 2 |
sherlock-project/sherlock | python | 2,261 | ERROR | ### Installation method
Debian
### Description
When I run sherlock I get this same error

### Steps to reproduce
1. Run sherlock
2. Bam
### Additional information
_No response_
### Code of Conduct
- [X] I agree to follow this project's Code of Conduct | closed | 2024-08-19T01:34:38Z | 2024-08-25T02:46:49Z | https://github.com/sherlock-project/sherlock/issues/2261 | [
"environment"
] | Gs-gcs | 2 |
kubeflow/katib | scikit-learn | 1,774 | [Release] Katib 0.13 release | This is the track issue for Katib 0.13 release.
We should make the new release before Kubeflow 1.5 to deliver the latest Katib improvements.
We have already merged all required features and bug fixes.
Please let us know if I miss something.
We are planing to cut the first release candidate on **January 21st**.
/cc @gaocegege @tenzen-y @johnugeorge @kimwnasptd @seong7 @anencore94
/area release
/priority p1
| closed | 2022-01-13T14:57:34Z | 2022-04-06T09:21:46Z | https://github.com/kubeflow/katib/issues/1774 | [
"priority/p1",
"area/release"
] | andreyvelich | 1 |
mirumee/ariadne | api | 927 | `GraphQLTransportWSHandler` only supports one connection | Once a client has connected to Ariadne through the `GraphQLTransportWSHandler` (the newer GraphQL WebSocket protocol), all future attempts to connect by another client will fail, even if the first client has disconnected.
I believe this issue is caused by this section in the code:
https://github.com/mirumee/ariadne/blob/86a87efd9e3714a39f4da97cce92db6748331f2e/ariadne/asgi/handlers/graphql_transport_ws.py#L111-L116
It looks like the _transport_ gets tagged as having received an "init" message, even though it seems like the _client_ should be tagged.
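A minimal sketch of the per-connection bookkeeping this suggests (names are hypothetical, not Ariadne's actual internals):

```python
# Hypothetical sketch: track the "connection_init" handshake per connection
# instead of on the shared handler instance, so a second client can connect.
class ConnectionState:
    def __init__(self):
        self.init_received = False

_states = {}  # one state object per websocket connection

def handle_connection_init(websocket):
    state = _states.setdefault(websocket, ConnectionState())
    if state.init_received:
        # per the graphql-transport-ws protocol, a repeated init closes with 4429
        raise RuntimeError("4429: Too many initialisation requests")
    state.init_received = True

def handle_disconnect(websocket):
    _states.pop(websocket, None)
```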
## Steps to reproduce
I came up with the following sample GraphQL server to demonstrate this issue (using Ariadne v0.16) as a minimal test case:
```python
import asyncio
from ariadne import gql, make_executable_schema, ObjectType, SubscriptionType
from ariadne.asgi import GraphQL
from ariadne.asgi.handlers import GraphQLTransportWSHandler
from starlette.applications import Starlette
from starlette.routing import Route, WebSocketRoute
import uvicorn
type_defs = gql("""
type Query {
hello: String!
}
type Subscription {
counter: Int!
}
""")
query = ObjectType("Query")
@query.field("hello")
def hello_resolver(*_):
return "Hello world"
subscription = SubscriptionType()
@subscription.source("counter")
async def counter_generator(*_):
for i in range(5):
await asyncio.sleep(1)
yield i
@subscription.field("counter")
def counter_resolver(count, *_):
return count
schema = make_executable_schema(type_defs, query, subscription)
graphql_app = GraphQL(
schema,
debug=True,
websocket_handler=GraphQLTransportWSHandler(),
)
routes = [
Route("/graphql", graphql_app, methods=["GET", "POST"]),
WebSocketRoute("/graphql", endpoint=graphql_app),
]
app = Starlette(debug=True, routes=routes)
if __name__ == "__main__":
uvicorn.run(app, host='0.0.0.0', port=4444)
```
After creating a file with that script and starting the server with `python main.py`, I then used [websocat](https://github.com/vi/websocat) to manually trigger the error. First, I connected to the server, manually sent the "connection init" message, then pressed Ctrl-D to disconnect after receiving a "connection ack" message:
```sh-session
$ websocat --protocol 'graphql-ws' 'ws://localhost:4444/graphql'
{"type": "connection_init"}
{"type": "connection_ack"}
^D
```
Then, I did the same thing as a new client, but did not get the "connection ack", indicating that the connection was rejected:
```sh-session
$ websocat --protocol 'graphql-ws' 'ws://localhost:4444/graphql'
{"type": "connection_init"}
```
Running `websocat -vv ...` will also show that an error code of 4429 was returned, which lines up with the error code from the `graphql_transport_ws.py` file linked above. | closed | 2022-09-14T02:02:57Z | 2022-09-26T13:59:25Z | https://github.com/mirumee/ariadne/issues/927 | [] | kylewlacy | 5 |
pydantic/logfire | pydantic | 64 | Index error for Azure OpenAI streaming | ### Description
I'm using `AzureOpenAI` from the `openai` SDK and getting this error.
Is it due to there being an empty chunk?
```
Traceback (most recent call last):
File "/Users/XXXX/.pyenv/versions/3.11.5/envs/XXXX/lib/python3.11/site-packages/logfire/_internal/integrations/openai.py", line 137, in __stream__
chunk_content = content_from_stream(chunk)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/XXXX/.pyenv/versions/3.11.5/envs/XXXX/lib/python3.11/site-packages/logfire/_internal/integrations/openai.py", line 183, in <lambda>
content_from_stream=lambda chunk: chunk.choices[0].delta.content,
~~~~~~~~~~~~~^^^
IndexError: list index out of range
```
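A hedged sketch of a guard for this (the helper mirrors the lambda in the traceback; this is not the shipped fix):

```python
from types import SimpleNamespace

# Defensive version of the callback from the traceback above: Azure OpenAI
# streams can emit chunks whose `choices` list is empty (e.g. a content-filter
# chunk), so index 0 must be guarded before dereferencing.
def content_from_stream(chunk):
    if not chunk.choices:
        return None
    return chunk.choices[0].delta.content

# tiny stand-ins for openai chunk objects, just to exercise the guard
empty_chunk = SimpleNamespace(choices=[])
text_chunk = SimpleNamespace(choices=[SimpleNamespace(delta=SimpleNamespace(content="hi"))])

print(content_from_stream(empty_chunk))  # None instead of IndexError
print(content_from_stream(text_chunk))   # hi
```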
### Python, Logfire & OS Versions, related packages
```TOML
logfire="0.28.0"
platform="macOS-13.5.2-arm64-arm-64bit"
python="3.11.5 (main, Dec 23 2023, 11:01:02) [Clang 14.0.3 (clang-1403.0.22.14.1)]"
[related_packages]
requests="2.31.0"
pydantic="2.6.1"
fastapi="0.109.2"
openai="1.12.0"
protobuf="4.25.2"
rich="13.7.0"
opentelemetry-api="1.24.0"
opentelemetry-exporter-otlp-proto-common="1.24.0"
opentelemetry-exporter-otlp-proto-grpc="1.22.0"
opentelemetry-exporter-otlp-proto-http="1.24.0"
opentelemetry-instrumentation="0.43b0"
opentelemetry-instrumentation-asgi="0.43b0"
opentelemetry-instrumentation-fastapi="0.43b0"
opentelemetry-proto="1.24.0"
opentelemetry-sdk="1.24.0"
opentelemetry-semantic-conventions="0.45b0"
opentelemetry-util-http="0.43b0"
```
| closed | 2024-05-01T12:43:59Z | 2024-05-02T07:44:20Z | https://github.com/pydantic/logfire/issues/64 | [
"bug",
"OpenAI"
] | gabrielchua | 2 |
huggingface/datasets | pytorch | 6,496 | Error when writing a dataset to HF Hub: A commit has happened since. Please refresh and try again. | **Describe the bug**
Getting a `412 Client Error: Precondition Failed` when trying to write a dataset to the HF hub.
```
huggingface_hub.utils._errors.HfHubHTTPError: 412 Client Error: Precondition Failed for url: https://huggingface.co/api/datasets/GLorr/test-dask/commit/main (Request ID: Root=1-657ae26f-3bd92bf861bb254b2cc0826c;50a09ab7-9347-406a-ba49-69f98abee9cc)
A commit has happened since. Please refresh and try again.
```
**Steps to reproduce the bug**
This is a minimal reproducer:
```
import dask.dataframe as dd
import pandas as pd
import random
import os
import huggingface_hub
import datasets
huggingface_hub.login(token=os.getenv("HF_TOKEN"))
data = {"number": [random.randint(0,10) for _ in range(1000)]}
df = pd.DataFrame.from_dict(data)
dataframe = dd.from_pandas(df, npartitions=1)
dataframe = dataframe.repartition(npartitions=3)
schema = datasets.Features({"number": datasets.Value("int64")}).arrow_schema
repo_id = "GLorr/test-dask"
repo_path = f"hf://datasets/{repo_id}"
huggingface_hub.create_repo(repo_id=repo_id, repo_type="dataset", exist_ok=True)
dd.to_parquet(dataframe, path=f"{repo_path}/data", schema=schema)
```
**Expected behavior**
I would expect the write to the Hub to succeed without any problem.
**Environment info**
```
datasets==2.15.0
huggingface-hub==0.19.4
```
| open | 2023-12-14T11:24:54Z | 2023-12-14T12:22:21Z | https://github.com/huggingface/datasets/issues/6496 | [] | GeorgesLorre | 1 |
DistrictDataLabs/yellowbrick | scikit-learn | 524 | PredictionError Plot for Multiple target Regressor | Some regressors can fit to multiple targets (see #510 for the list).
Should we enable `PredictionError` to create a scatter plot with a different color for each target?
For example,

| closed | 2018-07-25T05:54:30Z | 2020-06-12T05:06:54Z | https://github.com/DistrictDataLabs/yellowbrick/issues/524 | [
"type: feature",
"priority: low"
] | zjpoh | 4 |
deepfakes/faceswap | deep-learning | 1,207 | ImportError: numpy.core.multiarray failed to import | D:\faceswap-master>python faceswap.py extract -i D:\faceswap-master\src\han_li.mp4 -o D:\faceswap-master\faces
Setting Faceswap backend to AMD
No GPU detected. Switching to CPU mode
01/29/2022 22:23:02 INFO Log level set to: INFO
01/29/2022 22:23:02 WARNING No GPU detected. Switching to CPU mode
01/29/2022 22:23:02 INFO Switching backend to CPU. Using Tensorflow for CPU operations.
RuntimeError: module compiled against API version 0xe but this version of numpy is 0xd
01/29/2022 22:23:06 ERROR Got Exception on main handler:
Traceback (most recent call last):
File "D:\faceswap-master\lib\cli\launcher.py", line 180, in execute_script
script = self._import_script()
File "D:\faceswap-master\lib\cli\launcher.py", line 46, in _import_script
module = import_module(mod)
File "C:\Users\44626\AppData\Local\Programs\Python\Python37\lib\importlib\__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
File "<frozen importlib._bootstrap>", line 983, in _find_and_load
File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 728, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "D:\faceswap-master\scripts\extract.py", line 14, in <module>
from scripts.fsmedia import Alignments, PostProcess, finalize
File "D:\faceswap-master\scripts\fsmedia.py", line 18, in <module>
from lib.face_filter import FaceFilter as FilterFunc
File "D:\faceswap-master\lib\face_filter.py", line 7, in <module>
from lib.vgg_face import VGGFace
File "D:\faceswap-master\lib\vgg_face.py", line 15, in <module>
from fastcluster import linkage
File "C:\Users\44626\AppData\Local\Programs\Python\Python37\lib\site-packages\fastcluster.py", line 37, in <module>
from _fastcluster import linkage_wrap, linkage_vector_wrap
ImportError: numpy.core.multiarray failed to import
01/29/2022 22:23:06 CRITICAL An unexpected crash has occurred. Crash report written to 'D:\faceswap-master\crash_report.2022.01.29.222305067754.log'. You MUST provide this file if seeking assistance. Please verify you are running the latest version of faceswap before reporting
-----------------------------------
Windows 10, no GPU
my pip list:
absl-py 0.15.0
astunparse 1.6.3
cached-property 1.5.2
cachetools 4.2.4
certifi 2021.10.8
cffi 1.15.0
charset-normalizer 2.0.10
clang 5.0
colorama 0.4.4
cycler 0.11.0
enum34 1.1.10
fastcluster 1.2.4
ffmpy 0.2.3
flatbuffers 1.12
gast 0.3.3
google-auth 1.35.0
google-auth-oauthlib 0.4.6
google-pasta 0.2.0
grpcio 1.43.0
h5py 2.10.0
idna 3.3
imageio 2.14.1
imageio-ffmpeg 0.4.5
importlib-metadata 4.10.1
joblib 1.1.0
Keras 2.2.4
Keras-Applications 1.0.8
Keras-Preprocessing 1.1.2
kiwisolver 1.3.2
Markdown 3.3.6
matplotlib 3.2.2
numpy 1.19.4
nvidia-ml-py 11.495.46
oauthlib 3.1.1
opencv-python 4.5.5.62
opt-einsum 3.3.0
Pillow 9.0.0
pip 21.3.1
plaidml 0.7.0
plaidml-keras 0.7.0
protobuf 3.19.4
psutil 5.9.0
pyasn1 0.4.8
pyasn1-modules 0.2.8
pycparser 2.21
pyparsing 3.0.7
python-dateutil 2.8.2
pywin32 303
PyYAML 6.0
requests 2.27.1
requests-oauthlib 1.3.0
rsa 4.8
scikit-learn 1.0.2
scipy 1.7.3
setuptools 60.5.0
six 1.15.0
tensorboard 2.2.2
tensorboard-data-server 0.6.1
tensorboard-plugin-wit 1.8.1
tensorflow 2.2.3
tensorflow-estimator 2.2.0
termcolor 1.1.0
threadpoolctl 3.0.0
tqdm 4.62.3
typing-extensions 3.7.4.3
urllib3 1.26.8
Werkzeug 2.0.2
wheel 0.37.1
wrapt 1.12.1
zipp 3.7.0
| closed | 2022-01-29T14:30:24Z | 2022-05-15T01:22:24Z | https://github.com/deepfakes/faceswap/issues/1207 | [] | Odimmsun | 3 |
huggingface/datasets | computer-vision | 7,214 | Formatted map + with_format(None) changes array dtype for iterable datasets | ### Describe the bug
When applying with_format -> map -> with_format(None), array dtypes seem to change, even if features are passed
### Steps to reproduce the bug
```python
import numpy as np
from datasets import Dataset, Features, Array3D

features = Features(**{"array0": Array3D((None, 10, 10), dtype="float32")})
dataset = Dataset.from_dict({"array0": [np.zeros((100, 10, 10), dtype=np.float32)] * 25}, features=features)

ds = dataset.to_iterable_dataset().with_format("numpy").map(lambda x: x, features=features)
ex_0 = next(iter(ds))

ds = dataset.to_iterable_dataset().with_format("numpy").map(lambda x: x, features=features).with_format(None)
ex_1 = next(iter(ds))

assert ex_1["array0"].dtype == ex_0["array0"].dtype, f"{ex_1['array0'].dtype} {ex_0['array0'].dtype}"
```
### Expected behavior
Dtypes should be preserved.
### Environment info
3.0.2 | open | 2024-10-10T12:45:16Z | 2024-10-12T16:55:57Z | https://github.com/huggingface/datasets/issues/7214 | [] | alex-hh | 1 |
jupyter/nbviewer | jupyter | 518 | R kernel always dead: Jupyter and conda for R | I installed R-Essentials by following the steps in https://www.continuum.io/blog/developer/jupyter-and-conda-r. But when I run "Jupyter notebook" and start an R kernel, the kernel is always dead, and the terminal reports:
```
[I 09:25:00.040 NotebookApp] KernelRestarter: restarting kernel (4/5)
/home/ray/anaconda3/lib/R/bin/exec/R: symbol lookup error: /home/ray/anaconda3/lib/R/bin/exec/../../lib/../../libreadline.so.6: undefined symbol: PC
```
Does anyone know how to handle it? Thanks.
| closed | 2015-10-18T01:36:25Z | 2015-10-18T07:40:22Z | https://github.com/jupyter/nbviewer/issues/518 | [] | RaymondChia | 1 |
pytorch/vision | machine-learning | 8,048 | ImageFolder balancer | ### 🚀 The feature
The new feature impacts the file [torchvision/datasets/folder.py](https://github.com/pytorch/vision/blob/main/torchvision/datasets/folder.py).
The idea is to add to the `make_dataset` function a new optional parameter that allows balancing the dataset folder. The new parameter, `sampling_strategy`, can assume the following values: `None` (default), `"oversample"` and `"undersample"`.
|Value| Description|
|---|---|
|`None` | no operation performed on the dataset. This value will be the default. |
|`"oversample"`| the dataset will be balanced by adding image path copies of minority classes up to the number of the majority class.|
|`"undersample"` | the dataset will be balanced by deleting image paths of majority classes down to the number of the minority class.|
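A minimal sketch of how the proposed `sampling_strategy` could behave (illustrative only, not torchvision code; `samples` mirrors the list of `(path, class_index)` tuples that `make_dataset` builds):

```python
import random
from collections import defaultdict

def balance(samples, sampling_strategy=None, seed=0):
    """Illustrative sketch of the proposed option, not torchvision code."""
    if sampling_strategy is None:
        return list(samples)
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for path, class_index in samples:
        by_class[class_index].append((path, class_index))
    sizes = [len(group) for group in by_class.values()]
    target = max(sizes) if sampling_strategy == "oversample" else min(sizes)
    balanced = []
    for group in by_class.values():
        if sampling_strategy == "oversample":
            # duplicate random paths of minority classes up to the majority size
            balanced.extend(group + [rng.choice(group) for _ in range(target - len(group))])
        else:  # "undersample": drop random paths of majority classes
            balanced.extend(rng.sample(group, target))
    return balanced
```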
### Motivation, pitch
While working with an unbalanced dataset, I find it extremely useful to balance it at runtime instead of copying/removing images in the filesystem.
Once the balanced data folder is defined, you can also apply data augmentation when you define the data loader, so you are not training on plain image copies and you avoid overfitting.
### Alternatives
The implementation can be done in two ways:
1. add the parameter `sampling_strategy` to the `make_dataset` function;
2. define a new class, namely `BalancedImageFolder`, that overwrites the `make_dataset` method in order to define the `sampling` parameter.
We believe the first implementation is the less invasive one: given how the code is currently structured, overriding the `make_dataset` method would probably require changing the structure of the file (defining a new `make_balanced_dataset` function would mean copying a lot of code from the original `make_dataset` function, which is obviously bad practice).
### Additional context
_No response_ | closed | 2023-10-16T09:41:20Z | 2023-10-17T14:15:24Z | https://github.com/pytorch/vision/issues/8048 | [] | lorenzomassimiani | 2 |
howie6879/owllook | asyncio | 36 | KeyError: 'session' while handling path / | Installed the normal way, with the redirect disabled and the host already changed to 0.0.0.0.
After running `python server.py` it reports this error:
KeyError
'session'
Traceback (most recent call last):
  File "/root/.local/share/virtualenvs/owllook-kjRL-ddR/lib/python3.6/site-packages/sanic/app.py", line 556, in handle_request
    response = await response
  File "/root/.miniconda3/lib/python3.6/asyncio/coroutines.py", line 110, in __next__
    return self.gen.send(None)
  File "/root/owllook/owllook/views/novels_blueprint.py", line 82, in index
    user = request['session'].get('user', None)
KeyError: 'session' while handling path / | closed | 2018-08-11T14:57:46Z | 2018-08-12T04:27:16Z | https://github.com/howie6879/owllook/issues/36 | [] | do1234521 | 3 |
deepset-ai/haystack | pytorch | 8,055 | feat: Add `model_kwargs` and `tokenizer_kwargs` option to `TransformersSimilarityRanker`, `SentenceTransformersDocumentEmbedder`, `SentenceTransformersTextEmbedder` | **Is your feature request related to a problem? Please describe.**
We are starting to see more open-source embedding and ranking models that have long model max lengths (e.g. up to 8k tokens). This is a great advancement!
However, as a user I'd like to be able to set the max length of these models to a lower value sometimes (e.g. 1024) so I can better control the memory usage during inference time. For example, when left at 8K tokens and I accidentally pass one large document to the Ranker or Embedders it causes the whole batch to have an 8K matrix length which can cause an OOM if I only have a small amount of resources.
This is easily fixable if I can specify `model_max_length`, which is a kwarg I can pass to the `from_pretrained` method of the `Tokenizer`.
So in general I think it would be wise to add `model_kwargs` and `tokenizer_kwargs` as optional params when we load models from HuggingFace or SentenceTransformers. A good place to start would be the components `TransformersSimilarityRanker`, `SentenceTransformersDocumentEmbedder`, and `SentenceTransformersTextEmbedder`.
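As a sketch, the forwarding could look like this (names and shapes are assumptions for illustration, not Haystack's shipped API; the transformers calls are shown as comments so the sketch stays dependency-free):

```python
def load_ranker(model_name, model_kwargs=None, tokenizer_kwargs=None):
    """Hypothetical sketch of a component forwarding the proposed kwargs."""
    model_kwargs = dict(model_kwargs or {})
    tokenizer_kwargs = dict(tokenizer_kwargs or {})
    # model = AutoModelForSequenceClassification.from_pretrained(model_name, **model_kwargs)
    # tokenizer = AutoTokenizer.from_pretrained(model_name, **tokenizer_kwargs)
    return {"model": (model_name, model_kwargs), "tokenizer": (model_name, tokenizer_kwargs)}

# e.g. cap an 8k-token reranker at 1024 tokens to bound memory at inference time
cfg = load_ranker("BAAI/bge-reranker-v2-m3", tokenizer_kwargs={"model_max_length": 1024})
```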
**Additional context**
Some example models that would benefit from these parameters:
* https://huggingface.co/BAAI/bge-reranker-v2-m3 --> Reranker with 8k model max length
* https://huggingface.co/antoinelouis/mono-xm/tree/main --> Embedder that requires a user to set a `default_language` as a model_kwarg to benefit from the language specific adapter for embedding.
| closed | 2024-07-23T09:19:42Z | 2024-08-02T08:37:11Z | https://github.com/deepset-ai/haystack/issues/8055 | [] | sjrl | 0 |
mitmproxy/pdoc | api | 208 | Inline/link 'imported members' | I have a package that has an `__init__.py` which does no more than a `from _package import *`. The classes and methods defined there are not shown in the package's docs. I do see that a (complete) page is generated for `_package`, but it's not linked anywhere.
In Sphinx's autodoc, inlining is achieved with the `imported-members` option. It would be nice to have something similar. | closed | 2021-01-25T16:05:54Z | 2021-02-01T13:14:00Z | https://github.com/mitmproxy/pdoc/issues/208 | [
"enhancement"
] | brenthuisman | 2 |
slackapi/bolt-python | fastapi | 412 | Make "events" in the document even clearer | Currently, we are using the term "event" for both Events API data and any incoming payload requests from Slack in the Bolt document. For example,
* There is the "Listening to events" section: https://slack.dev/bolt-python/concepts#event-listening which is referring to the Events API
* Then there is the "Acknowledging events" section: https://slack.dev/bolt-python/concepts#acknowledge which is referring to actions, shortcuts, commands, and options. Also, Bolt acknowledges the "events" for Events API under the hood.
This can be confusing for developers especially when they are not yet familiar with the Slack Platform yet.
To improve this, we can consistently use the term "request" for incoming requests from Slack. Specifically,
* We can rename the section "Acknowledging events" to "Acknowledging requests"
* We can replace all the phrases like "acknowledging events" to "acknowledge requests" in all sections
### The page URLs
* https://slack.dev/bolt-python/concepts#acknowledge and others
## Requirements
Please read the [Contributing guidelines](https://github.com/slackapi/bolt-python/blob/main/.github/contributing.md) and [Code of Conduct](https://slackhq.github.io/code-of-conduct) before creating this issue or pull request. By submitting, you are agreeing to those rules.
| closed | 2021-07-20T23:47:25Z | 2021-08-06T22:38:50Z | https://github.com/slackapi/bolt-python/issues/412 | [
"docs",
"good first issue"
] | seratch | 3 |
seleniumbase/SeleniumBase | web-scraping | 2,177 | bug: sbase-behavegui: sbase-gui: test checkboxes behaviour is faulty | OS: Windows
Steps:
1. Clone the repo locally
2. run one of the following commands `sbase gui` or `sbase behave-gui`
3. Tests get loaded
4. Try to click the test-checkboxes, the tick disappears on mouse release
Troubleshoot:
1. In `sbase behave-gui`, around line 284:
```
cb = tk.Checkbutton(
text_area,
text=(row),
fg="black",
anchor="w",
pady=0,
borderwidth=0,
highlightthickness=0,
variable=ara[count],
)
``` | closed | 2023-10-11T07:37:39Z | 2023-10-12T17:47:59Z | https://github.com/seleniumbase/SeleniumBase/issues/2177 | [
"bug"
] | jilaypandya | 3 |
kubeflow/katib | scikit-learn | 1,639 | Narrow and explicit RBAC | /kind feature
I am trying to deploy Katib in an enterprise environment and have a hard time explaining Katib's requested RBAC
rules. It would be much appreciated if the ClusterRole explicitly declared only the necessary verbs for each individual resource.
Look at this:
``` yaml
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - serviceaccounts
      - services
      - events
      - namespaces
      - persistentvolumes
      - persistentvolumeclaims
      - pods
      - pods/log
      - pods/status
    verbs:
      - "*"
```
Full access for Namespaces, Roles and RoleBindings effectively gives Katib unconstrained privileges to do anything it wants. This is plainly unacceptable in my case.
I suggest that the ClusterRoles be narrow and explicit, like this:
```yaml
rules:
  - apiGroups:
      - ""
    resources:
      - namespaces
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - rbac.authorization.k8s.io
    resources:
      - roles
    verbs:
      - get
      - list
  # ... etc.
```
E.g. let's get rid of those stars in `rules.verbs` and also remove unnecessary verbs from there | closed | 2021-08-25T19:09:25Z | 2023-01-25T16:51:55Z | https://github.com/kubeflow/katib/issues/1639 | [
"priority/p1",
"kind/feature"
] | maanur | 15 |
deezer/spleeter | tensorflow | 483 | Help with the 5 stem model | I want to use the 5 stems model to separate the different voices of an apology into different files. How can I modify the code?
I would greatly appreciate the help you can give me. | open | 2020-08-27T08:33:12Z | 2021-02-19T07:28:41Z | https://github.com/deezer/spleeter/issues/483 | [
"question"
] | GaZerep | 1 |
matterport/Mask_RCNN | tensorflow | 2,169 | how to use tensorboard in ballon splash model. | I have trained the existing balloon splash model on my PC. Now I want to visualize the loss, accuracy, etc. on TensorBoard. How should I do that? Thank you. | open | 2020-05-08T09:14:08Z | 2020-07-30T09:11:09Z | https://github.com/matterport/Mask_RCNN/issues/2169 | [] | imtinan39 | 5 |
ludwig-ai/ludwig | data-science | 3,568 | local variable 'tokens' referenced before assignment | **Describe the bug**
LLM finetuning tutorial notebook failed.
**To Reproduce**
Steps to reproduce the behavior:
1. Open [ludwig_llama2_7b_finetuning_4bit.ipynb](https://colab.research.google.com/drive/1c3AO8l_H6V_x37RwQ8V7M6A-RmcBf2tG?usp=sharing)
2. Replace the huggingface tokens
3. Run all notebook cells
4. See error `RuntimeError: Caught exception during model preprocessing: local variable 'tokens' referenced before assignment`
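The Python failure mode named in the error — a local variable referenced before assignment — can be reproduced in isolation. This is a hypothetical minimal sketch of the pattern, not Ludwig's actual preprocessing code:

```python
# Hypothetical sketch: a variable bound only inside one branch of a
# conditional triggers UnboundLocalError when that branch is skipped.
def tokenize(text, already_tokenized=False):
    if not already_tokenized:
        tokens = text.split()
    # 'tokens' is unbound when already_tokenized=True
    return tokens

print(tokenize("hello world"))  # ['hello', 'world']
try:
    tokenize("hello world", already_tokenized=True)
except UnboundLocalError as e:
    print(type(e).__name__)  # UnboundLocalError
```

In Ludwig's case this suggests some code path in preprocessing skips the branch that assigns `tokens` before using it.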
**Expected behavior**
Notebook shouldn't fail.
**Screenshots**
If applicable, add screenshots to help explain your problem.

**Environment (please complete the following information):**
- OS: google colab
| closed | 2023-08-31T12:03:32Z | 2023-08-31T22:47:25Z | https://github.com/ludwig-ai/ludwig/issues/3568 | [] | randy-tsukemen | 1 |
mouredev/Hello-Python | fastapi | 452 | What to do when an online gambling site blocks you and your withdrawal never arrives — how do you recover the losses? |
Honest blocked-funds recovery consulting — WeChat: xiaolu460570, Telegram: @lc15688
Remember: as long as you have won money, any excuse at all for refusing your withdrawal basically means you have been blocked.
If any of the following happens to you, it means you have already been blocked: ↓ ↓
[1] Parts of your account's functionality are restricted! Withdrawal and deposit channels are closed, you are asked to deposit money to unlock a channel, and so on!
[2] Customer service makes excuses such as system maintenance or risk-control review, and simply refuses!
[Site gone rogue — what to do] [Won on a gambling platform but it won't pay out] [System update] [Bet failed] [Abnormal bet record] [Network relay] [Submission failed]
[Single bet not settled] [Single bet not updated] [Withdrawal channel] [Wager double turnover] [Deposit an equivalent amount]
The latest solutions for when an online gambling platform uses every excuse not to pay out your winnings
Remember: as long as you have won money, any excuse for refusing your withdrawal basically means you have already been blocked.
First: your withdrawal is rejected with all kinds of excuses — you are simply not allowed to cash out, asked to wager multiples of your turnover, and so on!
Second: parts of your account's functionality are restricted! Withdrawal and deposit channels are closed, you are asked to deposit money to unlock a channel, and so on!
Third: customer service makes excuses such as system maintenance or risk-control review — it just will not pay!
Fourth: once you have confirmed you are blocked, what should you do? (Find a professional team to show you how to recover the losses; no upfront fee before the withdrawal succeeds)
Fifth: stay calm and do not argue with customer service, to prevent your account from being frozen.
Sixth: keep customer service at ease, so the platform believes you are still playing normally.
Seventh: mislead customer service — casually signal your financial means and play a little dumb where appropriate.
Eighth: as long as you can still log in and switch the ordering, reset our ordering and we will help minimize your losses.
Our team is highly experienced (an 8-year-old team) and commands some of the newest techniques that can help you.
As long as the account can still log in normally, our team succeeds in piecing together a withdrawal 80% of the time. (Note: our team collects its fee only after the withdrawal succeeds; do not be fooled again by anyone asking you to pay upfront — honest cooperation!)
If you gamble online at all, only play on real, physical platforms where your funds are safeguarded — anyone promising guaranteed wins, dangling bonuses and so on is running a pig-butchering or fly-by-night platform, and having your funds and account frozen is only a matter of time. #StayAwayFromGambling | closed | 2025-03-03T13:38:51Z | 2025-03-04T08:41:40Z | https://github.com/mouredev/Hello-Python/issues/452 | [] | 376838 | 0 |
seleniumbase/SeleniumBase | pytest | 3,007 | The `width_ratio` was missing in UC Mode coordinates calculation | ## The `width_ratio` was missing in UC Mode coordinates calculation
In resolving https://github.com/seleniumbase/SeleniumBase/issues/2998, I forgot to add the commit that uses the `width_ratio`, which is necessary for processing `uc_gui_click_captcha()` / `uc_gui_click_cf()` on Windows where the scale ratio is not set to 100%. That ratio will allow for accurately calculating the click coordinates. | closed | 2024-08-07T16:47:20Z | 2024-08-07T17:15:23Z | https://github.com/seleniumbase/SeleniumBase/issues/3007 | [
"bug",
"UC Mode / CDP Mode"
] | mdmintz | 1 |
pydata/pandas-datareader | pandas | 23 | Google CSV API Deprecated | It looks like Google has dropped their finance API.
https://developers.google.com/finance/?csw=1
``` python
import pandas.io.data as web
web.DataReader('gs','google')
IOError: after 3 tries, Google did not return a 200 for url 'http://www.google.com/finance/historical?q=gs&startdate=Jan+01%2C+2010&enddate=Mar+24%2C+2015&output=csv'
```
Anyone else having this issue? Tests that use Google in pandas-datareader are failing now. We may have to remove Google finance support.
| closed | 2015-03-25T02:37:22Z | 2017-01-08T22:19:21Z | https://github.com/pydata/pandas-datareader/issues/23 | [] | davidastephens | 5 |
nltk/nltk | nlp | 3,379 | The save_to_json function for saving a PerceptronTagger doesn't work | ## Description
Multiple bugs are present in this function. First of all, the `os` library is used but not imported. Also, the `os` library is not used correctly, as `os.isdir()` doesn't exist and `os.path.isdir()` should be called instead. Then we attempt to `json.dump` a set. Finally, the `TRAINED_TAGGER_PATH` that is checked with said `os` library is then **not used** to save the files.
## Steps to Reproduce
Call `PerceptronTagger.save_to_json()` and observe the following traceback:
```python
Traceback (most recent call last):
  File "c:\Users\TFerrari\Desktop\nltk_push\nltk_reproduce.py", line 28, in <module>
    ptagger.save_to_json(modelname)
  File "C:\Users\TFerrari\AppData\Local\Programs\Python\Python312\Lib\site-packages\nltk\tag\perceptron.py", line 260, in save_to_json
    assert os.isdir(
           ^^
NameError: name 'os' is not defined. Did you forget to import 'os'?
```
Once `os` is imported and `PerceptronTagger.save_to_json()` is called again observe the following traceback:
```python
Traceback (most recent call last):
  File "c:\Users\TFerrari\Desktop\nltk_push\nltk_reproduce.py", line 29, in <module>
    ptagger.save_to_json(modelname)
  File "c:\Users\TFerrari\Desktop\nltk_push\perceptron2.py", line 261, in save_to_json
    assert os.isdir(
           ^^^^^^^^
AttributeError: module 'os' has no attribute 'isdir'. Did you mean: 'chdir'?
```
After a quick fix call `PerceptronTagger.save_to_json()` again and observe the following traceback:
```python
Traceback (most recent call last):
  File "c:\Users\TFerrari\Desktop\nltk_push\nltk_reproduce.py", line 29, in <module>
    ptagger.save_to_json(modelname)
  File "c:\Users\TFerrari\Desktop\nltk_push\perceptron2.py", line 270, in save_to_json
    json.dump(self.classes, fout)
  File "C:\Users\TFerrari\AppData\Local\Programs\Python\Python312\Lib\json\__init__.py", line 179, in dump
    for chunk in iterable:
  File "C:\Users\TFerrari\AppData\Local\Programs\Python\Python312\Lib\json\encoder.py", line 439, in _iterencode
    o = _default(o)
        ^^^^^^^^^^^
  File "C:\Users\TFerrari\AppData\Local\Programs\Python\Python312\Lib\json\encoder.py", line 180, in default
    raise TypeError(f'Object of type {o.__class__.__name__} '
TypeError: Object of type set is not JSON serializable
```
Finally, when everything works, the files are saved correctly but are not saved in the `TRAINED_TAGGER_PATH` that is checked by the assert.
## Cause
These errors were introduced in #3270
https://github.com/nltk/nltk/blob/0753ee5bb0096b4a4b3dd587e784ce07e7f34dab/nltk/tag/perceptron.py#L258-L269
`os` is not imported at the start of the file and `assert os.isdir(TRAINED_TAGGER_PATH), f"Path set for saving needs to be a directory"` should be `assert os.path.isdir(TRAINED_TAGGER_PATH), f"Path set for saving needs to be a directory"`
We try to `json.dump()` a set which is not possible. Allowing the dump to default to a list fixes the problem.
```python
with open(loc + TAGGER_JSONS[lang]["classes"], "w") as fout:
    json.dump(self.classes, fout, default=list)
```
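The `default=list` workaround can be checked in isolation with the plain standard library, independent of NLTK:

```python
import json

tag_classes = {"NN", "VB", "JJ"}

# json.dumps rejects sets outright:
try:
    json.dumps(tag_classes)
except TypeError as e:
    print(type(e).__name__)  # TypeError

# With default=list, the set is converted on the fly and
# round-trips back as a JSON array:
encoded = json.dumps(tag_classes, default=list)
assert set(json.loads(encoded)) == tag_classes
```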
`TRAINED_TAGGER_PATH` is then not used to save the model, which is saved at the `loc` location when it should be saved at the `TRAINED_TAGGER_PATH\loc` location.
```python
with open(loc + TAGGER_JSONS[lang]["weights"], "w") as fout:
    json.dump(self.model.weights, fout)
with open(loc + TAGGER_JSONS[lang]["tagdict"], "w") as fout:
    json.dump(self.tagdict, fout)
with open(loc + TAGGER_JSONS[lang]["classes"], "w") as fout:
    json.dump(self.classes, fout, default=list)
```
should be
```python
with open(TRAINED_TAGGER_PATH + loc + TAGGER_JSONS[lang]["weights"], "w") as fout:
    json.dump(self.model.weights, fout)
with open(TRAINED_TAGGER_PATH + loc + TAGGER_JSONS[lang]["tagdict"], "w") as fout:
    json.dump(self.tagdict, fout)
with open(TRAINED_TAGGER_PATH + loc + TAGGER_JSONS[lang]["classes"], "w") as fout:
    json.dump(self.classes, fout, default=list)
``` | open | 2025-03-13T11:08:24Z | 2025-03-13T11:08:24Z | https://github.com/nltk/nltk/issues/3379 | [] | LordTT | 0 |
pywinauto/pywinauto | automation | 1,192 | Getting control only using automation id (auto_id) | ## Expected Behavior
I want to get the control only using automation id. Looks like that is not working in pywinauto.
When I try to get the control by calling child_window() function using auto_id and title it works as expected
## Actual Behavior
But when I call the same child_window() function using only auto_id, things won't work as expected. The reason is that the parent under which I am looking for the control has 3 children. Two have automation ids, but one does not. However, when I print print_control_identifiers(), I see that the one without an automation id has also been given an automation id, which is the same as another child's. So the parent under which I am looking has 2 children with the same automation id, hence the lookup fails.
Will pywinauto assign automation ids on the fly for those controls that do not have one?
Will I be able to work with the controls using only the automation id?
I am using pywinauto for the first time. Any help would be greatly appreciated.
- Pywinauto version: 0.6.8
- Python version and bitness: Python 3.7 (32-bit)
- Platform and OS: Windows 10
| closed | 2022-03-18T22:05:47Z | 2022-03-19T00:30:40Z | https://github.com/pywinauto/pywinauto/issues/1192 | [] | manju1847 | 1 |
huggingface/datasets | computer-vision | 7,051 | How to set_epoch with interleave_datasets? | Let's say I have dataset A which has 100k examples, and dataset B which has 100m examples.
I want to train on an interleaved dataset of A+B, with stopping_strategy='all_exhausted' so dataset B doesn't repeat any examples. But every time A is exhausted I want it to be reshuffled (eg. calling set_epoch)
Of course I want to interleave as IterableDatasets / streaming mode so B doesn't have to get tokenized completely at the start.
How could I achieve this? I was thinking something like, if I wrap dataset A in some new IterableDataset with from_generator() and manually call set_epoch before interleaving it? But I'm not sure how to keep the number of shards in that dataset...
Something like
```
dataset_a = load_dataset(...)
dataset_b = load_dataset(...)

def epoch_shuffled_dataset(ds):
    # How to make this maintain the number of shards in ds??
    for epoch in itertools.count():
        ds.set_epoch(epoch)
        yield from iter(ds)

shuffled_dataset_a = IterableDataset.from_generator(epoch_shuffled_dataset, gen_kwargs={'ds': dataset_a})
interleaved = interleave_datasets([shuffled_dataset_a, dataset_b], probs, stopping_strategy='all_exhausted')
``` | closed | 2024-07-15T18:24:52Z | 2024-08-05T20:58:04Z | https://github.com/huggingface/datasets/issues/7051 | [] | jonathanasdf | 7 |
koxudaxi/datamodel-code-generator | pydantic | 1,906 | Support pydantic.dataclasses as output format | In some projects, I use [pydantic.dataclasses](https://docs.pydantic.dev/latest/concepts/dataclasses/) to add validation to dataclass models.
This is useful on project that requires dataclasses but still want to use pydantic for validation. All fields and type annotation of pydantic are compatible and can be generated in the same way.
We could easily support this as there is only 2 changes with pydantic.BaseModel:
* the class definition :
```python
from pydantic import BaseModel
class MyModel(BaseModel):
    id: str
```
to
```
from pydantic.dataclasses import dataclass
@dataclass
class MyModel:
    id: str
```
* Ordering of default value fields :
With dataclasses, default value fields must be written after fields with no default value.
Using BaseModel generation seems to generate fields in definition order without any ordering.
```python
@dataclass
class MyModel:
    id: str
    description: str = None  # This field needs to be after id field
```
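The ordering rule can be demonstrated with the standard library alone — `pydantic.dataclasses` inherits the same field-ordering constraint from stdlib `dataclasses`:

```python
from dataclasses import dataclass

# Declaring a non-default field after a default one fails at
# class-creation time, before any instance is ever built:
try:
    @dataclass
    class Broken:
        description: str = None
        id: str
except TypeError as e:
    print(type(e).__name__)  # TypeError: non-default argument follows default

# Reordering so default-value fields come last is accepted:
@dataclass
class MyGeneratedModel:
    id: str
    description: str = None

print(MyGeneratedModel(id="abc"))
```

So a generator targeting dataclass output has to reorder fields, unlike the `BaseModel` output which can keep pure definition order.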
| open | 2024-04-08T15:44:07Z | 2024-11-06T07:29:42Z | https://github.com/koxudaxi/datamodel-code-generator/issues/1906 | [
"enhancement"
] | AltarBeastiful | 2 |
ultralytics/ultralytics | pytorch | 19,198 | Segment task training from scratch | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
Hi,I want to use yolov8n-seg model for my segmentation task, but for some reasons I need to replace all silu activation by relu activation. I have two questions,
1. Should I train the model from scratch using coco dataset or should I just use your pretrained model to finetune my model using my own dataset ?
2. If I need to train from scratch, where can I close the auto downloading of yolov11n.pt? Cuz I tried to train from scratch but always need to download your pretrained model and since I can’t surf the net, it always gets stucked.what should I do to solve it?
### Additional
_No response_ | open | 2025-02-12T08:03:33Z | 2025-02-12T09:52:46Z | https://github.com/ultralytics/ultralytics/issues/19198 | [
"question",
"segment"
] | QiqLiang | 4 |
jowilf/starlette-admin | sqlalchemy | 69 | Enhancement: logging and/or more explicit form validation exceptions | **Is your feature request related to a problem? Please describe.**
I've spent this night in pitiful attempts to understand why the **{identity}/edit/{pk}** view's form submission **didn't trigger the update** command in my mongodb - create, list and detail worked fine, in the example the aforementioned view worked fine, but it silently refused to act properly in my case.
The reason for it is the fact that the on-write **validation** on my UUID field for that model **fails** but the **error message** about it **can't be displayed** on the frontend since that field is not a part of the ModelView. I found it out after overwriting the _edit_ async method for my ModelView and adding logging there.
The case I encountered goes like this:
1. Update the included fields
2. Click "Submit"
3. Be redirected to the very same page with no explicit warnings on the result
4. Check the logs - 200 on POST
5. Check the DB and item list - see the unchanged item
_6. Curse elaborately and spend the night researching the guts of the library to understand what exactly is going on._
**Describe the solution you'd like**
- First of all, it's a dreadful idea to return _200 OK_ for every type of result, especially an internal _ValidationError_. There really should be certain differentiation, that's what http result codes are for.
- Secondly, it would be good to see the exception details in the reason of the response in debug mode, so no similar confusion will arise when the error is related to a non-visible field.
- Thirdly, some form of architectural decision must be made regarding the model errors that currently can't be shown through the ModelView (as goes the one described).
- Finally, there clearly should be some controllable logging wired through the library to facilitate the investigations (and help the library improve and grow long-term). | closed | 2022-12-28T00:37:30Z | 2023-03-12T19:53:35Z | https://github.com/jowilf/starlette-admin/issues/69 | [
"enhancement"
] | sshghoult | 2 |
PokemonGoF/PokemonGo-Bot | automation | 5,488 | Loitering does not seem to work properly | <!--
STOP ! ! !
Read the following before creating anything (or you will have your issue/feature request closed without notice)
1. Please only create an ISSUE or a FEATURE REQUEST - don't mix the two together in one item
2. For a Feature Request please only fill out the FEATURE REQUEST section
3. For a Issue please only fill out the ISSUE section
4. Issues are NOT to be used for help/config problems/support - use the relevant slack channels as per the README
5. Provide a good summary in the title, don't just write problem, or awesome idea!
6. Delete all irrelevant sections not related to your issue/feature request (including this top section)
===============ISSUE SECTION===============
Before you create an Issue, please check the following:
1. Have you validated that your config.json is valid JSON? Use http://jsonlint.com/ to check.
2. Have you [searched our issue tracker](https://github.com/PokemonGoF/PokemonGo-Bot/issues?q=is%3Aissue+sort%3Aupdated-desc) to see if the issue already exists? If so, comment on that issue instead rather than creating a new issue.
3. Are you running on the `master` branch? We work on the `dev` branch and then add that functionality to `master` when it is stable. Your issue may be fixed on `dev` and there is no need for this issue, just wait and it will eventually be merged to `master`.
4. All Issue sections MUST be completed to help us determine the actual problem and find its cause
-->
### Expected Behavior
<!-- Tell us what you expect to happen -->
Run the bot, go to the next point as in my path file, then, while loitering, move to forts, spin the pokestop and catch pokemons.
### Actual Behavior
<!-- Tell us what is happening -->
It only spins pokestops while it is following a path; while it is loitering it does nothing.
### Your FULL config.json (remove your username, password, gmapkey and any other private info)
<!-- Provide your FULL config file, feel free to use services such as pastebin.com to reduce clutter -->
http://pastebin.com/HDWecKDg
### Output when issue occurred
<!-- Provide a reasonable sample from your output log (not just the error message), feel free to use services such as pastebin.com to reduce clutter -->
http://pastebin.com/2fY2XXBC
In lines 40 and 426 it starts to loiter.
### Steps to Reproduce
<!-- Tell us the steps you have taken to reproduce the issue -->
./run.sh
### Other Information
OS: Ubuntu 16.04
<!-- Tell us what Operating system you're using -->
Branch: master
<!-- dev or master -->
Git Commit: a8ee31256d412413b107cce81b62059634e8c802
<!-- run 'git log -n 1 --pretty=format:"%H"' -->
Python Version: Python 2.7.12
<!-- run 'python -V' and paste it here) -->
Any other relevant files/configs (eg: path files)
<!-- Anything else which may be of relevance -->
path file
[
    {"location": "40.7814675, -73.9741015, 33", "loiter": 300},
    {"location": "40.7795502, -73.9632225, 46", "loiter": 300},
    {"location": "40.7739931, -73.9665484, 23", "loiter": 300},
    {"location": "40.7741881, -73.9707756, 25", "loiter": 300},
    {"location": "40.7757805, -73.9717841, 26", "loiter": 300},
    {"location": "40.7772429, -73.9712047, 29", "loiter": 300},
    {"location": "40.7678016, -73.9717411, 18", "loiter": 300},
    {"location": "40.7663878, -73.9732003, 21", "loiter": 300},
    {"location": "40.7647626, -73.9732003, 32", "loiter": 300},
    {"location": "40.7653802, -73.9750671, 26", "loiter": 300},
    {"location": "40.7665015, -73.9748954, 14", "loiter": 300},
    {"location": "40.7762842, -73.9740157, 32", "loiter": 300}
]
<!-- ===============END OF ISSUE SECTION=============== -->
<!-- Note: Delete these lines and everything BELOW if creating an Issue -->
| closed | 2016-09-16T18:02:01Z | 2016-09-22T11:36:18Z | https://github.com/PokemonGoF/PokemonGo-Bot/issues/5488 | [] | jmarenas | 3 |
AutoViML/AutoViz | scikit-learn | 46 | [bug] problem with time series charts | Here is minimal reproducible example with google colab:
1. The date time column is not recognized when the input is a file:
```
!pip install autoviz
from autoviz.AutoViz_Class import AutoViz_Class
import pandas as pd
AV = AutoViz_Class()
df = pd.DataFrame({'time': ['2020-01-15', '2020-02-15', '2020-03-15', '2020-04-15', '2020-05-15'], 'values': [1.0,2.5,3.2,4.2,5.6]})
df['time'] = pd.to_datetime(df['time'])
df.to_csv('ts.csv', index=False)
dft = AV.AutoViz("ts.csv", verbose=2)
Shape of your Data Set loaded: (5, 2)
############## C L A S S I F Y I N G V A R I A B L E S ####################
Classifying variables in data set...
Data Set Shape: 5 rows, 2 cols
Data Set columns info:
* time: 0 nulls, 5 unique vals, most common: {'2020-05-15': 1, '2020-03-15': 1}
* values: 0 nulls, 5 unique vals, most common: {3.2: 1, 5.6: 1}
--------------------------------------------------------------------
Numeric Columns: ['values']
Integer-Categorical Columns: []
String-Categorical Columns: []
Factor-Categorical Columns: []
String-Boolean Columns: []
Numeric-Boolean Columns: []
Discrete String Columns: []
NLP text Columns: []
Date Time Columns: []
ID Columns: ['time']
Columns that will not be considered in modeling: []
2 Predictors classified...
This does not include the Target column(s)
1 variables removed since they were ID or low-information variables
List of variables removed: ['time']
No categorical or numeric vars in data set. Hence no bar charts.
Time to run AutoViz (in seconds) = 0.562
```
2. When the input is a dataframe, the chart is not generated, but the date time column is recognized:
```
dft = AV.AutoViz("", dfte=df, verbose=2)
Shape of your Data Set loaded: (5, 2)
############## C L A S S I F Y I N G V A R I A B L E S ####################
Classifying variables in data set...
Data Set Shape: 5 rows, 2 cols
Data Set columns info:
* time: 0 nulls, 5 unique vals, most common: {Timestamp('2020-05-15 00:00:00'): 1, Timestamp('2020-04-15 00:00:00'): 1}
* values: 0 nulls, 5 unique vals, most common: {3.2: 1, 5.6: 1}
--------------------------------------------------------------------
Numeric Columns: ['values']
Integer-Categorical Columns: []
String-Categorical Columns: []
Factor-Categorical Columns: []
String-Boolean Columns: []
Numeric-Boolean Columns: []
Discrete String Columns: []
NLP text Columns: []
Date Time Columns: ['time']
ID Columns: []
Columns that will not be considered in modeling: []
2 Predictors classified...
This does not include the Target column(s)
No variables removed since no ID or low-information variables found in data set
Could not draw Date Vars
No categorical or numeric vars in data set. Hence no bar charts.
Time to run AutoViz (in seconds) = 0.408
```
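One plausible contributor to the difference between the two runs (a hedged observation, not a diagnosis of AutoViz internals) is that the CSV round-trip discards dtype information, so the file-based path must re-infer that `time` holds dates from plain strings. A standard-library illustration:

```python
import csv
import datetime
import io

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["time", "values"])
writer.writerow([datetime.date(2020, 1, 15), 1.0])

buf.seek(0)
rows = list(csv.reader(buf))
# Everything read back from CSV is a plain string; the datetime
# type (and even the float) must be re-inferred downstream.
print(rows[1])  # ['2020-01-15', '1.0']
```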
Expected result: chart with date on x-axis, and value on y-axis. | closed | 2021-09-24T08:17:05Z | 2021-09-25T12:09:58Z | https://github.com/AutoViML/AutoViz/issues/46 | [] | mglowacki100 | 3 |
ivy-llc/ivy | tensorflow | 27,908 | jax backend with torch frontend setitem issues | ### Bug Explanation
The following torch frontend code fails with the jax backend, while the equivalent native torch code works.
### Steps to Reproduce Bug
```python
import ivy
import torch
import ivy.functional.frontends.torch as torch_frontend
x = torch.ones((2, 2))
val = torch.tensor([5.])
x[(0, 0)] = val
ivy.set_jax_backend()
x = torch_frontend.ones((2, 2))
val = torch_frontend.tensor([5.])
x[(0, 0)] = val
```
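For context (a hedged note, not a diagnosis of the bug): JAX arrays are immutable, so `x[(0, 0)] = val` has no direct native equivalent — native JAX spells the update functionally as `x = x.at[0, 0].set(5.0)`, which the torch frontend's `__setitem__` has to emulate. The constraint itself is the same one stdlib immutable containers exhibit:

```python
# Immutable containers reject in-place item assignment; the only
# option is building a new object with the change applied.
t = (1.0, 1.0)
try:
    t[0] = 5.0
except TypeError as e:
    print(type(e).__name__)  # TypeError

updated = (5.0,) + t[1:]  # functional "set item 0"
print(updated)  # (5.0, 1.0)
```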
### Environment
Linux.
### Ivy Version
Nightly
### Backend
- [ ] NumPy
- [ ] TensorFlow
- [ ] PyTorch
- [X] JAX
### Device
CPU | closed | 2024-01-12T15:20:06Z | 2024-01-14T22:27:30Z | https://github.com/ivy-llc/ivy/issues/27908 | [
"Bug Report"
] | Ishticode | 0 |
opengeos/leafmap | plotly | 15 | Add support for loading data from PostGIS | Reference:
- https://geopandas.readthedocs.io/en/latest/docs/reference/api/geopandas.GeoDataFrame.from_postgis.html | closed | 2021-05-31T23:28:41Z | 2021-06-01T04:45:19Z | https://github.com/opengeos/leafmap/issues/15 | [
"Feature Request"
] | giswqs | 1 |
graphql-python/graphene | graphql | 895 | Using Dataloader at request level (graphene + flask-graphql) | Hi,
there seems to be some discussion about what's the best way to use dataloader objects (see https://github.com/facebook/dataloader/issues/62#issue-193854091). The general question is whether dataloader objects should be used as application level caches or rather at request level.
My current implementation is based on https://docs.graphene-python.org/en/latest/execution/dataloader/ where dataloaders seem to be used as application level caches. The nice thing about this is that requests can benefit from what has already been cached by previous requests. However, I'm struggling with how to invalidate my dataloader in case the data in the repository changes. It occurred to me that such issues could be prevented by moving the dataloader to the request level as suggested (sure, cached data would not be shared between requests anymore). Unfortunately, it is not clear to me how to do this based on the example in the documentation, because the request itself is not explicitly represented.
Can someone provide a small example that uses graphene + flask-graphql?
Cheers,
Sebastian
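A framework-free sketch of the two scopes under discussion (hypothetical names, not graphene's actual Promise-based loader): at application scope one loader instance is shared and its cache must be invalidated by hand, while at request scope a fresh loader per request means stale reads cannot happen:

```python
class SimpleLoader:
    """Minimal batching-free loader: caches one fetch per key."""
    def __init__(self, fetch_fn):
        self.fetch_fn = fetch_fn
        self.cache = {}

    def load(self, key):
        if key not in self.cache:
            self.cache[key] = self.fetch_fn(key)
        return self.cache[key]

db = {"user:1": "Alice"}
app_loader = SimpleLoader(db.get)  # application scope: lives forever

def handle_request():
    loader = SimpleLoader(db.get)  # request scope: fresh cache each request
    return loader.load("user:1")

print(app_loader.load("user:1"))  # Alice
db["user:1"] = "Alicia"           # repository changes...
print(app_loader.load("user:1"))  # Alice  (stale: cached at app scope)
print(handle_request())           # Alicia (request scope sees the change)
```

In a real flask-graphql setup the request-scoped instance would presumably be created per request and passed through the GraphQL context, but the trade-off is exactly the one shown above.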
| closed | 2019-01-21T10:43:59Z | 2021-05-14T08:51:53Z | https://github.com/graphql-python/graphene/issues/895 | [
"question"
] | sebastianthelen | 9 |
axnsan12/drf-yasg | django | 128 | How to get rid of common prefixes and version in urlpatterns in swagger ui? | Hello!
I've developed an API with urlpatterns like:
```
urlpatterns = [
#----- v1 -----#
path('api/v1/examples/', include('examples.urls', namespace='v1')),
path('api/v1/moreexamples/', include('moreexamples.urls', namespace='v1')),
#----- v2 -----#
path('api/v1/examples/', include('examples.urls', namespace='v2')),
path('api/v1/moreexamples/', include('moreexamples.urls', namespace='v2')),
]
```
But I can't figure out how to get rid of /api/{version}/ in the swagger ui schema. And how can I switch the ui depending on the API <version>?
I've found this in the documentation:
> When using API versioning with NamespaceVersioning or URLPathVersioning, versioned endpoints that do not match the version used to access the SchemaView will be excluded from the endpoint list - for example, /api/v1.0/endpoint will be shown when viewing /api/v1.0/swagger/, while /api/v2.0/endpoint will not
> the longest common path prefix of all the urls in your API - see determine_path_prefix()
But it didn't help me. Could someone clarify how I could deal with this situation? Any examples?
Thank you | closed | 2018-05-21T17:54:59Z | 2020-07-10T14:30:04Z | https://github.com/axnsan12/drf-yasg/issues/128 | [] | tehdoorsareopen | 6 |
vllm-project/vllm | pytorch | 14,499 | [Feature]: Convert all `os.environ(xxx)` to `monkeypatch.setenv` in test suite | ### 🚀 The feature, motivation and pitch
see title
### Alternatives
_No response_
### Additional context
_No response_
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | closed | 2025-03-08T18:48:43Z | 2025-03-17T03:35:58Z | https://github.com/vllm-project/vllm/issues/14499 | [
"good first issue",
"feature request"
] | robertgshaw2-redhat | 2 |
Sanster/IOPaint | pytorch | 364 | [Feature Request] Translate Text | **Is your feature request related to a problem? Please describe.**
No
**Describe the solution you'd like**
Automatic recognition of text via OCR
Masks from the text boxes
Inpaint
Translate the text and paste it back
Like: https://github.com/iuliaturc/detextify/tree/main/detextify | closed | 2023-08-29T11:23:17Z | 2023-08-30T03:18:46Z | https://github.com/Sanster/IOPaint/issues/364 | [] | TheMBeat | 1 |
donnemartin/system-design-primer | python | 482 | Request: Add GraphQL in the `Communication` section | Amazing resources, thanks for that!
In the disadvantages for REST section, there are a few references (such as adding more fields to clients that don't need it) that is a great segue to introducing GraphQL.
Would be nice to see GraphQL get a little more attention :) | open | 2020-10-13T15:30:08Z | 2022-04-23T13:17:23Z | https://github.com/donnemartin/system-design-primer/issues/482 | [
"needs-review"
] | edisonywh | 3 |
pydata/bottleneck | numpy | 135 | port to C | closed | 2016-08-01T18:23:08Z | 2016-10-14T22:23:09Z | https://github.com/pydata/bottleneck/issues/135 | [] | kwgoodman | 4 | |
django-oscar/django-oscar | django | 3,932 | add native shipping-method | I added a feature to the shipping-method system in my personal project,
That is, if it adds a new record to the app shipping models database, It is used as a new shipping-method in the store.
As mentioned on this [page](https://github.com/django-oscar/django-oscar/blob/master/docs/source/howto/how_to_configure_shipping.rst), this issue can contain a lot of details, but is it worth it to be added to the core as a feature to meet the needs of users in many cases so that they do not implement themselves again ?
Of course, the maintainer can also say a general point to be observed if it is to be added as a feature . | open | 2022-06-20T12:49:45Z | 2022-06-21T11:33:44Z | https://github.com/django-oscar/django-oscar/issues/3932 | [] | mojtabaakbari221b | 2 |
plotly/dash | data-visualization | 2,946 | [BUG] Component value changing without user interaction or callbacks firing | Thank you so much for helping improve the quality of Dash!
We do our best to catch bugs during the release process, but we rely on your help to find the ones that slip through.
**Describe your context**
Please provide us your environment, so we can easily reproduce the issue.
- replace the result of `pip list | grep dash` below
```
dash 2.17.1
dash-bootstrap-components 1.6.0
dash-core-components 2.0.0
dash-html-components 2.0.0
dash-table 5.0.0
```
- if frontend related, tell us your Browser, Version and OS
- OS: Windows 11
- Browser Chrome & Safari
**Describe the bug**
A clear and concise description of what the bug is.
**Expected behavior**
The app has the following layout:
```
session_picker_row = dbc.Row(
    [
        ...
        dbc.Col(
            dcc.Dropdown(
                options=[],
                placeholder="Select a session",
                value=None,
                id="session",
            ),
        ),
        ...
        dbc.Col(
            dbc.Button(
                children="Load Session / Reorder Drivers",
                n_clicks=0,
                disabled=True,
                color="success",
                id="load-session",
            )
        ),
    ],
)
```
The dataflow within callbacks is unidirectional from `session` to `load-session` using the following callback:
```
@callback(
    Output("load-session", "disabled"),
    Input("season", "value"),
    Input("event", "value"),
    Input("session", "value"),
    prevent_initial_call=True,
)
def enable_load_session(season: int | None, event: str | None, session: str | None) -> bool:
    """Toggles load session button on when the previous three fields are filled."""
    return not (season is not None and event is not None and session is not None)
```
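As a sanity check that the reverting behaviour is not coming from this callback's logic, the gating function is pure and can be exercised outside Dash entirely (copied here with the decorator and type hints removed):

```python
def enable_load_session(season, event, session):
    """Return the button's `disabled` flag: disabled iff any field is still None."""
    return not (season is not None and event is not None and session is not None)

assert enable_load_session(None, None, None) is True
assert enable_load_session(2024, "Monza", None) is True
assert enable_load_session(2024, "Monza", "Race") is False
print("gating logic ok")
```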
I have noticed that sometimes the `n_clicks` property of `load-session`, which starts from 0, goes to 1 and drops back down to 0. Simultaneously, the `value` property of `session` reverts to `None`, which is what I initialize it with. This all happens without any callback firing.
The line causing this behavior edits a cached (with `dcc.Store`) dataframe and doesn't trigger any callback. Might this have something to do with the browser cache?
| closed | 2024-08-10T03:24:28Z | 2024-08-10T16:54:40Z | https://github.com/plotly/dash/issues/2946 | [] | Casper-Guo | 8 |
yihong0618/running_page | data-visualization | 541 | Support for exporting data from the vivo phone Health app | I found this great project on GitHub, and I want to export my cycling records from my vivo phone. The official app doesn't support exporting, and I've tried many methods without success 😭. So far I have only managed to export the folder under Android-->data-->com.vivo.health, but I couldn't find the data I want inside it (maybe I just can't read it). So I'd like to ask for your advice on converting the data in that folder, or whether you have any related documentation. | open | 2023-11-08T02:36:02Z | 2024-02-07T06:26:17Z | https://github.com/yihong0618/running_page/issues/541 | [] | KonoSubazZ | 6 |
plotly/dash-cytoscape | dash | 93 | Clarify usage of yarn vs npm in contributing.md | Right now, it's not clear in `contributing.md` whether yarn or npm should be used. Since `yarn` was chosen to be the package manager, we should more clearly indicate that in contributing. | closed | 2020-07-03T16:30:13Z | 2020-07-08T23:05:37Z | https://github.com/plotly/dash-cytoscape/issues/93 | [] | xhluca | 1 |
plotly/dash | dash | 2,608 | [BUG] adding restyleData to input causing legend selection to clear automatically | Thank you so much for helping improve the quality of Dash!
We do our best to catch bugs during the release process, but we rely on your help to find the ones that slip through.
**Describe your context**
Please provide us your environment, so we can easily reproduce the issue.
- replace the result of `pip list | grep dash` below
```
dash 2.11.1
dash-core-components 2.0.0
dash-html-components 2.0.0
dash-table 5.0.0
```
- if frontend related, tell us your Browser, Version and OS
- OS: [MacOS Ventura 13.5, Apple M1]
- Browser [chrome]
- Version [chrome Version 115.0.5790.114 (Official Build) (arm64)]
**Describe the bug**
After adding `restyleData` as an `Input` in `@app.callback` like below:
```python
@app.callback(
    Output("graph", "figure"),
    Input("input_1", "value"),
    Input('upload-data', 'contents'),
    Input("graph", "restyleData"),
)
```
When I click or double click on a legend, the graph initially updates correctly (e.g. removing the data series on a single click, or showing only that data series on a double click). But then the graph automatically reverts to its initial state (i.e. all data series are shown) after a certain period of time (depending on the size of the df). If I click or double click the legend a second time, the graph does not revert automatically. If I click or double click a third time, the graph automatically reverts again... and so on.
To isolate the issue, the `restyleData` input is not used anywhere in the function (e.g. `def update_line_chart(input_1, contents, restyleData)`) except in a `print` statement, and the `restyleData` content seems to print out correctly.
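One diagnostic pattern (a hypothetical sketch on my part, not Dash API and not part of the original report) is to gate the expensive figure rebuild on which input fired, so a legend interaction does not trigger a full redraw that overwrites the selection:

```python
def should_rebuild_figure(triggered_prop: str) -> bool:
    """Hypothetical guard helper: skip the full figure rebuild when only the
    legend interaction (`restyleData`) fired, so the callback does not
    overwrite the user's legend selection."""
    return not triggered_prop.endswith(".restyleData")

print(should_rebuild_figure("input_1.value"))      # True
print(should_rebuild_figure("graph.restyleData"))  # False
```

In real Dash code the triggering input is available as `dash.ctx.triggered_id` (Dash ≥ 2.4) or via `dash.callback_context.triggered`.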
**Expected behavior**
When I click or double click on a legend, the update to the graph retains.
**Screenshots**
If applicable, add screenshots or screen recording to help explain your problem.
https://github.com/plotly/dash/assets/5752865/79f3cdfd-2c13-46c7-8097-98e9fa046217
| closed | 2023-08-01T04:39:47Z | 2024-07-25T13:39:35Z | https://github.com/plotly/dash/issues/2608 | [] | crossingchen | 3 |
identixone/fastapi_contrib | pydantic | 151 | How to save data in Mongo? | * FastAPI Contrib version: 0.2.7
* FastAPI version: 0.59.0
* Python version: 3.7.7
* Operating System: MacOS
### Description
Hello. I was looking for tools that would make it possible to use FastAPI with MongoDB in a nice, simple way, because I have never used NoSQL before. I found FastAPI_Contrib, and two of its features especially caught my attention:
1. ModelSerializers: serialize (pydantic) incoming request, connect data with DB model and save
2. MongoDB integration: Use models as if it was Django (based on pydantic models)
I spent all day trying to understand from the documentation how to use FastAPI_Contrib; unfortunately, the documentation is quite hard for entry-level users. What I want to achieve at this point is nothing more than:
1. Create Model
2. Create Serializer
3. Send the request with data from for example Postman
4. Save that data in MongoDB
Just a first step of CRUD...
### What I Did
I tried approaching it in several different ways, but for this issue I will present the simplest way, based on the documentation...
```
Project Structure:
project/
- project/
-- sample/
--- __init__.py
--- serializers.py
- __init__.py
- main.py
```
project/main.py:
```python
import os
from fastapi_contrib.db.utils import setup_mongodb, create_indexes
import motor.motor_asyncio
from fastapi import FastAPI
from dotenv import load_dotenv
from project.serializers import sample_router
load_dotenv(verbose=True)
DATABASE_URL =os.getenv("DB_URL")
SECRET = os.getenv("SECRET")
CONTRIB_APPS = os.getenv("CONTRIB_APPS")
CONTRIB_APPS_FOLDER_NAME = os.getenv("CONTRIB_APPS_FOLDER_NAME")
client = motor.motor_asyncio.AsyncIOMotorClient(
DATABASE_URL, uuidRepresentation="standard"
)
db = client["sample"]
app = FastAPI()
@app.on_event("startup")
async def startup():
setup_mongodb(db)
await create_indexes()
print('Is it connected to DB?')
app.include_router(sample_router)
```
project/sample/serializers.py
```python3
from fastapi import APIRouter
from fastapi_contrib.db.models import MongoDBModel
from fastapi_contrib.serializers import openapi
from fastapi_contrib.serializers.common import Serializer
# from yourapp.models import SomeModel
sample_router = APIRouter()
class SomeModel(MongoDBModel):
field1: str
class Meta:
collection = "test"
@openapi.patch
class SomeSerializer(Serializer):
read_only1: str = "const"
write_only2: int
not_visible: str = "42"
class Meta:
model = SomeModel
exclude = {"not_visible"}
write_only_fields = {"write_only2"}
read_only_fields = {"read_only1"}
@sample_router.post("/test/", response_model=SomeSerializer.response_model)
async def root(serializer: SomeSerializer):
model_instance = await serializer.save()
return model_instance.dict()
```
When I send data as POST to /test/ from postman
```
{
"write_only2": 2,
"field1": "string"
}
```
or by curl
```
curl -X POST "http://127.0.0.1:8000/test/" -H "accept: application/json" -H "Content-Type: application/json" -d "{\"id\":0,\"field1\":\"string\",\"write_only2\":0}"
```
Then I got errors:

If I remove:
```
class Meta:
collection = "test"
```
as in one of the examples in the documentation, then I get another error:

I would be grateful if someone could explain to me, using simple examples, how to properly combine models and serializers and perform CRUD operations on them, reflected in MongoDB.
By the way.
I think it would be a good idea to rewrite the documentation to be more approachable for less advanced users, and to add a tutorial; it would be awesome to see real-world examples there. I think good documentation could make this package very popular.
Regards,
Oskar
| closed | 2020-07-22T00:36:02Z | 2020-09-04T10:44:18Z | https://github.com/identixone/fastapi_contrib/issues/151 | [] | oskar-gmerek | 3 |
huggingface/diffusers | pytorch | 11,063 | prepare_attention_mask - incorrect padding? | ### Describe the bug
I'm experimenting with attention masking in Stable Diffusion (so that padding tokens aren't considered for cross attention), and I found that UNet2DConditionModel doesn't work when given an `attention_mask`.
https://github.com/huggingface/diffusers/blob/8ead643bb786fe6bc80c9a4bd1730372d410a9df/src/diffusers/models/attention_processor.py#L740
For the attn1 blocks (self-attention), the target sequence length is different from the current length (target 4096, but it's only 77 for a typical CLIP output). The padding routine pads by *adding* `target_length` zeros to the end of the last dimension, which results in a sequence length of 4096 + 77, rather than the desired 4096. I think it should be:
```diff
- attention_mask = F.pad(attention_mask, (0, target_length), value=0.0)
+ attention_mask = F.pad(attention_mask, (0, target_length - current_length), value=0.0)
```
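The length arithmetic can be sanity-checked without torch (toy numbers only: a 77-token text mask padded toward a 4096 target; this is not the diffusers code itself):

```python
def padded_length(current_length: int, pad_amount: int) -> int:
    # F.pad(mask, (0, pad_amount)) appends pad_amount zeros to the last dim,
    # so the resulting length is simply current + pad.
    return current_length + pad_amount

current, target = 77, 4096
print(padded_length(current, target))            # 4173: buggy, pads by target_length
print(padded_length(current, target - current))  # 4096: fixed, pads by the difference
```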
`encoder_attention_mask` works fine - it's passed to the attn2 block and no padding ends up being necessary.
It seems that this would additionally fail if current_length were greater than target_length, since you can't pad by a negative amount, but I don't know that that's a practical concern.
(I know that particular masking isn't even semantically valid, but that's orthogonal to this issue!)
### Reproduction
```python
# given a Stable Diffusion pipeline
# given te_mask = tokenizer_output.attention_mask
pipeline.unet(latent_input, timestep, text_encoder_output, attention_mask=te_mask).sample
```
### Logs
```shell
```
### System Info
- 🤗 Diffusers version: 0.33.0.dev0
- Platform: Linux-6.8.0-55-generic-x86_64-with-glibc2.39
- Running on Google Colab?: No
- Python version: 3.10.11
- PyTorch version (GPU?): 2.6.0+cu124 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Huggingface_hub version: 0.28.1
- Transformers version: 4.48.3
- Accelerate version: 1.3.0
- PEFT version: not installed
- Bitsandbytes version: 0.45.2
- Safetensors version: 0.5.2
- xFormers version: 0.0.29.post2
- Accelerator: NVIDIA GeForce RTX 3060, 12288 MiB
NVIDIA GeForce RTX 4060 Ti, 16380 MiB
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: No
### Who can help?
_No response_ | open | 2025-03-14T19:01:01Z | 2025-03-15T00:23:48Z | https://github.com/huggingface/diffusers/issues/11063 | [
"bug"
] | cheald | 1 |
yunjey/pytorch-tutorial | deep-learning | 91 | METEOR, Blue@k and CIDER metrics for image captioning | Hello @yunjey
I want to calculate METEOR, CIDEr, etc. to evaluate image captioning accuracy.
How can I do it? Is there any source where I can get them?
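For intuition only, the core of BLEU@1 is clipped n-gram precision; here is a toy single-reference sketch (no brevity penalty, not a full implementation of any of these metrics):

```python
from collections import Counter

def unigram_precision(candidate: str, reference: str) -> float:
    """Clipped unigram precision: the modified-precision term of BLEU@1
    (single reference, no brevity penalty)."""
    cand = Counter(candidate.split())
    ref = Counter(reference.split())
    # Each candidate token is credited at most as often as it appears in the reference.
    clipped = sum(min(n, ref[tok]) for tok, n in cand.items())
    return clipped / sum(cand.values())

print(round(unigram_precision("a cat cat on the mat", "the cat sits on a mat"), 3))  # 0.833
```

In practice, the COCO caption evaluation code or `nltk.translate.bleu_score` are commonly used for the full metrics (my suggestion, not from this thread).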
Thanks | closed | 2018-01-10T19:12:24Z | 2018-01-14T11:39:19Z | https://github.com/yunjey/pytorch-tutorial/issues/91 | [] | VisheshTanwar-IITR | 2 |
piskvorky/gensim | machine-learning | 3,014 | Deprecation warnings: `scipy.sparse.sparsetools` and `np.float` | #### Problem description
Running the tests for the new version of [WEFE](https://github.com/raffaem/wefe) emits the deprecation warnings below.
#### Steps/code/corpus to reproduce
```
../../../../../../home/raffaele/.virtualenvs/gensim4/lib/python3.8/site-packages/scipy/sparse/sparsetools.py:21
/home/raffaele/.virtualenvs/gensim4/lib/python3.8/site-packages/scipy/sparse/sparsetools.py:21: DeprecationWarning: `scipy.sparse.sparsetools` is deprecated!
scipy.sparse.sparsetools is a private module for scipy.sparse, and should not be used.
_deprecated()
../../../../../../home/raffaele/.virtualenvs/gensim4/lib/python3.8/site-packages/sklearn/linear_model/_least_angle.py:34
/home/raffaele/.virtualenvs/gensim4/lib/python3.8/site-packages/sklearn/linear_model/_least_angle.py:34: DeprecationWarning: `np.float` is a deprecated alias for the builtin `float`. Use `float` by itself, which is identical in behavior, to silence this warning. If you specifically wanted the numpy scalar type, use `np.float_` here.
method='lar', copy_X=True, eps=np.finfo(np.float).eps,
../../../../../../home/raffaele/.virtualenvs/gensim4/lib/python3.8/site-packages/sklearn/linear_model/_least_angle.py:164
/home/raffaele/.virtualenvs/gensim4/lib/python3.8/site-packages/sklearn/linear_model/_least_angle.py:164: DeprecationWarning: `np.float` is a deprecated alias for the builtin `float`. Use `float` by itself, which is identical in behavior, to silence this warning. If you specifically wanted the numpy scalar type, use `np.float_` here.
method='lar', copy_X=True, eps=np.finfo(np.float).eps,
../../../../../../home/raffaele/.virtualenvs/gensim4/lib/python3.8/site-packages/sklearn/linear_model/_least_angle.py:281
/home/raffaele/.virtualenvs/gensim4/lib/python3.8/site-packages/sklearn/linear_model/_least_angle.py:281: DeprecationWarning: `np.float` is a deprecated alias for the builtin `float`. Use `float` by itself, which is identical in behavior, to silence this warning. If you specifically wanted the numpy scalar type, use `np.float_` here.
eps=np.finfo(np.float).eps, copy_Gram=True, verbose=0,
../../../../../../home/raffaele/.virtualenvs/gensim4/lib/python3.8/site-packages/sklearn/linear_model/_least_angle.py:865
/home/raffaele/.virtualenvs/gensim4/lib/python3.8/site-packages/sklearn/linear_model/_least_angle.py:865: DeprecationWarning: `np.float` is a deprecated alias for the builtin `float`. Use `float` by itself, which is identical in behavior, to silence this warning. If you specifically wanted the numpy scalar type, use `np.float_` here.
eps=np.finfo(np.float).eps, copy_X=True, fit_path=True,
../../../../../../home/raffaele/.virtualenvs/gensim4/lib/python3.8/site-packages/sklearn/linear_model/_least_angle.py:1121
/home/raffaele/.virtualenvs/gensim4/lib/python3.8/site-packages/sklearn/linear_model/_least_angle.py:1121: DeprecationWarning: `np.float` is a deprecated alias for the builtin `float`. Use `float` by itself, which is identical in behavior, to silence this warning. If you specifically wanted the numpy scalar type, use `np.float_` here.
eps=np.finfo(np.float).eps, copy_X=True, fit_path=True,
../../../../../../home/raffaele/.virtualenvs/gensim4/lib/python3.8/site-packages/sklearn/linear_model/_least_angle.py:1149
/home/raffaele/.virtualenvs/gensim4/lib/python3.8/site-packages/sklearn/linear_model/_least_angle.py:1149: DeprecationWarning: `np.float` is a deprecated alias for the builtin `float`. Use `float` by itself, which is identical in behavior, to silence this warning. If you specifically wanted the numpy scalar type, use `np.float_` here.
eps=np.finfo(np.float).eps, positive=False):
../../../../../../home/raffaele/.virtualenvs/gensim4/lib/python3.8/site-packages/sklearn/linear_model/_least_angle.py:1379
/home/raffaele/.virtualenvs/gensim4/lib/python3.8/site-packages/sklearn/linear_model/_least_angle.py:1379: DeprecationWarning: `np.float` is a deprecated alias for the builtin `float`. Use `float` by itself, which is identical in behavior, to silence this warning. If you specifically wanted the numpy scalar type, use `np.float_` here.
max_n_alphas=1000, n_jobs=None, eps=np.finfo(np.float).eps,
../../../../../../home/raffaele/.virtualenvs/gensim4/lib/python3.8/site-packages/sklearn/linear_model/_least_angle.py:1621
/home/raffaele/.virtualenvs/gensim4/lib/python3.8/site-packages/sklearn/linear_model/_least_angle.py:1621: DeprecationWarning: `np.float` is a deprecated alias for the builtin `float`. Use `float` by itself, which is identical in behavior, to silence this warning. If you specifically wanted the numpy scalar type, use `np.float_` here.
max_n_alphas=1000, n_jobs=None, eps=np.finfo(np.float).eps,
../../../../../../home/raffaele/.virtualenvs/gensim4/lib/python3.8/site-packages/sklearn/linear_model/_least_angle.py:1755
/home/raffaele/.virtualenvs/gensim4/lib/python3.8/site-packages/sklearn/linear_model/_least_angle.py:1755: DeprecationWarning: `np.float` is a deprecated alias for the builtin `float`. Use `float` by itself, which is identical in behavior, to silence this warning. If you specifically wanted the numpy scalar type, use `np.float_` here.
eps=np.finfo(np.float).eps, copy_X=True, positive=False):
```
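Until the upstream libraries update, one stdlib-level workaround (my assumption about the test setup, not from this report) is to filter the deprecation warnings explicitly:

```python
import warnings

def noisy_call():
    # Stand-in for the sklearn/scipy imports that emit the warnings above.
    warnings.warn("`np.float` is a deprecated alias", DeprecationWarning)

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("ignore", DeprecationWarning)
    noisy_call()
print(len(caught))  # 0: the DeprecationWarning was suppressed
```

With pytest, the equivalent is `filterwarnings = ignore::DeprecationWarning` in the config file.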
#### Versions
```
Linux-5.8.0-33-generic-x86_64-with-glibc2.32
Python 3.8.6 (default, Sep 25 2020, 09:36:53)
[GCC 10.2.0]
Bits 64
NumPy 1.20.0rc1
SciPy 1.6.0rc1
gensim 4.0.0beta
FAST_VERSION 1
```
| open | 2020-12-20T21:20:36Z | 2022-05-05T05:57:04Z | https://github.com/piskvorky/gensim/issues/3014 | [
"difficulty easy",
"need info",
"reach HIGH",
"impact LOW"
] | raffaem | 11 |
streamlit/streamlit | streamlit | 10,751 | Support admonitions / alerts / callouts in markdown | ### Checklist
- [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar feature requests.
- [x] I added a descriptive title and summary to this issue.
### Summary
Implement support for alert blocks (aka admonitions) within the Streamlit markdown flavour.
### Why?
This is supported by many other markdown flavours as well.
### How?
Unfortunately, there isn't one common syntax for this... we probably have to choose one of these:
[Github markdown](https://docs.github.com/en/get-started/writing-on-github/getting-started-with-writing-and-formatting-on-github/basic-writing-and-formatting-syntax):
> [!NOTE]
> This is a note alert block
```
> [!NOTE]
> This is a note alert block
```
[PyMdown](https://facelessuser.github.io/pymdown-extensions/extensions/details/):
```
??? note
This is a note alert block
```
[Material for Mkdocs](https://squidfunk.github.io/mkdocs-material/reference/admonitions/) & [Python Markdown](https://python-markdown.github.io/extensions/admonition/):
```
!!! note
This is a note alert block
```
[Docusaurus](https://docusaurus.io/docs/markdown-features/admonitions):
```
:::note
This is a note alert block
:::
```
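For illustration only (a toy recognizer sketched by me, not how Streamlit would implement it), the GitHub-flavored blockquote syntax above needs very little parsing:

```python
import re

ALERT_RE = re.compile(r"^> \[!(NOTE|TIP|IMPORTANT|WARNING|CAUTION)\]\s*$")

def parse_github_alert(lines):
    """Return (kind, body_lines) if `lines` form a GitHub-style alert block,
    otherwise None."""
    m = ALERT_RE.match(lines[0])
    if not m:
        return None
    body = [line[2:] for line in lines[1:] if line.startswith("> ")]
    return m.group(1).lower(), body

print(parse_github_alert(["> [!NOTE]", "> This is a note alert block"]))
# ('note', ['This is a note alert block'])
```

Each recognized kind could then map onto the existing `st.info` / `st.warning` / `st.error` alert components.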
### Additional Context
- We could just reuse the [alert frontend components](https://docs.streamlit.io/develop/api-reference/status) that already exist in Streamlit
- Remark/reype plugins:
- [remark-github-beta-blockquote-admonitions](https://github.com/myl7/remark-github-beta-blockquote-admonitions)
- [remark-github-admonitions-to-directives](https://github.com/incentro-ecx/remark-github-admonitions-to-directives)
- [rehype-github-alerts](https://github.com/chrisweb/rehype-github-alerts) | open | 2025-03-12T17:00:47Z | 2025-03-12T17:03:22Z | https://github.com/streamlit/streamlit/issues/10751 | [
"type:enhancement",
"feature:markdown"
] | lukasmasuch | 1 |
Lightning-AI/pytorch-lightning | pytorch | 20,394 | MAP values are not changing | ### Bug description
I've tried to build a Faster R-CNN model based on PyTorch Lightning and added mAP50 and mAP75 metrics, just like YOLO provides out of the box.
But the mAP values are not changing. Is there something wrong with what I'm doing?
### What version are you seeing the problem on?
v2.4
### How to reproduce the bug
```python
import numpy as np
from torchmetrics.detection import IntersectionOverUnion
from torchmetrics.detection import MeanAveragePrecision
class CocoDNN(L.LightningModule):
def __init__(self):
super().__init__()
self.model = models.detection.fasterrcnn_mobilenet_v3_large_fpn(weights="DEFAULT")
self.metric = MeanAveragePrecision(iou_type="bbox",average="micro",iou_thresholds=[0.5, 0.75],extended_summary=True)
def forward(self, images, targets=None):
return self.model(images, targets)
def training_step(self, batch, batch_idx):
imgs, annot = batch
batch_losses = []
for img_b, annot_b in zip(imgs, annot):
#print(len(img_b), len(annot_b))
if len(img_b) == 0:
continue
loss_dict = self.model(img_b, annot_b)
losses = sum(loss for loss in loss_dict.values())
#print(losses)
batch_losses.append(losses)
batch_mean = torch.mean(torch.stack(batch_losses))
self.log('train_loss', batch_mean, on_step=True, on_epoch=True, prog_bar=True, logger=True)
return batch_mean
def validation_step(self, batch, batch_idx):
imgs, annot = batch
targets ,preds = [], []
for img_b, annot_b in zip(imgs, annot):
if len(img_b) == 0:
continue
if len(annot_b)> 1:
targets.extend(annot_b)
else:
targets.append(annot_b[0])
#print(f"Annotated : {len(annot_b)} - {annot_b}")
#print("")
loss_dict = self.model(img_b, annot_b)
#print(f"Predicted : {len(loss_dict)} - {loss_dict}")
if len(loss_dict)> 1:
preds.extend(loss_dict)
else:
preds.append(loss_dict[0])
#preds.append(loss_dict)
self.metric.update(preds, targets)
map_results = self.metric.compute()
#self.log_dict('logs',map_results)
#print(map_results)
#print(map_results['map_50'].float().item())
self.log('map_50', map_results['map_50'].float().item(),on_step=True, on_epoch=True, prog_bar=True, logger=True)
self.log('map_75', map_results['map_75'].float().item(),on_step=True, on_epoch=True, prog_bar=True, logger=True)
return map_results['map_75']
def configure_optimizers(self):
return optim.SGD(self.parameters(), lr=0.001, momentum=0.9, weight_decay=0.0005)
```
### Error messages and logs

<img width="1136" alt="image" src="https://github.com/user-attachments/assets/e77af71e-75c3-41eb-bb7c-809c43fef887">
### Environment
<details>
<summary>Current environment</summary>
```
#- PyTorch Lightning Version (e.g., 2.4.0): 2.4
#- PyTorch Version (e.g., 2.4):2.4
#- Python version (e.g., 3.12): 3.11
#- OS (e.g., Linux): MACOS
#- CUDA/cuDNN version:
#- GPU models and configuration: CPU
#- How you installed Lightning(`conda`, `pip`, source): PIP
```
</details>
### More info
_No response_ | closed | 2024-11-04T17:55:36Z | 2024-11-04T19:06:14Z | https://github.com/Lightning-AI/pytorch-lightning/issues/20394 | [
"bug",
"needs triage",
"ver: 2.4.x"
] | shanalikhan | 0 |
litestar-org/polyfactory | pydantic | 82 | Enhancement: Missing ConstrainedDate | We apparently are missing the `ConstrainedDate` type, and thus must add it. | closed | 2022-10-10T17:07:28Z | 2022-10-13T18:37:20Z | https://github.com/litestar-org/polyfactory/issues/82 | [
"enhancement",
"help wanted",
"good first issue"
] | Goldziher | 0 |
Evil0ctal/Douyin_TikTok_Download_API | fastapi | 568 | [Feature request] Could you add an option to download Douyin videos (non-livestream) and merge the danmaku into the video? | Hi, could you add a feature that downloads a Douyin video (non-livestream) and merges its danmaku (bullet comments) into the video file at the same time?
Some videos have really interesting danmaku, and I'd like to download and merge them into the saved video.
Thanks~
| open | 2025-02-27T11:31:20Z | 2025-02-27T11:31:20Z | https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/568 | [
"enhancement"
] | crazygod5555 | 0 |
huggingface/datasets | numpy | 6,700 | remove_columns is not in-place but the doc shows it is in-place | ### Describe the bug
The doc of `datasets` v2.17.0/v2.17.1 shows that `remove_columns` is in-place. [link](https://huggingface.co/docs/datasets/v2.17.1/en/package_reference/main_classes#datasets.DatasetDict.remove_columns)
In the text classification example of transformers v4.38.1, the columns are not removed.
https://github.com/huggingface/transformers/blob/a0857740c0e6127485c11476650314df3accc2b6/examples/pytorch/text-classification/run_classification.py#L421
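A toy illustration of the not-in-place contract (this is not the `datasets` implementation; with the real library the fix is simply reassigning, e.g. `raw_datasets = raw_datasets.remove_columns(...)`):

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Table:
    columns: tuple

    def remove_columns(self, names):
        # Returns a NEW object, like datasets.Dataset.remove_columns.
        return replace(self, columns=tuple(c for c in self.columns if c not in names))

t = Table(columns=("text", "label", "idx"))
t.remove_columns(["idx"])       # return value discarded: t is unchanged
print(t.columns)                # ('text', 'label', 'idx')
t = t.remove_columns(["idx"])   # reassignment keeps the change
print(t.columns)                # ('text', 'label')
```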
### Steps to reproduce the bug
https://github.com/huggingface/transformers/blob/a0857740c0e6127485c11476650314df3accc2b6/examples/pytorch/text-classification/run_classification.py#L421
### Expected behavior
Actually remove the columns.
### Environment info
1. datasets v2.17.0
2. transformers v4.38.1 | closed | 2024-02-28T12:36:22Z | 2024-04-02T17:15:28Z | https://github.com/huggingface/datasets/issues/6700 | [] | shelfofclub | 3 |
reiinakano/scikit-plot | scikit-learn | 9 | Error installing No module named sklearn.metrics | Hi there,
I am getting an error installing it
``` bash
pip install scikit-plot ~ 1
Collecting scikit-plot
Downloading scikit-plot-0.2.1.tar.gz
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "c:\users\arthur\.babun\cygwin\tmp\pip-build-yrgynz\scikit-plot\setup.py", line 9, in <module>
import scikitplot
File "c:\users\arthur\.babun\cygwin\tmp\pip-build-yrgynz\scikit-plot\scikitplot\__init__.py", line 5, in <module>
from scikitplot.classifiers import classifier_factory
File "c:\users\arthur\.babun\cygwin\tmp\pip-build-yrgynz\scikit-plot\scikitplot\classifiers.py", line 7, in <module>
from scikitplot import plotters
File "c:\users\arthur\.babun\cygwin\tmp\pip-build-yrgynz\scikit-plot\scikitplot\plotters.py", line 9, in <module>
from sklearn.metrics import confusion_matrix
ImportError: No module named sklearn.metrics
----------------------------------------
Command "python setup.py egg_info" failed with error code 1 in c:\users\arthur\.babun\cygwin\tmp\pip-build-yrgynz\scikit-plot\
``` | closed | 2017-02-22T02:01:07Z | 2017-02-22T04:32:11Z | https://github.com/reiinakano/scikit-plot/issues/9 | [] | ArthurZ | 5 |
autogluon/autogluon | computer-vision | 4,486 | use in pyspark | ## Description
from autogluon.tabular import TabularPredictor
can TabularPredictor use spark engine to deal with big data?
## References
| closed | 2024-09-23T10:45:40Z | 2024-09-26T18:12:28Z | https://github.com/autogluon/autogluon/issues/4486 | [
"enhancement",
"module: tabular"
] | hhk123 | 1 |
ploomber/ploomber | jupyter | 255 | Improve jupyter notebook static analysis | If any of the parameters is a dictionary, check that all of its keys also appear in the passed parameters | closed | 2020-09-18T17:02:11Z | 2021-10-11T12:38:58Z | https://github.com/ploomber/ploomber/issues/255 | [] | edublancas | 1 |
netbox-community/netbox | django | 17,643 | Device Interface "VLAN group" Filter deletes “Untagged VLAN” and “Tagged VLANs” entry. "Netbox Version V4" | ### Deployment Type
Self-hosted
### NetBox Version
v4.1.2
### Python Version
3.10
### Steps to Reproduce
Since some sub-versions of version 4, the settings of “Untagged VLAN” and “Tagged VLANs” are deleted as soon as “VLAN group” is selected again for a device interface.
I thought I had already reported this, but unfortunately not.
1. create 2 VLAN groups (VLAN Group A, VLAN Group B) with 2 VLANs each (Test A.1, Test A.2, Test B.1, Test B.2).
2. open a device interface.
3. 802.1Q Switching:
- 802.1Q Mode = Tagged
- VLAN group = VLAN Group A
- Untagged VLAN = Test A.1 (1234)
- Tagged VLANs = Test A.1 (1234)
4. change “VLAN group” to “VLAN Group B”
Result, “Untagged VLAN” and “Tagged VLANs” are emptied.

Jeremy's statement years ago was that “VLAN group” is only a filter to be able to search better in “Untagged VLAN” and “Tagged VLANs”. This is also the reason why “VLAN group” is not saved.
The fact that “VLAN group” now empties the entries “Untagged VLAN” and “Tagged VLANs” is not good.
### Expected Behavior
“Untagged VLAN” and ‘Tagged VLANs’ remain unchanged when ‘VLAN group’ is changed.
### Observed Behavior
VLAN group” empties ‘Untagged VLAN’ and ‘Tagged VLANs’. | closed | 2024-09-30T07:14:18Z | 2024-12-30T03:06:55Z | https://github.com/netbox-community/netbox/issues/17643 | [] | LHBL2003 | 2 |
axnsan12/drf-yasg | rest-api | 230 | Support for examples | Wondering if there's any plan for supporting examples in serializer Fields? This is pretty important if you intend people to use your API effectively, and can dramatically increase an API's ease of use.
An easy place to implement this would be by using the `initial` field.
I recognize that this is fairly straightforward to implement by subclassing Field inspector, but this seems like something important enough to warrant built-in support. | open | 2018-10-15T21:20:40Z | 2025-03-07T12:16:42Z | https://github.com/axnsan12/drf-yasg/issues/230 | [
"triage"
] | agethecoolguy | 4 |
CorentinJ/Real-Time-Voice-Cloning | tensorflow | 274 | Import Helper Issue | Traceback (most recent call last):
File "demo_cli.py", line 3, in <module>
from synthesizer.inference import Synthesizer
File "C:\Users\kille\Desktop\Real-Time-Voice-Cloning-master\synthesizer\inference.py", line 1, in <module>
from synthesizer.tacotron2 import Tacotron2
File "C:\Users\kille\Desktop\Real-Time-Voice-Cloning-master\synthesizer\tacotron2.py", line 3, in <module>
from synthesizer.models import create_model
File "C:\Users\kille\Desktop\Real-Time-Voice-Cloning-master\synthesizer\models\__init__.py", line 1, in <module>
from .tacotron import Tacotron
File "C:\Users\kille\Desktop\Real-Time-Voice-Cloning-master\synthesizer\models\tacotron.py", line 4, in <module>
from synthesizer.models.helpers import TacoTrainingHelper, TacoTestHelper
File "C:\Users\kille\Desktop\Real-Time-Voice-Cloning-master\synthesizer\models\helpers.py", line 3, in <module>
from tensorflow.contrib.seq2seq import Helper
**ImportError: cannot import name 'Helper' from 'tensorflow.contrib.seq2seq' (unknown location)**
Not too sure what the issue is and why it can't locate it. Any help?
| closed | 2020-02-01T06:49:18Z | 2020-07-04T23:00:23Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/274 | [] | ThatOneGuy2259 | 1 |
coqui-ai/TTS | python | 3,439 | Docker images running is not well now? [Bug] | ### Describe the bug
When I followed [Using premade images](https://docs.coqui.ai/en/dev/docker_images.html) in the Docker images docs and ran the Docker container to list models, I got the error below. It seems something is wrong with thread limits in Docker? When I tried to fix this, other problems occurred, such as `RuntimeError: can't start new thread` or `Sub-process /usr/bin/dpkg returned an error code (2)`. Please give me some help to solve them. How do other users use this feature?
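A frequently suggested mitigation (my assumption, not verified against this image) is to cap the BLAS/OpenMP thread pools before the numeric libraries initialize, either with `docker run -e OPENBLAS_NUM_THREADS=1 ...` or in Python before any `import numpy`:

```python
import os

# Cap BLAS/OpenMP thread pools; this must run before numpy/OpenBLAS are
# imported for the first time, since the pools are sized at load time.
for var in ("OPENBLAS_NUM_THREADS", "OMP_NUM_THREADS"):
    os.environ[var] = "1"

print(os.environ["OPENBLAS_NUM_THREADS"])  # 1
```

The `Operation not permitted` lines may also point at container `ulimit`/seccomp settings (again an assumption), e.g. raising the process limit with `docker run --ulimit nproc=65535 ...`.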
```
root@3a4b872dba7f:~# python3 TTS/server/server.py --list_models
OpenBLAS blas_thread_init: pthread_create failed for thread 1 of 48: Operation not permitted
OpenBLAS blas_thread_init: RLIMIT_NPROC -1 current, -1 max
OpenBLAS blas_thread_init: pthread_create failed for thread 2 of 48: Operation not permitted
OpenBLAS blas_thread_init: RLIMIT_NPROC -1 current, -1 max
OpenBLAS blas_thread_init: pthread_create failed for thread 3 of 48: Operation not permitted
OpenBLAS blas_thread_init: RLIMIT_NPROC -1 current, -1 max
OpenBLAS blas_thread_init: pthread_create failed for thread 4 of 48: Operation not permitted
OpenBLAS blas_thread_init: RLIMIT_NPROC -1 current, -1 max
OpenBLAS blas_thread_init: pthread_create failed for thread 5 of 48: Operation not permitted
OpenBLAS blas_thread_init: RLIMIT_NPROC -1 current, -1 max
OpenBLAS blas_thread_init: pthread_create failed for thread 6 of 48: Operation not permitted
OpenBLAS blas_thread_init: RLIMIT_NPROC -1 current, -1 max
OpenBLAS blas_thread_init: pthread_create failed for thread 7 of 48: Operation not permitted
OpenBLAS blas_thread_init: RLIMIT_NPROC -1 current, -1 max
OpenBLAS blas_thread_init: pthread_create failed for thread 8 of 48: Operation not permitted
OpenBLAS blas_thread_init: RLIMIT_NPROC -1 current, -1 max
OpenBLAS blas_thread_init: pthread_create failed for thread 9 of 48: Operation not permitted
OpenBLAS blas_thread_init: RLIMIT_NPROC -1 current, -1 max
OpenBLAS blas_thread_init: pthread_create failed for thread 10 of 48: Operation not permitted
OpenBLAS blas_thread_init: RLIMIT_NPROC -1 current, -1 max
OpenBLAS blas_thread_init: pthread_create failed for thread 11 of 48: Operation not permitted
OpenBLAS blas_thread_init: RLIMIT_NPROC -1 current, -1 max
OpenBLAS blas_thread_init: pthread_create failed for thread 12 of 48: Operation not permitted
OpenBLAS blas_thread_init: RLIMIT_NPROC -1 current, -1 max
OpenBLAS blas_thread_init: pthread_create failed for thread 13 of 48: Operation not permitted
OpenBLAS blas_thread_init: RLIMIT_NPROC -1 current, -1 max
OpenBLAS blas_thread_init: pthread_create failed for thread 14 of 48: Operation not permitted
OpenBLAS blas_thread_init: RLIMIT_NPROC -1 current, -1 max
OpenBLAS blas_thread_init: pthread_create failed for thread 15 of 48: Operation not permitted
OpenBLAS blas_thread_init: RLIMIT_NPROC -1 current, -1 max
OpenBLAS blas_thread_init: pthread_create failed for thread 16 of 48: Operation not permitted
OpenBLAS blas_thread_init: RLIMIT_NPROC -1 current, -1 max
OpenBLAS blas_thread_init: pthread_create failed for thread 17 of 48: Operation not permitted
OpenBLAS blas_thread_init: RLIMIT_NPROC -1 current, -1 max
OpenBLAS blas_thread_init: pthread_create failed for thread 18 of 48: Operation not permitted
OpenBLAS blas_thread_init: RLIMIT_NPROC -1 current, -1 max
OpenBLAS blas_thread_init: pthread_create failed for thread 19 of 48: Operation not permitted
OpenBLAS blas_thread_init: RLIMIT_NPROC -1 current, -1 max
OpenBLAS blas_thread_init: pthread_create failed for thread 20 of 48: Operation not permitted
OpenBLAS blas_thread_init: RLIMIT_NPROC -1 current, -1 max
OpenBLAS blas_thread_init: pthread_create failed for thread 21 of 48: Operation not permitted
OpenBLAS blas_thread_init: RLIMIT_NPROC -1 current, -1 max
OpenBLAS blas_thread_init: pthread_create failed for thread 22 of 48: Operation not permitted
OpenBLAS blas_thread_init: RLIMIT_NPROC -1 current, -1 max
OpenBLAS blas_thread_init: pthread_create failed for thread 23 of 48: Operation not permitted
OpenBLAS blas_thread_init: RLIMIT_NPROC -1 current, -1 max
OpenBLAS blas_thread_init: pthread_create failed for thread 24 of 48: Operation not permitted
OpenBLAS blas_thread_init: RLIMIT_NPROC -1 current, -1 max
OpenBLAS blas_thread_init: pthread_create failed for thread 25 of 48: Operation not permitted
[... the same "pthread_create failed ... Operation not permitted" / "RLIMIT_NPROC -1 current, -1 max" pair repeats for threads 26-47, and then again for threads 1-47 ...]
OpenBLAS blas_thread_init: RLIMIT_NPROC -1 current, -1 max
Traceback (most recent call last):
File "/root/TTS/server/server.py", line 16, in <module>
from TTS.utils.synthesizer import Synthesizer
File "/root/TTS/utils/synthesizer.py", line 11, in <module>
from TTS.tts.configs.vits_config import VitsConfig
File "/root/TTS/tts/configs/vits_config.py", line 5, in <module>
from TTS.tts.models.vits import VitsArgs, VitsAudioConfig
File "/root/TTS/tts/models/vits.py", line 12, in <module>
from librosa.filters import mel as librosa_mel_fn
File "/usr/local/lib/python3.10/dist-packages/librosa/filters.py", line 49, in <module>
import scipy.signal
File "/usr/local/lib/python3.10/dist-packages/scipy/signal/__init__.py", line 311, in <module>
from . import _sigtools, windows
File "/usr/local/lib/python3.10/dist-packages/scipy/signal/windows/__init__.py", line 42, in <module>
from ._windows import *
File "/usr/local/lib/python3.10/dist-packages/scipy/signal/windows/_windows.py", line 7, in <module>
from scipy import linalg, special, fft as sp_fft
File "<frozen importlib._bootstrap>", line 1075, in _handle_fromlist
File "/usr/local/lib/python3.10/dist-packages/scipy/__init__.py", line 189, in __getattr__
return _importlib.import_module(f'scipy.{name}')
File "/usr/lib/python3.10/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "/usr/local/lib/python3.10/dist-packages/scipy/linalg/__init__.py", line 206, in <module>
from ._misc import *
File "/usr/local/lib/python3.10/dist-packages/scipy/linalg/_misc.py", line 3, in <module>
from .blas import get_blas_funcs
File "/usr/local/lib/python3.10/dist-packages/scipy/linalg/blas.py", line 213, in <module>
from scipy.linalg import _fblas
KeyboardInterrupt
```
### To Reproduce
1. Install and run docker image following the [Using premade images In Docker images docs](https://docs.coqui.ai/en/dev/docker_images.html).
2. Then execute the command line `python3 TTS/server/server.py --list_models`
### Expected behavior
_No response_
### Logs
_No response_
### Environment
```shell
root@19160f312c1e:~# python3 /data1/huxh/collect_env_info.py
OpenBLAS blas_thread_init: pthread_create failed for thread 1 of 48: Operation not permitted
OpenBLAS blas_thread_init: RLIMIT_NPROC -1 current, -1 max
[... the same pair repeats for threads 2-47 ...]
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/numpy/core/__init__.py", line 23, in <module>
from . import multiarray
File "/usr/local/lib/python3.10/dist-packages/numpy/core/multiarray.py", line 10, in <module>
from . import overrides
File "/usr/local/lib/python3.10/dist-packages/numpy/core/overrides.py", line 6, in <module>
from numpy.core._multiarray_umath import (
ImportError: PyCapsule_Import could not import module "datetime"
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/data1/huxh/collect_env_info.py", line 6, in <module>
import numpy
File "/usr/local/lib/python3.10/dist-packages/numpy/__init__.py", line 144, in <module>
from . import core
File "/usr/local/lib/python3.10/dist-packages/numpy/core/__init__.py", line 49, in <module>
raise ImportError(msg)
ImportError:
IMPORTANT: PLEASE READ THIS FOR ADVICE ON HOW TO SOLVE THIS ISSUE!
Importing the numpy C-extensions failed. This error can happen for
many reasons, often due to issues with your setup or how NumPy was
installed.
We have compiled some common reasons and troubleshooting tips at:
https://numpy.org/devdocs/user/troubleshooting-importerror.html
Please note and check the following:
* The Python version is: Python3.10 from "/usr/bin/python3"
* The NumPy version is: "1.22.0"
and make sure that they are the versions you expect.
Please carefully study the documentation linked above for further help.
Original error was: PyCapsule_Import could not import module "datetime"
```
### Additional context
_No response_ | closed | 2023-12-18T06:58:54Z | 2024-02-04T23:03:51Z | https://github.com/coqui-ai/TTS/issues/3439 | [
"bug",
"wontfix"
] | SunAriesCN | 1 |
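For the coqui-ai/TTS report above: the repeated `pthread_create failed ... Operation not permitted` lines indicate the container hit its process limit while OpenBLAS tried to spawn one worker per core. A minimal mitigation sketch (an assumption about this setup, not a confirmed fix) is to cap the BLAS thread pool via environment variables before NumPy/torch loads:

```python
import os

# Must run before numpy/torch (and therefore OpenBLAS) is imported: cap the
# BLAS thread pool so it does not try to spawn one worker per core inside a
# container whose process limit forbids it.
os.environ["OPENBLAS_NUM_THREADS"] = "1"
os.environ["OMP_NUM_THREADS"] = "1"

# Alternative (assumption about how the container is launched): raise the
# limit instead, e.g. `docker run --pids-limit=-1 --ulimit nproc=65535 ...`.
```

Setting the same variables with `docker run -e ...` or in the Dockerfile has the same effect.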
Urinx/WeixinBot | api | 296 | aaaa | aa | closed | 2022-10-21T14:21:49Z | 2022-10-21T14:27:37Z | https://github.com/Urinx/WeixinBot/issues/296 | [] | gannicusleon | 0 |
SYSTRAN/faster-whisper | deep-learning | 1,261 | vad_filter = True causes premature exit | "segments, info = model.transcribe(video_path, beam_size=5, vad_filter=True)"
1) With vad_filter on, the program stops early and generates just one line of subtitles instead of going through the whole video. (Below is an example, and it is the full output of a 5-minute music video.)
____________________________________________________
1
00:00:43,920 --> 00:01:10,939
You
____________________________________________________
2) Removing the vad_filter parameter, on the other hand, seems to produce a lot of meaningless output in the generated subtitles, or to miss some parts. (Below is an example; the actual vocals start around 00:00:40, and they are nothing like "We'll be right back".)
____________________________________________________
1
00:00:00,000 --> 00:00:29,980
We'll be right back.
2
00:00:30,000 --> 00:00:59,980
We'll be right back.
3
00:01:00,000 --> 00:01:04,420
I want to feel you in my bones.
_____________________________________________________
how can I make it right? | open | 2025-03-11T14:57:07Z | 2025-03-12T14:35:32Z | https://github.com/SYSTRAN/faster-whisper/issues/1261 | [] | Mark111112 | 1 |
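For the report above: a hedged sketch of loosening the VAD instead of disabling it. The keys follow faster-whisper's `vad_parameters` dict, but the numeric values are illustrative guesses rather than tuned recommendations, and the model object is injected so the plumbing can be checked without the library installed:

```python
def transcribe_with_tuned_vad(model, audio_path):
    """Run transcription with a less aggressive VAD.

    `model` is expected to expose faster-whisper's `transcribe()` signature;
    the vad_parameters values below are illustrative, not tuned numbers.
    """
    segments, info = model.transcribe(
        audio_path,
        beam_size=5,
        vad_filter=True,
        vad_parameters={
            "threshold": 0.35,              # lower -> more audio treated as speech
            "min_silence_duration_ms": 500,  # shorter -> fewer segments merged away
        },
    )
    # segments is a lazy generator; materialize it so nothing is silently skipped.
    return list(segments), info
```

If loosening the VAD still drops the vocals, comparing against `vad_filter=False` on a short clip is a cheap way to isolate whether the VAD or the decoder is at fault.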
docarray/docarray | pydantic | 1,309 | remove validation of url file extensions | Currently, our `Url` types like `ImageUrl`, `VideoUrl`, etc. perform validation based on the file extension: only file extensions of that modality are allowed.
However, that is problematic since we cannot catch all edge cases. For example, http://lh6.ggpht.com/-IvRtNLNcG8o/TpFyrudaT6I/AAAAAAAAM6o/_11MuAAKalQ/IMG_3422.JPG?imgmax=800 is a completely valid image URL, but currently our validation will fail it, because the extension is not "valid".
We could fix this edge case, but the general problem remains: We are not confident that we can handle all edge cases for all data modalities, and failing validation where it should not can be a serious blocker for users.
So, we decided to remove file extension validation for all of our `Url` types. | closed | 2023-03-29T12:37:55Z | 2023-04-03T13:46:40Z | https://github.com/docarray/docarray/issues/1309 | [
"DocArray v2",
"good-first-issue"
] | JohannesMessner | 2 |
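The edge case above can be made concrete with the stdlib, using the exact URL from the issue. Even the "robust" variant still fails for extensionless image URLs, which supports the decision to drop the extension check entirely:

```python
import os
from urllib.parse import urlparse

url = ("http://lh6.ggpht.com/-IvRtNLNcG8o/TpFyrudaT6I/AAAAAAAAM6o/"
       "_11MuAAKalQ/IMG_3422.JPG?imgmax=800")

# A naive extension check on the raw URL drags the query string along ...
naive = os.path.splitext(url)[1]
# ... while parsing the path component first recovers a clean extension.
clean = os.path.splitext(urlparse(url).path)[1]
```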
koxudaxi/datamodel-code-generator | pydantic | 2,051 | Relative paths in url '$ref's are added to local file path instead of the url. | **Describe the bug**
If file A refers to file B via URL, and file B refers to file C using a relative path, datamodel-codegen searches the local file system (relative to file A) for file C instead of resolving the relative path against the URL of file B.
**To Reproduce**
I cloned
https://github.com/ga4gh-beacon/beacon-v2/tree/main
and tried converting the json schema file (file A in the description above)
https://github.com/ga4gh-beacon/beacon-v2/blob/main/models/json/beacon-v2-default-model/analyses/defaultSchema.json
to pydantic.
This file refers to file B using a URL like this:
```
"$ref": "https://raw.githubusercontent.com/ga4gh-beacon/beacon-v2/main/framework/json/common/beaconCommonComponents.json#/definitions/Info"
```
File B refers to file C using relative path like this:
```
"$ref": "./ontologyTerm.json",
```
File C is available at
https://raw.githubusercontent.com/ga4gh-beacon/beacon-v2/main/framework/json/common/ontologyTerm.json
but instead of finding it there (by resolving the path against the URL of file B), I get the error
```
FileNotFoundError: [Errno 2] No such file or directory: '/Users/sondre/beacon-v2-main/models/json/beacon-v2-default-model/analyses/ontologyTerm.json'
```
i.e. the relative path is added to the path of file A.
Used commandline:
```
$ datamodel-codegen --input defaultSchema.json
```
**Expected behavior**
I think it should resolve the relative path against the location of the file that contains it
**Version:**
- Python version: 3.10.0
- datamodel-code-generator version: 0.25.8
**Additional context**
I forked and made a workaround, but i figured i should still let you know
| open | 2024-08-01T17:53:27Z | 2024-11-24T14:54:51Z | https://github.com/koxudaxi/datamodel-code-generator/issues/2051 | [
"bug"
] | sondsorb | 0 |
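The resolution the reporter expects is exactly what `urllib.parse.urljoin` does, using the URLs quoted in the report (file B's URL plus its relative `$ref`):

```python
from urllib.parse import urljoin

# File B's URL and the relative $ref it contains, both taken from the report:
file_b = ("https://raw.githubusercontent.com/ga4gh-beacon/beacon-v2/"
          "main/framework/json/common/beaconCommonComponents.json")
resolved = urljoin(file_b, "./ontologyTerm.json")
# resolved is the working URL for file C quoted in the report:
# https://raw.githubusercontent.com/ga4gh-beacon/beacon-v2/main/framework/json/common/ontologyTerm.json
```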
AutoGPTQ/AutoGPTQ | nlp | 190 | CUDA is not working on my GPU on Linux Ubuntu | I tried to install the binding on a Linux Ubuntu PC with a GPU; everything works perfectly until I try to run the model. Then I get this warning, and the model is definitely not using the GPU:
WARNING:auto_gptq.nn_modules.qlinear_old:CUDA extension not installed.
I used the prebuilt wheel:
[auto_gptq-0.2.2+cu117-cp310-cp310-linux_x86_64.whl](https://github.com/PanQiWei/AutoGPTQ/releases/download/v0.2.2/auto_gptq-0.2.2+cu117-cp310-cp310-linux_x86_64.whl)
Do you have an idea why I get this? | open | 2023-07-12T15:30:19Z | 2023-07-25T11:22:53Z | https://github.com/AutoGPTQ/AutoGPTQ/issues/190 | [
"bug"
] | ParisNeo | 9 |
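For the warning above: a quick way to check whether a compiled kernel module is importable at all. The module names mentioned in the docstring are an assumption about auto-gptq 0.2.x's layout, not something confirmed by this report:

```python
import importlib.util

def extension_available(module_name: str) -> bool:
    """True if the import system can locate the given (extension) module.

    For auto-gptq 0.2.x the CUDA kernels live in modules named roughly
    `autogptq_cuda_64` / `autogptq_cuda_256` (an assumption about that era's
    source tree); if find_spec() returns None for them, the library prints
    "CUDA extension not installed" and falls back to the slow path.
    """
    return importlib.util.find_spec(module_name) is not None
```

Running this in the same environment as the server quickly tells apart "wheel built without kernels" from "kernels built against the wrong CUDA/python ABI".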
iterative/dvc | machine-learning | 9,832 | `dvc status --remote` | Hello,
my team and I are currently using a DVC pipeline step that checks whether all data has been pushed to the remote storage. This is currently done by pulling all data and then running `dvc status -q`. Pulling all data is slow, especially if you are storing many images. A faster way would be to only download the `.dir` cache files and just verify the existence of the respective hashes on the remote server, avoiding the time spent downloading all the files.
Ideally `dvc status --remote` or `dvc status -r` would do such a check for me. | closed | 2023-08-10T14:00:57Z | 2023-08-11T14:39:23Z | https://github.com/iterative/dvc/issues/9832 | [
"awaiting response"
] | helpmefindaname | 3 |
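For the feature request above: `dvc status --cloud` already compares the local cache against the remote without pulling file contents, which is close to the requested `dvc status --remote` (assumption: a DVC release where that flag is available). A sketch with an injectable runner so it can be exercised without DVC installed:

```python
import subprocess

def remote_is_complete(runner=subprocess.run):
    """True when every tracked hash already exists on the DVC remote.

    Relies on `dvc status --cloud`, which checks hash existence on the remote
    instead of downloading data (flag availability is an assumption to verify
    against the installed DVC version). `runner` is injectable for testing.
    """
    result = runner(["dvc", "status", "--cloud", "--quiet"])
    return result.returncode == 0
```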
jupyter/nbgrader | jupyter | 1,550 | Community meetup? | Hi all.
Would anyone be interested in some sort of community event, to get a picture of all the different ways nbgrader is used and developed now? I have a feeling that a lot has been done in the last few years, but it has been local, without as much sharing as there could be.
I don't think it should be anything big, but this is my initial proposal:
* Online (we need to make the first one as accessible as possible for many people)
* One or two half days
* Two main program ideas:
* "cool things and problems" presentation: everyone presents a few cool things they have done with nbgrader lately, and a few problems they are working on
* Self-organized discussion (based on the first part, we will see many problems and solutions to match up)
* Most of the program is made by people writing suggestions in wiki pages/hackmd and then self-organization close to the event itself ("unconference")
I hope the outcomes would include seeing what each other are up to, co-work on making some of our developments reusable, and a plan for future community meetups and development.
I happened to chat with @gollington, and we could get some sort of Jupyter support for this. | closed | 2022-03-16T09:56:16Z | 2022-07-13T15:14:16Z | https://github.com/jupyter/nbgrader/issues/1550 | [] | rkdarst | 6 |
holoviz/panel | matplotlib | 7,707 | For IntSlider, the increment is unexpected when min=0, max=590 | #### ALL software version info
8.1.5
See below.

When I move the slider to the next step, it becomes 31.

This is not expected. It should be 29.
A similar issue happens with FloatSlider.
| closed | 2025-02-13T08:04:25Z | 2025-02-13T08:34:12Z | https://github.com/holoviz/panel/issues/7707 | [] | morganh-nv | 2 |
piskvorky/gensim | nlp | 3,457 | Replace copy of FuzzyTM in gensim/models/flsamodel.py with dep | The [gensim/models/flsamodel.py](https://github.com/RaRe-Technologies/gensim/blob/develop/gensim/models/flsamodel.py)
file is an outdated, modified copy of [FuzzyTM/FuzzyTM.py](https://github.com/ERijck/FuzzyTM/blob/master/FuzzyTM/FuzzyTM.py)
from [FuzzyTM](https://github.com/ERijck/FuzzyTM/) by @ERijck.
In addition, the copy in gensim does not attribute the original author and does not state the license of the file as GNU GPLv2.
Please replace flsamodel.py with a wrapper that depends on and uses FuzzyTM, or at minimum, correctly state the origin, copyright holder and license of the file.
If any changes are needed to FuzzyTM, please send a pull request for them and then depend on the release that adds the changes.
| closed | 2023-03-13T08:07:24Z | 2023-03-13T10:04:21Z | https://github.com/piskvorky/gensim/issues/3457 | [] | pabs3 | 3 |
pydantic/pydantic | pydantic | 11,066 | (🐞) pickle fail with explicit type argument | ### Initial Checks
- [X] I confirm that I'm using Pydantic V2
### Description
see example code
### Example Code
```Python
import pickle

from pydantic import BaseModel


class M[T](BaseModel):
    a: int


def f[T]():
    pickle.dumps(list[T]())  # fine
    pickle.dumps(M(a=1))  # fine
    pickle.dumps(M[T](a=1))  # _pickle.PicklingError: Can't pickle <class '__main__.M[TypeVar]'>: attribute lookup M[TypeVar] on __main__ failed


f()
```
### Python, Pydantic & OS Version
```Text
pydantic version: 2.10.3
pydantic-core version: 2.27.1
pydantic-core build: profile=release pgo=false
install path: C:\Users\AMONGUS\projects\test-python\.venv\Lib\site-packages\pydantic
python version: 3.12.7 (tags/v3.12.7:0b05ead, Oct 1 2024, 03:06:41) [MSC v.1941 64 bit (AMD64)]
platform: Windows-10-10.0.19045-SP0
related packages: typing_extensions-4.12.2
commit: unknown
```
| closed | 2024-12-09T01:08:58Z | 2024-12-09T13:15:57Z | https://github.com/pydantic/pydantic/issues/11066 | [
"bug V2",
"pending"
] | KotlinIsland | 3 |
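The failure in the report above is pickle's generic limitation with dynamically created classes, not something pydantic-specific: pickle stores an instance's class by module-plus-qualname lookup, and the `M[T]` class is built at runtime under a name that cannot be looked up. A pydantic-free reproduction of the same failure mode:

```python
import pickle

def can_pickle(obj):
    try:
        pickle.dumps(obj)
        return True
    except (pickle.PicklingError, AttributeError):
        return False

# pickle serializes an instance's class by reference (module + qualname).
# A class object whose qualname does not resolve back to itself -- which is
# what happens for the dynamically built `M[T]` -- fails exactly like the
# report: "Can't pickle <class ...>: attribute lookup ... failed".
Orphan = type("NotReallyHere", (), {})
```

A practical workaround (an assumption, to be checked against pydantic's docs) is to serialize the data rather than the instance, e.g. via `model_dump()` / `model_dump_json()`, and rebuild the model on load.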
CorentinJ/Real-Time-Voice-Cloning | tensorflow | 389 | Training with CPU: AttributeError: module 'torch' has no attribute 'cpu' | https://github.com/CorentinJ/Real-Time-Voice-Cloning/blob/1e1687743a0c2b1f8027076ffc3651a61bbc8b66/encoder/train.py#L14
The `torch.cpu` reference is giving me "AttributeError: module 'torch' has no attribute 'cpu'"
_Originally posted by @JokerYan in https://github.com/CorentinJ/Real-Time-Voice-Cloning/pull/366#issuecomment-650927212_ | closed | 2020-06-29T22:04:49Z | 2020-06-30T15:11:29Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/389 | [] | ghost | 2 |
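For the traceback above: `torch.cpu` is not an attribute in the torch builds this report was filed against; the API spelling is `torch.device("cpu")`. A sketch with the torch module injected so it runs without torch installed:

```python
def pick_device(torch_mod):
    """Return a device for training.

    The AttributeError above comes from referencing `torch.cpu`, which does
    not exist as a module attribute in the torch versions from that report;
    `torch.device("cpu")` is the spelling the API provides. `torch_mod` is
    injected so this sketch can be exercised without torch installed.
    """
    name = "cuda" if torch_mod.cuda.is_available() else "cpu"
    return torch_mod.device(name)
```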
wkentaro/labelme | computer-vision | 1,041 | Export / Import labels.txt on Windows standalone version | Hi, I am running the standalone version of labelme, and until now it has been working like a charm. The thing is I can't run it from cmd to specify the labels.txt file.
So any time I open a folder that is already in the tagging process, the labels list is empty, and I have to go through some already-tagged images before the list populates with the labels found.
The thing here is, is there any way to export this labels.txt file?
And then import it in the Windows standalone version?

If this is not possible, how should I make this file for just these labels:
Caracter
Tag
Bomba
Roller
Wildsytle
3D
Moniker
S_Tren
Thanks for the help.
| closed | 2022-06-24T22:41:04Z | 2022-06-25T04:05:00Z | https://github.com/wkentaro/labelme/issues/1041 | [] | abundis-rmn2 | 1 |
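For the question above: the labels file is just one label per line, so it can be written by hand or with a few lines of Python (the label list below is copied verbatim from the issue). Whether the Windows standalone build accepts `--labels` like the pip-installed CLI is an assumption to verify:

```python
from pathlib import Path

# One label per line, copied verbatim from the issue above.
LABELS = ["Caracter", "Tag", "Bomba", "Roller", "Wildsytle", "3D", "Moniker", "S_Tren"]

def write_labels_file(path="labels.txt"):
    """Write a labelme-style labels file: one label per line, trailing newline."""
    Path(path).write_text("\n".join(LABELS) + "\n", encoding="utf-8")
    return path

# Then point the app at it (assumption: the Windows standalone build accepts
# the same flags as the pip install):
#   labelme.exe <image_dir> --labels labels.txt
```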
autogluon/autogluon | scikit-learn | 4,164 | [tabular] Kaggle GPU leads to exception with `best_quality` | An exception is raised when using a GPU Kaggle notebook and specifying the `best_quality` preset.
[User Report](https://www.kaggle.com/competitions/playground-series-s4e5/discussion/499495#2789980) using a Kaggle P100 GPU Notebook.
This is probably due to dynamic stacking's sub-fit process in Ray. It might also be due to any usage of Ray. We should test both on Kaggle and locally.
Error log:
```
Starting holdout-based sub-fit for dynamic stacking. Context path is: AutogluonModels/ag-20240501_091619/ds_sub_fit/sub_fit_ho.
/opt/conda/lib/python3.10/site-packages/requests/__init__.py:109: RequestsDependencyWarning: urllib3 (2.1.0) or chardet (None)/charset_normalizer (3.3.2) doesn't match a supported version!
warnings.warn(
2024-05-01 09:16:19,705 INFO util.py:124 -- Outdated packages:
ipywidgets==7.7.1 found, needs ipywidgets>=8
Run `pip install -U ipywidgets`, then restart the notebook server for rich notebook output.
2024-05-01 09:16:21,761 ERROR services.py:1330 -- Failed to start the dashboard , return code -11
2024-05-01 09:16:21,763 ERROR services.py:1355 -- Error should be written to 'dashboard.log' or 'dashboard.err'. We are printing the last 20 lines for you. See 'https://docs.ray.io/en/master/ray-observability/ray-logging.html#logging-directory-structure' to find where the log file is.
2024-05-01 09:16:21,765 ERROR services.py:1399 --
The last 20 lines of /tmp/ray/session_2024-05-01_09-16-19_717749_24/logs/dashboard.log (it contains the error message from the dashboard):
2024-05-01 09:16:21,701 INFO head.py:254 -- Starting dashboard metrics server on port 44227
``` | closed | 2024-05-03T00:29:39Z | 2024-06-13T20:02:26Z | https://github.com/autogluon/autogluon/issues/4164 | [
"bug",
"module: tabular",
"env: kaggle",
"resource: GPU",
"priority: 0"
] | Innixma | 2 |
keras-team/keras | data-science | 20,485 | Support custom cell/RNN layers with extension types | ### Issue type
Feature Request
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
source
### TensorFlow version
2.15
### Custom code
Yes
### OS platform and distribution
Windows 11
### Mobile device
_No response_
### Python version
3.11
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
I want to write a Keras-like model with [keras.layers.RNN](https://www.tensorflow.org/guide/keras/working_with_rnns#built-in_rnn_layers_a_simple_example) that supports [Extension types](https://www.tensorflow.org/guide/extension_type), both for inputs and states.
### Standalone code to reproduce the issue
```python
import keras
import tensorflow as tf
class MaskedTensor(tf.experimental.ExtensionType):
"""A tensor paired with a boolean mask, indicating which values are valid."""
values: tf.Tensor
mask: tf.Tensor
shape: tf.TensorShape
dtype: tf.DType
def __init__(self, values, mask):
self.values = values
self.mask = mask
self.shape = values.shape
self.dtype = values.dtype
@tf.experimental.dispatch_for_api(tf.compat.v1.transpose)
def transpose(a: MaskedTensor, perm=None, name="transpose", conjugate=False):
values = tf.transpose(a.values, perm, conjugate, name)
mask = tf.transpose(a.mask, perm, conjugate, name)
return MaskedTensor(values, mask)
@tf.experimental.dispatch_for_api(tf.shape)
def shape(input: MaskedTensor, out_type=None, name=None):
return tf.shape(input.values, out_type, name)
@tf.experimental.dispatch_for_api(tf.unstack)
def unstack(value: MaskedTensor, num=None, axis=0, name="unstack"):
values = tf.unstack(value.values, num, axis, name)
mask = tf.unstack(value.mask, num, axis, name)
return [MaskedTensor(x, m) for x, m in zip(values, mask)]
@keras.saving.register_keras_serializable()
class Cell(tf.keras.layers.Layer):
@property
def state_size(self):
return tf.TensorShape([5])
def call(self, inputs, states):
assert isinstance(inputs, MaskedTensor)
assert isinstance(states, MaskedTensor)
return inputs, states
def get_initial_state(self, inputs=None, batch_size=None, dtype=None):
return MaskedTensor(tf.zeros((batch_size, 5)), tf.ones((batch_size, 5), tf.bool))
if __name__ == "__main__":
input_spec = MaskedTensor.Spec(
values=tf.TensorSpec(shape=[2, 10, 5]),
mask=tf.TensorSpec(shape=[2, 10, 5]),
shape=[2, 10, 5],
dtype=tf.float32,
)
x = tf.keras.layers.Input(type_spec=input_spec)
y = tf.keras.layers.RNN(Cell(), return_sequences=True, stateful=True, unroll=True)(x)
model = tf.keras.models.Model(x, y)
model.summary()
```
### Relevant log output
```shell
File "D:\projects\testing\proof_of_concept.py", line 62, in <module>
y = tf.keras.layers.RNN(Cell(), return_sequences=True, stateful=True, unroll=True)(x)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\.envs\tensorflow\Lib\site-packages\keras\src\layers\rnn\base_rnn.py", line 557, in __call__
return super().__call__(inputs, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\.envs\tensorflow\Lib\site-packages\keras\src\utils\traceback_utils.py", line 70, in error_handler
raise e.with_traceback(filtered_tb) from None
File "D:\.envs\tensorflow\Lib\site-packages\tensorflow\python\framework\constant_op.py", line 103, in convert_to_eager_tensor
return ops.EagerTensor(value, ctx.device_name, dtype)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ValueError: Attempt to convert a value (MaskedTensor(values=<tf.Tensor: shape=(2, 5), dtype=float32, numpy=
array([[0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0.]], dtype=float32)>, mask=<tf.Tensor: shape=(2, 5), dtype=bool, numpy=
array([[ True, True, True, True, True],
[ True, True, True, True, True]])>, shape=TensorShape([2, 5]), dtype=tf.float32)) with an unsupported type (<class '__main__.MaskedTensor'>) to a Tensor.
```
| open | 2024-11-12T09:13:58Z | 2024-11-19T21:31:41Z | https://github.com/keras-team/keras/issues/20485 | [
"type:Bug"
] | Johansmm | 4 |
plotly/dash-table | dash | 633 | Feature Request: Blink animation for cell value change | Is there any way to update a table such that the values that have changed from the previous state of the table "blink" in a certain colour?
| open | 2019-10-30T11:51:11Z | 2020-12-02T09:24:38Z | https://github.com/plotly/dash-table/issues/633 | [] | vab2048 | 1 |
serengil/deepface | deep-learning | 527 | Does Ensemble model use only 2 models?! | While I was using your code, I noticed that the lightgbm tree only shows 2 of the models in the ensemble method.
Here's what I tried to do:
```python
from deepface import DeepFace
import lightgbm as lgb
import matplotlib.pyplot as plt  # needed to actually display the plots when run as a script

home = DeepFace.functions.get_deepface_home()
ensemble_model_path = home + '/.deepface/weights/face-recognition-ensemble-model.txt'
deepface_ensemble = lgb.Booster(model_file=ensemble_model_path)

lgb.plot_tree(deepface_ensemble, orientation='vertical',
              show_info=['split_gain', 'internal_value', 'internal_count', 'internal_weight',
                         'leaf_count', 'leaf_weight', 'data_percentage'])
lgb.plot_importance(deepface_ensemble)
plt.show()
```
This gives the following tree and importance plots


Does this mean that the tree only uses VGG_Face_cosine & OpenFace_cosine to get the score values?
Also, how can I get score values instead of distances when using only a single model? _(maybe a feature to add later?!)_
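For context on what those two features are: the `*_cosine` inputs the booster consumes are plain cosine distances between embedding vectors, and a similarity "score" is just the complement of that distance. A standard-library sketch (the `1 - distance` mapping is my assumption for illustration, not an official DeepFace API):

```python
import math

def cosine_distance(u, v):
    """Cosine distance = 1 - cos(angle between u and v); 0 means identical direction."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / norm

def similarity_score(u, v):
    """Map the distance to a similarity in [-1, 1]: higher means more alike."""
    return 1.0 - cosine_distance(u, v)
```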
| closed | 2022-08-04T13:53:46Z | 2022-08-04T14:27:53Z | https://github.com/serengil/deepface/issues/527 | [
"question"
] | falkaabi | 2 |
ydataai/ydata-profiling | data-science | 1,479 | Add a report on outliers | ### Missing functionality
I'm missing an easy report to see outliers.
### Proposed feature
An outlier to me is some value more than 3 std dev away from the mean.
I calculate this as:
```python
mean = X.mean()
std = X.std()
lower, upper = mean - 3*std, mean + 3*std
outliers = X[(X < lower) | (X > upper)]
100 * outliers.count() / X.count()
```
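The same 3-sigma rule as a self-contained function (standard library only; note that `statistics.stdev` is the sample standard deviation, matching pandas' default `.std()` with `ddof=1`):

```python
import statistics

def outlier_share(values):
    """Return (outliers, percentage) for values more than 3 sample std devs from the mean."""
    mean = statistics.mean(values)
    std = statistics.stdev(values)  # sample std dev, like pandas' .std()
    lower, upper = mean - 3 * std, mean + 3 * std
    outliers = [v for v in values if v < lower or v > upper]
    return outliers, 100.0 * len(outliers) / len(values)
```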
It would be nice if an interactive report on the outliers were added.
### Alternatives considered
See code above :)
### Additional context
_No response_ | open | 2023-10-15T08:25:24Z | 2023-10-16T20:56:52Z | https://github.com/ydataai/ydata-profiling/issues/1479 | [
"feature request 💬"
] | svaningelgem | 1 |
gee-community/geemap | streamlit | 1,686 | Layer visualization GUI not working | It seems some of the recent refactor PRs breaks the layer visualization GUI, which was not captured by the CI testings.
Not working

Working

| closed | 2023-08-31T14:58:00Z | 2023-08-31T17:43:09Z | https://github.com/gee-community/geemap/issues/1686 | [
"bug"
] | giswqs | 0 |
allure-framework/allure-python | pytest | 843 | Missing Detailed JSON Files for Passing Tests with allure-pytest | I’m experiencing an issue where the Allure report JSON files (or detailed output) are only generated when tests fail. When tests pass, the allure-results directory is created, but it does not include the detailed JSON files (or JSON folder) that are necessary for generating a complete report. This behavior occurs even though all documented setup steps have been followed and explicit Allure steps/attachments have been added. | open | 2025-02-19T10:49:51Z | 2025-02-19T10:49:51Z | https://github.com/allure-framework/allure-python/issues/843 | [] | sundargandhi2002 | 0 |
521xueweihan/HelloGitHub | python | 2,167 | [Open-source self-recommendation] Databasir, a management platform focused on database model documentation | ## Project Recommendation
- Project URL: https://github.com/vran-dev/databasir
- Category: Java
- Note: this project was self-recommended once in #2110; after two months of iteration, both its features and documentation have been polished further, hence this second recommendation. (Since the original issue could not be reopened, I opened a separate one; thanks to the project founder for the opportunity.)
- Planned follow-up updates:
- Support exporting documents to PDF and Word
- Support saving UML relationships
- Project description:
**Databasir** is a centralized database model documentation management platform, designed to **use automation to solve the problems of high maintenance cost, stale content, and complicated team collaboration in data model documentation management**.
- Automation: supports automatic or manual synchronization of database metadata and document generation
- Customization: in theory supports any database with a JDBC driver, offers customizable documentation templates, and supports exporting documents in formats such as UML and Markdown
- Teamwork: flat role management, GitLab / GitHub OAuth login, automatic notification of documentation changes, and other efficient collaboration features
- Versioning: records documentation versions, supports diffing between versions, and shows field, table, and index change details in one click
- Why I recommend it:
In the software industry, automating API documentation has very broad and mature solutions, but automating database model documentation is still largely uncharted territory. I searched online for a long time but never found a product that satisfied all of the following requirements at once:
1. **Automation**: generate documentation automatically from the database
2. **Versioning**: trace historical document versions and compare differences between versions
3. **Teamwork**: adapt to different team structures, with diverse features that empower cross-team collaboration
4. **Customization**: give users a degree of control over document customization
Given that, I developed and open-sourced Databasir in my spare time; its focus is precisely the management of database model documentation.
Demo: https://demo.databasir.com/
Documentation: https://doc.databasir.com/
Project: https://github.com/vran-dev/databasir
- Screenshots
Document synchronization

Version diff comparison

Custom document table headers

UML image export

Exported Markdown preview
 | closed | 2022-04-19T03:47:21Z | 2022-05-08T23:58:13Z | https://github.com/521xueweihan/HelloGitHub/issues/2167 | [
"已发布",
"Java 项目"
] | vran-dev | 3 |
mckinsey/vizro | data-visualization | 313 | Rename docs pages that include `_` to use `-` | Google doesn't recognise underscores as word separators when it indexes pages. So if we have a page called `first_dashboard` then Google will report that as `firstdashboard` to its algorithm. (If we had `first-dashboard` then it would go into the mix as `first dashboard` which earns more google juice for the keywords "dashboard"). [More explanation here](https://www.woorank.com/en/blog/underscores-in-urls-why-are-they-not-recommended)
As we are at an early stage with Vizro, we can make some changes (and use RTD redirects to ensure we don't break anyone's links) that set the docs up for success later. SEO doesn't seem that important but every little helps.
## Solution
1. Rename pages
2. Set up redirects in readthedocs to redirect to newly hyphenated pages for external users who have bookmarks, and blog posts we can't update etc.
3. Change all existing internal linking within the docs to the new page names | closed | 2024-02-15T12:14:45Z | 2024-02-21T09:56:45Z | https://github.com/mckinsey/vizro/issues/313 | [
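Step 1 and the mapping needed for step 2 can be scripted; a standard-library sketch (assumes a flat-or-nested markdown docs tree, handles only the file renames, and leaves the in-page link updates of step 3 to a separate pass):

```python
import tempfile
from pathlib import Path

def hyphenate_docs(root):
    """Rename *.md files, replacing '_' with '-' in file names.

    Returns {old_relative_path: new_relative_path}, which is exactly the
    mapping needed to configure one Read the Docs redirect per renamed page.
    """
    root = Path(root)
    mapping = {}
    for path in sorted(root.rglob("*.md")):  # materialised before renaming
        if "_" not in path.name:
            continue
        new_path = path.with_name(path.name.replace("_", "-"))
        path.rename(new_path)  # no collision check; fine for a one-off sketch
        mapping[path.relative_to(root).as_posix()] = new_path.relative_to(root).as_posix()
    return mapping

# Demo on a throwaway docs tree.
docs = Path(tempfile.mkdtemp())
(docs / "pages").mkdir()
(docs / "pages" / "first_dashboard.md").write_text("# First dashboard\n")
(docs / "index.md").write_text("# Home\n")
redirects = hyphenate_docs(docs)
```

Each entry in the returned mapping becomes one redirect (old page to new page) in the Read the Docs dashboard.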
"Docs :spiral_notepad:"
] | stichbury | 2 |
HumanSignal/labelImg | deep-learning | 897 | "Missing string id : " + string_id AssertionError: Missing string id : lightWidgetTitle | Hi, I got this error when running `python3 labelImg.py`:
```text
"Missing string id : " + string_id
AssertionError: Missing string id : lightWidgetTitle
```
How do I solve this issue? | open | 2022-06-15T12:18:04Z | 2022-06-19T05:02:40Z | https://github.com/HumanSignal/labelImg/issues/897 | [] | rizki4106 | 8
recommenders-team/recommenders | data-science | 1,354 | [BUG] Fix Docs / Doc Pipeline | ### Description
Update docs and make sure pipeline is correctly building / updating docs
### How do we replicate the issue?
with merge to staging/main docs should update
### Expected behavior (i.e. solution)
<!--- For example: -->
<!--- * The tests for SAR PySpark should pass successfully. -->
### Other Comments
| closed | 2021-03-25T14:17:41Z | 2021-12-17T09:40:22Z | https://github.com/recommenders-team/recommenders/issues/1354 | [
"bug"
] | gramhagen | 2 |