| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
graphql-python/graphene-sqlalchemy | sqlalchemy | 407 | Allow a custom filter class with the purpose of using all the base filters, and adding sqlalchemy-filter-esque filters | A pretty important use case at my company is the ability to add custom filters that aren't field-specific.
Here is an example use case using the hack discussed below:
```python
class UserFilter(GrapheneSQLAlchemyFilter):
    use_has_contact = graphene.Boolean()
    is_valid = graphene.Boolean()

    @staticmethod
    def use_has_contact_filter(info: LevResolveInfo, query: Query, value: bool) -> Query:
        return query.join(Contact).filter(Contact.id.is_not(None))

    @staticmethod
    def is_valid_filter(info: LevResolveInfo, query: Query, value: bool) -> ColumnElement:
        if value:
            return User.deleted_at.is_(None)
        return User.deleted_at.is_not(None)


class UserNode(SQLAlchemyObjectType):
    class Meta:
        model = User
        interfaces = (LevNode,)
        filter = UserFilter
```
Step 1. Update the BaseTypeFilter class to allow "filter" as a _meta field. We collect all the custom filter functions from the classes that extend GrapheneSQLAlchemyFilter, verify that those functions declare the correct parameters, and then add the fields to the filter fields list.
```python
class GrapheneSQLAlchemyFilter(graphene.InputObjectType):
    pass


class BaseTypeFilter(graphene.InputObjectType):
    @classmethod
    def __init_subclass_with_meta__(
        cls, filter_fields=None, model=None, _meta=None, custom_filter_class=None, **options
    ):
        from graphene_sqlalchemy.converter import convert_sqlalchemy_type

        # Init meta options class if it doesn't exist already
        if not _meta:
            _meta = InputObjectTypeOptions(cls)
        _meta.filter_class = custom_filter_class

        logic_functions = _get_functions_by_regex(".+_logic$", "_logic$", cls)

        custom_filter_fields = {}
        if custom_filter_class and issubclass(custom_filter_class, GrapheneSQLAlchemyFilter):
            custom_filter_fields = yank_fields_from_attrs(custom_filter_class.__dict__, _as=graphene.InputField)
            functions = dict(_get_functions_by_regex(".+_filter$", "_filter$", custom_filter_class))
            for field_name in custom_filter_fields.keys():
                assert functions.get(field_name), f"Custom filter field {field_name} must have a corresponding filter method"
                annotations = functions.get(field_name)
                assert annotations.get("info"), "Each custom filter method must have an info field with valid type annotations"
                assert annotations.get("query"), "Each custom filter method must have a query field with valid type annotations"
                assert annotations.get("value"), "Each custom filter method must have a value field with valid type annotations"

        new_filter_fields = custom_filter_fields
        ...
```
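The `_get_functions_by_regex` helper referenced above is internal to graphene-sqlalchemy and not shown in the issue. A minimal stdlib sketch of what it is assumed to do — collect `*_filter` methods keyed by field name, each mapped to its parameter annotations — based only on how the snippet above consumes its result:

```python
import inspect
import re


def get_functions_by_regex(pattern, strip_suffix, cls):
    """Collect callables on `cls` whose names match `pattern`, keyed by the
    name with `strip_suffix` removed, mapped to their type annotations."""
    result = {}
    for name, func in inspect.getmembers(cls, callable):
        if re.match(pattern, name):
            field_name = re.sub(strip_suffix, "", name)
            # Annotations drive the assertions in __init_subclass_with_meta__
            result[field_name] = dict(getattr(func, "__annotations__", {}))
    return result
```

The real implementation lives inside graphene-sqlalchemy; the name and return shape here are inferences, not the library's actual API.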
**Then override the execute_filters method. We have it accept an "info" argument so that we can pass it to the custom filter functions.**
```python
@classmethod
def execute_filters(
    cls, query, filter_dict: Dict[str, Any], model_alias=None, info=None
) -> Tuple[Query, List[Any]]:
    model = cls._meta.model
    ...
    # First check whether this input field is not a model_attr and is part
    # of the filter_class (which we set on the meta earlier)
    else:
        # Allow the custom filter class to be used for custom filtering
        if not hasattr(input_field, "model_attr") and cls._meta.filter_class:
            clause = getattr(cls._meta.filter_class, field + "_filter")(info, query, field_filters)
            if isinstance(clause, tuple):
                query, clause = clause
            elif isinstance(clause, Query):
                query = clause
                continue
            clauses.append(clause)
        else:
            model_field = getattr(model, input_field.model_attr or field)
```
**Update SQLAlchemy base to accept a "filter" field**
```python
class SQLAlchemyObjectTypeOptions(ObjectTypeOptions):
    ...
    filter = None
``` | closed | 2024-03-18T15:32:36Z | 2024-09-15T00:55:04Z | https://github.com/graphql-python/graphene-sqlalchemy/issues/407 | [] | adiberk | 1 |
pandas-dev/pandas | data-science | 60534 | BUG: to_json overflows when the date does not fit in an ns-precision timestamp | ### Pandas version checks
- [X] I have checked that this issue has not already been reported.
- [X] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [ ] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
import datetime as dt
df = pd.DataFrame(data={'date': [dt.datetime(2999, 1, 1)]}).to_json()
```
### Issue Description
When converting DataFrames to JSON, dates that do not fit inside a nanosecond-precision timestamp overflow.
```
---------------------------------------------------------------------------
OverflowError                             Traceback (most recent call last)
Cell In[46], line 1
----> 1 df = pd.DataFrame(data={'date': [dt.datetime(2999, 1, 1)]}).to_json()

File ~/code/datastores/.venv/lib/python3.12/site-packages/pandas/util/_decorators.py:333, in deprecate_nonkeyword_arguments.<locals>.decorate.<locals>.wrapper(*args, **kwargs)
    327 if len(args) > num_allow_args:
    328     warnings.warn(
    329         msg.format(arguments=_format_argument_list(allow_args)),
    330         FutureWarning,
    331         stacklevel=find_stack_level(),
    332     )
--> 333 return func(*args, **kwargs)

File ~/code/datastores/.venv/lib/python3.12/site-packages/pandas/core/generic.py:2702, in NDFrame.to_json(self, path_or_buf, orient, date_format, double_precision, force_ascii, date_unit, default_handler, lines, compression, index, indent, storage_options, mode)
   2699 config.is_nonnegative_int(indent)
   2700 indent = indent or 0
-> 2702 return json.to_json(
   2703     path_or_buf=path_or_buf,
   2704     obj=self,
   2705     orient=orient,
   2706     date_format=date_format,
   2707     double_precision=double_precision,
   2708     force_ascii=force_ascii,
   2709     date_unit=date_unit,
   2710     default_handler=default_handler,
   2711     lines=lines,
   2712     compression=compression,
   2713     index=index,
   2714     indent=indent,
   2715     storage_options=storage_options,
   2716     mode=mode,
   2717 )

File ~/code/datastores/.venv/lib/python3.12/site-packages/pandas/io/json/_json.py:210, in to_json(path_or_buf, obj, orient, date_format, double_precision, force_ascii, date_unit, default_handler, lines, compression, index, indent, storage_options, mode)
    197 else:
    198     raise NotImplementedError("'obj' should be a Series or a DataFrame")
    200 s = writer(
    201     obj,
    202     orient=orient,
    203     date_format=date_format,
    204     double_precision=double_precision,
    205     ensure_ascii=force_ascii,
    206     date_unit=date_unit,
    207     default_handler=default_handler,
    208     index=index,
    209     indent=indent,
--> 210 ).write()
    212 if lines:
    213     s = convert_to_line_delimits(s)

File ~/code/datastores/.venv/lib/python3.12/site-packages/pandas/io/json/_json.py:263, in Writer.write(self)
    261 def write(self) -> str:
    262     iso_dates = self.date_format == "iso"
--> 263     return ujson_dumps(
    264         self.obj_to_write,
    265         orient=self.orient,
    266         double_precision=self.double_precision,
    267         ensure_ascii=self.ensure_ascii,
    268         date_unit=self.date_unit,
    269         iso_dates=iso_dates,
    270         default_handler=self.default_handler,
    271         indent=self.indent,
    272     )

OverflowError: Overflow occurred in npy_datetimestruct_to_datetime
```
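The `OverflowError` above follows from pandas' default nanosecond-precision `int64` epoch representation. Its ceiling can be verified with the standard library alone, no pandas required:

```python
import datetime as dt

NS_PER_SECOND = 10**9
INT64_MAX = 2**63 - 1

# Largest instant representable as int64 nanoseconds since the Unix epoch:
ceiling = dt.datetime(1970, 1, 1) + dt.timedelta(seconds=INT64_MAX / NS_PER_SECOND)
print(ceiling.year)  # 2262 -- matches pandas.Timestamp.max (2262-04-11)

# 2999-01-01 needs more nanoseconds than int64 can hold, hence the overflow:
target = dt.datetime(2999, 1, 1)
ns_since_epoch = int((target - dt.datetime(1970, 1, 1)).total_seconds()) * NS_PER_SECOND
print(ns_since_epoch > INT64_MAX)  # True
```

A common workaround is to serialize such columns as strings (e.g. `df['date'].astype(str)`) before calling `to_json`.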
### Expected Behavior
The date should be serialized to JSON instead of raising an `OverflowError`.
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 0691c5cf90477d3503834d983f69350f250a6ff7
python : 3.12.3
python-bits : 64
OS : Linux
OS-release : 6.8.0-49-generic
Version : #49-Ubuntu SMP PREEMPT_DYNAMIC Mon Nov 4 02:06:24 UTC 2024
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 2.2.3
numpy : 1.26.4
pytz : 2024.2
dateutil : 2.9.0.post0
pip : 24.0
Cython : None
sphinx : None
IPython : 8.30.0
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.12.3
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : None
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : None
lxml.etree : None
matplotlib : 3.9.3
numba : 0.60.0
numexpr : None
odfpy : None
openpyxl : 3.1.5
pandas_gbq : None
psycopg2 : 2.9.10
pymysql : None
pyarrow : 18.1.0
pyreadstat : None
pytest : 8.3.4
python-calamine : None
pyxlsb : None
s3fs : None
scipy : 1.14.1
sqlalchemy : 2.0.36
tables : None
tabulate : None
xarray : 2024.11.0
xlrd : 2.0.1
xlsxwriter : None
zstandard : None
tzdata : 2024.2
qtpy : None
pyqt5 : None
</details>
| closed | 2024-12-10T11:02:00Z | 2024-12-10T11:32:40Z | https://github.com/pandas-dev/pandas/issues/60534 | [
"Bug",
"Needs Triage"
] | skaae | 1 |
axnsan12/drf-yasg | rest-api | 241 | Invalid curl parameters | I'm using the built-in session-based authentication, and the login endpoint (`http://localhost:8000/api/auth/login`) is working fine. Here is a snippet from the Google Chrome console showing that all the required cookies are set:

After successful authentication, I can see the rest of my endpoints, but I cannot call any POST method, because it looks like there is no `X-CSRFToken` in the `curl` parameters. Here I'm trying to create a new dummy user:

I've also tried the `django-rest-swagger` library, and it doesn't have this problem, but I really don't want to use it due to its lack of functionality (it also seems to be unmaintained).
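Not part of the original report: one frequently cited cause of this symptom with session authentication is a CSRF cookie marked `HttpOnly`, which page JavaScript (including swagger-ui) cannot read and therefore cannot echo back as `X-CSRFToken`. An illustrative check against the Django settings — verify against the drf-yasg documentation for your version:

```python
# settings.py -- illustrative fragment, not taken from the original report.
# swagger-ui can only attach X-CSRFToken if it can read the CSRF cookie
# from JavaScript, so the cookie must not be HttpOnly:
CSRF_COOKIE_HTTPONLY = False

# The names below are Django's defaults; if you have renamed them,
# swagger-ui must be configured to match.
CSRF_COOKIE_NAME = "csrftoken"
CSRF_HEADER_NAME = "HTTP_X_CSRFTOKEN"
```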
UPDATE 1: my library versions
```
Django==2.1.2
djangorestframework==3.8.2
drf-yasg==1.11.0
```
UPDATE 2: my swagger usage
```python
schema_view = get_schema_view(
    info=openapi.Info(
        title='My API',
        default_version='alpha',
    ),
    public=False,
    permission_classes=[permissions.AllowAny],
)

urlpatterns.append(
    re_path(r'^swagger/$', schema_view.with_ui(renderer='swagger'))
)
``` | closed | 2018-10-27T17:35:06Z | 2018-10-27T18:08:11Z | https://github.com/axnsan12/drf-yasg/issues/241 | [] | gd-gl | 6 |
desec-io/desec-stack | rest-api | 190 | api: Quickly changing NS RRset of new domain breaks nsmaster provisioning | If `ns1.desec.io` is removed quickly from the `NS` RRset of a new domain, the supermaster mechanism breaks and the domain is not provisioned on nsmaster. | closed | 2019-05-12T23:54:49Z | 2019-05-13T14:24:00Z | https://github.com/desec-io/desec-stack/issues/190 | [
"bug",
"api"
] | peterthomassen | 0 |
kennethreitz/responder | flask | 246 | Multi-threading with POST request | Hello,
I'm implementing a POST request as written in the documentation (without the background task):
https://python-responder.org/en/latest/quickstart.html#receiving-data-background-tasks
What is a good way to use responder in production?
Can someone show me an example of a POST request that works with multi-threading?
Thanks! | closed | 2018-11-20T21:54:25Z | 2024-03-31T00:57:44Z | https://github.com/kennethreitz/responder/issues/246 | [] | Agur-A | 0 |
miguelgrinberg/Flask-Migrate | flask | 340 | Installation Error | Hi,
An error occurs when trying to install any version of **Flask-Migrate**.
Windows 10, 64 bits, Python 3.8.2
```
Collecting Flask-Migrate
Using cached Flask_Migrate-2.5.3-py2.py3-none-any.whl (13 kB)
Collecting alembic>=0.7
Using cached alembic-1.4.2.tar.gz (1.1 MB)
Installing build dependencies: started
Installing build dependencies: finished with status 'done'
Getting requirements to build wheel: started
Getting requirements to build wheel: finished with status 'done'
Preparing wheel metadata: started
Preparing wheel metadata: finished with status 'error'
ERROR: Command errored out with exit status 1:
command: 'D:\Programação\Projetos\Python\Flask_Tutorial\venv\Scripts\python.exe' 'D:\Programação\Projetos\Python\Flask_Tutorial\venv\lib\site-packages\pip\_vendor\pep517\_in_process.py' prepare_metadata_for_build_wheel 'C:\Users\afons\AppData\Local\Temp\tmpp75e29yb'
cwd: C:\Users\afons\AppData\Local\Temp\pycharm-packaging\alembic
Complete output (20 lines):
Error in sitecustomize; set PYTHONVERBOSE for traceback:
SyntaxError: (unicode error) 'utf-8' codec can't decode byte 0xe7 in position 0: invalid continuation byte (sitecustomize.py, line 7)
running dist_info
creating C:\Users\afons\AppData\Local\Temp\pip-modern-metadata-qiokrujq\alembic.egg-info
writing C:\Users\afons\AppData\Local\Temp\pip-modern-metadata-qiokrujq\alembic.egg-info\PKG-INFO
writing dependency_links to C:\Users\afons\AppData\Local\Temp\pip-modern-metadata-qiokrujq\alembic.egg-info\dependency_links.txt
writing entry points to C:\Users\afons\AppData\Local\Temp\pip-modern-metadata-qiokrujq\alembic.egg-info\entry_points.txt
writing requirements to C:\Users\afons\AppData\Local\Temp\pip-modern-metadata-qiokrujq\alembic.egg-info\requires.txt
writing top-level names to C:\Users\afons\AppData\Local\Temp\pip-modern-metadata-qiokrujq\alembic.egg-info\top_level.txt
writing manifest file 'C:\Users\afons\AppData\Local\Temp\pip-modern-metadata-qiokrujq\alembic.egg-info\SOURCES.txt'
reading manifest file 'C:\Users\afons\AppData\Local\Temp\pip-modern-metadata-qiokrujq\alembic.egg-info\SOURCES.txt'
reading manifest template 'MANIFEST.in'
warning: no files found matching '*.jpg' under directory 'docs'
warning: no files found matching '*.sty' under directory 'docs'
warning: no files found matching '*.dat' under directory 'tests'
warning: no files found matching 'run_tests.py'
no previously-included directories found matching 'docs\build\output'
writing manifest file 'C:\Users\afons\AppData\Local\Temp\pip-modern-metadata-qiokrujq\alembic.egg-info\SOURCES.txt'
creating 'C:\Users\afons\AppData\Local\Temp\pip-modern-metadata-qiokrujq\alembic.dist-info'
error: invalid command 'bdist_wheel'
----------------------------------------
ERROR: Command errored out with exit status 1: 'D:\Programação\Projetos\Python\Flask_Tutorial\venv\Scripts\python.exe' 'D:\Programação\Projetos\Python\Flask_Tutorial\venv\lib\site-packages\pip\_vendor\pep517\_in_process.py' prepare_metadata_for_build_wheel 'C:\Users\afons\AppData\Local\Temp\tmpp75e29yb' Check the logs for full command output.
``` | closed | 2020-05-12T20:04:53Z | 2020-09-08T21:17:32Z | https://github.com/miguelgrinberg/Flask-Migrate/issues/340 | [
"question"
] | afonsosantos | 7 |
predict-idlab/plotly-resampler | plotly | 226 | If a heatmap is added, the x-axis is not aligned | Hello,
I’m trying to plot a stock price chart with a heatmap and a price scatter. However, when I add both the scatter and heatmap traces to the chart, the result is not what I expected.
plotly==5.15.0
plotly-resampler==0.8.3.2
dash==2.10.2
Here’s my current code snippet:
```python
fig = FigureWidgetResampler(
    make_subplots(rows=2, cols=1, shared_xaxes=True, vertical_spacing=0, row_heights=[3, 1])
)

# orderbook heatmap in background
fig.add_trace(
    go.Heatmap(
        x=orderbook_df.index,
        y=orderbook_df['orderbook_price'],
        z=orderbook_df['orderbook_qty'],
        colorscale='Greys',
    ),
    row=1, col=1
)

# Price line
fig.add_trace(
    go.Scatter(
        mode='lines',
    ),
    hf_x=orderbook_df.index,
    hf_y=orderbook_df['BestAsk'],
    row=1, col=1
)
```
As you can see from the images I posted below, when I add both traces, the chart doesn’t look ideal.
This chart is without FigureWidgetResampler, which is correct:

With FigureWidgetResampler, this is still correct when only price scatter exists:

With FigureWidgetResampler, this is not correct when both scatter and heatmap exists:

Also, I found that if I add the scatter first and then add the heatmap, the chart won't show anything.
Can anyone suggest how to solve this issue?
Thanks in advance for any help! | open | 2023-06-13T04:08:46Z | 2024-08-28T13:56:30Z | https://github.com/predict-idlab/plotly-resampler/issues/226 | [
"bug"
] | kevinyin9 | 4 |
xorbitsai/xorbits | numpy | 429 | BUG: Tensor tolist() does not fall back to numpy | ### Describe the bug
A clear and concise description of what the bug is.
When I call tolist() for a tensor, it does not fallback to numpy as expected
```
In [10]: import xorbits.numpy as np
In [11]: x = np.array([1,2,3])
In [12]: type(x)
Out[12]: xorbits.core.data.DataRef
In [13]: x.data.data_type.name
Out[13]: 'tensor'
In [14]: x.tolist()
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[14], line 1
----> 1 x.tolist()
File ~/desktop/xorbits/python/xorbits/core/data.py:197, in DataRef.__getattr__(self, item)
194 def __getattr__(self, item):
195 from .adapter import MemberProxy
--> 197 return MemberProxy.getattr(self, item)
File ~/desktop/xorbits/python/xorbits/core/adapter.py:190, in MemberProxy.getattr(cls, ref, item)
187 return ret
189 if not hasattr(mars_entity, item):
--> 190 raise AttributeError(f"'{data_type.name}' object has no attribute '{item}'")
192 attr = getattr(mars_entity, item, None)
193 if callable(attr):
AttributeError: 'tensor' object has no attribute 'tolist'
```
### To Reproduce
To help us to reproduce this bug, please provide information below:
1. Your Python version
2. The version of Xorbits you use 3.10.9
3. Versions of crucial packages, such as numpy, scipy and pandas latest
4. Full stack of the error.
5. Minimized code to reproduce the error.
```
import xorbits.numpy as np
x = np.array([1,2,3])
x.tolist()
```
### Expected behavior
A clear and concise description of what you expected to happen.
```
In [1]: import numpy as np
In [2]: x = np.array([1,2,3])
In [3]: x.tolist()
Out[3]: [1, 2, 3]
```
### Additional context
Add any other context about the problem here.
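The expected fallback can be illustrated with a small stdlib-only proxy. This is an illustration of the desired `__getattr__` delegation, not xorbits code; `array.array` stands in for a local numpy ndarray because it also has a `tolist()` method:

```python
import array


class FallbackRef:
    """Sketch of the expected behavior: attributes missing on the
    distributed object are delegated to a local fallback object
    instead of raising AttributeError."""

    def __init__(self, local_data):
        self._local_data = local_data

    def __getattr__(self, item):
        # Called only when normal attribute lookup fails; forward to the
        # fallback object so methods like tolist() keep working.
        return getattr(self._local_data, item)


ref = FallbackRef(array.array("i", [1, 2, 3]))
print(ref.tolist())  # [1, 2, 3] -- no AttributeError
```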
| closed | 2023-05-05T09:53:17Z | 2023-05-24T09:35:18Z | https://github.com/xorbitsai/xorbits/issues/429 | [
"bug"
] | Zhou1213CN | 0 |
vllm-project/vllm | pytorch | 15194 | [Usage]: vLLM 0.7.3 with tensor parallelism outputs only exclamation marks when using multiple GPUs | ## Environment
- OS: Ubuntu 22.04
- GPUs: 2x NVIDIA L20 (49GB each)
- VLLM version: 0.7.3
- CUDA version: 12.4.131
- Driver version: 535.161.08
- Model: QwQ-32B-AWQ (AWQ quantized model)
## Problem Description
When running VLLM with tensor parallelism across two GPUs, the model sometimes outputs only exclamation marks (`!`) instead of proper text. This issue only occurs with multiple GPUs and appears to be related to concurrent requests - single GPU deployment works fine.
The problem is consistently reproducible when sending concurrent requests with the same prompt to the API endpoint, but non-concurrent requests sometimes produce normal responses.
## Steps to Reproduce
1. Start VLLM server with tensor parallelism:
```bash
vllm serve /root/data/models/QwQ-32B-AWQ --api-key dev-key --gpu-memory-utilization 0.9 --tensor-parallel-size 2 --quantization awq --host 0.0.0.0 --port 8877 --served-model-name qwq
```
2. Send multiple concurrent requests using curl:
```bash
curl -X POST "http://localhost:8877/v1/chat/completions" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer dev-key" \
-d '{
"model": "qwq",
"messages": [
{"role": "system", "content": "You are a helpful AI assistant"},
{"role": "user", "content": "Introduce the Four Great Inventions of ancient China"}
],
"max_tokens": 800,
"temperature": 0.5
}'
```
3. The response contains only exclamation marks:
```json
{
"id":"chatcmpl-177cd18eb0dc403ab938890cf4a942e7",
"object":"chat.completion",
"created":1742455354,
"model":"qwq",
"choices":[
{
"index":0,
"message":{
"role":"assistant",
"reasoning_content":null,
"content":"!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!",
"tool_calls":[]
},
"logprobs":null,
"finish_reason":"length",
"stop_reason":null
}
],
"usage":{
"prompt_tokens":42,
"total_tokens":842,
"completion_tokens":800,
"prompt_tokens_details":null
},
"prompt_logprobs":null
}
```
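For stress-testing the concurrency condition, the curl reproduction can be scripted with the standard library only; the endpoint, API key, model name, and request count below mirror the report and are assumptions for any other deployment:

```python
import json
import threading
import urllib.request

URL = "http://localhost:8877/v1/chat/completions"  # endpoint from the report
API_KEY = "dev-key"                                # key from the report


def build_payload(prompt: str) -> bytes:
    # Same request body as the curl reproduction above.
    return json.dumps({
        "model": "qwq",
        "messages": [
            {"role": "system", "content": "You are a helpful AI assistant"},
            {"role": "user", "content": prompt},
        ],
        "max_tokens": 800,
        "temperature": 0.5,
    }).encode()


def fire(prompt: str) -> None:
    req = urllib.request.Request(
        URL,
        data=build_payload(prompt),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {API_KEY}"},
    )
    with urllib.request.urlopen(req) as resp:
        content = json.load(resp)["choices"][0]["message"]["content"]
        # The bug manifests as a content string consisting only of '!':
        print("all-exclamation:", set(content.strip()) == {"!"})


def main(n_requests: int = 8) -> None:
    prompt = "Introduce the Four Great Inventions of ancient China"
    threads = [threading.Thread(target=fire, args=(prompt,)) for _ in range(n_requests)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

# main()  # uncomment to run against a live vLLM server
```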
## Additional Information
- The server logs show normal operation with no errors
- The issue is consistently reproducible when sending multiple concurrent requests with the same prompt
- Single GPU deployment works correctly with the same model and configuration
- Non-concurrent requests sometimes produce normal responses (see example below)
- I noticed the warning: `awq quantization is not fully optimized yet. The speed can be slower than non-quantized models.`
- Also noticed: `Detected that the model can run with awq_marlin, however you specified quantization=awq explicitly, so forcing awq. Use quantization=awq_marlin for faster inference`
## Example of Normal Response (Non-concurrent)
When sending a single request (without concurrent load), the model sometimes responds normally:
```bash
curl -X POST "http://localhost:8877/v1/chat/completions" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer dev-key" \
-d '{
"model": "qwq",
"messages": [
{"role": "system", "content": "你是一个通用的ai助手,请对输出的结果再次校验,是否存在明显的表达错误,如果错误请修正"},
{"role": "user", "content": "介绍一下中国的四大发明"}
],
"max_tokens": 800,
"temperature": 0.5
}'
```
Response begins with proper text:
```
{"id":"chatcmpl-9364d553d332440abf9e80fb1070386e","object":"chat.completion","created":1742456299,"model":"qwq","choices":[{"index":0,"message":{"role":"assistant","reasoning_content":null,"content":"嗯,用户让我介绍一下中国的四大发明,这应该是一个比较常见的问题。首先,我需要确认四大发明指的是什么。根据历史知识,四大发明通常指的是火药、指南针、印刷术和造纸术,但可能用户提到的是中国的四大发明,这里可能需要具体化...
```
## GPU Information
```
Thu Mar 20 15:24:52 2025
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.161.08 Driver Version: 535.161.08 CUDA Version: 12.2 |
|-----------------------------------------+----------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+======================+======================|
| 0 NVIDIA L20 On | 00000000:65:01.0 Off | Off |
| N/A 45C P0 83W / 350W | 45762MiB / 49140MiB | 0% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+
| 1 NVIDIA L20 On | 00000000:67:01.0 Off | Off |
| N/A 43C P0 78W / 350W | 45760MiB / 49140MiB | 0% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+
```
## Questions
1. What could be causing this issue with tensor parallelism, and how can I fix it?
2. Could it be related to the AWQ quantization or some other configuration problem?
3. Are there any known workarounds for using AWQ models with tensor parallelism in vLLM 0.7.3? | open | 2025-03-20T07:34:28Z | 2025-03-20T08:26:58Z | https://github.com/vllm-project/vllm/issues/15194 | [
"usage"
] | yilaguan | 1 |
robotframework/robotframework | automation | 4,972 | (Question) is there a way to add css or custom class in the html.log? | Hi,
I've tried researching this but I can't seem to find any information about it.
Basically, I'm trying to find out whether it is possible to add a custom class to the existing CSS that the HTML logs use,
or even create a new CSS file and have the newly generated HTML logs use it?
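As a stopgap while there is no supported CSS hook, inline styles can be baked into the HTML string before it is passed to `logger.info(message, html=True)`; this helper is plain Python, and the style values are only illustrative:

```python
def styled_html(text: str, styles: dict) -> str:
    """Wrap text in a div with inline styles, for logger.info(message, html=True)."""
    css = "; ".join(f"{k}: {v}" for k, v in styles.items())
    return f'<div style="{css}">{text}</div>'

# e.g. logger.info(styled_html("custom message", {"color": "green"}), html=True)
```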
Background: I am using the json2html module and log its output with logger.info(html=True). json2html accepts inline HTML styles or CSS classes, so I am hoping to add my own CSS class to my Robot Framework logs. | closed | 2023-12-09T00:18:51Z | 2023-12-11T18:59:48Z | https://github.com/robotframework/robotframework/issues/4972 | [] | DarrenVictorianoDEX | 2 |
OthersideAI/self-operating-computer | automation | 74 | Scrolling up and down not added | I just noticed that the model doesn't have access to scrolling up and down. Is this difficult to implement generally (asking mostly for Linux, but of course interested in Mac, and Windows)?
If so, I may try adding a web mode and leveraging Selenium to scroll. | open | 2023-12-04T06:44:45Z | 2023-12-07T02:35:27Z | https://github.com/OthersideAI/self-operating-computer/issues/74 | [] | klxu03 | 5 |
pytorch/vision | computer-vision | 8,839 | Flowers102 dataset does not include the class names | ### 🐛 Describe the bug
Many datasets have a `classes` attribute containing the list of class names. I expected the same for `Flowers102` but there's no such attribute:
```python
import torchvision
dataset = torchvision.datasets.Flowers102(root="datasets", download=True)
dataset.classes # AttributeError: 'Flowers102' object has no attribute 'classes'
```
I submitted PR #8838 to fix this.
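Until a fix lands, a stopgap is to attach the names yourself; the two names below are only a truncated illustration (the real list has 102 entries), and the helper is not torchvision API:

```python
FLOWER_NAMES = ["pink primrose", "hard-leaved pocket orchid"]  # illustrative subset of 102

def attach_classes(dataset, names):
    """Mimic the `classes` / `class_to_idx` attributes other torchvision datasets expose."""
    dataset.classes = list(names)
    dataset.class_to_idx = {name: i for i, name in enumerate(names)}
    return dataset
```

With a full 102-name list, `attach_classes(torchvision.datasets.Flowers102(root="datasets"), names)` would then behave like the other datasets.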
### Versions
PyTorch version: 2.4.0.post101
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 14.7 (x86_64)
GCC version: Could not collect
Clang version: 16.0.0 (clang-1600.0.26.6)
CMake version: version 3.30.0
Libc version: N/A
Python version: 3.12.7 | packaged by conda-forge | (main, Oct 4 2024, 15:55:29) [Clang 17.0.6 ] (64-bit runtime)
Python platform: macOS-14.7-x86_64-i386-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Intel(R) Core(TM) i9-9980HK CPU @ 2.40GHz
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] torch==2.4.0.post101
[pip3] torch-tb-profiler==0.4.3
[pip3] torchmetrics==1.5.1
[pip3] torchvision==0.19.1a0
[conda] libopenvino-pytorch-frontend 2024.4.0 h4398f7a_0 conda-forge
[conda] libtorch 2.4.0 cpu_mkl_h3542c91_101 conda-forge
[conda] mkl 2023.2.0 h54c2260_50500 conda-forge
[conda] numpy 1.26.4 py312he3a82b2_0 conda-forge
[conda] pytorch 2.4.0 cpu_mkl_py312h0c6306f_101 conda-forge
[conda] torch-tb-profiler 0.4.3 pypi_0 pypi
[conda] torchmetrics 1.5.1 pyhe5570ce_0 conda-forge
[conda] torchvision 0.19.1 cpu_py312h2009d5a_0 conda-forge | open | 2025-01-07T09:28:34Z | 2025-01-07T09:28:34Z | https://github.com/pytorch/vision/issues/8839 | [] | ageron | 0 |
skforecast/skforecast | scikit-learn | 199 | Typos in "Recursive multi-step forecasting with exogenous variables" documentation | I have found a bug in the python code on this [page](https://joaquinamatrodrigo.github.io/skforecast/0.3/guides/autoregresive-forecaster-exogenous.html) "Recursive multi-step forecasting with exogenous variables".
If `exog = data_train[['exog_1', 'exog_2']].values` is run directly, it will give this error `Exception: "exog" must be "pd.Series" or "pd.DataFrame".`
Correction, this should be `exog = data_train[['exog_1', 'exog_2']]` without the `.values` since a pandas `pd.DataFrame` is expected. Please make the correction in the appropriate documentation page and code examples, thanks. | closed | 2022-07-25T16:43:31Z | 2022-08-04T03:13:23Z | https://github.com/skforecast/skforecast/issues/199 | [
"documentation"
] | kaionwong | 1 |
flasgger/flasgger | rest-api | 592 | 0.9.7.1 breaks `template_file` support | I have a bunch of definitions defined in my custom yaml file like so:
```
swagger = Swagger(template_file=path.join(path.dirname(__file__), "definitions.yaml"))
```
However, the Swagger UI now shows a bunch of errors:
```
Resolver error at paths./api/1.0/experiment/{experiment_id}/metric_group/{metric_group_id}/ds/{ds}/query.get.parameters.0.$ref
Could not resolve reference: Could not resolve pointer: /definitions/ExperimentIDPath does not exist in document
``` | open | 2023-08-23T02:06:47Z | 2023-12-19T13:10:11Z | https://github.com/flasgger/flasgger/issues/592 | [] | johnjiang | 0 |
vllm-project/vllm | pytorch | 14,560 | [Usage]: CPU OOM during training | ### Your current environment
### Env
```text
/home/zhuofeng/miniconda3/envs/r1/lib/python3.9/site-packages/vllm/connections.py:8: RuntimeWarning: Failed to read commit hash:
No module named 'vllm._version'
from vllm.version import __version__ as VLLM_VERSION
Collecting environment information...
PyTorch version: 2.4.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.9.21 (main, Dec 11 2024, 16:24:11) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-127-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA RTX A6000
GPU 1: NVIDIA RTX A6000
GPU 2: NVIDIA RTX A6000
GPU 3: NVIDIA RTX A6000
GPU 4: NVIDIA RTX A6000
GPU 5: NVIDIA RTX A6000
GPU 6: NVIDIA RTX A6000
GPU 7: NVIDIA RTX A6000
Nvidia driver version: 550.120
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.7
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7302 16-Core Processor
CPU family: 23
Model: 49
Thread(s) per core: 1
Core(s) per socket: 16
Socket(s): 2
Stepping: 0
Frequency boost: enabled
CPU max MHz: 3000.0000
CPU min MHz: 1500.0000
BogoMIPS: 6000.41
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sme sev sev_es
Virtualization: AMD-V
L1d cache: 1 MiB (32 instances)
L1i cache: 1 MiB (32 instances)
L2 cache: 16 MiB (32 instances)
L3 cache: 256 MiB (16 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-15
NUMA node1 CPU(s): 16-31
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT disabled
Vulnerability Spec rstack overflow: Mitigation; SMT disabled
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.1.3.1
[pip3] nvidia-cuda-cupti-cu12==12.1.105
[pip3] nvidia-cuda-nvrtc-cu12==12.1.105
[pip3] nvidia-cuda-runtime-cu12==12.1.105
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.0.2.54
[pip3] nvidia-curand-cu12==10.3.2.106
[pip3] nvidia-cusolver-cu12==11.4.5.107
[pip3] nvidia-cusparse-cu12==12.1.0.106
[pip3] nvidia-ml-py==12.570.86
[pip3] nvidia-nccl-cu12==2.20.5
[pip3] nvidia-nvjitlink-cu12==12.8.61
[pip3] nvidia-nvtx-cu12==12.1.105
[pip3] onnxruntime==1.19.2
[pip3] pyzmq==26.2.1
[pip3] torch==2.4.0
[pip3] torchvision==0.19.0
[pip3] transformers==4.47.1
[pip3] triton==3.0.0
[conda] numpy 1.26.4 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.1.3.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.0.2.54 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.2.106 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.4.5.107 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.1.0.106 pypi_0 pypi
[conda] nvidia-ml-py 12.570.86 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.20.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.8.61 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.1.105 pypi_0 pypi
[conda] pyzmq 26.2.1 pypi_0 pypi
[conda] torch 2.4.0 pypi_0 pypi
[conda] torchvision 0.19.0 pypi_0 pypi
[conda] transformers 4.47.1 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: N/A (dev)
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
GPU0 GPU1 GPU2 GPU3 GPU4 GPU5 GPU6 GPU7 NIC0 NIC1 CPU Affinity NUMA Affinity GPU NUMA ID
GPU0 X NV4 NODE NODE SYS SYS SYS SYS SYS SYS 0-15 0 N/A
GPU1 NV4 X NODE NODE SYS SYS SYS SYS SYS SYS 0-15 0 N/A
GPU2 NODE NODE X NV4 SYS SYS SYS SYS SYS SYS 0-15 0 N/A
GPU3 NODE NODE NV4 X SYS SYS SYS SYS SYS SYS 0-15 0 N/A
GPU4 SYS SYS SYS SYS X NV4 NODE NODE NODE NODE 16-31 1 N/A
GPU5 SYS SYS SYS SYS NV4 X NODE NODE NODE NODE 16-31 1 N/A
GPU6 SYS SYS SYS SYS NODE NODE X NV4 PHB PHB 16-31 1 N/A
GPU7 SYS SYS SYS SYS NODE NODE NV4 X NODE NODE 16-31 1 N/A
NIC0 SYS SYS SYS SYS NODE NODE PHB NODE X PIX
NIC1 SYS SYS SYS SYS NODE NODE PHB NODE PIX X
Legend:
X = Self
SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX = Connection traversing at most a single PCIe bridge
NV# = Connection traversing a bonded set of # NVLinks
NIC Legend:
NIC0: mlx5_0
NIC1: mlx5_1
CUDA_PATH=/usr/local/cuda-12.1/
LD_LIBRARY_PATH=/home/zhuofeng/miniconda3/envs/r1/lib/python3.9/site-packages/cv2/../../lib64::/usr/local/cuda-12.1//lib64:/usr/local/cuda-12.1//lib64
CUDA_MODULE_LOADING=LAZY
```
### How would you like to use vllm
Hi, I would like to kindly ask how to avoid CPU OOM (Out of Memory) when training the Qwen 1.5B model on 4×48GB A6000 GPUs. This issue keeps occurring.
```
ray.exceptions.OutOfMemoryError: Task was killed due to the node running low on memory.
Memory on the node (IP: 129.97.152.19, ID: 693a78d7a2710c185877fb501c56394c57a1b2babb330988fe9d4537) where the task (task ID: 05f20e76a36f0e450fb1b3d09eb20ad1bfe61a7b01000000, name=main_task, pid=2628336, memory used=2.80GB) was running was 479.24GB / 503.75GB (0.951345), which exceeds the memory usage threshold of 0.95. Ray killed this worker (ID: 05c14416cb75ecc4d343b0b2f88c39da32b0eccfe26cd9576f508322) because it was the most recently scheduled task; to see more information about memory usage on this node, use `ray logs raylet.out -ip 129.97.152.19`. To see the logs of the worker, use `ray logs worker-05c14416cb75ecc4d343b0b2f88c39da32b0eccfe26cd9576f508322*out -ip 129.97.152.19. Top 10 memory users:
PID MEM(GB) COMMAND
2629274 28.07 ray::WorkerDict.actor_rollout_generate_sequences
2629833 28.04 ray::WorkerDict.actor_rollout_generate_sequences
2629835 28.02 ray::WorkerDict.actor_rollout_generate_sequences
2629834 28.01 ray::WorkerDict.actor_rollout_generate_sequences
2628336 2.80 ray::main_task
2560801 0.43 /home/zhuofeng/.cursor-server/cli/servers/Stable-906121b8c0bdf041c14a15dac228e66ab5505260/server/nod...
2625824 0.33 python3 -m verl.trainer.main_ppo do_search=false data.train_files=data/big_math/train.parquet data.v...
2559675 0.23 /home/zhuofeng/.cursor-server/cli/servers/Stable-906121b8c0bdf041c14a15dac228e66ab5505260/server/nod...
2787909 0.18 /home/zhuofeng/.cargo/bin/zellij --server /run/user/1055/zellij/0.41.2/tmp
2626056 0.15 /home/zhuofeng/miniconda3/envs/r1/lib/python3.9/site-packages/ray/core/src/ray/gcs/gcs_server --log_...
Refer to the documentation on how to address the out of memory issue: https://docs.ray.io/en/latest/ray-core/scheduling/ray-oom-prevention.html. Consider provisioning more memory on this node or reducing task parallelism by requesting more CPUs per task. To adjust the kill threshold, set the environment variable `RAY_memory_usage_threshold` when starting Ray. To disable worker killing, set the environment variable `RAY_memory_monitor_refresh_ms` to zero.
Set the environment variable HYDRA_FULL_ERROR=1 for a complete stack trace.
```
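This doesn't address the root cause, but the knobs mentioned in the error message itself can be set before Ray starts; the threshold value here is an example, not a recommendation:

```python
import os

RAY_MEMORY_ENV = {
    "RAY_memory_usage_threshold": "0.98",  # raise the kill threshold (example value)
    "RAY_memory_monitor_refresh_ms": "0",  # or disable the OOM killer entirely
}

def apply_ray_memory_env(env: dict = RAY_MEMORY_ENV) -> None:
    """Must run before ray.init() / before the trainer launches Ray."""
    os.environ.update(env)
```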
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | closed | 2025-03-10T12:32:39Z | 2025-03-16T22:38:09Z | https://github.com/vllm-project/vllm/issues/14560 | [
"usage"
] | Zhuofeng-Li | 1 |
huggingface/datasets | machine-learning | 7,253 | Unable to upload a large dataset zip either from command line or UI | ### Describe the bug
Unable to upload a large dataset zip from the command line or the UI. The UI simply says "error". I am trying to upload a tar.gz file of 17 GB.
<img width="550" alt="image" src="https://github.com/user-attachments/assets/f9d29024-06c8-49c4-a109-0492cff79d34">
<img width="755" alt="image" src="https://github.com/user-attachments/assets/a8d4acda-7f02-4279-9c2d-b2e0282b4faa">
### Steps to reproduce the bug
Upload a large file
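A programmatic equivalent of the upload, for reproduction purposes; the repo id and file name are placeholders, and `huggingface_hub` must be installed and authenticated:

```python
ARCHIVE = "dataset.tar.gz"      # placeholder for the ~17 GB archive
REPO_ID = "my-user/my-dataset"  # placeholder repo id

if __name__ == "__main__":
    from huggingface_hub import HfApi  # third-party

    HfApi().upload_file(
        path_or_fileobj=ARCHIVE,
        path_in_repo=ARCHIVE,
        repo_id=REPO_ID,
        repo_type="dataset",
    )
```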
### Expected behavior
The file should upload without any issue.
### Environment info
None | open | 2024-10-26T13:17:06Z | 2024-10-26T13:17:06Z | https://github.com/huggingface/datasets/issues/7253 | [] | vakyansh | 0 |
hatchet-dev/hatchet | fastapi | 1,221 | Model mismatch for 403 error response on WorkflowCronDeleteWithResponse API method | It seems like the JSON403 object on the response for the WorkflowCronDeleteWithResponse API method returns a pointer to an `APIError` struct, while all the other errors I've encountered return a pointer to an `APIErrors` struct. I'm not sure if this is intentional or a bug.
 | open | 2025-01-26T01:00:15Z | 2025-01-26T01:00:15Z | https://github.com/hatchet-dev/hatchet/issues/1221 | [] | rob2244 | 0 |
pywinauto/pywinauto | automation | 548 | Problem with datePicker | In the application I need to click through with pywinauto, I have the Date Picker, which looks in the interface just like drop down box, which is empty on default, when you click the drop-down arrow, then calendar is showing up and you can select the date.
The controls detected by pywinauto look like this:
```
| | | | | | Pane - 'gcDateAndTime' (L230, T239, R615, B425)
| | | | | | ['gcDateAndTime', 'gcDateAndTimePane', 'Pane19']
| | | | | | child_window(title="gcDateAndTime", auto_id="gcDateAndTime", control_type="Pane")
| | | | | | |
| | | | | | | Pane - '' (L232, T260, R613, B423)
| | | | | | | ['10', 'Pane20']
| | | | | | | child_window(auto_id="searchDateAndTimeControl1", control_type="Pane")
| | | | | | | |
| | | | | | | | Pane - '' (L417, T279, R502, B299)
| | | | | | | | ['11', 'Pane21']
| | | | | | | | child_window(auto_id="datePickerTo", control_type="Pane")
| | | | | | | | |
| | | | | | | | | Pane - '' (L421, T282, R482, B296)
| | | | | | | | | ['12', 'Pane22']
| | | | | | | | | child_window(auto_id="4720152", control_type="Pane")
| | | | | | | | |
| | | | | | | | | Custom - '' (L421, T282, R482, B296)
| | | | | | | | | ['13', 'Custom29']
| | | | | | | |
| | | | | | | | Pane - '' (L320, T279, R405, B299)
| | | | | | | | ['14', 'Pane23']
| | | | | | | | child_window(auto_id="datePickerFrom", control_type="Pane")
| | | | | | | | |
| | | | | | | | | Pane - '' (L324, T282, R385, B296)
| | | | | | | | | ['15', 'Pane24']
| | | | | | | | | child_window(auto_id="5112538", control_type="Pane")
| | | | | | | | |
| | | | | | | | | Custom - '' (L324, T282, R385, B296)
| | | | | | | | | ['16', 'Custom30']`
```
The control also looks strange to me in inspect.exe. All three components (for example datePickerFrom and its children) have LegacyIAccessible.Name and LegacyIAccessible.Value set to "Friday, August 17, 2018" (today), and all of them have role "drop down" and state "focused,focusable". Popping up the calendar doesn't seem to be noticed by inspect.exe. When I set the date, the values and names change for the components.
Is it normal that it looks like this, or does it seem like a problem with access to the component? I am new to pywinauto, but I feel like some components are missing here.
I tried to use the methods click/click_input/invoke/set_text on these components (datePickerFrom/To and their children), but nothing worked. Is it possible to select the date with pywinauto in this case?
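One approach that sometimes works with UIA date pickers is to focus the control and send keystrokes; the `auto_id` below comes from the control tree above, while the date format is an assumption about the app's locale:

```python
DATE_KEYS = "08/17/2018"  # format is locale-dependent (assumption)

if __name__ == "__main__":
    from pywinauto import Application  # Windows-only, backend="uia"

    dlg = Application(backend="uia").connect(title_re=".*").top_window()
    picker = dlg.child_window(auto_id="datePickerFrom", control_type="Pane")
    picker.set_focus()
    picker.type_keys(DATE_KEYS)
```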
I'm using backend="uia", pywinauto ver.0.6.5, python 3.7.0
Please advise.
Thank you!
| closed | 2018-08-17T13:56:10Z | 2024-07-05T15:52:01Z | https://github.com/pywinauto/pywinauto/issues/548 | [
"question"
] | furby-eb | 7 |
pydata/xarray | numpy | 9,758 | perf improvement for interp: set `assume_sorted` automatically | ### What is your issue?
`assume_sorted` is False, so for vectorized interpolation across multiple dimensions, we end up lexsorting the coordinates all the time. For some reason, this can be quite slow with dask.
https://github.com/pydata/xarray/blob/6df8bd606a8a9a3378c7672c087e08ced00b2e15/xarray/core/dataset.py#L4081
Instead we should be able to do
```python
obj = self
# sort by slicing if we can
for coord in set(indexers) & set(self._indexes):
    # TODO: better check for PandasIndex
    if self.indexes[coord].is_monotonic_decreasing:
        obj = obj.isel({coord: slice(None, None, -1)})

# TODO: make None the new default
if assume_sorted is None:
    # TODO: dims without coordinates are fine too
    assume_sorted = all(
        self.indexes[coord].is_monotonic_increasing for coord in indexers
    )
```
I'll add a reproducible example later, but the problem I've been playing with gets much faster for graph construction:
<img width="642" alt="image" src="https://github.com/user-attachments/assets/4badec3c-4672-4c08-bea9-ec3c507eaac6">
xref #6799
cc @mpiannucci @Illviljan
| open | 2024-11-09T05:44:46Z | 2024-11-09T18:50:14Z | https://github.com/pydata/xarray/issues/9758 | [
"topic-performance",
"topic-interpolation"
] | dcherian | 0 |
QingdaoU/OnlineJudge | django | 310 | File input/output fails | Submitting a program that uses file input/output results in RE on every test case, and I don't know why.
Example: problem page https://www.dreamoj.com/problem/1064
With the following code:
```cpp
#include <cstdio>
const int MaxN = 100005;
const int MaxSN = MaxN << 2;
const int BufferSize = 1 << 16;
char buffer[BufferSize];
char *head, *tail;
inline char nextChar() {
if (head == tail) {
int l = fread(buffer, 1, BufferSize, stdin);
tail = (head = buffer) + l;
}
return *head++;
}
inline int getint() {
char c;
while ((c = nextChar()) < '0' || c > '9')
;
int res = c - '0';
while ((c = nextChar()) >= '0' && c <= '9') res = res * 10 + c - '0';
return res;
}
int n;
int fa[MaxN];
struct halfEdge {
int v;
halfEdge *next;
};
halfEdge adj_pool[MaxN], *adj_tail = adj_pool;
halfEdge *adj[MaxN];
inline void addEdge(const int &u, const int &v) {
adj_tail->v = v, adj_tail->next = adj[u];
adj[u] = adj_tail++;
}
int size[MaxN];
int son[MaxN];
int dfn[MaxN], dfsCur = 0;
int bel[MaxN];
int dfs1(const int &u) {
size[u] = 1;
for (halfEdge *e = adj[u]; e; e = e->next) {
int w = dfs1(e->v);
size[u] += w;
if (w > size[son[u]])
son[u] = e->v;
}
return size[u];
}
void dfs2(const int &u) {
dfn[u] = ++dfsCur;
if (int v = son[u])
bel[v] = bel[u], dfs2(v);
for (halfEdge *e = adj[u]; e; e = e->next)
if (!dfn[e->v])
dfs2(bel[e->v] = e->v);
}
struct seg_info {
int l, r;
int sum, cov;
inline void tag_cover(const int &w) {
cov = w;
sum = (r - l + 1) * w;
}
};
seg_info seg[MaxSN];
inline void seg_update(const int &p) { seg[p].sum = seg[p << 1].sum + seg[p << 1 | 1].sum; }
inline void seg_tag_down(const int &p) {
if (~seg[p].cov) {
seg[p << 1 | 0].tag_cover(seg[p].cov);
seg[p << 1 | 1].tag_cover(seg[p].cov);
seg[p].cov = -1;
}
}
void seg_build(const int &p, const int &pL, const int &pR) {
seg[p].cov = -1;
seg[p].l = pL, seg[p].r = pR;
if (pL == pR) {
seg[p].sum = 1;
return;
}
int pM = pL + pR >> 1;
seg_build(p << 1 | 0, pL, pM);
seg_build(p << 1 | 1, pM + 1, pR);
seg_update(p);
}
int res;
void seg_cover(const int &p, const int &qL, const int &qR, const int &w) {
int pL = seg[p].l, pR = seg[p].r;
if (qL <= pL && qR >= pR) {
res += seg[p].sum;
seg[p].tag_cover(w);
return;
}
seg_tag_down(p);
int pM = pL + pR >> 1;
if (qL <= pM)
seg_cover(p << 1 | 0, qL, qR, w);
if (qR > pM)
seg_cover(p << 1 | 1, qL, qR, w);
seg_update(p);
}
int main() {
freopen("software.in","r",stdin);
freopen("software.ans","w",stdout);
n = getint();
for (int u = 2; u <= n; ++u) {
fa[u] = getint() + 1;
addEdge(fa[u], u);
}
bel[1] = 1;
dfs1(1), dfs2(1);
seg_build(1, 1, n);
int q = getint();
while (q--) {
char type;
while ((type = nextChar()) != 'i' && type != 'u')
;
if (type == 'i') {
int u = getint() + 1;
res = 0;
while (bel[u] != 1) {
seg_cover(1, dfn[bel[u]], dfn[u], 0);
u = fa[bel[u]];
}
seg_cover(1, 1, dfn[u], 0);
printf("%d\n", res);
} else {
int u = getint() + 1;
res = 0;
seg_cover(1, dfn[u], dfn[u] + size[u] - 1, 1);
printf("%d\n", size[u] - res);
}
}
return 0;
}
```
submitting this code results in RE.
| closed | 2020-07-27T08:40:24Z | 2020-08-06T09:46:36Z | https://github.com/QingdaoU/OnlineJudge/issues/310 | [] | luosiwei-cmd | 1 |
graphql-python/graphene-django | django | 914 | Cannot order by two fields using django filter | Hi,
I am using `DjangoFilterConnectionField` in my project like this:
`all_sessions = DjangoFilterConnectionField(SessionNode, filterset_class=AgendaFilter)`
`SessionNode` is created based on the `Session` model in my Django application. Now, I would like to be able to order these sessions by two fields: `start_date` and `start_time`.
To achieve that I've created the following filter:
```python
class AgendaFilter(FilterSet):
class Meta:
model = Session
exclude = []
order_by = OrderingFilter(
fields=(
("start_date", "start_date"),
("start_time", "start_time")
)
)
```
When I filter sessions by only one field using `orderBy`, the query results are ordered correctly as expected. When I try to use both fields in the filter (shown below), the results returned are not ordered according to either of them:
```graphql
{
allSessions(orderBy: "[start_date, start_time]") {
edges {
node {
id
startDate
startTime
}
}
}
}
```
I've tried different ways of passing the two fields to `orderBy`, but none of them worked for me. How can I correctly order by `start_date` and then by `start_time` in one query? According to the [graphene documentation](https://docs.graphene-python.org/projects/django/en/latest/filtering/), this is possible:
> Ordering
> You can use OrderFilter to define how you want your returned results to be ordered.
>
> Extend the tuple of fields if you want to order by more than one field.
Is this a bug in graphene or am I doing something wrong?
| open | 2020-04-01T09:47:37Z | 2020-06-29T16:26:07Z | https://github.com/graphql-python/graphene-django/issues/914 | [] | martasd | 1 |
huggingface/datasets | pytorch | 6,644 | Support fsspec 2023.12 | Support fsspec 2023.12 by handling previous and new glob behavior. | closed | 2024-02-07T12:44:39Z | 2024-02-29T15:12:18Z | https://github.com/huggingface/datasets/issues/6644 | [
"enhancement"
] | albertvillanova | 1 |
biolab/orange3 | numpy | 6,451 | Save Data to more file format (and databases) |
**What's your use case?**
I would like a better format for my saved data, something that is fast and where fields are well typed (in a CSV, they are not). Some of the formats I use or enjoy the most:
- hyper/qvx for Tableau and Qlik
- DuckDB, an open-source embedded database that is quite fast (by the way, very nice project)
- several databases (such as MonetDB/Snowflake/Apache Impala...) that could be addressed through bulk load and/or ODBC
- JSON/XML
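To illustrate the DuckDB part of the request, here is a minimal sketch of the kind of export a Save Data widget could perform; the table layout stands in for an Orange data table, and nothing here is Orange API:

```python
ROWS = [(1, "setosa"), (2, "versicolor")]  # stand-in for an Orange data table

def insert_sql(table: str, width: int) -> str:
    """Build a parameterized INSERT statement for `width` columns."""
    return f"INSERT INTO {table} VALUES ({', '.join('?' * width)})"

if __name__ == "__main__":
    import duckdb  # third-party embedded database

    con = duckdb.connect("orange_export.duckdb")
    con.execute("CREATE TABLE iris (id INTEGER, species TEXT)")
    con.executemany(insert_sql("iris", 2), ROWS)
```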
**What's your proposed solution?**
Well, to support more file formats for saved data.
**Are there any alternative solutions?**
I don't think so?
| closed | 2023-05-23T19:07:54Z | 2023-05-26T08:15:43Z | https://github.com/biolab/orange3/issues/6451 | [] | simonaubertbd | 1 |
PaddlePaddle/PaddleHub | nlp | 1,417 | Asking about a multi-class text classification problem, paddle 2.0 | closed | 2021-05-19T06:16:49Z | 2021-05-19T06:20:39Z | https://github.com/PaddlePaddle/PaddleHub/issues/1417 | [] | 1205469665 | 0 |
keras-team/keras | machine-learning | 20,675 | Keras API reference has not been updated yet | Even though Keras 3.7.0 has been released, it seems the API reference has not yet been updated.
For example, I couldn't find the CELU activation function listed on [the activations page](https://keras.io/api/layers/activations/).
Please feel free to let me know if I have misunderstood something.
Thank you! | closed | 2024-12-20T14:14:30Z | 2024-12-22T05:19:07Z | https://github.com/keras-team/keras/issues/20675 | [] | shashaka | 2 |
PaddlePaddle/ERNIE | nlp | 494 | When using ernie-gen for abstractive summarization on the CNN corpus, launching `run_seq2seq.sh` reports an error | When using ernie-gen for abstractive summarization on the CNN corpus, launching `run_seq2seq.sh` reports the following error:
```
InvalidArgumentError: Broadcast dimension mismatch. Operands could not be broadcast together with the shape of X = [8, 20, 768] and the shape of Y = [0, 20, 768]. Received [8] in X is not equal to [0] in Y.
```
Code at the error location:
```python
def _gen_input(self, emb_ids, input_mask):
emb_out = None
for emb_name, emb_id in emb_ids.items():
emb = fluid.layers.embedding(
input=emb_id,
size=[self._emb_vocab_size[emb_name], self._emb_size],
dtype=self._emb_dtype,
param_attr=fluid.ParamAttr(
name=emb_name, initializer=self._param_initializer))
logging.info("************************_gen_input_emb:"+emb_name+"******************************")
logging.info(emb.shape)
if emb_out:
logging.info(emb_out.shape)
emb_out = emb_out + emb if emb_out else emb
```
File:
`ernie-gen/model/ernie.py` | closed | 2020-06-12T04:04:12Z | 2020-06-12T14:03:20Z | https://github.com/PaddlePaddle/ERNIE/issues/494 | [] | cedar33 | 1 |
SYSTRAN/faster-whisper | deep-learning | 296 | Passing in an audio file to transcribe() | I am having hard time passing a wav file to a model through CLI argument.
@guillaumekln could you help me troubleshoot?
The file name is taken in by an argument flag and then passed to transcribe() like this:
segments, _ = audio_model.transcribe(audio_file)
File "/home/schoonover/.local/lib/python3.10/site-packages/faster_whisper/transcribe.py", line 239, in transcribe
audio = decode_audio(audio, sampling_rate=sampling_rate)
File "/home/schoonover/.local/lib/python3.10/site-packages/faster_whisper/audio.py", line 45, in decode_audio
with av.open(input_file, metadata_errors="ignore") as container:
File "av/container/core.pyx", line 401, in av.container.core.open
File "av/container/core.pyx", line 246, in av.container.core.Container.__cinit__
File "av/container/pyio.pyx", line 32, in av.container.pyio.PyIOFile.__cinit__
ValueError: I/O operation on closed file
| closed | 2023-06-12T18:16:18Z | 2023-06-22T14:29:31Z | https://github.com/SYSTRAN/faster-whisper/issues/296 | [] | arschoon | 3 |
ydataai/ydata-profiling | data-science | 1,335 | Reproduction tab doesn't correctly indicate version used | ### Current Behaviour

### Expected Behaviour
Version reported as 4.1.2, the version used.
### Data Description
N/A
### Code that reproduces the bug
_No response_
### pandas-profiling version
v4.1.2
### Dependencies
```Text
N/A
```
### OS
Linux
### Checklist
- [X] There is not yet another bug report for this issue in the [issue tracker](https://github.com/ydataai/pandas-profiling/issues)
- [X] The problem is reproducible from this bug report. [This guide](http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports) can help to craft a minimal bug report.
- [X] The issue has not been resolved by the entries listed under [Common Issues](https://pandas-profiling.ydata.ai/docs/master/pages/support_contrib/common_issues.html). | closed | 2023-05-19T14:45:48Z | 2023-09-19T02:47:36Z | https://github.com/ydataai/ydata-profiling/issues/1335 | [
"information requested ❔"
] | gdevenyi | 2 |
microsoft/nlp-recipes | nlp | 422 | [BUG] Error in test deep dive bidaf q&a due to config file | ### Description
We are using a config file to define the parameters: https://github.com/microsoft/nlp/blob/staging/examples/question_answering/bidaf_config.json
In [the notebook](https://github.com/microsoft/nlp/blob/staging/examples/question_answering/bidaf_aml_deep_dive.ipynb) one of the parameters is `NUM_EPOCHS` that is used in the tests to reduce the computation time, this parameter is never used because it is shadowed in the config file.
### How do we replicate the bug?
### Expected behavior (i.e. solution)
<!--- For example: -->
<!--- * The tests for the timer should pass successfully. -->
We need to either remove or limit the config file and be able to add parameters programmatically
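A hedged sketch of that idea, assuming a hypothetical `load_params` helper (not the repo's actual API): explicitly passed parameters override the JSON config, so a test-time `NUM_EPOCHS` can no longer be shadowed.

```python
import json

def load_params(config_path, **overrides):
    """Read the JSON config, then apply programmatic overrides on top."""
    with open(config_path) as f:
        params = json.load(f)
    # Anything passed explicitly (e.g. num_epochs=1 in tests) wins over the file.
    params.update({k: v for k, v in overrides.items() if v is not None})
    return params
```

With this shape, the notebook could keep its defaults in `bidaf_config.json` while tests pass a small epoch count explicitly.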
### Other Comments
| closed | 2019-09-25T12:32:36Z | 2019-09-25T14:46:56Z | https://github.com/microsoft/nlp-recipes/issues/422 | [
"bug"
] | miguelgfierro | 1 |
coqui-ai/TTS | pytorch | 3,282 | [Feature request] In non-English models stress could be assigned incorrectly | Please fix https://github.com/coqui-ai/TTS/issues/3039
The problem persists, and because of it normal, correct use is not possible. It also seems to cut off the phrase at the end of each sentence, which results in jerky reading. | closed | 2023-11-21T20:52:57Z | 2024-12-05T10:24:31Z | https://github.com/coqui-ai/TTS/issues/3282 | [
"wontfix",
"feature request"
] | DmitryVN | 9 |
ploomber/ploomber | jupyter | 442 | Detect when tests create files in the current working directory | Many tests have to create files to work (e.g., create a `pipeline.yaml` file). To isolate tests, we have a `tmp_directory` fixture that creates a temporary directory, moves the test to that directory, runs the test, cleans up the directory and goes back to the original working directory.
However, if a new test is included and the author forgets to include the `tmp_directory` fixture, and creates some files, it may contaminate the environment and break other tests.
I think a good way to solve it is to have a pytest fixture that runs every time and checks that the current test did not create files outside a temp directory.
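A minimal stdlib sketch of such a check, written here as a context manager (the name is made up); in practice it would be wrapped in an `autouse=True` pytest fixture that snapshots the working directory before and after each test:

```python
import os
from contextlib import contextmanager

@contextmanager
def assert_no_new_files(path="."):
    """Fail if the wrapped block leaves new entries in `path`."""
    before = set(os.listdir(path))
    try:
        yield
    finally:
        # Compare the directory contents after the block against the snapshot.
        leaked = set(os.listdir(path)) - before
        if leaked:
            raise AssertionError(
                f"test created files outside a temp directory: {sorted(leaked)}"
            )
```

Any test that forgets the `tmp_directory` fixture and writes, say, a `pipeline.yaml` into the repo root would then fail loudly instead of contaminating later tests.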
Another alternative is to have an `autouse=True` fixture that always creates a temporary directory, but this might have a performance impact (although I don't think this will be important), and it might break tests that need to run in certain locations | closed | 2021-12-17T21:43:04Z | 2023-06-21T21:40:15Z | https://github.com/ploomber/ploomber/issues/442 | [] | edublancas | 2 |
charlesq34/pointnet | tensorflow | 130 | Ask for semantic segmentation dataset | Hi Charles,
Thank you for sharing your code.
Could you share your scene semantic segmentation dataset? | open | 2018-08-22T14:26:57Z | 2018-08-22T14:26:57Z | https://github.com/charlesq34/pointnet/issues/130 | [] | minhncsocial | 0 |
sinaptik-ai/pandas-ai | data-science | 1,011 | pandasai 2.0 broke `modin` support | ### 🐛 Describe the bug
Since #657, each module of pandasai **must** import `pandasai.pandas` as `pd` to make `pandasai` compatible with `modin`. | closed | 2024-03-08T23:03:58Z | 2024-03-08T23:48:34Z | https://github.com/sinaptik-ai/pandas-ai/issues/1011 | [] | mspronesti | 0 |
wagtail/wagtail | django | 12,241 | Admin UI performance testing & benchmark | We have [Admin UI performance improvements #80](https://github.com/wagtail/roadmap/issues/80) on the Wagtail roadmap, which is currently very open-ended. We need to do a benchmark of the Wagtail admin interface so we can plan possible improvements from a more curated backlog – and have a way to better quantify the impact of known issues.
In [RFC 101](https://github.com/wagtail/rfcs/pull/101), we’re proposing to do this benchmark in time for Wagtail 6.3\*, with the improvements scheduled for v6.4\* (February 2025).
## Scope
To be confirmed. Most likely using a tiered approach, where we’d do more manual testing for high-value user flows, automation only elsewhere. See [wagtail-tooling](https://github.com/thibaudcolas/wagtail-tooling) for examples of scripted testing of the admin UI, and [Wagtail | 5.1 UI overview](https://docs.google.com/spreadsheets/d/1FMSA_BI3ZvkeAvuaIL2QtqRTgMNwz_vhfKBeyx2Onnk/edit?gid=1962441802#gid=1962441802) for a recent recording of all admin functionality.
## Methodology
To be confirmed. Most likely:
- Manual testing on bakerydemo
- Automated testing with Wagtail’s test suite (same as existing integration tests)
- CI setup: manual trigger
- Run once before RC release and final release
- Backend package to collect backend metrics (number of queries)
- A suite of tools to run manual checks with (DevTools runtime Performance panel?)
- Lighthouse to run automated checks across a wider group of views
- TBC tool set up to run ongoing benchmark tests (Lighthouse CI?)
## Reporting
To be confirmed. Possible options:
- CI setup
- Spreadsheet
- GitHub issues with labels
- Github Project board (with issues)
## Related work
In Wagtail:
- [WCAG 2.2 AAA* audit of the Wagtail admin – Nov 2023 #11180](https://github.com/wagtail/wagtail/discussions/11180)
- [RFC 78: Adopt Stimulus](https://github.com/wagtail/rfcs/blob/main/text/078-adopt-stimulus-js.md)
- [CSP compatibility issues #1288](https://github.com/wagtail/wagtail/issues/1288)
- [Remove support for Safari 15 #11257](https://github.com/wagtail/wagtail/issues/11257)
Other projects:
- [django-asv](https://github.com/django/django-asv) – Python benchmarks for Django over time, done with airspeed velocity
- [wagtail-bakerydemo-archive](https://github.com/thibaudcolas/wagtail-bakerydemo-archive) to test past versions of the admin UI.
- [DEP 84: Rejuvenate form media #84](https://github.com/django/deps/pull/84)
## Tasks
- Planning
- [x] Discuss one-off audit / tests vs. setup of repeatable benchmarks: one-off audit in depth,
- [x] Plan the auditing / testing scope: bakerydemo manually and in CI
- [x] Tooling review: Lighthouse CI tentatively
- [x] Plan auditing / benchmarking methodology
- [x] Metrics to record: FCP, TBT, page weight, Lighthouse performance score, number of DOM nodes
- [x] Tentative for manual testing: Page energy usage (Firefox energy profiler)
- [x] Manual testing: FPS, memory usage, event listener leaks, DOM elements
- [x] Plan reporting format: tool-dependent for benchmark, Google Docs or Sheets for manual testing. Review & open issues
- [ ] Review Django-aware packages to collect relevant metrics (DDT number of queries to render the view?) @laymonage
- [x] @thibaudcolas prototype Lighthouse CI setup
- Manual testing
- [x] @thibaudcolas Go through common interactions on target pages / component with a performance profiler opened
- [x] @thibaudcolas Metrics to observe: FPS, memory usage, event listener leaks, DOM elements
- [x] @thibaudcolas Tentative: Page energy usage (Firefox energy profiler)
- Benchmarking
- [ ] (Consider how this might be reusable for general-purpose integration tests, for example with a crawler checking for error responses)
- [ ] Set up manual CI job with bakerydemo test site, ideally with option to pass a specific branch to check out
- [x] Figure out where to store CI-level benchmark runs data
- [ ] @thibaudcolas Decide and configure which Lighthouse performance audits to set as pass/fail
- [ ] @thibaudcolas Decide and configure appropriate performance budgets for Lighthouse-collected metrics. Proposed: FCP, TBT, page weight
- Reporting
- [x] Collate all identified issues separately from existing open issues
- [x] Triage / connect with open issues
- [ ] @thibaudcolas Impact-effort matrix
- [x] Update [Admin UI performance improvements #80](https://github.com/wagtail/roadmap/issues/80)
## Working on this
Assigning this provisionally to myself and @laymonage, but we could use support from others. Either to test more parts of the Wagtail admin, or incorporate better methodologies in our testing. Or for the initial planning.
| closed | 2024-08-19T11:38:10Z | 2025-01-02T13:18:48Z | https://github.com/wagtail/wagtail/issues/12241 | [
"Documentation",
"🚀 Performance"
] | thibaudcolas | 2 |
ipython/ipython | jupyter | 14,311 | Move backend mapping to Matplotlib | I wanted to draw your attention to matplotlib/matplotlib#27663, about moving the Matplotlib backend mappings out of IPython and into Matplotlib.
The primary use case is to support Matplotlib widgets (`ipympl` and `matplotlib-inline`) registering themselves as Matplotlib backends without requiring additional code in IPython and/or Matplotlib. The secondary use case is to support backends in IPython using Matplotlib's `module://name.of.the.backend` syntax, e.g.
```
%matplotlib module://mplcairo.backend
```
which one can already do using `matplotlib.use(...)` but not directly via the `%matplotlib` magic.
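For illustration only, a hypothetical sketch of the pass-through the magic would need: known aliases keep resolving through the table, while anything spelled `module://…` is handed to `matplotlib.use` untouched (the table and function are not IPython's actual code):

```python
# Illustrative subset of an alias table; not IPython's real mapping.
ALIASES = {
    "inline": "module://matplotlib_inline.backend_inline",
    "qt": "qtagg",
}

def resolve_backend(name):
    """Map a %matplotlib argument to something matplotlib.use() accepts."""
    if name.startswith("module://"):
        return name  # pass through verbatim
    return ALIASES.get(name, name)
```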
Whilst doing this it seems sensible to bring all of the backend registering and mapping together in one place, and that should be Matplotlib rather than IPython. I am not sure how easy (or even possible!) it will be to remove all the related hard-coded stuff in IPython, but I am willing to start and see how it goes. | closed | 2024-01-30T12:05:56Z | 2024-04-12T12:39:34Z | https://github.com/ipython/ipython/issues/14311 | [
"matplotlib",
"magics"
] | ianthomas23 | 4 |
ultralytics/ultralytics | python | 19,803 | [Inferencing] How to initialize model before .predict() | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
I'm using a simple model.predict() command to do inference.
However, I'm using multiprocessing to split the dataset and run inference concurrently.
```
def process_safe_predict(batch_size, image_path, output_file):
logger = get_thread_logger(output_file) # My custom logger for each process
# Process results
results = global_model.predict(image_path, imgsz=480, stream=True, batch=batch_size)
total_time = 0
count = 0
for i, result in enumerate(results):
# Extract inference time
inference_time = result.speed["inference"] # In milliseconds (ms)
preprocess_time = result.speed["preprocess"] # Preprocessing time (ms)
postprocess_time = result.speed["postprocess"] # Postprocessing time (ms)
total_time += inference_time
count += 1
logger.info(f"Image {i}: \nPreprocess time {preprocess_time} \nInference time {inference_time} ms\nPostprocess time {postprocess_time} ms")
logger.info(f"Average Inferencing Time: {(total_time / count):.2f} ms")
```
I launch this function for each process
```
process = multiprocessing.Process(target=process_safe_predict, args=(args.batch_size, output_folder+f'/part_{i+1}', inference_file))
```
I noticed that the command model.predict(...) initializes and fetches the actual model in `/engine/model.py`
```
if not self.predictor:
self.predictor = (predictor or self._smart_load("predictor"))(overrides=args, _callbacks=self.callbacks)
self.predictor.setup_model(model=self.model, verbose=is_cli)
else: # only update args if predictor is already setup
self.predictor.args = get_cfg(self.predictor.args, args)
if "project" in args or "name" in args:
self.predictor.save_dir = get_save_dir(self.predictor.args)
if prompts and hasattr(self.predictor, "set_prompts"): # for SAM-type models
self.predictor.set_prompts(prompts)
return self.predictor.predict_cli(source=source) if is_cli else self.predictor(source=source, stream=stream)
```
Is there any way I can load the model before calling model.predict()?
I would like to load it first so that processes can share the same model in memory to do inference
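One option is to force initialization with a single warm-up `model.predict()` in the parent before starting the processes; note, though, that CUDA state cannot be safely shared across forked children, so genuine in-memory sharing only applies to CPU-side objects under the `fork` start method. The stand-in sketch below (stdlib only; the dict plays the role of the loaded model) illustrates that load-once-then-fork pattern:

```python
import multiprocessing as mp
import os

# Stand-in for an expensive-to-load model, built once in the parent at import time.
model = {"loader_pid": os.getpid(), "scale": 2}

def worker(items, queue):
    # Under the "fork" start method the child inherits `model` copy-on-write,
    # so the load is not repeated in every process.
    queue.put([(model["loader_pid"], x * model["scale"]) for x in items])

def run_forked(chunks):
    """Run one forked process per chunk and collect their results."""
    ctx = mp.get_context("fork")  # POSIX only; "spawn" would re-load per process
    queue = ctx.Queue()
    procs = [ctx.Process(target=worker, args=(chunk, queue)) for chunk in chunks]
    for p in procs:
        p.start()
    results = [queue.get() for _ in procs]  # drain before join to avoid deadlock
    for p in procs:
        p.join()
    return results
```

For GPU inference it is usually simpler to keep one process per GPU (or per model replica) and feed it chunks over a queue, rather than trying to share a single CUDA model across forks.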
### Additional
_No response_ | open | 2025-03-20T18:32:26Z | 2025-03-21T00:06:25Z | https://github.com/ultralytics/ultralytics/issues/19803 | [
"question",
"detect"
] | longtran1904 | 2 |
unionai-oss/pandera | pandas | 1,359 | pydantic validation to raise ValidationError instead of ValueError | ### Is your feature request related to a problem? Please describe.
The current Pandera validation error messages differ in structure from Pydantic, which makes it challenging for users, especially those using Pandera and Pydantic, to quickly identify the field causing the error.
### Describe the solution you'd like
I suggest aligning Pandera's validation error with Pydantic's structure. Specifically, include the field name inside the "loc" as the second value of the tuple, similar to Pydantic. This would enhance consistency and user-friendliness.
**Pydantic example (contains field name inside "loc")**
```python
RequestValidationError(model='Request', errors=[{'loc': ('body', 'account_number'), 'msg': ..., 'type': ...}])
```
**Pandera example (missing field name on "loc")**
```python
RequestValidationError(model='Request', errors=[{'loc': ('body',), 'msg': ..., 'type': ...}])
```
**Pandera example (feature request)**
```python
RequestValidationError(model='Request', errors=[{'loc': ('body', 'account_number'), 'msg': ..., 'type': ...}])
```
### Additional context
I'm using FastAPI with Pydantic and Pandera. For Pandera, I'm mainly using DataFrameModels.
**Example**
```python
...
@router.post("/register_customers")
def register_customers(customers: DataFrame[Customers]):
...
```
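The middleware workaround mentioned under "alternatives" below could be sketched roughly like this; the error-message format and the regex are assumptions, since the exact wording Pandera emits varies:

```python
import re

def patch_loc_with_field(errors):
    """Best-effort: append the column name parsed from `msg` to a bare ('body',) loc."""
    patched = []
    for err in errors:
        loc = tuple(err["loc"])
        if len(loc) == 1:
            match = re.search(r"column '([^']+)'", err["msg"])
            if match:
                loc = loc + (match.group(1),)
        patched.append({**err, "loc": loc})
    return patched
```

Having Pandera populate `loc` natively would remove the need for this message parsing entirely.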
### Describe alternatives you've considered
As of now I'm using a middleware that identifies when "loc" is missing the field name and extracts it from the msg. | open | 2023-09-30T20:06:56Z | 2023-10-01T21:44:05Z | https://github.com/unionai-oss/pandera/issues/1359 | [
"enhancement"
] | WilianZilv | 2 |
SciTools/cartopy | matplotlib | 1,865 | Incorrect link at the top of the cartopy Gallery | The _getting started_ link at the top of the [Gallery](https://scitools.org.uk/cartopy/docs/latest/gallery/index.html), in
_For more examples, tutorials, and guides on how to use Cartopy, see the [getting started](https://docs.python.org/3/library/unittest.mock-examples.html#getting-started) section_.
is obviously wrong. I guess it should be a link to https://scitools.org.uk/cartopy/docs/latest/index.html#getting-started | closed | 2021-09-15T13:45:46Z | 2021-09-15T15:29:03Z | https://github.com/SciTools/cartopy/issues/1865 | [] | jypeter | 3 |
deepfakes/faceswap | machine-learning | 630 | ERROR Caught exception in thread: 'training_0' | When I run this bot on Ubuntu 16.04 to train the model, some problems occur. Installing different versions of TensorFlow doesn't help either. I have no idea how to solve it. Could you give some suggestions? The problems are as follows:
Using TensorFlow backend.
02/27/2019 19:36:19 INFO Model A Directory: /home/csy/Documents/faceswap/data/trump
02/27/2019 19:36:19 INFO Model B Directory: /home/csy/Documents/faceswap/data/cage
02/27/2019 19:36:19 INFO Training data directory: /home/csy/Documents/faceswap/models
02/27/2019 19:36:19 INFO ===============================================
02/27/2019 19:36:19 INFO - Starting -
02/27/2019 19:36:19 INFO - Press 'ENTER' to save and quit -
02/27/2019 19:36:19 INFO - Press 'S' to save model weights immediately -
02/27/2019 19:36:19 INFO ===============================================
02/27/2019 19:36:20 INFO Loading data, this may take a while...
02/27/2019 19:36:20 INFO Loading Model from Original plugin...
02/27/2019 19:36:23 INFO Loading config: '/home/csy/Documents/faceswap/config/train.ini'
02/27/2019 19:36:23 WARNING No existing state file found. Generating.
02/27/2019 19:36:23 WARNING Failed loading existing training data. Generating new models
02/27/2019 19:36:23 INFO Loading Trainer from Original plugin...
02/27/2019 19:36:24 INFO Enabled TensorBoard Logging
02/27/2019 19:36:26 CRITICAL Error caught! Exiting...
02/27/2019 19:36:26 ERROR Caught exception in thread: 'training_0'
You are using pip version 10.0.1, however version 19.0.3 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
02/27/2019 19:36:29 ERROR Got Exception on main handler:
Traceback (most recent call last):
File "/home/csy/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1323, in _do_call
return fn(*args)
File "/home/csy/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1302, in _run_fn
status, run_metadata)
File "/home/csy/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/errors_impl.py", line 473, in __exit__
c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.FailedPreconditionError: Attempting to use uninitialized value Adam/iterations
[[Node: Adam/iterations/read = Identity[T=DT_INT64, _class=["loc:@Adam/iterations"], _device="/job:localhost/replica:0/task:0/device:CPU:0"](Adam/iterations)]]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/csy/Documents/faceswap/lib/cli.py", line 90, in execute_script
process.process()
File "/home/csy/Documents/faceswap/scripts/train.py", line 97, in process
self.end_thread(thread, err)
File "/home/csy/Documents/faceswap/scripts/train.py", line 122, in end_thread
thread.join()
File "/home/csy/Documents/faceswap/lib/multithreading.py", line 179, in join
raise thread.err[1].with_traceback(thread.err[2])
File "/home/csy/Documents/faceswap/lib/multithreading.py", line 117, in run
self._target(*self._args, **self._kwargs)
File "/home/csy/Documents/faceswap/scripts/train.py", line 148, in training
raise err
File "/home/csy/Documents/faceswap/scripts/train.py", line 138, in training
self.run_training_cycle(model, trainer)
File "/home/csy/Documents/faceswap/scripts/train.py", line 210, in run_training_cycle
trainer.train_one_step(viewer, timelapse)
File "/home/csy/Documents/faceswap/plugins/train/trainer/_base.py", line 138, in train_one_step
loss[side] = batcher.train_one_batch(is_preview_iteration)
File "/home/csy/Documents/faceswap/plugins/train/trainer/_base.py", line 212, in train_one_batch
loss = self.model.predictors[self.side].train_on_batch(*batch)
File "/home/csy/anaconda3/lib/python3.6/site-packages/keras/engine/training.py", line 1217, in train_on_batch
outputs = self.train_function(ins)
File "/home/csy/anaconda3/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py", line 2721, in __call__
return self._legacy_call(inputs)
File "/home/csy/anaconda3/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py", line 2693, in _legacy_call
**self.session_kwargs)
File "/home/csy/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 889, in run
run_metadata_ptr)
File "/home/csy/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1120, in _run
feed_dict_tensor, options, run_metadata)
File "/home/csy/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1317, in _do_run
options, run_metadata)
File "/home/csy/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1336, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.FailedPreconditionError: Attempting to use uninitialized value Adam/iterations
[[Node: Adam/iterations/read = Identity[T=DT_INT64, _class=["loc:@Adam/iterations"], _device="/job:localhost/replica:0/task:0/device:CPU:0"](Adam/iterations)]]
Caused by op 'Adam/iterations/read', defined at:
File "/home/csy/anaconda3/lib/python3.6/threading.py", line 884, in _bootstrap
self._bootstrap_inner()
File "/home/csy/anaconda3/lib/python3.6/threading.py", line 916, in _bootstrap_inner
self.run()
File "/home/csy/Documents/faceswap/lib/multithreading.py", line 117, in run
self._target(*self._args, **self._kwargs)
File "/home/csy/Documents/faceswap/scripts/train.py", line 136, in training
model = self.load_model()
File "/home/csy/Documents/faceswap/scripts/train.py", line 162, in load_model
preview_scale=self.args.preview_scale)
File "/home/csy/Documents/faceswap/plugins/train/model/original.py", line 24, in __init__
super().__init__(*args, **kwargs)
File "/home/csy/Documents/faceswap/plugins/train/model/_base.py", line 75, in __init__
self.build()
File "/home/csy/Documents/faceswap/plugins/train/model/_base.py", line 124, in build
self.compile_predictors()
File "/home/csy/Documents/faceswap/plugins/train/model/_base.py", line 202, in compile_predictors
optimizer = Adam(lr=5e-5, beta_1=0.5, beta_2=0.999, clipnorm=1.0)
File "/home/csy/anaconda3/lib/python3.6/site-packages/keras/optimizers.py", line 462, in __init__
self.iterations = K.variable(0, dtype='int64', name='iterations')
File "/home/csy/anaconda3/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py", line 402, in variable
v = tf.Variable(value, dtype=tf.as_dtype(dtype), name=name)
File "/home/csy/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/variables.py", line 213, in __init__
constraint=constraint)
File "/home/csy/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/variables.py", line 356, in _init_from_args
self._snapshot = array_ops.identity(self._variable, name="read")
File "/home/csy/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/array_ops.py", line 125, in identity
return gen_array_ops.identity(input, name=name)
File "/home/csy/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/gen_array_ops.py", line 2071, in identity
"Identity", input=input, name=name)
File "/home/csy/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
op_def=op_def)
File "/home/csy/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 2956, in create_op
op_def=op_def)
File "/home/csy/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1470, in __init__
self._traceback = self._graph._extract_stack() # pylint: disable=protected-access
FailedPreconditionError (see above for traceback): Attempting to use uninitialized value Adam/iterations
[[Node: Adam/iterations/read = Identity[T=DT_INT64, _class=["loc:@Adam/iterations"], _device="/job:localhost/replica:0/task:0/device:CPU:0"](Adam/iterations)]]
02/27/2019 19:36:29 CRITICAL An unexpected crash has occurred. Crash report written to /home/csy/Documents/faceswap/crash_report.2019.02.27.193626340426.log. Please verify you are running the latest version of faceswap before reporting
terminate called without an active exception
Aborted (core dumped)
Running environment are:
python 3.6
cuda 8
cudnn 6
tensorflow-gpu 1.4.0
How can I fix this?
Thank you.
| closed | 2019-02-27T11:59:50Z | 2019-02-27T12:03:17Z | https://github.com/deepfakes/faceswap/issues/630 | [] | changyunke | 1 |
pallets-eco/flask-sqlalchemy | sqlalchemy | 1,318 | Type (typehint) error for `db.relationship` | ## Problem Description
The typehint of
```python
db.relationship("...", secondary=..., back_populates="...")
```
should be `sq_orm.Relationship[...]`, not `sq_orm.RelationshipProperty[...]`.
The typehint mismatch causes the manual annotation supported by `sqlalchemy` to fail:
<img width="830" alt="image" src="https://github.com/pallets-eco/flask-sqlalchemy/assets/32186723/c219e153-8d1e-492b-ba61-de6afaa22cd6">
## How to fix it
Go here:
https://github.com/pallets-eco/flask-sqlalchemy/blob/42a36a3cb604fd39d81d00b54ab3988bbd0ad184/src/flask_sqlalchemy/extension.py#L953-L963
Make this modification:
```diff
def relationship(
self, *args: t.Any, **kwargs: t.Any
- ) -> sa_orm.RelationshipProperty[t.Any]:
+ ) -> sa_orm.Relationship[t.Any]:
"""A :func:`sqlalchemy.orm.relationship` that applies this extension's
```
Things will get corrected.
It is also recommended to modify this place:
https://github.com/pallets-eco/flask-sqlalchemy/blob/42a36a3cb604fd39d81d00b54ab3988bbd0ad184/src/flask_sqlalchemy/extension.py#L977-L979
But the following place should **NOT** be changed, because it is consistent with `sq_orm`:
https://github.com/pallets-eco/flask-sqlalchemy/blob/42a36a3cb604fd39d81d00b54ab3988bbd0ad184/src/flask_sqlalchemy/extension.py#L965-L967
## Codes with typehint errors when using `flask-sqlalchemy`
```python
# -*- coding: UTF-8 -*-
try:
from typing import List
except ImportError:
from builtins import list as List
from flask_sqlalchemy import SQLAlchemy
import sqlalchemy as sa
from sqlalchemy.orm import Mapped, mapped_column
from sqlalchemy.orm import DeclarativeBase, MappedAsDataclass
class Base(DeclarativeBase, MappedAsDataclass):
"""The base class for creating SQLAlchemy models.
All mixins are defined in the mro list.
All metadata of are defined as attributes.
"""
db = SQLAlchemy(model_class=Base)
roles = db.Table(
"role_users",
sa.Column("user_id", sa.ForeignKey("user.id"), primary_key=True),
sa.Column("role_id", sa.ForeignKey("role.id"), primary_key=True),
)
class User(db.Model):
id: Mapped[int] = mapped_column(primary_key=True, init=False)
# Expression of type "RelationshipProperty[Any]" cannot be assigned to declared type "Mapped[List[Role]]"
# "RelationshipProperty[Any]" is incompatible with "Mapped[List[Role]]"Pylance[reportAssignmentType]
# (https://github.com/microsoft/pyright/blob/main/docs/configuration.md#reportAssignmentType)
roles: Mapped[List["Role"]] = db.relationship(
"Role", secondary=roles, back_populates="users", default_factory=list
)
class Role(db.Model):
id: Mapped[int] = mapped_column(primary_key=True, init=False)
# Expression of type "RelationshipProperty[Any]" cannot be assigned to declared type "Mapped[List[User]]"
# "RelationshipProperty[Any]" is incompatible with "Mapped[List[User]]"Pylance[reportAssignmentType]
# (https://github.com/microsoft/pyright/blob/main/docs/configuration.md#reportAssignmentType)
users: Mapped[List["User"]] = db.relationship(
"User", secondary=roles, back_populates="roles", default_factory=list
)
```
## Codes working perfectly if only using `sqlalchemy`
```python
# -*- coding: UTF-8 -*-
try:
from typing import List
except ImportError:
from builtins import list as List
import sqlalchemy as sa
from sqlalchemy.orm import Mapped, mapped_column, relationship
from sqlalchemy.orm import DeclarativeBase, MappedAsDataclass
class Base(DeclarativeBase, MappedAsDataclass):
"""The base class for creating SQLAlchemy models.
All mixins are defined in the mro list.
All metadata of are defined as attributes.
"""
roles = sa.Table(
"role_users",
Base.metadata,
sa.Column("user_id", sa.ForeignKey("user.id"), primary_key=True),
sa.Column("role_id", sa.ForeignKey("role.id"), primary_key=True),
)
class User(Base):
__tablename__ = "users"
id: Mapped[int] = mapped_column(primary_key=True, init=False)
roles: Mapped[List["Role"]] = relationship(
"Role", secondary=roles, back_populates="users", default_factory=list
)
class Role(Base):
__tablename__ = "roles"
id: Mapped[int] = mapped_column(primary_key=True, init=False)
users: Mapped[List["User"]] = relationship(
"User", secondary=roles, back_populates="roles", default_factory=list
)
```
Environment:
- Python version: `3.10.13`
- Flask-SQLAlchemy version: `3.1.1`
- SQLAlchemy version: `2.0.28`
| open | 2024-03-26T16:45:50Z | 2024-11-13T21:08:40Z | https://github.com/pallets-eco/flask-sqlalchemy/issues/1318 | [] | cainmagi | 5 |
dynaconf/dynaconf | fastapi | 308 | examples.md .env section - DYNACONF prefix missing? | In [examples.md under .env](https://github.com/rochacbruno/dynaconf/blob/44ac11b2cb95396af5646a2ac36ecedae5321dfd/docs/guides/examples.md#env) the environment variables do not have a prefix. Is the DYNACONF prefix missing or does this assume `ENVVAR_PREFIX=false` is defined? | closed | 2020-02-29T10:25:18Z | 2020-09-12T04:14:26Z | https://github.com/dynaconf/dynaconf/issues/308 | [
"question"
] | kshahar | 1 |
biolab/orange3 | scikit-learn | 6,996 | Scoring Sheet Viewer: Refactor | **What's wrong?**
_class_combo_changed (https://github.com/biolab/orange3/blob/master/Orange/widgets/visualize/owscoringsheetviewer.py#L446C9-L446C29) checks whether the class indeed changed and, if so, (indirectly) calls https://github.com/biolab/orange3/blob/master/Orange/widgets/visualize/owscoringsheetviewer.py#L459, which just negates some coefficients and subtracts the risks from 100. I don't think this is very safe.
Switching back and forth can easily go wrong. The widget should remember the values for one target and use them to compute those values for the shown target. I suspect this may be the reason for the failing test (see e.g. #6995).
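A minimal sketch of that safer shape (class and attribute names are hypothetical): keep the coefficients and risks for one canonical target immutable and derive the complementary view on demand, rather than negating in place on every combo change:

```python
class ScoringSheetValues:
    """Holds canonical values for target 0; the other binary target is derived."""

    def __init__(self, coefficients, risks):
        # Stored once and never mutated.
        self._coefficients = tuple(coefficients)
        self._risks = tuple(risks)

    def for_target(self, index):
        if index == 0:
            return list(self._coefficients), list(self._risks)
        # Complementary binary target: negate coefficients, mirror risks.
        return ([-c for c in self._coefficients],
                [100 - r for r in self._risks])
```

Because nothing is mutated, switching the combo back and forth any number of times always reproduces the same numbers.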
**How can we reproduce the problem?**
Test fails randomly, but apparently only on github CI, not locally.
| open | 2025-01-19T09:09:59Z | 2025-01-24T09:17:27Z | https://github.com/biolab/orange3/issues/6996 | [
"bug"
] | janezd | 0 |
AUTOMATIC1111/stable-diffusion-webui | pytorch | 16,006 | [Feature Request]: Stable Diffusion 3 Medium support | ### Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits
### What would your feature do ?
Support Stable Diffusion 3 Medium
https://huggingface.co/stabilityai/stable-diffusion-3-medium
### Proposed workflow
Just like other SD models
### Additional information
_No response_ | open | 2024-06-12T13:25:15Z | 2024-06-22T07:27:24Z | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/16006 | [
"enhancement"
] | bendikv | 44 |
thtrieu/darkflow | tensorflow | 471 | Bottleneck: GPU is mostly idling, then suddenly spikes. Superslow training | Hi, thank you for this repo. However, I'm suspicious. During training I get 22 steps a minute on my GeForce 1070 with batch size 16 and HD training images. I don't know if that is bad, but I'm suspicious because the GPU is mostly idling. It's at 0% all the time, but every 5 seconds it spikes to 70%, though only for a brief moment. It's almost as if the GPU does everything correctly in that instant, but since it's idling most of the time, I think there must be a pre-processing bottleneck somewhere. Perhaps some image transformation is slow? My system uses a new SSD, so the bottleneck is not the hard drive. Is this "idling" behaviour normal?
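One hypothetical way to confirm a pre-processing bottleneck is to time batch loading separately from the train step; if loading dominates, the GPU will only spike briefly whenever a batch finally arrives (the helper below is a generic sketch, not darkflow code):

```python
import time

def profile_loop(load_batch, train_step, n=10):
    """Return total seconds spent loading vs. training over n iterations."""
    load_s = train_s = 0.0
    for _ in range(n):
        t0 = time.perf_counter()
        batch = load_batch()      # CPU-side image reading / augmentation
        t1 = time.perf_counter()
        train_step(batch)         # GPU step
        t2 = time.perf_counter()
        load_s += t1 - t0
        train_s += t2 - t1
    return load_s, train_s
```

If `load_s` is several times `train_s`, the fix is usually to overlap loading with compute (prefetching in a background thread or process) rather than to tune the GPU side.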
| open | 2017-12-16T22:50:41Z | 2017-12-18T14:36:36Z | https://github.com/thtrieu/darkflow/issues/471 | [] | zungam | 2 |
vaexio/vaex | data-science | 1,475 | [BUG-REPORT] datetime64 objects save in feather/arrow format incorrectly when converted from pandas | **Description**
There appears to be an issue reloading a saved dataframe in arrow/feather format after converting a pandas dataframe containing a datetime64 column/index. This does not occur when saving as hdf5. The general workflow to reproduce the error is:
1. Create a pandas dataframe with a datetime64 column/index
2. Convert using `from_pandas`
3. Export using either `export_feather` or `export_arrow`
4. Use `open` on the file and perform any operation on the datetime64 column
A short script to reproduce the error is:
```python
import pandas as pd
import vaex as vx
df = pd.DataFrame({'float': [1.0],
'int': [1],
'datetime': [pd.Timestamp('20180310')],
'string': ['foo']})
new_df = vx.from_pandas(df)
new_df.export_feather('test.arrow')
new_df.export_hdf5('test.hdf5')
hdf5_df = vx.open('test.hdf5')
feather_df = vx.open('test.arrow')
hdf5_df.datetime.max() # this works fine
feather_df.datetime.max() # this is where the error is thrown
```
The stack trace is:
```bash
Traceback (most recent call last):
File "/mnt/c/Users/user/Documents/Python Scripts/finance/vaex_text.py", line 29, in <module>
print(feather_df.datetime.max())
File "/home/user/anaconda3/envs/py39/lib/python3.9/site-packages/vaex/expression.py", line 677, in max
return self.ds.max(**kwargs)
File "/home/user/anaconda3/envs/py39/lib/python3.9/site-packages/vaex/dataframe.py", line 1362, in max
return self._compute_agg('max', expression, binby, limits, shape, selection, delay, edges, progress, array_type=array_type)
File "/home/user/anaconda3/envs/py39/lib/python3.9/site-packages/vaex/dataframe.py", line 773, in _compute_agg
return self._delay(delay, var)
File "/home/user/anaconda3/envs/py39/lib/python3.9/site-packages/vaex/dataframe.py", line 1537, in _delay
self.execute()
File "/home/user/anaconda3/envs/py39/lib/python3.9/site-packages/vaex/dataframe.py", line 375, in execute
just_run(self.execute_async())
File "/home/user/anaconda3/envs/py39/lib/python3.9/site-packages/vaex/asyncio.py", line 35, in just_run
return loop.run_until_complete(coro)
File "/home/user/anaconda3/envs/py39/lib/python3.9/site-packages/nest_asyncio.py", line 70, in run_until_complete
return f.result()
File "/home/user/anaconda3/envs/py39/lib/python3.9/asyncio/futures.py", line 201, in result
raise self._exception
File "/home/user/anaconda3/envs/py39/lib/python3.9/asyncio/tasks.py", line 256, in __step
result = coro.send(None)
File "/home/user/anaconda3/envs/py39/lib/python3.9/site-packages/vaex/dataframe.py", line 380, in execute_async
await self.executor.execute_async()
File "/home/user/anaconda3/envs/py39/lib/python3.9/site-packages/vaex/execution.py", line 176, in execute_async
task._parts = [encoding.decode('task-part-cpu', spec, df=run.df) for i in range(self.thread_pool.nthreads)]
File "/home/user/anaconda3/envs/py39/lib/python3.9/site-packages/vaex/execution.py", line 176, in <listcomp>
task._parts = [encoding.decode('task-part-cpu', spec, df=run.df) for i in range(self.thread_pool.nthreads)]
File "/home/user/anaconda3/envs/py39/lib/python3.9/site-packages/vaex/encoding.py", line 449, in decode
decoded = self.registry[typename].decode(self, value, **kwargs)
File "/home/user/anaconda3/envs/py39/lib/python3.9/site-packages/vaex/cpu.py", line 38, in decode
return cls.decode(encoding, spec, df)
File "/home/user/anaconda3/envs/py39/lib/python3.9/site-packages/vaex/cpu.py", line 551, in decode
dtypes = encoding.decode_dict('dtype', spec['dtypes'])
File "/home/user/anaconda3/envs/py39/lib/python3.9/site-packages/vaex/encoding.py", line 469, in decode_dict
decoded = {key: self.registry[typename].decode(self, value, **kwargs) for key, value in values.items()}
File "/home/user/anaconda3/envs/py39/lib/python3.9/site-packages/vaex/encoding.py", line 469, in <dictcomp>
decoded = {key: self.registry[typename].decode(self, value, **kwargs) for key, value in values.items()}
File "/home/user/anaconda3/envs/py39/lib/python3.9/site-packages/vaex/encoding.py", line 247, in decode
return DataType(np.dtype(type_spec))
TypeError: data type 'timestamp[ns]' not understood
```
**Software information**
Vaex and pandas were installed using pip under Python 3.9.2. I am running on a Windows machine under WSL. I can reproduce on multiple operating systems if that would be helpful.
```bash
Ubuntu 20.04.1 LTS (GNU/Linux 4.19.128-microsoft-standard x86_64)
Python 3.9.2
Name: vaex
Version: 4.3.0
Summary: Out-of-Core DataFrames to visualize and explore big tabular datasets
Home-page: https://www.github.com/vaexio/vaex
Author: Maarten A. Breddels
Author-email: maartenbreddels@gmail.com
License: MIT
Location: /home/user/anaconda3/envs/py39/lib/python3.9/site-packages
Requires: vaex-core, vaex-server, vaex-hdf5, vaex-ml, vaex-astro, vaex-jupyter, vaex-viz
Required-by:
Name: pandas
Version: 1.3.1
Summary: Powerful data structures for data analysis, time series, and statistics
Home-page: https://pandas.pydata.org
Author: The Pandas Development Team
Author-email: pandas-dev@python.org
License: BSD-3-Clause
Location: /home/user/anaconda3/envs/py39/lib/python3.9/site-packages
Requires: python-dateutil, numpy, pytz
Required-by: xarray, vaex-core, bqplot, alpaca-trade-api
```
**Additional information**
I tried a few things, such as making a deep copy of the data and saving it as an hdf5, and opening and re-saving as an arrow/feather. I have not found a workaround other than to just use hdf5.
| closed | 2021-07-25T13:49:36Z | 2021-08-13T11:51:44Z | https://github.com/vaexio/vaex/issues/1475 | [
"bug"
] | Nicholas-Schaub | 1 |
ploomber/ploomber | jupyter | 644 | missing cloud API key causing verbose output | If the user has no API key, the same message is displayed many times:
```
(ploomber) Edu@MBP dev/cloud-demo (dev) » ploomber build --force
Loading pipeline...
No cloud API key was found
No cloud API key was found
Executing: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████| 12/12 [00:04<00:00, 2.40cell/s]
Building task 'fit': 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████| 4/4 [00:17<00:00, 4.32s/it]
name Ran? Elapsed (s) Percentage
-------- ------ ------------- ------------
get True 0.025908 0.502377
features True 0.058383 1.13209
join True 0.032068 0.621825
fit True 5.04072 97.7437
No cloud API key was found
(ploomber) Edu@MBP dev/cloud-demo (dev) »
```
I think that if the user does not have a key, the message should not be displayed, as they may think it's an error.
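One way the CLI could avoid the repetition is to emit the warning at most once per process. A minimal sketch (the `warn_missing_key` name is made up; this is not ploomber's actual code):

```python
import functools

@functools.lru_cache(maxsize=None)
def warn_missing_key() -> None:
    # lru_cache on a zero-argument function runs the body only on the first
    # call; every later call returns the cached None without printing again.
    print("No cloud API key was found")

warn_missing_key()  # prints once
warn_missing_key()  # silent
warn_missing_key()  # silent
```

On top of that, the check could simply be skipped when no key is configured, since the absence of a key is not an error.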
| closed | 2022-03-06T21:45:36Z | 2022-03-07T00:53:00Z | https://github.com/ploomber/ploomber/issues/644 | [] | edublancas | 0 |
ageitgey/face_recognition | machine-learning | 1,579 | Does locating the face first then encoding the face have higher accuracy than just encoding the face? | Since I noticed that there's this attribute `number_of_times_to_upsample` in the function `face_locations`, I wonder if locating the face first has a slower outcome but a higher accuracy?
e.g. locating the face first then encoding the face
```python
face_locations = face_recognition.face_locations(image,
number_of_times_to_upsample=2,
model='hog')
face_encodings_in_image = face_recognition.face_encodings(image,
known_face_locations=face_locations,
num_jitters=1,
model='small')
```
e.g. just encoding the face
```python
face_encodings_in_image = face_recognition.face_encodings(image,
                                                          num_jitters=1,
                                                          model='small')
``` | open | 2024-08-02T02:25:28Z | 2024-08-02T02:26:15Z | https://github.com/ageitgey/face_recognition/issues/1579 | [] | Ann5t | 0 |
blacklanternsecurity/bbot | automation | 2,067 | host_header module runs for an incredibly long time | **Describe the bug**
The host_header module waits 15+ minutes on a single operation
**Expected behavior**
Shouldn't take 15+ minutes to run
**BBOT Command**
Example: `bbot -p preset.yml -t targets.txt -o ~/scans/`
**OS, BBOT Installation Method + Version**
`OS: Arch Linux, installed via poetry shell, dev branch`
**BBOT Config**
```
config:
interactsh_server: redacted
interactsh_disable: false
interactsh_token: redacted
exclude_modules:
- bypass403
- columbus
- hunt
- iis_shortnames
- smuggler
- url_manipulation
- dastardly
flags:
- email-enum
- subdomain-enum
- web-thorough
modules:
- baddns
- badsecrets
- dotnetnuke
- gowitness
- httpx
- robots
- telerik
output_modules:
- csv
- json
- subdomains
- txt
```
**Logs**
```
[DBUG] host_header:
[DBUG] - host_header.handle_event(HTTP_RESPONSE("{'url': 'http://redacted.com/', 'timestamp': '2024-12-09T01:07:24.209677114Z',...", module=httpx, tags={'ip-89-44-80-26', 'in-s
cope', 'dir', 'status-302', 'http-title-object-moved'})) running for 16 minutes, 11 seconds:
``` | closed | 2024-12-09T01:29:59Z | 2025-02-13T00:10:23Z | https://github.com/blacklanternsecurity/bbot/issues/2067 | [
"bug",
"low priority",
"cant-reproduce"
] | aconite33 | 6 |
onnx/onnx | scikit-learn | 6,694 | _get_initializer_tensors() gets attribute tensors from functions instead of initializer tensors | # Bug Report
### Describe the bug
Reading the code in [external_data_helper.py](https://github.com/onnx/onnx/blob/0277a1f62550c0b9edc3e1016a50a42dc4c73cf1/onnx/external_data_helper.py#L254), it looks like there's an error:
```
def _get_initializer_tensors(onnx_model_proto: ModelProto) -> Iterable[TensorProto]:
"""Create an iterator of initializer tensors from ONNX model."""
yield from _get_initializer_tensors_from_graph(onnx_model_proto.graph)
for function in onnx_model_proto.functions:
yield from _get_attribute_tensors_from_graph(function) # <--- here
```
It seems like the last line of this function should be
```
yield from _get_initializer_tensors_from_graph(function)
``` | closed | 2025-02-06T17:39:07Z | 2025-03-06T03:36:17Z | https://github.com/onnx/onnx/issues/6694 | [] | lg8080 | 1 |
ivy-llc/ivy | numpy | 28,442 | [Bug]: Ivy tests use deprecated module `jax.config` | ### Bug Explanation
`ivy_tests/__init__.py` imports `jax.config`, which is deprecated as of February 16, 2024. Since `jax.config` no longer provides a `config` object, this breaks testing with the jax backend and produces the following error:
`AttributeError: module 'jax.config' has no attribute 'x64_enabled'`
### Steps to Reproduce Bug
1. Update jax to the latest version (0.4.25).
2. Run any Ivy test with the jax backend.
### Environment
Arch Linux using Emacs and venv
### Ivy Version
commit 69fdbac
### Backend
- [ ] NumPy
- [ ] TensorFlow
- [ ] PyTorch
- [X] JAX
### Device
_No response_ | closed | 2024-02-27T21:27:35Z | 2024-04-11T18:34:15Z | https://github.com/ivy-llc/ivy/issues/28442 | [
"Bug Report"
] | jacksondm33 | 1 |
ydataai/ydata-profiling | jupyter | 1,722 | Bug Report - Correlation.corr() fails when input DataFrame is empty in Spark | ### Current Behaviour
When using ydata-profiling with Spark, if the dataset is empty after filtering numeric columns, an exception is raised due to Correlation.corr() not handling empty DataFrames properly. This issue occurs in _compute_spark_corr_natively when converting a DataFrame into a feature vector and then computing correlation.
The code does not check if df_vector is empty before calling Correlation.corr(), leading to a RuntimeException from Spark.
The process crashes instead of handling the case properly.
### Expected Behaviour
If the DataFrame is empty after filtering, Correlation.corr() should be skipped gracefully instead of raising an exception.
The function _compute_spark_corr_natively should check if df_vector is empty before calling Correlation.corr().
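In spirit, the requested guard is just "check for emptiness before the expensive call". A pure-Python sketch of that control flow (illustrative only — in the real fix the check would be on the Spark `df_vector`, e.g. via `df_vector.rdd.isEmpty()`, and `safe_corr` is a made-up name):

```python
def safe_corr(rows, corr_fn):
    """Run corr_fn only when there is data; otherwise skip gracefully."""
    if not rows:            # mirrors the missing emptiness check on df_vector
        return None         # caller can report "no correlation" instead of crashing
    return corr_fn(rows)

# Toy correlation stand-in: mean of the first column
result = safe_corr([(1.0, 2.0), (3.0, 4.0)], lambda r: sum(x for x, _ in r) / len(r))
skipped = safe_corr([], lambda r: 1 / 0)  # corr_fn is never reached on empty input
```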
### Data Description
This occurs when there are columns whose cells are all empty, or that reach 98~99% missing values, in a Spark DataFrame.
(I didn't see this error when I converted to a pandas DataFrame.)
### Code that reproduces the bug
```Python
# Sample 10%
df = spark.sql(
"select * from @@@@.@@@@ where rand() < 0.1"
).cache()
# type casting 1
df_casted = df.select(
[
(
col(field.name).cast("string").alias(field.name)
if isinstance(field.dataType, (DateType, TimestampType))
else col(field.name)
)
for field in df.schema
]
)
# type casting 2
complex_columns = [
field.name
for field in df.schema.fields
if isinstance(field.dataType, (ArrayType, MapType, StructType))
]
for col_name in complex_columns:
df_casted = df_casted.withColumn(col_name, to_json(col(col_name)))
profile = ProfileReport(df_casted, title=app_name, explorative=True)
profile.to_file(f"/tmp/ydata.html")
```
### pandas-profiling version
v2.2.3
### Dependencies
```Text
dependencies:
- bzip2=1.0.8
- ca-certificates=2025.1.31
- conda-pack=0.8.1
- libffi=3.4.2
- liblzma=5.6.4
- libsqlite=3.49.1
- libzlib=1.3.1
- ncurses=6.5
- openssl=3.4.1
- pip=25.0.1
- pyspark=3.5.3
- python=3.9.21
- readline=8.2
- setuptools=75.8.2
- tk=8.6.13
- wheel=0.45.1
- pip:
- executing==2.2.0
- fastjsonschema==2.21.1
- great-expectations==0.18.22
- jupyter-events==0.12.0
- notebook-shim==0.2.4
- pandocfilters==1.5.1
- phik==0.12.4
- pydantic-core==2.27.2
- python-json-logger==3.2.1
- ruamel-yaml-clib==0.2.12
- soupsieve==2.6
- stack-data==0.6.3
- tzdata==2025.1
- ydata-profiling==4.12.2
```
### OS
macos
### Checklist
- [x] There is not yet another bug report for this issue in the [issue tracker](https://github.com/ydataai/pandas-profiling/issues)
- [x] The problem is reproducible from this bug report. [This guide](http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports) can help to craft a minimal bug report.
- [x] The issue has not been resolved by the entries listed under [Common Issues](https://docs.profiling.ydata.ai/latest/support-contribution/contribution_guidelines/). | open | 2025-03-06T13:15:48Z | 2025-03-10T22:28:22Z | https://github.com/ydataai/ydata-profiling/issues/1722 | [
"spark :zap:"
] | minseokim12 | 0 |
keras-team/keras | machine-learning | 20,706 | Loaded Keras Model Throws Error While Predicting (Likely Issues with Masking) | I am currently developing and testing an RNN that relies upon a large amount of data for training, and so have attempted to separate my training and testing files. I have one file where I create, train, and save a tensorflow.keras model to a file 'model.keras'. I then load this model in another file and predict some values, but get the following error: `Failed to convert elements of {'class_name': '__tensor__', 'config': {'dtype': 'float64', 'value': [0.0, 0.0, 0.0, 0.0]}} to Tensor. Consider casting elements to a supported type. See https://www.tensorflow.org/api_docs/python/tf/dtypes for supported TF dtypes`
By the way, I have tried running model.predict with this exact same data in the file where I train the model, and it works smoothly. The model loading must be the problem, not the data used to predict.
This mysterious float64 tensor is the value I passed into the masking layer. I don't understand why keras is unable to recognize this JSON object as a Tensor and apply the masking operation as such. I have included snippets of my code below, edited for clarity and brevity:
model_generation.py:
```
# Create model
model = tf.keras.Sequential([
tf.keras.layers.Input((352, 4)),
tf.keras.layers.Masking(mask_value=tf.convert_to_tensor(np.array([0.0, 0.0, 0.0, 0.0]))),
tf.keras.layers.GRU(50, return_sequences=True, activation='tanh'),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.GRU(50,activation='tanh'),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(units=1, activation='sigmoid')])
# Compile Model...
# Train Model...
model.save('model.keras')
model.predict(data) # Line works here
```
model_testing.py
```
model = tf.keras.models.load_model('model.keras')
model.predict(data) # this line generates the error
```
I have tried to re-load the model in the `model_generation.py` file and I get the exact same issue. | closed | 2024-12-31T20:46:49Z | 2025-01-23T21:57:52Z | https://github.com/keras-team/keras/issues/20706 | [
"type:Bug"
] | JoeDoyle12 | 4 |
newpanjing/simpleui | django | 264 | A white flash appears when clicking to switch the left-side top-level menu | As the title says. This occurs when the menu bar has a dark background color.
| closed | 2020-05-19T12:41:37Z | 2020-05-20T10:47:05Z | https://github.com/newpanjing/simpleui/issues/264 | [
"bug"
] | cnbillow | 1 |
scikit-learn/scikit-learn | data-science | 30,742 | `y` and `groups` parameters to `StratifiedGroupKFold.split()` are optional | ### Describe the bug
`StratifiedGroupKFold.split` has the signature `(self, X, y=None, groups=None)`, indicating that both `y` and `groups` may be omitted when calling `split`.
However, omitting only `groups` results in `TypeError: iteration over a 0-d array`. Also, when omitting both `y` and `groups`, or only `y`, the result is `ValueError: Supported target types are: ('binary', 'multiclass'). Got 'unknown' instead.` This indicates, contrary to the signature, that `y` and `groups` are required and not optional.
I would instead expect consistent behavior with e.g. `StratifiedKFold`, where the `y` parameter to `split` is not optional.
`StratifiedKFold` and `StratifiedGroupKFold` both inherit from `_BaseKFold`, which provides `.split`. However `StratifiedKFold` implements its own `split` method, instead of using `_BaseKFold` like `StratifiedGroupKFold` does.
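A minimal sketch of the fail-fast validation this report is asking for (hypothetical code, not scikit-learn's implementation):

```python
def validated_split_args(X, y=None, groups=None):
    """Raise a readable error when the arguments StratifiedGroupKFold
    actually needs (y for stratification, groups for grouping) are missing,
    instead of failing later with an opaque TypeError/ValueError."""
    missing = [name for name, value in (("y", y), ("groups", groups)) if value is None]
    if missing:
        raise ValueError(
            f"StratifiedGroupKFold.split requires {' and '.join(missing)} to be set"
        )
    return X, y, groups
```

With such a check, `split(X)` would raise a clear `ValueError` up front instead of `TypeError: iteration over a 0-d array`.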
### Steps/Code to Reproduce
```
import numpy as np
from sklearn.model_selection import StratifiedGroupKFold
rng = np.random.default_rng()
X = rng.normal(size=(10, 3))
y = np.concatenate((np.ones(5, dtype=int), np.zeros(5, dtype=int)))
g = np.tile([1, 0], 5)
sgkf = StratifiedGroupKFold(n_splits=5)
next(sgkf.split(X=X, y=y, groups=None)) # TypeError
sgkf = StratifiedGroupKFold(n_splits=5)
next(sgkf.split(X=X, y=None, groups=None)) # ValueError
sgkf = StratifiedGroupKFold(n_splits=5)
next(sgkf.split(X=X, y=None, groups=g)) # ValueError
```
### Expected Results
Either no error if `y`, `groups`, or both are not specified, or remove the default of `None` for both parameters from the function signature.
### Actual Results
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[2], line 2
1 sgkf = StratifiedGroupKFold(n_splits=5)
----> 2 next(sgkf.split(X=X, y=y, groups=None)) # TypeError
File /<PATH>/lib/python3.12/site-packages/sklearn/model_selection/_split.py:411, in _BaseKFold.split(self, X, y, groups)
403 if self.n_splits > n_samples:
404 raise ValueError(
405 (
406 "Cannot have number of splits n_splits={0} greater"
407 " than the number of samples: n_samples={1}."
408 ).format(self.n_splits, n_samples)
409 )
--> 411 for train, test in super().split(X, y, groups):
412 yield train, test
File /<PATH>/lib/python3.12/site-packages/sklearn/model_selection/_split.py:142, in BaseCrossValidator.split(self, X, y, groups)
140 X, y, groups = indexable(X, y, groups)
141 indices = np.arange(_num_samples(X))
--> 142 for test_index in self._iter_test_masks(X, y, groups):
143 train_index = indices[np.logical_not(test_index)]
144 test_index = indices[test_index]
File /<PATH>/ib/python3.12/site-packages/sklearn/model_selection/_split.py:154, in BaseCrossValidator._iter_test_masks(self, X, y, groups)
149 def _iter_test_masks(self, X=None, y=None, groups=None):
150 """Generates boolean masks corresponding to test sets.
151
152 By default, delegates to _iter_test_indices(X, y, groups)
153 """
--> 154 for test_index in self._iter_test_indices(X, y, groups):
155 test_mask = np.zeros(_num_samples(X), dtype=bool)
156 test_mask[test_index] = True
File /<PATH>/lib/python3.12/site-packages/sklearn/model_selection/_split.py:1035, in StratifiedGroupKFold._iter_test_indices(self, X, y, groups)
1031 _, groups_inv, groups_cnt = np.unique(
1032 groups, return_inverse=True, return_counts=True
1033 )
1034 y_counts_per_group = np.zeros((len(groups_cnt), n_classes))
-> 1035 for class_idx, group_idx in zip(y_inv, groups_inv):
1036 y_counts_per_group[group_idx, class_idx] += 1
1038 y_counts_per_fold = np.zeros((self.n_splits, n_classes))
TypeError: iteration over a 0-d array
```
---
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[13], line 2
1 sgkf = StratifiedGroupKFold(n_splits=5)
----> 2 next(sgkf.split(X=X, y=None, groups=g))
File /<PATH>/lib/python3.12/site-packages/sklearn/model_selection/_split.py:411, in _BaseKFold.split(self, X, y, groups)
403 if self.n_splits > n_samples:
404 raise ValueError(
405 (
406 "Cannot have number of splits n_splits={0} greater"
407 " than the number of samples: n_samples={1}."
408 ).format(self.n_splits, n_samples)
409 )
--> 411 for train, test in super().split(X, y, groups):
412 yield train, test
File /<PATH>/lib/python3.12/site-packages/sklearn/model_selection/_split.py:142, in BaseCrossValidator.split(self, X, y, groups)
140 X, y, groups = indexable(X, y, groups)
141 indices = np.arange(_num_samples(X))
--> 142 for test_index in self._iter_test_masks(X, y, groups):
143 train_index = indices[np.logical_not(test_index)]
144 test_index = indices[test_index]
File /<PATH>/lib/python3.12/site-packages/sklearn/model_selection/_split.py:154, in BaseCrossValidator._iter_test_masks(self, X, y, groups)
149 def _iter_test_masks(self, X=None, y=None, groups=None):
150 """Generates boolean masks corresponding to test sets.
151
152 By default, delegates to _iter_test_indices(X, y, groups)
153 """
--> 154 for test_index in self._iter_test_indices(X, y, groups):
155 test_mask = np.zeros(_num_samples(X), dtype=bool)
156 test_mask[test_index] = True
File /<PATH>/lib/python3.12/site-packages/sklearn/model_selection/_split.py:1008, in StratifiedGroupKFold._iter_test_indices(self, X, y, groups)
1006 allowed_target_types = ("binary", "multiclass")
1007 if type_of_target_y not in allowed_target_types:
-> 1008 raise ValueError(
1009 "Supported target types are: {}. Got {!r} instead.".format(
1010 allowed_target_types, type_of_target_y
1011 )
1012 )
1014 y = column_or_1d(y)
1015 _, y_inv, y_cnt = np.unique(y, return_inverse=True, return_counts=True)
ValueError: Supported target types are: ('binary', 'multiclass'). Got 'unknown' instead.
```
### Versions
```shell
System:
python: 3.12.4 (main, Jul 23 2024, 09:14:16) [GCC 14.1.1 20240522]
executable: /<PATH>/bin/python
machine: Linux-6.12.9-arch1-1-x86_64-with-glibc2.40
Python dependencies:
sklearn: 1.6.1
pip: 24.3.1
setuptools: None
numpy: 2.2.2
scipy: 1.15.1
Cython: None
pandas: 2.2.3
matplotlib: 3.10.0
joblib: 1.4.2
threadpoolctl: 3.5.0
Built with OpenMP: True
threadpoolctl info:
user_api: blas
internal_api: openblas
num_threads: 8
prefix: libscipy_openblas
filepath: /<PATH>/lib/python3.12/site-packages/numpy.libs/libscipy_openblas64_-6bb31eeb.so
version: 0.3.28
threading_layer: pthreads
architecture: SkylakeX
user_api: blas
internal_api: openblas
num_threads: 8
prefix: libscipy_openblas
filepath: /<PATH>/lib/python3.12/site-packages/scipy.libs/libscipy_openblas-68440149.so
version: 0.3.28
threading_layer: pthreads
architecture: SkylakeX
user_api: openmp
internal_api: openmp
num_threads: 8
prefix: libgomp
filepath: /<PATH>/lib/python3.12/site-packages/scikit_learn.libs/libgomp-a34b3233.so.1.0.0
version: None
``` | open | 2025-01-31T10:09:10Z | 2025-02-18T06:08:00Z | https://github.com/scikit-learn/scikit-learn/issues/30742 | [
"Documentation",
"Validation"
] | Teagum | 8 |
dask/dask | scikit-learn | 11,544 | `.rechunk()` issue with dask 2024.11.2 | I'm loading [a big bunch of tomographic data](https://github.com/habi/laminitis), cropping out small regions, rechunking those with `chunks='auto'`, and saving them to disk with `.to_zarr`.
`dask.__version__='2024.11.2'` complains about doing so, while `dask.__version__='2024.9.1'` does so without issue.
**Minimal Complete Verifiable Example**:
```python
# Load reconstructions
Reconstructions = [dask_image.imread.imread(os.path.join(f, '*rec*.png')) for f in Data.Folder]
# Save to zarr
Data['OutputNameZarr'] = [f + '.zarr' for f in Data.Folder]
for c, row in Data.iterrows():
Reconstructions[c].rechunk(chunks='auto').to_zarr(row['OutputNameZarr'])
# Load zarr files back in
Reconstructions = [dask.array.from_zarr(file) for file in Data['OutputNameZarr']]
# Select one reconstruction and crop a small region
whichsample = 12
center=(2000,2000,1250)
sidelength=333
Crop = Reconstructions[whichsample][center[2] - (sidelength // 2):center[2] + ((1+sidelength) // 2), center[1] - (sidelength // 2):center[1] + ((1+sidelength) // 2), center[0] - (sidelength // 2):center[0] + ((1+sidelength) // 2)]
Crop.rechunk(chunks='auto').to_zarr(OutputNameCrop)
```
The code above works nicely with dask=2024.9.1 (on one machine), but with dask=2024.11.2 (on another machine), I get
```python
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[43], line 2
1 whichsample = 2
----> 2 Cutout = cutout(whichsample,
3 center=(2000,2000,1250),
4 sidelength=333,
5 verbose=True)
Cell In[42], line 39, in cutout(whichsample, center, sidelength, verbose)
35 Crop = Reconstructions[whichsample][center[2] - (sidelength // 2):center[2] + ((1+sidelength) // 2),
36 center[1] - (sidelength // 2):center[1] + ((1+sidelength) // 2),
37 center[0] - (sidelength // 2):center[0] + ((1+sidelength) // 2)]
38 # Save to disk
---> 39 Crop.rechunk().to_zarr(OutputNameCrop)
40 if verbose:
41 for d, direction in enumerate(directions):
File [~\AppData\Local\anaconda3\envs\laminitis\Lib\site-packages\dask\array\core.py:3015](http://localhost:8888/~/AppData/Local/anaconda3/envs/laminitis/Lib/site-packages/dask/array/core.py#line=3014), in Array.to_zarr(self, *args, **kwargs)
3004 def to_zarr(self, *args, **kwargs):
3005 """Save array to the zarr storage format
3006
3007 See https://zarr.readthedocs.io for details about the format.
(...)
3013 dask.array.to_zarr : equivalent function
3014 """
-> 3015 return to_zarr(self, *args, **kwargs)
File ~\AppData\Local\anaconda3\envs\laminitis\Lib\site-packages\dask\array\core.py:3875, in to_zarr(arr, url, component, storage_options, overwrite, region, compute, return_stored, **kwargs)
3872 raise ValueError("Cannot use `region` keyword when url is not a `zarr.Array`.")
3874 if not _check_regular_chunks(arr.chunks):
-> 3875 raise ValueError(
3876 "Attempt to save array to zarr with irregular "
3877 "chunking, please call `arr.rechunk(...)` first."
3878 )
3880 storage_options = storage_options or {}
3882 if storage_options:
ValueError: Attempt to save array to zarr with irregular chunking, please call `arr.rechunk(...)` first.
```
If I change the last line to `Crop.rechunk((100, 100, 100)).to_zarr(OutputNameCrop)`, my code works, but I thus lose the auto chunk-size selection, which worked quite nicely for my use case...
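For context, the check that raises here (`_check_regular_chunks`) roughly requires that, along each axis, all chunks share one size except for a possibly smaller final chunk. A pure-Python sketch of that rule, reconstructed from the error message rather than copied from dask:

```python
def chunks_look_regular(chunks):
    """chunks is a per-axis tuple of chunk sizes, e.g. ((100, 100, 33), (333,))."""
    for axis in chunks:
        first, last = axis[0], axis[-1]
        # all middle chunks must equal the first; the last may only be smaller
        if any(size != first for size in axis[1:-1]) or last > first:
            return False
    return True

regular = chunks_look_regular(((100, 100, 100, 33), (333,)))  # True
irregular = chunks_look_regular(((100, 133, 100),))           # False
```

So if the automatically chosen chunking leaves an uneven middle chunk or an oversized final chunk along any axis of the crop, `to_zarr` refuses it.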
The cropping/rechunking code is part of a function, you can `CTRL + f` for 'cropper' on https://nbviewer.org/github/habi/Laminitis/blob/main/DistanceTransformation.ipynb to find it. | closed | 2024-11-20T15:00:22Z | 2024-12-06T12:23:23Z | https://github.com/dask/dask/issues/11544 | [
"array"
] | habi | 7 |
ranaroussi/yfinance | pandas | 2,231 | Hey, if possible, for the screener class can there be new Quote Types for cryptos, currency exchanges, ETFs, indices, and futures? | Because I was trying some logic to fetch data using the existing ones and it didn't work, and I think there might be some way, as I want a certain amount of data to work with for my project and other things. | open | 2025-01-20T04:25:04Z | 2025-02-02T16:09:38Z | https://github.com/ranaroussi/yfinance/issues/2231 | [] | Ronit26Mehta | 7
litestar-org/litestar | pydantic | 3,040 | Bug: `ResponseDataExtractor` merges per-cookie attributes | ### Description
Currently, `ResponseDataExtractor` merges the values of all cookies into a single dictionary; however, cookies in the `Set-Cookie` header can have different flags, paths, domains, etc. set, and these are currently lost.
### URL to code causing the issue
https://github.com/litestar-org/litestar/blob/e5f2b6446a1b14ff064a593472d6f54fb8b564ec/litestar/data_extractors.py#L407-L410
Test showing how per-cookie attributes (all attributes apart from key and value) are combined right now: https://github.com/litestar-org/litestar/blob/e5f2b6446a1b14ff064a593472d6f54fb8b564ec/tests/unit/test_data_extractors.py#L110
### MCVE
_No response_
### Steps to reproduce
_No response_
### Screenshots
_No response_
### Logs
_No response_
### Litestar Version
v2.5.1
### Platform
- [ ] Linux
- [ ] Mac
- [ ] Windows
- [ ] Other (Please specify in the description above) | open | 2024-01-28T15:39:04Z | 2025-03-20T15:54:22Z | https://github.com/litestar-org/litestar/issues/3040 | [
"Bug :bug:"
] | floxay | 1 |
Lightning-AI/pytorch-lightning | pytorch | 19,957 | Logging Hyperparameters for list of dicts | ### Bug description
Currently, when hyperparameters are logged with `log_hyperparams`, the function calls [_flatten_dict](https://github.com/Lightning-AI/pytorch-lightning/blob/master/src/lightning/fabric/utilities/logger.py#L75-L106) to collapse the dict to a single level. However, when the config contains a list of dicts, the list gets flattened to a single string. Instead, I would propose to log the list as `[key/0/item, key/1/item]`, etc.
A fix could be simple:
```python
def _flatten_dict(params: MutableMapping[Any, Any], delimiter: str = "/", parent_key: str = "") -> Dict[str, Any]:
"""Flatten hierarchical dict, e.g. ``{'a': {'b': 'c'}} -> {'a/b': 'c'}``.
Args:
params: Dictionary containing the hyperparameters
delimiter: Delimiter to express the hierarchy. Defaults to ``'/'``.
Returns:
Flattened dict.
Examples:
>>> _flatten_dict({'a': {'b': 'c'}})
{'a/b': 'c'}
>>> _flatten_dict({'a': {'b': 123}})
{'a/b': 123}
>>> _flatten_dict({5: {'a': 123}})
{'5/a': 123}
>>> _flatten_dict({"dl": [{"a": 1, "c": 3}, {"b": 2, "d": 5}], "l": [1, 2, 3, 4]})
{'dl/0/a': 1, 'dl/0/c': 3, 'dl/1/b': 2, 'dl/1/d': 5, 'l': [1, 2, 3, 4]}
"""
result: Dict[str, Any] = {}
for k, v in params.items():
new_key = parent_key + delimiter + str(k) if parent_key else str(k)
if is_dataclass(v):
v = asdict(v)
elif isinstance(v, Namespace):
v = vars(v)
if isinstance(v, MutableMapping):
result = {**result, **_flatten_dict(v, parent_key=new_key, delimiter=delimiter)}
# Also handle the case where v is a list of dictionaries
elif isinstance(v, list) and all(isinstance(item, MutableMapping) for item in v):
for i, item in enumerate(v):
result = {**result, **_flatten_dict(item, parent_key=f"{new_key}/{i}", delimiter=delimiter)}
else:
result[new_key] = v
return result
```
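As a self-contained demonstration of the proposed list-of-dicts handling (stripped of Lightning's dataclass/Namespace branches so it runs standalone):

```python
def flatten(params, delimiter="/", parent_key=""):
    result = {}
    for k, v in params.items():
        new_key = parent_key + delimiter + str(k) if parent_key else str(k)
        if isinstance(v, dict):
            result.update(flatten(v, delimiter, new_key))
        elif isinstance(v, list) and v and all(isinstance(i, dict) for i in v):
            # the proposed addition: index each dict in the list into the key path
            for i, item in enumerate(v):
                result.update(flatten(item, delimiter, f"{new_key}{delimiter}{i}"))
        else:
            result[new_key] = v
    return result

print(flatten({"dl": [{"a": 1}, {"b": 2}], "l": [1, 2]}))
# → {'dl/0/a': 1, 'dl/1/b': 2, 'l': [1, 2]}
```

Lists of non-dict values still pass through unchanged, matching the `{'l': [1, 2, 3, 4]}` case in the docstring above.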
### What version are you seeing the problem on?
master
### How to reproduce the bug
_No response_
### Error messages and logs
```
# Error messages and logs here please
```
### Environment
<details>
<summary>Current environment</summary>
```
#- Lightning Component (e.g. Trainer, LightningModule, LightningApp, LightningWork, LightningFlow):
#- PyTorch Lightning Version (e.g., 1.5.0):
#- Lightning App Version (e.g., 0.5.2):
#- PyTorch Version (e.g., 2.0):
#- Python version (e.g., 3.9):
#- OS (e.g., Linux):
#- CUDA/cuDNN version:
#- GPU models and configuration:
#- How you installed Lightning(`conda`, `pip`, source):
#- Running environment of LightningApp (e.g. local, cloud):
```
</details>
### More info
_No response_ | closed | 2024-06-07T13:55:51Z | 2025-03-14T10:32:52Z | https://github.com/Lightning-AI/pytorch-lightning/issues/19957 | [
"bug",
"needs triage",
"ver: 2.2.x"
] | vork | 0 |
plotly/dash-bio | dash | 450 | Use color vision deficiency friendly colormap as default for Clustergram | Inability to distinguish red and green is the most common color deficiency in the population, affecting nearly 10% of people. Although red-green colormaps were once common in heatmaps, especially for microarray data, the color combination is increasingly avoided in favor of more accessible combinations such as blue/teal to red/orange/yellow/brown (Colorbrewer has [some more examples](http://colorbrewer2.org/#type=diverging&scheme=BrBG&n=3), check the "colorblind safe" box).
When https://github.com/plotly/plotly.py/issues/1681 lands, I think `px.colors.diverging.RdBu_r` could be a good option for diverging values, or a map similar to 'coolwarm' from [colorcet](https://colorcet.pyviz.org/). For sequential values, viridis, or maybe single or dual colors (e.g. Orange to Red or Reds) could be good.
Related:
http://geog.uoregon.edu/datagraphics/color_scales.htm
www.kennethmoreland.com/color-maps/ColorMapsExpanded.pdf
https://matplotlib.org/3.1.1/tutorials/colors/colormaps.html
| closed | 2019-11-24T08:38:31Z | 2021-11-08T22:31:13Z | https://github.com/plotly/dash-bio/issues/450 | [
"size: 1",
"dash-type-enhancement"
] | joelostblom | 0 |
openapi-generators/openapi-python-client | rest-api | 541 | Generate also models that are children of models used in endpoints | **Is your feature request related to a problem? Please describe.**
Yes, my problem is that some of my endpoints use a contract (component) of type `QueryCondition`. But `QueryCondition` has subclasses like `AttributeQueryCondition`, `SpatialQueryCondition`, and others. The `openapi-python-client` does not generate the subclasses in the `models` folder.
**Describe the solution you'd like**
I think this happens because `parser/openapi.py` walks through all endpoints and generates a model when it finds one. But this way it skips the subclasses of models used in endpoints. I would like all models present in the openapi document to be generated. Alternatively, which classes to include or exclude could be controlled by the `--config` file.
I would also like the inheritance of models to be preserved in the generated models.
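The gap between "schemas reachable from endpoints" and "every schema under `components`" can be shown on a toy spec (purely illustrative — this is not the generator's parser, and real OpenAPI nests the `$ref` under `content`/media type/`schema`; it is flattened here for brevity):

```python
def endpoint_schemas(spec):
    """Only schemas directly referenced by endpoint responses (misses subclasses)."""
    refs = set()
    for path in spec.get("paths", {}).values():
        for op in path.values():
            for resp in op.get("responses", {}).values():
                ref = resp.get("$ref", "")
                if ref:
                    refs.add(ref.rsplit("/", 1)[-1])
    return refs

def all_component_schemas(spec):
    """What the issue asks for: every schema declared under components."""
    return set(spec.get("components", {}).get("schemas", {}))

toy = {
    "paths": {"/query": {"get": {"responses": {"200": {"$ref": "#/components/schemas/QueryCondition"}}}}},
    "components": {"schemas": {
        "QueryCondition": {},
        "AttributeQueryCondition": {"allOf": [{"$ref": "#/components/schemas/QueryCondition"}]},
        "SpatialQueryCondition": {"allOf": [{"$ref": "#/components/schemas/QueryCondition"}]},
    }},
}
```

Here the endpoint walk yields only `QueryCondition`, while the components walk also picks up the two subclasses.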
**Describe alternatives you've considered**
I've considered writing my own generator but I would much rather use one that exists and `openapi-python-client` seems most promising.
**Additional context**
Here is one of the openapi document I am working with: [https://develop.mike-cloud.com/core-gis-prod/v2](https://develop.mike-cloud.com/core-gis-prod/v2)
| closed | 2021-11-29T16:01:03Z | 2024-09-29T18:33:25Z | https://github.com/openapi-generators/openapi-python-client/issues/541 | [
"✨ enhancement"
] | filipkral | 3 |
plotly/dash | data-science | 2,492 | DataTable: the first used filter will not be displayed when changed |
**Describe your context**
No need; it can be recreated right from the Dash tutorial here: [DataTable Filtering](https://dash.plotly.com/datatable/filtering)
**Describe the bug**
**Only on a newly loaded site please!**
In the Advanced filter usage section, let's say you enter "> 10000000" without quotes (10 million) in the **pop** column.
The page count is now down by half; good. Now copy the string "{pop} s> 10000000" without quotes,
switch to the Write to filter_query option, and paste it in.
Now if you increase or decrease the zeroes, the filter section on the table gets updated EXCEPT at 10 million!
The filtering still occurs (pages change), but it's not displayed in the data table filter zone
**Expected behavior**
A change in the data table updates `filter_query`, but a write to `filter_query` at that initial value does not update the table.
**Screenshots**
[See for yourself](https://imgur.com/a/gtBkpUn)
If you go to 100 million and go back to 10 million, filter still shows 100 million
If you go to 1 million and go back to 10 million, filter still shows 1 million
**Basically, if you change the filter in datatable first, then change it from other controls, the filter in datatable will not update at the exact query you used in the datatable.** | open | 2023-04-01T03:58:19Z | 2024-08-13T19:30:01Z | https://github.com/plotly/dash/issues/2492 | [
"bug",
"dash-data-table",
"P3"
] | DarkCTO | 0 |
pennersr/django-allauth | django | 3,540 | How do I prevent user login after registration | Hi everyone,
I have a custom user model and signup form, and everything works well except that users are automatically logged in after creating their account. **How do I disable this?**
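Not an official answer, but one hedged possibility (an assumption on my side): requiring e-mail verification keeps the new user logged out until they confirm their address, which may be close enough to the desired behaviour.

```python
# settings.py (sketch, not a definitive fix)
# Assumption: with mandatory verification, allauth does not log the user in
# right after signup; they stay logged out until the address is confirmed.
ACCOUNT_EMAIL_VERIFICATION = "mandatory"
```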
Right now, I redirect users to a custom page after signup but I don't want them logged in by the time they get to this page. | closed | 2023-11-29T10:53:37Z | 2023-11-29T17:28:40Z | https://github.com/pennersr/django-allauth/issues/3540 | [] | josylad | 0 |
opengeos/leafmap | jupyter | 662 | .edit_vector's behavior is weird | <!-- Please search existing issues to avoid creating duplicates. -->
### Environment Information
- leafmap version: 0.30.1
- solara: 1.25.1
- Python version: 3.11.3
- Operating System: MacOS. 13.6.3
### Description
- A widget for the user to select a polygon from a list to edit;
- however, after adding one polygon, the next one doesn't appear, but the 3rd goes back.
- seems leafmap "eats" one polygon
### What I Did
create a solara app as below
$ solara run example.py
```
import solara
import solara as sl
from solara.components.file_drop import FileInfo
from solara import Reactive, reactive
import leafmap
import os, tempfile, sys
from io import BytesIO
from typing import Union
import random, numpy as np
from ipywidgets import widgets
import geojson, json
from shapely.geometry import shape
import shapely.wkt
import pandas as pd
import time
BUTTON_KWARGS = dict(color="primary", text=True, outlined=True)
class State:
zoom = reactive(20)
center = reactive((None, None))
enroll_wkt = reactive(None)
def wkt_to_featurecollection(wkt):
geom = shapely.wkt.loads(wkt)
return {
"type": "FeatureCollection",
"features": [
{
"type": "Feature",
"properties": {},
"geometry": geom.__geo_interface__,
}
],
}
aoi = 'POLYGON ((-91.16138525535435 37.81442211215915, -91.16138525535435 37.73515728531591, -90.85526326612401 37.73515728531591, -90.85526326612401 37.81442211215915, -91.16138525535435 37.81442211215915))'
wkt_list = ['POLYGON ((-91.15796462083297 37.806056428087615, -91.15796462083297 37.79771581956473, -90.86679686670833 37.79771581956473, -90.86679686670833 37.806056428087615, -91.15796462083297 37.806056428087615))',
'POLYGON ((-91.11222224140039 37.792622288824845, -91.11222224140039 37.76260439211525, -91.02064573377882 37.76260439211525, -91.02064573377882 37.792622288824845, -91.11222224140039 37.792622288824845))',
'POLYGON ((-91.00305251600666 37.79041596911006, -91.0496745431024 37.79041596911006, -91.0496745431024 37.74730356543847, -91.00305251600666 37.74730356543847, -91.00305251600666 37.79041596911006)))']
def widget_droplist(options, desc, width = "270px", padding = "0px 0px 0px 5px", **kwargs):
return widgets.Dropdown(
options=[""] + options,
description=desc,
style={"description_width": "initial"},
layout=widgets.Layout(width=width, padding=padding),
**kwargs)
def add_widgets(m, padding = "0px 0px 0px 5px"):
style = {"description_width": "initial"}
geom_sel = widget_droplist(['1','2','3'], "geometry:")
export_button = widgets.Button(description="Click 'Save' before Export", layout=widgets.Layout(width="200px"))
reset_button = widgets.Button(
description="clear", layout=widgets.Layout(width="50px"), button_style="info"
)
func_box = widgets.HBox([export_button, reset_button])
output = widgets.Output()
# zoom to the footprint
m.add_geojson(
wkt_to_featurecollection(aoi),
layer_name="Footprint",
zoom_to_layer=True,
# hover_style={'opacity':0.9},
style_callback=lambda feat: {"color": "red","opacity":0.9, 'hover_style':{'opacity':0.9}},
)
def select_boundary(change):
m.remove_layer(m.find_layer("Footprint"))
m.draw_control.clear()
m.draw_features = []
# m.user_rois = None
# m.user_roi = None
# time.sleep(0.1)
if change.new == "1":
feature_collection = wkt_to_featurecollection(wkt_list[0])
m.edit_vector(feature_collection)#, layer_name="Footprint")
elif change.new == "2":
feature_collection = wkt_to_featurecollection(wkt_list[1])
m.edit_vector(feature_collection)#, layer_name="Footprint2")
elif change.new == "3":
feature_collection = wkt_to_featurecollection(wkt_list[2])
m.edit_vector(feature_collection)#, layer_name="Footprint2")
else: # "empty"
# m.draw_control.clear()
pass
# output.append_stdout(State.series_df.value.iloc[0]['mask'])
output.append_stdout(change.new)
geom_sel.observe(select_boundary, names="value")
def export_wkt(e):
# -1: latest saved edits
g1 = shape(m.draw_features[-1]['geometry'])
output.outputs = ()
output.append_stdout(g1.wkt)
export_button.on_click(export_wkt)
def reset_output(e):
output.outputs = ()
reset_button.on_click(reset_output)
box = widgets.VBox(
[
geom_sel,
func_box,
output,
]
)
m.add_widget(box, position="topright", add_header=False)
class Map(leafmap.Map):
def __init__(self, **kwargs):
kwargs["toolbar_control"] = False
super().__init__(**kwargs)
basemap = {
"url": "https://mt1.google.com/vt/lyrs=s&x={x}&y={y}&z={z}",
"attribution": "Google",
"name": "Google Satellite",
}
self.add_tile_layer(**basemap, shown=True)
add_widgets(self)
@sl.component
def Page() -> None:
solara.Markdown("""- A widget for the user to select polygon from a list to edit, \n- however, after adding one polygon, the next one doesn't appear but the 3rd go back.\n- seems leafmap "eats" one polygon""")
Map.element( # type: ignore
zoom=State.zoom.value,
scroll_wheel_zoom=True,
toolbar_ctrl=False,
data_ctrl=False,
height="780px",
)
if __name__ == "__main__":
Page()
```
| closed | 2024-01-18T14:05:45Z | 2024-01-23T17:57:57Z | https://github.com/opengeos/leafmap/issues/662 | [
"bug"
] | suredream | 9 |
ultrafunkamsterdam/undetected-chromedriver | automation | 1,867 | I'm unable to download the chrome_driver after packaging with Nuitka. | When I use Nuitka for packaging, it encounters a bug here.

Here is my packaging command:
`nuitka --mingw64 --include-package-data=selenium --enable-plugin=tk-inter --standalone --onefile --windows-icon-from-ico=main.ico --output-dir="D:\Silly\dist" "D:\Silly\Cl.py"`
| open | 2024-05-05T08:11:49Z | 2024-05-06T17:10:39Z | https://github.com/ultrafunkamsterdam/undetected-chromedriver/issues/1867 | [] | LINKlang | 2 |
tableau/server-client-python | rest-api | 1459 | Getting an error while logging in to Tableau Cloud using tableauserverclient | **Describe the bug**
I'm trying to log in to Tableau Cloud using tableauserverclient, but I'm getting the error below:
`urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='90su.online.tableau.com', port=443): Max retries exceeded with url: //api/2.4/auth/signin (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f41f2cc08d0>: Failed to establish a new connection: [Errno -2] Name or service not known'))`
**Versions**
Details of your environment, including:
- Tableau Online Version - 2024.2.0
- Python version - 3.7.16
- TSC library version - 0.31
**To Reproduce**
Steps to reproduce the behavior. Please include a code snippet where possible.
```py
import tableauserverclient as TSC
TOKEN_NAME = 'tokentest'
TOKEN = 'token generated from tableau online'
# Altered pod name due to a compliance issue
SERVER = 'https://90y.online.tableau.com/'
# Tried without the "s" in https as well, but got the same error
SITE = 'sitename'
tableau_token = TSC.PersonalAccessTokenAuth(TOKEN_NAME,TOKEN,site_id=SITE)
tableau_server = TSC.Server(SERVER, use_server_version= True)
tableau_server.auth.sign_in(tableau_token)
```
**Results**
What are the results or error messages received?
```
**self.parent_srv.http_options, allow_redirects=False
File "/export/test/sam/.local/lib/python3.7/site-packages/requests/sessions.py", line 637, in post
return self.request("POST", url, data=data, json=json, **kwargs)
File "/export/test/sam/.local/lib/python3.7/site-packages/requests/sessions.py", line 589, in request
resp = self.send(prep, **send_kwargs)
File "/export/test/sam/.local/lib/python3.7/site-packages/requests/sessions.py", line 703, in send
r = adapter.send(request, **kwargs)
File "/export/test/sam/.local/lib/python3.7/site-packages/requests/adapters.py", line 519, in send
raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPSConnectionPool(host='90y.online.tableau.com', port=443): Max retries exceeded with url: //api/2.4/auth/signin (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f41f2cc08d0>: Failed to establish a new connection: [Errno -2] Name or service not known'))
```
| closed | 2024-09-09T15:45:14Z | 2024-09-18T06:59:41Z | https://github.com/tableau/server-client-python/issues/1459 | [
"help wanted"
] | suman4apr | 7 |
serengil/deepface | machine-learning | 1,162 | Sending base64 encoded image to API server | Hi, thank you for this project. I am trying to access the `analyze` endpoint that is hosted locally (`localhost:5000/analyze`) by providing the base64 encoded image. However, the web server does not seem to recognize the base64 encoded image (unless I am missing something).
Request body:
```json
{
"img_path": "<base64 string of image>"
}
```
The server returns:
```json
{
"error": "Exception while analyzing: Confirm that <base64 string of image> exists"
}
```
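For what it's worth, a hedged sketch of building the request body: as an assumption based on common usage, the API treats a bare base64 string as a file path, but recognises images carrying the `data:image/...;base64,` prefix (the helper name is mine):

```python
import base64

def analyze_payload(image_bytes: bytes, mime: str = "image/jpeg") -> dict:
    """Build the /analyze request body; the data-URI prefix signals a
    base64-encoded image rather than a filesystem path (assumption)."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {"img_path": f"data:{mime};base64,{b64}"}

# e.g. requests.post("http://localhost:5000/analyze", json=analyze_payload(raw))
```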
Is there something I am missing here? A separate value to signal that I'm sending a base64 encoded image? | closed | 2024-04-02T07:44:43Z | 2024-04-02T08:55:11Z | https://github.com/serengil/deepface/issues/1162 | [
"question"
] | kk-min | 3 |
desec-io/desec-stack | rest-api | 1,008 | Feature request: Keep search term intact over page changes | I sometimes have to do changes to **many of my domains** like changing the AAAA record. As I don't (want to) use the API for those changes I use the GUI instead. Currently I have over 30 domains in my account so I usually use the search function to filter these entries.
After coming back from on one the domain detail pages where I changed stuff the search term on the main page **is empty again** showing me all domain entries. It would be very helpful if the search term (at least on the main page where all domains are listed) would be **left intact for the session**.
I'll give you an example:
Let's assume I have some domains starting with `party-` like `party-cool.de`, `party-great.de` and so on.
Searching for `party-` just lists me those domains and I can click them domain by domain. Currently I have to enter the search term every time I return from one of the domain detail pages again which is quite annoying.
If the search term would be session-permanent it would be much easier for me. | closed | 2025-01-07T17:03:36Z | 2025-01-25T03:07:41Z | https://github.com/desec-io/desec-stack/issues/1008 | [
"enhancement",
"gui"
] | codiflow | 1 |
flairNLP/flair | nlp | 3,019 | Workshop, Slack Channel, Meetup, News and other forms of Collaboration . . ? | First and foremost: thanks for making Flair available and continuing to improve it - I am a regular user =)
How about enhancing the opportunity to collaborate for those fond of flair ?
While the landing page does point to a cornucopia of resources there seems to be an absence of collaboration mechanism . . . other than logging a question or issue.
Some suggestions:
1. **Slack Channel** - At the very least, how about a Slack channel (or several)? Many other Apache projects I work with have them, and they are a great way to increase and accelerate interest.
2. **Meetup** - Zoom-based meetups are not hard to run and would be a great mechanism for sharing how people are using Flair or demonstrating how newer features of Flair can be used. Some Apache projects even hold monthly **How to Contribute** Zoom sessions to help those on the verge of contributing get over the hump and on board!
3. **Workshop** - There are numerous conferences across the globe on related material - why not run a workshop before or after one? For example, Berlin Buzzwords is in June in Berlin, Germany.
4. **News** - How about having a formal page / blog with news about Flair: everything from when the next release is anticipated to planned collaborative events.
5. **Other Forms** - to enhance the opportunity to collaborate, engage and share about Flair . . ?
Thoughts @alanakbik @helpmefindaname @whoisjones @stefan-it . . ?
| closed | 2022-12-11T18:54:05Z | 2023-06-11T11:25:45Z | https://github.com/flairNLP/flair/issues/3019 | [
"wontfix"
] | None-Such | 1 |
pyjanitor-devs/pyjanitor | pandas | 816 | [INF] Ensure tests pass on dev branch in auto release | # Brief Description
- .github/workflows/auto-release.yml
- See https://github.com/marketplace/actions/wait-on-check
- Have every push to dev trigger a test suite run | open | 2021-03-25T10:52:30Z | 2021-03-25T10:52:30Z | https://github.com/pyjanitor-devs/pyjanitor/issues/816 | [] | loganthomas | 0 |
Evil0ctal/Douyin_TikTok_Download_API | web-scraping | 226 | How do I switch to v2, and would you consider adding automatic scraping of the latest videos? | I've finished deploying it, and right now there is only single-URL parsing. I don't quite understand the paid API: after purchasing it, how do I swap it in? I used the one-click deployment on Linux; could you give me some brief guidance?
Also, would you consider adding scheduled automatic scraping of a given user's latest videos? A Weibo crawler I currently use runs on a schedule and skips results it has already recorded; I think this feature would be very useful.
"enhancement"
] | AIEOV | 3 |
krish-adi/barfi | jupyter | 2 | Add support for streamlit password st.text_input type | For the st.text_input widget, streamlit supports password masking using the type='password' param (https://docs.streamlit.io/library/api-reference/widgets/st.text_input). Any way to get support for this? | closed | 2022-07-09T21:37:47Z | 2022-07-25T09:17:29Z | https://github.com/krish-adi/barfi/issues/2 | [
"enhancement"
] | zabrewer | 5 |
piskvorky/gensim | nlp | 2,694 | Use linesentence to stream corpus from file | I used gensim to train word embeddings about 6 months ago. At that time, the code I used to stream data worked just fine, but now that I'm using it again I'm hitting many errors; one of them is the FastText model using too much RAM. I think the documentation has not been updated.
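In the meantime, the older iterator-based streaming still works and is memory-friendly; here is a minimal stand-alone sketch of that pattern (plain Python, mirroring what `LineSentence` does under the assumption of one whitespace-tokenised sentence per line):

```python
class CorpusStream:
    """Stream tokenised sentences from disk one line at a time, so the
    whole corpus never sits in RAM."""

    def __init__(self, path):
        self.path = path

    def __iter__(self):
        with open(self.path, encoding="utf-8") as handle:
            for line in handle:
                yield line.split()

# An instance can be passed wherever gensim accepts an iterable of sentences,
# e.g. model.build_vocab(sentences=CorpusStream("sample.txt")) in gensim 3.x
# (parameter names differ across gensim versions).
```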
```
from gensim.models.fasttext import FastText as FT_gensim
from gensim.test.utils import datapath

corpus_file = datapath('sample.txt')
model = FT_gensim(size=100)
# build the vocabulary
model.build_vocab(corpus_file=corpus_file)
# train the model
model.train(
corpus_file=corpus_file, epochs=model.epochs,
total_examples=model.corpus_count, total_words=model.corpus_total_words
)
print(model)
``` | closed | 2019-12-03T09:23:13Z | 2019-12-03T11:27:36Z | https://github.com/piskvorky/gensim/issues/2694 | [
"need info"
] | TQuy | 4 |
flasgger/flasgger | api | 623 | Can syntax highlighting be supported? | In future planning, can syntax highlighting be used in description? | open | 2024-08-13T04:21:10Z | 2024-08-13T04:23:01Z | https://github.com/flasgger/flasgger/issues/623 | [] | rice0524168 | 0 |
psf/black | python | 3,760 | Hatchling version is incorrect in toml file | <!--
Please make sure that the bug is not already fixed either in newer versions or the
current development version. To confirm this, you have three options:
1. Update Black's version if a newer release exists: `pip install -U black`
2. Use the online formatter at <https://black.vercel.app/?version=main>, which will use
the latest main branch.
3. Or run _Black_ on your machine:
- create a new virtualenv (make sure it's the same Python version);
- clone this repository;
- run `pip install -e .[d]`;
- run `pip install -r test_requirements.txt`
- make sure it's sane by running `python -m pytest`; and
- run `black` like you did last time.
-->
**Describe the bug**
ERROR: Could not find a version that satisfies the requirement hatchling>=1.8.0 (from versions: none)
**To Reproduce**
<!--
Minimal steps to reproduce the behavior with source code and Black's configuration.
-->
on black installation
For example, take this code:
```python
this = "code"
```
And run it with these arguments:
```sh
$ black file.py --target-version py39
```
The resulting error is:
> cannot format file.py: INTERNAL ERROR: ...
**Expected behavior**
<!-- A clear and concise description of what you expected to happen. -->
**Environment**
<!-- Please complete the following information: -->
- Black's version: <!-- e.g. [main] -->
- OS and Python version: <!-- e.g. [Linux/Python 3.7.4rc1] -->
**Additional context**
<!-- Add any other context about the problem here. -->
| closed | 2023-07-04T11:13:59Z | 2023-07-05T12:56:38Z | https://github.com/psf/black/issues/3760 | [
"T: bug"
] | KiraUnderwood | 2 |
miguelgrinberg/microblog | flask | 54 | elasticsearch.exceptions.ConnectionError | Cannot seem to get a connection to the elasticsearch server started. I have tried the example setup in the python shell, import the module, pass the server address and port to the instance of es, however, I get the following error:
```
elasticsearch.exceptions.ConnectionError
elasticsearch.exceptions.ConnectionError: ConnectionError(<urllib3.connection.HTTPConnection object at 0x73fc8a50>: Failed to establish a new connection: [Errno 111] Connection refused) caused by: NewConnectionError(<urllib3.connection.HTTPConnection object at 0x73fc8a50>: Failed to establish a new connection: [Errno 111] Connection refused)
```
I am running Microblog off a Raspberry Pi 3 server that I have connected remotely through SSH. I checked to see if the port was open with nmap through SSH, and it states:
```
Starting Nmap 7.40 ( https://nmap.org ) at 2017-12-22 13:36 UTC
Nmap scan report for localhost (127.0.0.1)
Host is up (0.00029s latency).
Other addresses for localhost (not scanned): ::1
PORT STATE SERVICE
9200/tcp closed wap-wsp
Nmap done: 1 IP address (1 host up) scanned in 0.35 seconds
```
All of my testing that I have done is through an SSH shell into the Pi. I also have checked with `sudo netstat -tuplen` to see if port 9200 was open, and it was not shown.
I installed elasticsearch as described in Chapter 16 through pip.
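A hedged stdlib way to script the same listening check as the nmap/netstat steps above (the helper name is hypothetical); note that `pip install elasticsearch` only installs the Python client, so the server itself still has to be running and listening on 9200 separately:

```python
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if something accepts TCP connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# port_open("127.0.0.1", 9200) returning False matches the
# "Connection refused" traceback and the nmap "closed" result above.
```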
Any assistance would be appreciated. | closed | 2017-12-22T13:44:36Z | 2019-09-09T01:19:26Z | https://github.com/miguelgrinberg/microblog/issues/54 | [
"question"
] | theodeyle | 5 |
apache/airflow | python | 47,496 | AIP-38 | Connections Add | ### Body
Alongside with #43703 a form needs to be available allowing to add a new connection.
Similar to the old / legacy UI depending on the connection type some `extra` fields need to be displayed. For this the "FlexibleForm" from the trigger UI should be re-used as a component.
### Committer
- [x] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | closed | 2025-03-07T14:33:30Z | 2025-03-20T17:30:48Z | https://github.com/apache/airflow/issues/47496 | [
"kind:meta",
"area:UI"
] | jscheffl | 2 |
shaikhsajid1111/facebook_page_scraper | web-scraping | 83 | download pictures and videos | Hello, my question is: is it possible to download the pictures and videos of a post?
| open | 2023-08-04T08:53:13Z | 2023-08-05T13:17:23Z | https://github.com/shaikhsajid1111/facebook_page_scraper/issues/83 | [] | ihabpalamino | 1 |
aminalaee/sqladmin | sqlalchemy | 334 | Hide bulk delete action button when can_delete=False | ### Discussed in https://github.com/aminalaee/sqladmin/discussions/331
<div type='discussions-op-text'>
<sup>Originally posted by **94929** September 24, 2022</sup>

As seen on the screenshot, I have an `Actions` button in the middle of the admin page.
However, the official demo https://sqladmin-demo.aminalaee.dev/admin/profile/list does not include that.
Thanks in advance.</div> | closed | 2022-09-24T13:07:24Z | 2022-09-24T15:51:28Z | https://github.com/aminalaee/sqladmin/issues/334 | [] | aminalaee | 1 |
gradio-app/gradio | python | 9,939 | Dropdown and LinePlot buggy interaction | ### Describe the bug
Interactive dropdowns (```gr.Dropdown(options, interactive=True)```) do not work if a LinePlot (probably similar with ScatterPlot and others, but untested) is provided in the same block. This also happens if the plot is in other columns and rows. I did not check if it also happens with other components, but below you can find a very minimal reproducer, in which the dropdown is not interactible. If the plot is removed, the dropdown works (as shown in [this comment](https://github.com/gradio-app/gradio/issues/6103#issuecomment-1790205932)
### Have you searched existing issues? 🔎
- [X] I have searched and found no existing issues
### Reproduction
```python
import gradio as gr
my_list = ["World", "Gradio", "World2", "abc ", "You"]
with gr.Blocks() as demo:
drop1 = gr.Dropdown(choices=my_list, label="simple", value=my_list[0], interactive=True)
plt = gr.LinePlot() # Comment this out and the dropdown can be interacted with
demo.launch(share=True)
```
### Screenshot
_No response_
### Logs
_No response_
### System Info
```shell
I am using gradio 5.5.0, I'll paste the environment output:
Gradio Environment Information:
------------------------------
Operating System: Linux
gradio version: 5.5.0
gradio_client version: 1.4.2
------------------------------------------------
gradio dependencies in your environment:
aiofiles: 23.2.1
anyio: 4.6.2.post1
audioop-lts: 0.2.1
fastapi: 0.115.4
ffmpy: 0.4.0
gradio-client==1.4.2 is not installed.
httpx: 0.27.2
huggingface-hub: 0.26.2
jinja2: 3.1.4
markupsafe: 2.1.5
numpy: 2.1.3
orjson: 3.10.11
packaging: 24.2
pandas: 2.2.3
pillow: 11.0.0
pydantic: 2.9.2
pydub: 0.25.1
python-multipart==0.0.12 is not installed.
pyyaml: 6.0.2
ruff: 0.7.3
safehttpx: 0.1.1
semantic-version: 2.10.0
starlette: 0.41.2
tomlkit==0.12.0 is not installed.
typer: 0.13.0
typing-extensions: 4.12.2
urllib3: 2.2.3
uvicorn: 0.32.0
authlib; extra == 'oauth' is not installed.
itsdangerous; extra == 'oauth' is not installed.
gradio_client dependencies in your environment:
fsspec: 2024.10.0
httpx: 0.27.2
huggingface-hub: 0.26.2
packaging: 24.2
typing-extensions: 4.12.2
websockets: 12.0
```
### Severity
Can work around using other components (but not with LinePlots) | closed | 2024-11-11T14:51:32Z | 2025-02-07T18:16:33Z | https://github.com/gradio-app/gradio/issues/9939 | [
"bug"
] | nestor98 | 3 |
graphql-python/graphene | graphql | 1,274 | cannot import name 'ObjectType' from partially initialized module 'graphene' | **Note: for support questions, please use stackoverflow**. This repository's issues are reserved for feature requests and bug reports.
* **What is the current behavior?**
When I simply run `import graphene`, I get the following error:
```
>>> import graphene
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home//test-graphene/graphene.py", line 1, in <module>
from graphene import ObjectType, String, Schema
ImportError: cannot import name 'ObjectType' from partially initialized module 'graphene' (most likely due to a circular import)
```
* **If the current behavior is a bug, please provide the steps to reproduce and if possible a minimal demo of the problem** via a github repo, https://repl.it or similar.
Clone this repo:
https://github.com/edalongeville/test-graphene
Run:
`python3 -m venv .venv`
`source .venv/bin/activate`
`pip install -r requirements.txt`
`python3 graphene.py`
I've been able to reproduce on both Ubuntu Linux and MacOS.
* **What is the expected behavior?**
The code should execute without an issue.
* **What is the motivation / use case for changing the behavior?**
Graphene simply doesn't work.
* **Please tell us about your environment:**
- Version: Python 3.8.5
- Platform: Linux (Ubuntu 20.10) and MacOS (latest)
* **Other information** (e.g. detailed explanation, stacktraces, related issues, suggestions how to fix, links for us to have context, eg. stackoverflow)
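One observation from the traceback above: the failing file is `/home//test-graphene/graphene.py`, so the script itself may be shadowing the installed package. A small stdlib sketch for detecting that kind of shadowing (the helper name is hypothetical):

```python
import importlib.util
import os
import sys

def shadowed_by_local_file(module_name: str, directory: str) -> bool:
    """Return True if importing `module_name` with `directory` first on
    sys.path would pick up a local .py file instead of an installed package."""
    sys.path.insert(0, directory)
    try:
        spec = importlib.util.find_spec(module_name)
        return (
            spec is not None
            and spec.origin is not None
            and os.path.dirname(spec.origin) == directory
        )
    finally:
        sys.path.remove(directory)

# If this returns True for ("graphene", <your project dir>), renaming the
# local graphene.py should resolve the circular import.
```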
| closed | 2020-10-08T15:06:59Z | 2020-10-09T07:44:29Z | https://github.com/graphql-python/graphene/issues/1274 | [
"🐛 bug"
] | edalongeville | 1 |
allenai/allennlp | nlp | 4,669 | Officially support Python 3.8 | closed | 2020-09-24T17:07:51Z | 2020-09-25T20:27:24Z | https://github.com/allenai/allennlp/issues/4669 | [] | epwalsh | 0 | |
huggingface/diffusers | pytorch | 10,674 | FluxPipeline is not working with GGUF :( | ### Describe the bug
CPU offload is not working for Flux GGUF; it works fine for the AuraFlow GGUF pipeline.
### Reproduction
```
import torch
from diffusers import FluxPipeline, FluxTransformer2DModel
from diffusers import GGUFQuantizationConfig
model_id = "ostris/Flex.1-alpha"
dtype = torch.bfloat16
transformer_path = "https://huggingface.co/hum-ma/Flex.1-alpha-GGUF/blob/main/Flex.1-alpha-Q4_K_M.gguf"
transformer = FluxTransformer2DModel.from_single_file(
transformer_path,
quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16),
torch_dtype=dtype,
)
pipe = FluxPipeline.from_pretrained(
model_id,
transformer=transformer,
torch_dtype=dtype,
)
# pipe.enable_sequential_cpu_offload()
pipe.enable_model_cpu_offload()
pipe.vae.enable_slicing()
pipe.vae.enable_tiling()
inference_params = {
"prompt": "An oak tree",
"negative_prompt": "",
"height": 512,
"width": 512,
"guidance_scale": 1.0,
"num_inference_steps": 20,
"generator": torch.Generator(device="cuda").manual_seed(0),
"max_sequence_length":512,
}
image = pipe(**inference_params).images[0]
image.save("image.png")
```
### Logs
```shell
(venv) C:\aiOWN\diffuser_webui>python Flex1alpha-gguf.py
Loading checkpoint shards: 100%|████████████████████████████████████| 2/2 [00:00<00:00, 4.07it/s]
Loading pipeline components...: 57%|█████████████████▋ | 4/7 [00:01<00:00, 4.45it/s]You set `add_prefix_space`. The tokenizer needs to be converted from the slow tokenizers
Loading pipeline components...: 100%|███████████████████████████████| 7/7 [00:01<00:00, 4.36it/s]
Traceback (most recent call last):
File "C:\aiOWN\diffuser_webui\Flex1alpha-gguf.py", line 20, in <module>
pipe.enable_model_cpu_offload()
File "C:\aiOWN\diffuser_webui\venv\lib\site-packages\diffusers\pipelines\pipeline_utils.py", line 1095, in enable_model_cpu_offload
self.to("cpu", silence_dtype_warnings=True)
File "C:\aiOWN\diffuser_webui\venv\lib\site-packages\diffusers\pipelines\pipeline_utils.py", line 467, in to
module.to(device, dtype)
File "C:\aiOWN\diffuser_webui\venv\lib\site-packages\diffusers\models\modeling_utils.py", line 1191, in to
return super().to(*args, **kwargs)
File "C:\aiOWN\diffuser_webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1340, in to
return self._apply(convert)
File "C:\aiOWN\diffuser_webui\venv\lib\site-packages\torch\nn\modules\module.py", line 900, in _apply
module._apply(fn)
File "C:\aiOWN\diffuser_webui\venv\lib\site-packages\torch\nn\modules\module.py", line 900, in _apply
module._apply(fn)
File "C:\aiOWN\diffuser_webui\venv\lib\site-packages\torch\nn\modules\module.py", line 900, in _apply
module._apply(fn)
[Previous line repeated 1 more time]
File "C:\aiOWN\diffuser_webui\venv\lib\site-packages\torch\nn\modules\module.py", line 927, in _apply
param_applied = fn(param)
File "C:\aiOWN\diffuser_webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1333, in convert
raise NotImplementedError(
NotImplementedError: Cannot copy out of meta tensor; no data! Please use torch.nn.Module.to_empty() instead of torch.nn.Module.to() when moving module from meta to a different device.
```
### System Info
- 🤗 Diffusers version: 0.33.0.dev0
- Platform: Windows-10-10.0.26100-SP0
- Running on Google Colab?: No
- Python version: 3.10.11
- PyTorch version (GPU?): 2.5.1+cu124 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Huggingface_hub version: 0.27.1
- Transformers version: 4.48.1
- Accelerate version: 1.4.0.dev0
- PEFT version: not installed
- Bitsandbytes version: 0.45.1
- Safetensors version: 0.5.2
- xFormers version: not installed
- Accelerator: NVIDIA GeForce RTX 4060 Laptop GPU, 8188 MiB
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
_No response_ | closed | 2025-01-28T18:50:40Z | 2025-02-06T11:01:16Z | https://github.com/huggingface/diffusers/issues/10674 | [
"bug"
] | nitinmukesh | 8 |
huggingface/diffusers | deep-learning | 10,514 | Sana 4k with use_resolution_binning not supported due to sample_size 128 | ### Describe the bug
Using the new 4K model fails with default values, specifically with `use_resolution_binning=True`, which is the default.
```
Traceback (most recent call last):
File "/home/rockerboo/code/others/sana-diffusers/main.py", line 28, in <module>
image = pipe(
^^^^^
File "/home/rockerboo/code/others/sana-diffusers/.venv/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/rockerboo/code/others/sana-diffusers/.venv/lib/python3.11/site-packages/diffusers/pipelines/pag/pipeline_pag_sana.py", line 736, in __call__
raise ValueError("Invalid sample size")
ValueError: Invalid sample size
```
Specifically, https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pag/pipeline_pag_sana.py#L728-L736 limits the binning, which doesn't support 4K.
https://huggingface.co/Efficient-Large-Model/Sana_1600M_4Kpx_BF16_diffusers/blob/main/transformer/config.json#L20: the sample size is 128.
It should just be a matter of adding the binning information for 4K.
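The guard in question can be paraphrased like this (a hedged reconstruction; the bin-table names are assumed from similar pipelines, not copied from the source):

```python
def resolve_binning(sample_size: int) -> str:
    """Mirror of the pipeline's resolution-binning guard: only sample sizes
    32 and 64 have aspect-ratio bin tables, so 128 (the 4K transformer)
    falls through to the error."""
    if sample_size == 64:
        return "ASPECT_RATIO_2048_BIN"  # assumed name
    if sample_size == 32:
        return "ASPECT_RATIO_1024_BIN"  # assumed name
    raise ValueError("Invalid sample size")
```

Until a 4096 bin table is added, passing `use_resolution_binning=False` to the pipeline call should skip this branch entirely (untested assumption).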
### Reproduction
https://huggingface.co/Efficient-Large-Model/Sana_1600M_4Kpx_BF16_diffusers#1-how-to-use-sanapipeline-with-%F0%9F%A7%A8diffusers PAG or the non-PAG instructions here.
```python
# run `pip install git+https://github.com/huggingface/diffusers` before use Sana in diffusers
import torch
from diffusers import SanaPipeline
pipe = SanaPipeline.from_pretrained(
"Efficient-Large-Model/Sana_1600M_4Kpx_BF16_diffusers",
variant="bf16",
torch_dtype=torch.bfloat16,
)
pipe.to("cuda")
pipe.vae.to(torch.bfloat16)
pipe.text_encoder.to(torch.bfloat16)
# for 4096x4096 image generation OOM issue
if pipe.transformer.config.sample_size == 128:
from patch_conv import convert_model
pipe.vae = convert_model(pipe.vae, splits=32)
prompt = 'A cute 🐼 eating 🎋, ink drawing style'
image = pipe(
prompt=prompt,
height=4096,
width=4096,
guidance_scale=5.0,
num_inference_steps=20,
generator=torch.Generator(device="cuda").manual_seed(42),
)[0]
image[0].save("sana.png")
```
### Logs
```shell
A mixture of bf16 and non-bf16 filenames will be loaded.
Loaded bf16 filenames:
[vae/diffusion_pytorch_model.bf16.safetensors, transformer/diffusion_pytorch_model.bf16.safetensors, text_encoder/model.bf16-00002-of-00002.safetensors, text_encoder/model.bf16-00001-of-00002.safetensors]
Loaded non-bf16 filenames:
[transformer/diffusion_pytorch_model-00001-of-00002.safetensors, transformer/diffusion_pytorch_model-00002-of-00002.safetensors
If this behavior is not expected, please check your folder structure.
Loading checkpoint shards: 100%|██████████████████████████████████████████████████████| 2/2 [00:01<00:00, 1.97it/s]
Loading pipeline components...: 100%|█████████████████████████████████████████████████| 5/5 [00:03<00:00, 1.65it/s]
Traceback (most recent call last):
File "/home/rockerboo/code/others/sana-diffusers/main.py", line 28, in <module>
image = pipe(
^^^^^
File "/home/rockerboo/code/others/sana-diffusers/.venv/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/rockerboo/code/others/sana-diffusers/.venv/lib/python3.11/site-packages/diffusers/pipelines/pag/pipeline_pag_sana.py", line 736, in __call__
raise ValueError("Invalid sample size")
ValueError: Invalid sample size
```
### System Info
- 🤗 Diffusers version: 0.33.0.dev0
- Platform: Linux-6.12.6-arch1-1-x86_64-with-glibc2.40
- Running on Google Colab?: No
- Python version: 3.11.10
- PyTorch version (GPU?): 2.4.0+cu121 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Huggingface_hub version: 0.27.1
- Transformers version: 4.47.1
- Accelerate version: 1.2.1
- PEFT version: not installed
- Bitsandbytes version: not installed
- Safetensors version: 0.5.2
- xFormers version: not installed
- Accelerator: NVIDIA GeForce RTX 2080, 8192 MiB
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@yiyixuxu @DN6 | open | 2025-01-09T23:16:20Z | 2025-02-12T15:03:30Z | https://github.com/huggingface/diffusers/issues/10514 | [
"bug",
"stale"
] | rockerBOO | 5 |
KevinMusgrave/pytorch-metric-learning | computer-vision | 317 | Applying TripletMarginMiner To Image Segmentation | Hello, I'm attempting to train a segmentation network via triplet loss on the COCO dataset and am running into problems in regards to mining triplet samples from my segmentation masks. I'm using an FCN network (AlexNet with the last layer removed), and the embeddings when I pass my data in is of size [N, 1000] with N = batch size. I want my labels to be segmentation masks but as stated in your documentation, labels must be of size [N]. My labels are of size [N, mask width, mask height]. Even reshaping my segmentation mask from [mask width, mask height] to [mask width* mask height] wouldn't help with the mining function accepting my input as labels as it still is of size [N, mask width*mask height]. I've tried other triplet loss libraries and they seem to require the same input size of [N] for labels. Is there anything I'm missing from this library that could help with my issue? | closed | 2021-04-27T10:49:00Z | 2021-04-28T11:14:38Z | https://github.com/KevinMusgrave/pytorch-metric-learning/issues/317 | [
"question"
] | H0PP3R | 4 |
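For the shape mismatch described in the issue above, one common workaround (a hedged sketch, not something from the library's docs) is to treat every pixel as its own sample: produce per-pixel embeddings and flatten both the embedding tensor and the mask over the batch and spatial axes together, so the miner sees embeddings of shape [num_pixels, D] and labels of shape [num_pixels]. All shapes below are made up for illustration, and this assumes a model that emits one embedding per pixel rather than the [N, 1000] global embedding in the issue:

```python
# Hypothetical shapes: N=2 images, 3x3 masks, per-pixel embedding dim D=4.
N, H, W, D = 2, 3, 3, 4

# pixel_embeddings[n][y][x] is the D-dim embedding of one pixel
pixel_embeddings = [[[[float(n + y + x + d) for d in range(D)]
                      for x in range(W)] for y in range(H)] for n in range(N)]
# masks[n][y][x] is the class id of that pixel
masks = [[[(n + y + x) % 3 for x in range(W)] for y in range(H)] for n in range(N)]

# Flatten batch + spatial dims together so embedding/label indices stay aligned.
flat_embeddings = [pixel_embeddings[n][y][x]
                   for n in range(N) for y in range(H) for x in range(W)]
flat_labels = [masks[n][y][x]
               for n in range(N) for y in range(H) for x in range(W)]

assert len(flat_embeddings) == len(flat_labels) == N * H * W  # 18 samples
```

With tensors this is a single `embeddings.reshape(-1, D)` and `labels.reshape(-1)`; the point is only that both must be flattened with the same ordering.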
keras-team/keras | tensorflow | 20,627 | GlobalAveragePooling1D data_format Question | My rig
- Ubuntu 24.04 VM, RTX 3060 Ti with NVIDIA driver 535
- tensorflow-2.14-gpu / tensorflow-2.18, both pulled from Docker
- NVIDIA Container Toolkit when running the GPU version
About [this example](https://keras.io/examples/timeseries/timeseries_classification_transformer/)
The transformer blocks of this example contain two Conv1D layers, and therefore we have to reshape the input matrix to add the channel dimension at the end.
There is a GlobalAveragePooling1D layer after the transformer blocks:
x = layers.GlobalAveragePooling1D(data_format="channels_last")(x)
which should be correct, since our channel dimension is last.
However, when running this example, the third-to-last line of the model summary does not show 64,128 params:
dense (Dense) │ (None, 128) │ 64,128 │ global_average_pool…
Instead it has just 256 parameters, which makes the total params far lower, and the model only reaches ~50% accuracy.

This happens whether I run tensorflow-2.14-gpu or the CPU-only tensorflow-2.18.
However, after changing to data_format="channels_first", everything works: the Dense layer following GlobalAveragePooling1D has 64,128 params, the total params match, and training accuracy exceeds 90%.
I discovered this after finding a very similar model [here](https://github.com/mxochicale/intro-to-transformers/blob/main/tutorials/time-series-classification/timeseries_transformer_classification.ipynb).
The only difference is the data_format
But isn't data_format="channels_last" the right choice?
So what's wrong? | open | 2024-12-11T05:51:31Z | 2024-12-13T06:28:57Z | https://github.com/keras-team/keras/issues/20627 | [
"type:Bug"
] | cptang2007 | 0 |
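For the parameter counts in the issue above, a framework-free sketch reproduces both numbers. The shape is assumed from the FordA example, where the tensor entering the pooling layer should be (batch, 500, 1): series length 500, one channel added at the end. That reproduction suggests the discrepancy comes purely from which axis the pooling keeps:

```python
# Framework-free sketch of why data_format changes the Dense parameter
# count. GlobalAveragePooling1D averages over the "steps" axis and keeps
# the "channels" axis, so whichever axis it treats as channels determines
# the pooled feature size that the following Dense layer sees.
def gap1d_features(d1, d2, data_format):
    """Feature count after GlobalAveragePooling1D on a (batch, d1, d2) tensor."""
    if data_format == "channels_last":
        return d2  # averages over axis 1 (steps), keeps d2 channels
    if data_format == "channels_first":
        return d1  # averages over axis 2 (steps), keeps d1 channels
    raise ValueError(data_format)

def dense_params(in_features, units):
    return in_features * units + units  # weight matrix + biases

# Assumed input to the pooling layer: (batch, 500, 1).
assert dense_params(gap1d_features(500, 1, "channels_last"), 128) == 256
assert dense_params(gap1d_features(500, 1, "channels_first"), 128) == 64128
```

So with this shape, "channels_last" correctly describes the layout but collapses the whole series to a single averaged feature, which is why "channels_first" is what reproduces the 64,128-param Dense layer from the notebook.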
ned2/slapdash | plotly | 27 | incomplete installation? | I must be doing something wrong during installation. I set up a virtualenv and then followed all the commands, with defaults when prompted. I can get `run-slapdashed-app-dev` to work fine, but when I try to run `run-slapdashed-app-prod` I get `ModuleNotFoundError: No module named 'slapdash'` (and also mod_wsgi errors). Also, I don't see any reference to `project_slug.wsgi` as mentioned in the docs.
Here's exactly what I did:
```
python3.6 -m venv slap_env
slap_env/bin/python -m pip install cookiecutter
slap_env/bin/cookiecutter https://github.com/ned2/slapdash
slap_env/bin/python -m pip install -e slapdashed_app/
```
| closed | 2019-09-23T20:32:39Z | 2022-10-19T12:38:24Z | https://github.com/ned2/slapdash/issues/27 | [] | chubukov | 3 |
lukas-blecher/LaTeX-OCR | pytorch | 212 | Papers about LaTeX-OCR, want to learn more principles and innovate | Dear lukas:
Long time no see! I used your project about a year ago when I was working on an image recognition project, and you solved a lot of my problems at that time. A year later, I am very happy to see that your project keeps getting better and attracting more attention. I am currently at the graduation-thesis stage and am very interested in LaTeX-OCR. Are there any papers from which I can learn the underlying principles? I want to make LaTeX-OCR my graduation thesis research direction and try to innovate and improve on it, possibly even as a postgraduate research direction.
Any help and response will be of great help, thanks!
Best wishes,
Xiaoyang Liu | closed | 2022-10-25T04:16:04Z | 2022-10-25T08:58:49Z | https://github.com/lukas-blecher/LaTeX-OCR/issues/212 | [] | XiaoyangLiu-sjtu | 1 |
HumanSignal/labelImg | deep-learning | 538 | error opening file: make sure xxx.jpg is a valid image file | when i open labelimg it shows below:
error opening file: make sure xxx.jpg is a valid image file?
How can I solve this problem? Thanks. | open | 2020-01-03T12:50:02Z | 2020-01-23T02:59:50Z | https://github.com/HumanSignal/labelImg/issues/538 | [] | china56321 | 1 |
xlwings/xlwings | automation | 2,017 | Folders from OneDrive resolve incorrect fullname if subpaths | #### OS (e.g. Windows 10 or macOS Sierra)
macOS Monterey 12.5.1
#### Versions of xlwings, Excel and Python (e.g. 0.11.8, Office 365, Python 3.7)
xlwings 0.27.14
Python 3.10.6
Excel 16.64
#### Describe your issue (incl. Traceback!)
In OneDrive (for business) I am looking at some files from a colleague. The files of interest are in a path like `Bob/Project/Project1`. I am only interested in `Project1`, so I used the `Add shortcut to my files` button in OneDrive to get that directory to appear on my locally synced OneDrive. This works just fine. I can see within my local OneDrive root, a folder called `Project1`.
However, when I try and use `xlwings` to read an Excel document in there, I get an error because the fullname that has been resolved for this file is (I debugged the Python code):
```
/Users/dpwrussell/Documents/OneDrive - MyBusiness/Projects/Project1/sheet.xlsx
```
when I think it should have been:
```
/Users/dpwrussell/Documents/OneDrive - MyBusiness/Project1/sheet.xlsx
```
Basically, because it's OneDrive, the filename has been resolved into a URL, then back to a file path, but the file path assumes a full path which is not the case here because only `Project1` has been added to my OneDrive, not `Projects`. It isn't possible to `Add shortcut to my files` from `Projects` so that isn't even possible as a workaround (assuming that works).
```python
Traceback (most recent call last):
File "/Users/dpwrussell/Code/project1code/main.py", line 13, in <module>
wb = xw.Book(file_name)
File "/Users/dpwrussell/Code/project1code/.venv/lib/python3.10/site-packages/xlwings/main.py", line 867, in __init__
wb.fullname.lower() == fullname
File "/Users/dpwrussell/Code/project1code/.venv/lib/python3.10/site-packages/xlwings/main.py", line 1118, in fullname
return self.impl.fullname
File "/Users/dpwrussell/Code/project1code/.venv/lib/python3.10/site-packages/xlwings/_xlmac.py", line 538, in fullname
return fullname_url_to_local_path(
File "/Users/dpwrussell/Code/project1code/.venv/lib/python3.10/site-packages/xlwings/utils.py", line 546, in fullname_url_to_local_path
raise xlwings.XlwingsError(
xlwings.XlwingsError: Couldn't find your local OneDrive for Business file, see: xlwings.org/error
```
#### Include a minimal code sample to reproduce the issue (and attach a sample workbook if required!)
```python
import xlwings as xw
wb = xw.Book('/Users/dpwrussell/Documents/OneDrive - MyBusiness/Project1/sheet.xlsx')
```
| open | 2022-09-14T09:34:09Z | 2022-09-14T12:17:04Z | https://github.com/xlwings/xlwings/issues/2017 | [
"bug"
] | dpwrussell | 3 |
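One hedged idea for the resolver in the issue above (not xlwings' actual implementation): instead of assuming the full server-side path exists under the local OneDrive root, try progressively shorter suffixes of the URL path, since "Add shortcut to my files" syncs only a subtree (Project1) without its parent folders. Paths and names below are illustrative:

```python
from pathlib import PurePosixPath

def resolve(url_path, local_root, exists):
    """Try local_root joined with each suffix of url_path, longest first."""
    parts = PurePosixPath(url_path).parts
    for i in range(len(parts)):
        candidate = "/".join((local_root,) + parts[i:])
        if exists(candidate):
            return candidate
    return None

# Only the Project1 subtree is synced locally, without its Projects parent.
local_files = {"/OneDrive/Project1/sheet.xlsx"}
assert resolve("Projects/Project1/sheet.xlsx", "/OneDrive",
               local_files.__contains__) == "/OneDrive/Project1/sheet.xlsx"
```

The trade-off is ambiguity: two different server paths could map to the same local suffix, so a real fix would probably prefer the longest match and warn on collisions.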
Kav-K/GPTDiscord | asyncio | 418 | Clean up max_token selection | Currently, when setting max_tokens for a conversation buffer memory within langchain, we use simple string selection to set the token limit to 29,000 if the model is a gpt-4 model, and 100,000 if the model is one of the preview models (these are 128k context)
It would be nicer to have some sort of `get_max_conversation_tokens` where it would return the correct bound for a conversation buffer memory given the model name | open | 2023-11-16T23:19:20Z | 2023-11-16T23:19:20Z | https://github.com/Kav-K/GPTDiscord/issues/418 | [] | Kav-K | 0 |
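A hedged sketch of the proposed helper follows. The name patterns and limits are taken from the issue text (29,000 for gpt-4 models, 100,000 for the 128k-context preview models); the fallback value is an assumption, not something the issue specifies:

```python
def get_max_conversation_tokens(model_name: str) -> int:
    """Return the conversation-buffer token bound for a model name."""
    if "preview" in model_name:  # 128k-context preview models
        return 100_000
    if model_name.startswith("gpt-4"):
        return 29_000
    return 4_000  # assumed default for other models, not from the issue

assert get_max_conversation_tokens("gpt-4-1106-preview") == 100_000
assert get_max_conversation_tokens("gpt-4") == 29_000
```

Checking "preview" before the "gpt-4" prefix matters, since the preview model names also start with "gpt-4".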
sczhou/CodeFormer | pytorch | 363 | error "No module named 'setuptools'" | when i input command "python basicsr/setup.py develop"
Traceback (most recent call last):
File "C:\Users\krisl\Desktop\CodeFormer\basicsr\setup.py", line 3, in <module>
from setuptools import find_packages, setup
ModuleNotFoundError: No module named 'setuptools' | open | 2024-03-29T13:50:31Z | 2024-03-29T13:50:31Z | https://github.com/sczhou/CodeFormer/issues/363 | [] | hkcitizens9527 | 0 |
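For context (hedged, not from the repo's docs): this traceback means the active Python environment is missing setuptools, and installing it with `python -m pip install setuptools` normally fixes it. The snippet below only detects the gap before running setup.py; it does not install anything:

```python
import importlib.util

def has_setuptools() -> bool:
    # find_spec returns None when the module cannot be imported
    return importlib.util.find_spec("setuptools") is not None

if not has_setuptools():
    print("run: python -m pip install setuptools")
```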
ets-labs/python-dependency-injector | flask | 336 | Optional dependencies | I am trying to create a container that optionally takes a dependency, and otherwise provides a value derived from another provider. The (IMO) hacky solution I have so far is a custom provider which either provides a Callable, or a default value if the callable has an error. Then I use this with the Callable being the dependency provider.
My questions are (1) is there a better way? and (2) even using this method, `DefaultCallable` defined below seems like a hack -- how can I improve?
```python
T = TypeVar("T")
class DefaultCallable(providers.Provider):
__slots__ = ("_callable", "_default")
def __init__(
self, callable: Callable[..., T], default: T, *args, **kwargs
):
self._default = default
self._callable = providers.Callable(callable, *args, **kwargs)
super().__init__()
def __deepcopy__(self, memo):
copied = memo.get(id(self))
if copied is not None:
return copied
# TODO: type?
copied = self.__class__(
cast(Callable[..., T], self._callable.provides),
providers.deepcopy(self._default, memo),
*providers.deepcopy(self._callable.args, memo),
**providers.deepcopy(self._callable.kwargs, memo),
)
self._copy_overridings(copied, memo)
return copied
def _provide(self, args, kwargs):
try:
return self._callable(*args, **kwargs)
except Exception:
# TODO: why do we need to check if is provider?
# type?
if getattr(cast(Any, self._default), "__IS_PROVIDER__", False):
return cast(Any, self._default)()
else:
return self._default
# Used like
class Foo(containers.DeclarativeContainer):
#: specify dv for pattern discovery (optional)
dv_in: Provider[xr.DataArray] = providers.Dependency(
instance_of=xr.DataArray
)
#: dv for pattern discovery (specified or default)
dv: Provider[xr.DataArray] = DefaultCallable(
# cast(Callable[..., xr.DataArray], dv_in), type??
cast(Any, dv_in),
problem.training.provided["dv"],
)
```
| closed | 2020-12-14T15:28:08Z | 2021-01-30T00:18:29Z | https://github.com/ets-labs/python-dependency-injector/issues/336 | [
"question",
"feature"
] | shaunc | 12 |
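As a hedged, framework-free sketch addressing question (2) above: the core of DefaultCallable is "call the primary, fall back to the default on failure", and the `__IS_PROVIDER__` attribute check could be an `isinstance(default, providers.Provider)` check in the real library. Plain `callable()` stands in for that here, which is only safe because this toy never passes callable plain values:

```python
def default_callable(primary, default):
    """Provider-like closure: return primary(*args), else the default.

    In dependency_injector, replace `callable(default)` with
    `isinstance(default, providers.Provider)`, since ordinary values
    can also be callable.
    """
    def provide(*args, **kwargs):
        try:
            return primary(*args, **kwargs)
        except Exception:
            return default() if callable(default) else default
    return provide

get = default_callable(lambda: 1 / 0, default=42)
assert get() == 42  # primary raised, default used
get2 = default_callable(lambda x: x + 1, default=0)
assert get2(1) == 2  # primary succeeded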
SYSTRAN/faster-whisper | deep-learning | 1,020 | Does this project not support Jetson deployment yet | When I tried to deploy on Jetson, the following error occurred:

The deployment code is as follows:

The Jetson environment is:

| closed | 2024-09-24T07:45:54Z | 2025-03-06T23:10:25Z | https://github.com/SYSTRAN/faster-whisper/issues/1020 | [] | litao-zhx | 5 |
pytorch/pytorch | numpy | 149,284 | No examples in documentation for masked_fill and masked_fill_ | ### 📚 The doc issue
No examples in documentation for masked_fill and masked_fill_
masked_fill - https://pytorch.org/docs/stable/generated/torch.Tensor.masked_fill.html
masked_fill_- https://pytorch.org/docs/stable/generated/torch.Tensor.masked_fill_.html
### Suggest a potential alternative/fix
add example functions for both of them
cc @svekars @sekyondaMeta @AlannaBurke | open | 2025-03-17T00:56:48Z | 2025-03-17T15:33:18Z | https://github.com/pytorch/pytorch/issues/149284 | [
"module: docs",
"triaged",
"topic: docs"
] | julurisaichandu | 0 |
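To illustrate what such a doc example might look like (hedged: this mimics the semantics in plain Python rather than importing torch): `Tensor.masked_fill(mask, value)` returns a copy in which every element whose mask entry is True is replaced by value, and `masked_fill_` does the same in place:

```python
# Plain-Python model of torch.Tensor.masked_fill's semantics.
def masked_fill(values, mask, fill_value):
    return [fill_value if m else v for v, m in zip(values, mask)]

scores = [0.1, 0.9, 0.3, 0.7]
mask = [False, True, False, True]
result = masked_fill(scores, mask, float("-inf"))
assert result == [0.1, float("-inf"), 0.3, float("-inf")]

# The torch equivalent (masked_fill_ mutates in place):
#   t = torch.tensor([0.1, 0.9, 0.3, 0.7])
#   m = torch.tensor([False, True, False, True])
#   t.masked_fill(m, float("-inf"))   # out-of-place
#   t.masked_fill_(m, float("-inf"))  # in-place
```

This masking-to-negative-infinity pattern is the classic attention-masking use case, which would make a natural doc example.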
huggingface/diffusers | deep-learning | 10,184 | flux fill cannot use lora(flux turbo lora) | ### Describe the bug
I want to use flux fill pipeline with turbo lora, but when I load pipeline and load lora model, than gives error
### Reproduction
```
from diffusers import FluxFillPipeline
def model_fn(model_dir: str) -> FluxFillPipeline:
pipe = FluxFillPipeline.from_pretrained(
"black-forest-labs/FLUX.1-Fill-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights(f"alimama-creative/FLUX.1-Turbo-Alpha")
pipe.fuse_lora()
return pipe
```
### Logs
```shell
NotImplementedError: Only LoRAs with input/output features higher than the current module's input/output features are currently supported. The provided LoRA contains in_features=64 and out_features=3072, which are lower than module_in_features=384 and module_out_features=3072. If you require support for this please open an issue at https://github.com/huggingface/diffusers/issues.
```
### System Info
latest(github version diffusers), python3.10, ubuntu with nvidia gpu
### Who can help?
@sayakpaul | closed | 2024-12-11T07:33:21Z | 2024-12-23T09:53:34Z | https://github.com/huggingface/diffusers/issues/10184 | [
"bug",
"lora"
] | Suprhimp | 19 |
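A hedged sketch of what supporting this would mean (not diffusers' implementation): FLUX.1-Fill's input projection takes 384 features (image latents plus mask channels) while the Turbo LoRA's down-projection was trained against the base model's 64. Zero-padding the extra input rows keeps the LoRA's effect on the original 64 features and makes it a no-op on the rest:

```python
def pad_lora_down(rows, target_in_features):
    """rows: in_features x rank matrix as nested lists; pad rows with zeros."""
    rank = len(rows[0])
    extra = [[0.0] * rank for _ in range(target_in_features - len(rows))]
    return rows + extra

down = [[1.0, 2.0] for _ in range(64)]  # in_features=64, rank=2
padded = pad_lora_down(down, 384)
assert len(padded) == 384
assert padded[0] == [1.0, 2.0] and padded[383] == [0.0, 0.0]
```

Whether zero-padding is semantically right for the Fill model's mask channels is exactly the judgment call the error message defers to the maintainers.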
sherlock-project/sherlock | python | 2,051 | In ubuntu is not working ? | <!--
######################################################################
WARNING!
IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
######################################################################
-->
## Checklist
<!--
Put x into all boxes (like this [x]) once you have completed what they say.
Make sure complete everything in the checklist.
-->
- [ ] I'm reporting a bug in Sherlock's functionality
- [ ] The bug I'm reporting is not a false positive or a false negative
- [ ] I've verified that I'm running the latest version of Sherlock
- [ ] I've checked for similar bug reports including closed ones
- [ ] I've checked for pull requests that attempt to fix this bug
## Description
<!--
Provide a detailed description of the bug that you have found in Sherlock.
Provide the version of Sherlock you are running.
-->
WRITE DESCRIPTION HERE
| closed | 2024-03-18T13:59:49Z | 2024-03-18T14:07:40Z | https://github.com/sherlock-project/sherlock/issues/2051 | [
"bug"
] | Day21cyber | 1 |
PaddlePaddle/models | nlp | 4,776 | arcface loss | In metric learning, is the arcmargin loss the same as the ArcFace loss? | closed | 2020-07-29T14:14:35Z | 2020-07-30T02:36:21Z | https://github.com/PaddlePaddle/models/issues/4776 | [
"user"
] | endy-see | 2 |
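For context on the question above, a hedged sketch of the ArcFace (additive angular margin) target logit: the margin m is added to the angle, i.e. s * cos(theta + m). The "arcmargin" loss in PaddlePaddle's metric-learning example is generally understood to implement this same idea, but the issue thread is the authoritative place to confirm; the m and s values below are the common ArcFace defaults, used only for illustration:

```python
import math

def arcface_target_logit(cos_theta, margin=0.5, scale=64.0):
    # clamp for numerical safety before acos
    theta = math.acos(max(-1.0, min(1.0, cos_theta)))
    return scale * math.cos(theta + margin)

# The margin penalizes the target class: its logit shrinks versus s*cos(theta).
assert arcface_target_logit(0.8) < 64.0 * 0.8
```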
dynaconf/dynaconf | fastapi | 927 | [RFC] Support for username and password with vault | **Is your feature request related to a problem? Please describe.**
Dynaconf currently does not support connecting to Hashicorp Vault using username and password.
**Describe the solution you'd like**
I suggest adding support for username and password.
**Describe alternatives you've considered**
**Additional context**
hvac already supports username and password and I've tested an implementation locally and could provide pull request if you like the approach.
| closed | 2023-04-26T12:52:39Z | 2023-04-29T19:13:23Z | https://github.com/dynaconf/dynaconf/issues/927 | [
"Not a Bug",
"RFC"
] | hansharhoff | 1 |
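A hedged sketch (plain Python, no hvac import needed to illustrate the idea) of the selection logic dynaconf would need: prefer token auth when a token is set, otherwise fall back to username/password. The VAULT_* setting names mirror dynaconf's convention but are illustrative here:

```python
def pick_vault_auth(settings):
    if settings.get("VAULT_TOKEN"):
        return ("token", settings["VAULT_TOKEN"])
    if settings.get("VAULT_USERNAME") and settings.get("VAULT_PASSWORD"):
        # with hvac this would be:
        #   client.auth.userpass.login(username=..., password=...)
        return ("userpass", (settings["VAULT_USERNAME"],
                             settings["VAULT_PASSWORD"]))
    raise ValueError("no vault credentials configured")

assert pick_vault_auth({"VAULT_TOKEN": "t"})[0] == "token"
assert pick_vault_auth({"VAULT_USERNAME": "u", "VAULT_PASSWORD": "p"})[0] == "userpass"
```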
darrenburns/posting | rest-api | 179 | Toggle collection browser shortcut (ctrl+h) not working | Posting Version: 2.3.0
OS: Ubuntu 22.04
Terminal: Alacritty
I'll try to debug it later by running from source. I'll let you know if I find anything.
 | open | 2025-02-07T14:27:35Z | 2025-03-07T22:03:03Z | https://github.com/darrenburns/posting/issues/179 | [] | felipebueno | 2 |
encode/apistar | api | 341 | Allow app.py to be in a package | I'd prefer to have app.py as a module in a package than to have it at the root. It would also allow for its name to be changed (and the package to be installed by `setup.py`).
Maybe I missed something but installing "app.py" from a setup.py would conflict between apistar projects or is there a way to "alias" a module from setup.py at install time? I almost never install modules, only packages.
Having app.py as a module fixes some issues:
- It allows the package to be installed
- It allows the installed package to be easily imported from tests
- And to be imported by linters (which may be started by a path different than the root)
- It allows the file to be renamed
- It allows one to specify which project to start as an apistar argument, like apistar run mypackage.app (fixes https://github.com/encode/apistar/issues/258)
I tried it by moving my `app.py` to my package, and now my app.py at the root contains only `from mypackage.app import app`.
From my tests I can now import `from mypackage.app import app` instead of `import app`.
Now `pytest` works (yes apistar tests already worked and still work), and my editor stopped complaining about app not being in the path (yup it starts `pylint test_app.py` not `pylint tests/test_app.py`).
It looks like it can be done without breaking the current behavior, but we may reintroduce an `apistar new --layout something` to allow one to have a simple "two files and it works", and allowing others to build a full hierarchy with packages, tests, setup.py, and a one-line, optional, app.py maybe containing something like "this file can be safely removed if you're willing to pass the package name each time you run apistar".
| closed | 2017-10-25T10:01:08Z | 2018-03-23T23:00:48Z | https://github.com/encode/apistar/issues/341 | [] | JulienPalard | 3 |