| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
WZMIAOMIAO/deep-learning-for-image-processing | pytorch | 691 | Is the `kps_weights[kp_id] > 0.5` check in the HRNet code unnecessary? | **System information**
* Have I written custom code: No
* OS Platform(e.g., window10 or Linux Ubuntu 16.04): Ubuntu20.04
* Python version: 3.8
* Deep learning framework and version(e.g., Tensorflow2.1 or Pytorch1.3): Pytorch1.11
* Use GPU or not: GPU
* CUDA/cuDNN version(if you use GPU): CUDA
* The network you trained(e.g., Resnet34 network): HRNet
**Describe the current behavior**
https://github.com/WZMIAOMIAO/deep-learning-for-image-processing/blob/5d35dd4782968f817e08d866f6e304f483d704ae/pytorch_keypoint/HRNet/transforms.py#L404-L433
Hi, in the HRNet code, can line 430 (`if kps_weights[kp_id] > 0.5:`) be removed?
Since line 405 already skips a keypoint when `v < 0.5`, any keypoint that reaches the later code block must satisfy `kps_weights[kp_id] >= 0.5` (except possibly the case where it equals exactly 0.5); and even if the weight is reassigned to 0 at line 418, that keypoint would also be skipped.
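To make the argument concrete, here is a simplified sketch of the control flow (toy data and hypothetical variable names, not the actual HRNet transform):

```python
kps = [(10, 20, 0.0), (30, 40, 2.0)]                       # (x, y, visibility)
kps_weights = [1.0 if v > 0.5 else 0.0 for _, _, v in kps]

processed = []
for kp_id, (x, y, v) in enumerate(kps):
    if v < 0.5:        # the early skip (the "line 405" check)
        continue
    # ... heatmap generation would happen here ...
    if kps_weights[kp_id] > 0.5:   # the "line 430" check under discussion
        processed.append(kp_id)

print(processed)  # -> [1]
```

With the early `continue`, every keypoint that reaches the second check already has a weight above the cutoff, so the second check seems to never change the outcome.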
| closed | 2022-11-18T02:47:07Z | 2022-11-20T03:23:14Z | https://github.com/WZMIAOMIAO/deep-learning-for-image-processing/issues/691 | [] | DaMiBear | 1 |
Guovin/iptv-api | api | 632 | The interface cannot be imported into the Senplayer app on iOS | As the title says: whether using a URL or a file, and whether m3u or txt, the import fails.
It reports: "URL parsing failed, please check the network".
An m3u file from another project imports normally.
If possible, could the author please take a look at why? | closed | 2024-12-07T06:07:47Z | 2024-12-12T01:56:37Z | https://github.com/Guovin/iptv-api/issues/632 | [
"question"
] | gxterry | 5 |
neuml/txtai | nlp | 815 | Add interface for agent memory | Add interface for agent memory.
Related to #791. | open | 2024-11-20T14:23:50Z | 2025-03-21T16:44:44Z | https://github.com/neuml/txtai/issues/815 | [] | davidmezzetti | 0 |
opengeos/leafmap | plotly | 83 | Address JOSS reviewer comments | This issue is for addressing the JOSS reviewer comments:
- https://github.com/openjournals/joss-reviews/issues/3414#issuecomment-877210440
- https://github.com/openjournals/joss-reviews/issues/3414#issuecomment-879660026 | closed | 2021-07-11T21:29:45Z | 2021-07-27T03:18:46Z | https://github.com/opengeos/leafmap/issues/83 | [
"enhancement"
] | giswqs | 1 |
benbusby/whoogle-search | flask | 772 | [BUG] Fly deployment Redux | Something may have changed since [this issue](https://github.com/benbusby/whoogle-search/issues/468) but it appears a Whoogle deployment on Fly.io requires a paid tier. I didn't understand a lot of what was discussed in issue 468, but I followed fly.io's guide.
First: "fly apps create --org personal --port 5000" is not correct, as issue 468 says, but it hasn't been changed in the README. Following Fly's [guide](https://fly.io/docs/hands-on/create-app/) and, after all preceding steps, issuing the command `flyctl launch --image benbusby/whoogle-search:latest`, one is eventually asked whether to deploy a PostgreSQL database.
Choosing yes and continuing results in being informed via email that the deployment requires a paid tier. Destroying the app with `flyctl apps destroy`, repeating the process, and choosing no also results in being informed via email that the deployment requires a paid tier.
I do not know how to verify a Whoogle deployment on Fly requires a paid tier, and if it does it is not a really a bug, and I apologize for calling it one. However, I don't code; thus, the attempt at a simple deployment of a promising privacy search engine I control.
| closed | 2022-06-02T16:41:13Z | 2022-07-18T16:22:31Z | https://github.com/benbusby/whoogle-search/issues/772 | [
"bug"
] | vcg3rd | 1 |
K3D-tools/K3D-jupyter | jupyter | 125 | index.js:380 TypeError: Cannot read property 'resizeHelper' of undefined | Here is an error I have trying to use K3D on JLab:
```
K3D: (UNMASKED_VENDOR_WEBGL) NVIDIA Corporation
K3D: (UNMASKED_RENDERER_WEBGL) GeForce GTX 1050 Ti/PCIe/SSE2
index.js:380 TypeError: Cannot read property 'resizeHelper' of undefined
at child.handleResize (labplugin.js:7725)
at child.processPhosphorMessage (labplugin.js:7700)
at JupyterPhosphorWidget.push.rynU.JupyterPhosphorWidget.processMessage (widget.js:660)
at invokeHandler (index.js:433)
at Object.sendMessage (index.js:169)
at layout.js:233
at Object.each (iter.js:60)
at PanelLayout.push.yNaG.Layout.onResize (layout.js:232)
at PanelLayout.push.yNaG.Layout.processParentMessage (layout.js:156)
at WidgetRenderer.push.FmDU.Widget.notifyLayout (widget.js:568)
at WidgetRenderer.push.FmDU.Widget.processMessage (widget.js:484)
at invokeHandler (index.js:433)
at Object.sendMessage (index.js:169)
at layout.js:233
at Object.each (iter.js:60)
at PanelLayout.push.yNaG.Layout.onResize (layout.js:232)
exceptionHandler @ index.js:380
index.js:380 TypeError: Cannot read property 'resizeHelper' of undefined
at child.handleResize (labplugin.js:7725)
at child.processPhosphorMessage (labplugin.js:7700)
at JupyterPhosphorWidget.push.rynU.JupyterPhosphorWidget.processMessage (widget.js:660)
at invokeHandler (index.js:433)
at sendMessage (index.js:169)
at runMessageLoop (index.js:483)
exceptionHandler @ index.js:380
index.js:380 TypeError: Cannot read property 'resizeHelper' of undefined
at child.handleResize (labplugin.js:7725)
at child.processPhosphorMessage (labplugin.js:7700)
at JupyterPhosphorWidget.push.rynU.JupyterPhosphorWidget.processMessage (widget.js:660)
at invokeHandler (index.js:433)
at Object.sendMessage (index.js:169)
at layout.js:233
at Object.each (iter.js:60)
at PanelLayout.push.yNaG.Layout.onResize (layout.js:232)
at PanelLayout.push.yNaG.Layout.processParentMessage (layout.js:156)
at WidgetRenderer.push.FmDU.Widget.notifyLayout (widget.js:568)
at WidgetRenderer.push.FmDU.Widget.processMessage (widget.js:484)
at invokeHandler (index.js:433)
at Object.sendMessage (index.js:169)
at layout.js:233
at Object.each (iter.js:60)
at PanelLayout.push.yNaG.Layout.onResize (layout.js:232)
exceptionHandler @ index.js:380
6index.js:380 TypeError: Cannot read property 'resizeHelper' of undefined
at child.handleResize (labplugin.js:7725)
at child.processPhosphorMessage (labplugin.js:7700)
at JupyterPhosphorWidget.push.rynU.JupyterPhosphorWidget.processMessage (widget.js:660)
at invokeHandler (index.js:433)
at Object.sendMessage (index.js:169)
at layout.js:233
at Object.each (iter.js:60)
at PanelLayout.push.yNaG.Layout.onResize (layout.js:232)
at PanelLayout.push.yNaG.Layout.processParentMessage (layout.js:156)
at WidgetRenderer.push.FmDU.Widget.notifyLayout (widget.js:568)
at WidgetRenderer.push.FmDU.Widget.processMessage (widget.js:484)
at invokeHandler (index.js:433)
at Object.sendMessage (index.js:169)
at layout.js:233
at Object.each (iter.js:60)
at PanelLayout.push.yNaG.Layout.onResize (layout.js:232)
``` | closed | 2018-12-03T00:01:01Z | 2019-10-23T11:10:28Z | https://github.com/K3D-tools/K3D-jupyter/issues/125 | [
"JupyterLab"
] | hadim | 2 |
mage-ai/mage-ai | data-science | 5,636 | [BUG] MSSQL export is not supporting method=multi when fast_execute=true | ### Mage version
0.9.73
### Describe the bug
I am trying to use the MSSQL exporter to write to a temp table through multiple dynamic child blocks. When I set fast_execute = true, I hit the error below, even though the data is very small, only 6 rows.
I tried the SQLAlchemy code directly in my Python blocks, and if I specify method="multi", the MemoryError goes away.
Please add an option to the MSSQL export method to pass extra args to SQLAlchemy, such as method="multi" and chunksize, in the df.to_sql call.
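For illustration, this is the kind of passthrough I mean (a sketch using SQLite as a stand-in for the MSSQL connection; the table name and data are made up):

```python
import sqlite3

import pandas as pd

df = pd.DataFrame({"a": [1, 2, 3], "b": ["x", "y", "z"]})
con = sqlite3.connect(":memory:")

# method="multi" batches rows into multi-row INSERT statements;
# chunksize caps how many rows go into each statement.
df.to_sql("tmp_table", con, if_exists="replace", index=False,
          method="multi", chunksize=2)

n = con.execute("SELECT COUNT(*) FROM tmp_table").fetchone()[0]
print(n)  # -> 3
```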
```
File "/usr/local/lib/python3.10/site-packages/mage_ai/io/sql.py", line 297, in __process
self.upload_dataframe_fast(
File "/usr/local/lib/python3.10/site-packages/mage_ai/io/mssql.py", line 246, in upload_dataframe_fast
df.to_sql(
File "/usr/local/lib/python3.10/site-packages/pandas/core/generic.py", line 2987, in to_sql
return sql.to_sql(
File "/usr/local/lib/python3.10/site-packages/pandas/io/sql.py", line 695, in to_sql
return pandas_sql.to_sql(
File "/usr/local/lib/python3.10/site-packages/pandas/io/sql.py", line 1738, in to_sql
total_inserted = sql_engine.insert_records(
File "/usr/local/lib/python3.10/site-packages/pandas/io/sql.py", line 1325, in insert_records
return table.insert(chunksize=chunksize, method=method)
File "/usr/local/lib/python3.10/site-packages/pandas/io/sql.py", line 946, in insert
num_inserted = exec_insert(conn, keys, chunk_iter)
File "/usr/local/lib/python3.10/site-packages/pandas/io/sql.py", line 853, in _execute_insert
result = conn.execute(self.table.insert(), data)
File "/usr/local/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1385, in execute
return meth(self, multiparams, params, _EMPTY_EXECUTION_OPTS)
File "/usr/local/lib/python3.10/site-packages/sqlalchemy/sql/elements.py", line 334, in _execute_on_connection
return connection._execute_clauseelement(
File "/usr/local/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1577, in _execute_clauseelement
ret = self._execute_context(
File "/usr/local/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1953, in _execute_context
self._handle_dbapi_exception(
File "/usr/local/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 2138, in _handle_dbapi_exception
util.raise_(exc_info[1], with_traceback=exc_info[2])
File "/usr/local/lib/python3.10/site-packages/sqlalchemy/util/compat.py", line 211, in raise_
raise exception
File "/usr/local/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1890, in _execute_context
self.dialect.do_executemany(
File "/usr/local/lib/python3.10/site-packages/sqlalchemy/dialects/mssql/pyodbc.py", line 649, in do_executemany
super(MSDialect_pyodbc, self).do_executemany(
File "/usr/local/lib/python3.10/site-packages/sqlalchemy/engine/default.py", line 733, in do_executemany
cursor.executemany(statement, parameters)
MemoryError
```
### To reproduce
_No response_
### Expected behavior
_No response_
### Screenshots
_No response_
### Operating system
_No response_
### Additional context
_No response_ | closed | 2025-01-07T07:53:21Z | 2025-01-10T09:45:54Z | https://github.com/mage-ai/mage-ai/issues/5636 | [
"bug"
] | tech-dev-ip | 0 |
yzhao062/pyod | data-science | 255 | How to set the threshold an ensemble detector? | I'm playing with an ensemble of detectors (i.e. the [Model Combination example](https://pyod.readthedocs.io/en/latest/example.html#model-combination-example)).
It's not clear how to go from the averaged anomaly scores `comb_by_average` to a prediction. Is there a utility function for computing the threshold of an ensemble? Or do I need to just copy the code from [`_process_decision_scores()`](https://pyod.readthedocs.io/en/latest/_modules/pyod/models/base.html)? | open | 2020-12-03T06:36:11Z | 2020-12-03T06:36:11Z | https://github.com/yzhao062/pyod/issues/255 | [] | kennysong | 0 |
exaloop/codon | numpy | 354 | Defining a nested class causes a segfault | The following code defined a nested classclass B. Then Codon reports a segfault.
test.py
```
class A:
    a = A()
    class B:
        class C(B): pass
```
The actual output:
`Segmentation Fault`
Reproduction steps:
> Download the pre-built binaries for Linux.
> Run `codon run --release test.py` in the console.
Environment:
Ubuntu 18.04
Codon v0.16.0
| closed | 2023-04-16T17:07:51Z | 2025-02-26T04:16:06Z | https://github.com/exaloop/codon/issues/354 | [
"bug"
] | xiaxinmeng | 5 |
predict-idlab/plotly-resampler | plotly | 119 | fig.update_xaxes(range=[start,end]) and fig.update_yaxes(range=[start,end]) is not working | Here is my code:
I am trying to show only the selected range values using the custom slider.
```
import numpy as np
import pandas as pd
import plotly.express as px
import plotly.graph_objects as go
from plotly_resampler import FigureResampler
x = np.arange(2_000)
noisy_sin = (3 + np.sin(x / 200) + np.random.randn(len(x)) / 10) * x / 1_00
fig = FigureResampler(go.Figure())
fig.add_trace(go.Scattergl(name="exp",x=x,y=noisy_sin))
fig.update_layout(xaxis_range=[20,300])
```
| closed | 2022-09-15T15:01:04Z | 2022-10-22T16:22:26Z | https://github.com/predict-idlab/plotly-resampler/issues/119 | [
"bug"
] | muntakim1 | 4 |
ets-labs/python-dependency-injector | asyncio | 334 | doc: override by derived container class | I would like a derived container class to be able to provide a dependency for a base container. It would seem that this isn't possible now:
```python
class Base(containers.DeclarativeContainer):
to_override = providers.Dependency(instance_of=str)
# method 1 -- override attribute
class Derived1(Base):
other_dependency = providers.Dependency(instance_of=Foo)
to_override = other_dependency.provided.some_string_attr
# method 2 -- use constructor
class Derived2(Base):
def __init__(self, other_dependency: Foo, **kw):
super_kw = kw.copy()
if 'to_override' not in super_kw:
super_kw['to_override'] = other_dependency.some_string_attr
super().__init__(other_dependency=other_dependency, **super_kw)
other_dependency = providers.Dependency(instance_of=Foo)
```
In either case I get `dependency_injector.errors.Error: Dependency is not defined` when I build a derived object and attempt to access `to_override`. | closed | 2020-12-13T15:55:02Z | 2021-02-19T14:12:19Z | https://github.com/ets-labs/python-dependency-injector/issues/334 | [
"question",
"docs"
] | shaunc | 7 |
ultralytics/yolov5 | machine-learning | 12,910 | Exploring Data Augmentation in YOLO-based Networks | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
Hello friends,
In training this YOLO, as you know, there is a set of hyperparameters that configure the data augmentation methods. One of them is _mosaic_, which greatly slows down training, because the DataLoader must select 4 random images in each iteration and stitch these four images together. My question is: is it possible to perform the data augmentation once for the whole dataset up front, and then train without on-the-fly augmentation?
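What I have in mind is something like this offline, one-time augmentation pass (a toy sketch with NumPy arrays standing in for images; a real version would read and write image files and adjust the labels too):

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(img):
    """One random horizontal flip plus brightness jitter."""
    if rng.random() < 0.5:
        img = img[:, ::-1]
    return np.clip(img * rng.uniform(0.8, 1.2), 0, 255)

# "Dataset" of three 4x4 gray images
dataset = [np.full((4, 4), 100.0) for _ in range(3)]
augmented = [augment(img) for img in dataset]  # save once, then train with augmentation off
print(len(augmented))  # -> 3
```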
### Additional
_No response_ | closed | 2024-04-12T18:26:27Z | 2024-10-20T19:43:36Z | https://github.com/ultralytics/yolov5/issues/12910 | [
"question"
] | BehdadSDP | 4 |
faif/python-patterns | python | 369 | observer.py imports typing.Protocol which is not in Python 3.7 | I got an error when I ran `tox`:
E ImportError: cannot import name 'Protocol' from 'typing' (/Users/yhay81/.pyenv/versions/3.7.9/lib/python3.7/typing.py)
This is because observer.py imports `typing.Protocol`, and `typing.Protocol` is new in Python 3.8.
https://docs.python.org/3/library/typing.html#typing.Protocol
(It is from #345)
I think there are a couple of possible solutions:
- Drop testing 3.7 in tox.
- Add a comment in observer.py noting that `typing.Protocol` is a 3.8+ feature.
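A third option (a sketch, assuming a `typing_extensions` fallback would be acceptable for this repo) would be a guarded import:

```python
import sys

if sys.version_info >= (3, 8):
    from typing import Protocol
else:  # Python 3.7 fallback
    try:
        from typing_extensions import Protocol
    except ImportError:
        Protocol = object  # last resort: lose static checking only

class SupportsUpdate(Protocol):
    def update(self) -> None: ...
```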
I can make a PR if this is okay. | closed | 2021-01-24T04:25:22Z | 2021-01-26T18:54:17Z | https://github.com/faif/python-patterns/issues/369 | [
"bug"
] | yhay81 | 2 |
rthalley/dnspython | asyncio | 1,128 | Rdata.to_wire() return type is wrong | **Describe the bug**
The return type of `Rdata.to_wire()` is `Optional[bytes]`, as it returns `None` if the `file` parameter is not `None`, and a `bytes` otherwise.
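In other words, the documented behavior looks like this sketch (a simplified stand-in, not the real dnspython implementation):

```python
import io
from typing import BinaryIO, Optional

def to_wire(file: Optional[BinaryIO] = None) -> Optional[bytes]:
    """Write wire-format data to `file` and return None, or return bytes."""
    data = b"\x01\x02"
    if file is not None:
        file.write(data)
        return None
    return data

buf = io.BytesIO()
print(to_wire(buf))  # -> None
print(to_wire())     # -> b'\x01\x02'
```

So annotating the return type as plain `bytes` is wrong for the `file is not None` path.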
**Context (please complete the following information):**
- dnspython version [e.g. 2.2.1]
- Python version [e.g. 3.10.0]
- OS: [e.g. macOS Monterey]
| closed | 2024-09-08T18:17:22Z | 2024-09-10T15:11:29Z | https://github.com/rthalley/dnspython/issues/1128 | [
"Bug",
"Fixed"
] | rthalley | 1 |
flavors/django-graphql-jwt | graphql | 272 | request.user in classical django views always AnonymousUser | Hello everyone,
I have an application working mainly with graphql, but I also have some "classical" django views to download files. graphql_jwt works great with graphql queries and mutations, but in an http view, the request.user is always AnonymousUser.
This is how I defined my middleware and authentication backends:
`MIDDLEWARE = [
'django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
]`
`GRAPHENE = {
'MIDDLEWARE': [
'graphql_jwt.middleware.JSONWebTokenMiddleware',
],
}`
`AUTHENTICATION_BACKENDS = [
'django.contrib.auth.backends.AllowAllUsersModelBackend',
'graphql_jwt.backends.JSONWebTokenBackend',
]`
I tried using the `graphql_jwt.decorators.login_required` decorator on my HTTP views, but the decorator crashes.
Is this normal behaviour? Shouldn't the request know the user is logged in if there's a token with the request (stored in a cookie in my case)?
Have a good day :-)
| closed | 2021-06-03T14:03:32Z | 2021-08-10T23:09:52Z | https://github.com/flavors/django-graphql-jwt/issues/272 | [] | merodrem | 2 |
httpie/cli | api | 613 | Requesting data from a filename always yields Content-Type: application/json | According to https://httpie.org/doc#request-data-from-a-filename
> It has the advantage that the Content-Type header is automatically set to the appropriate value based on the filename extension. For example, the following request sends the verbatim contents of that XML file with `Content-Type: application/xml`
I think it doesn't, because I always see `application/json`:
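For reference, this is roughly the mapping I would expect from the filename extension (using the stdlib `mimetypes` just to illustrate; it is not necessarily HTTPie's actual code path):

```python
import mimetypes

# Guess the content type from the extension alone
for name in ("example.xml", "example.json", "example.yml"):
    print(name, "->", mimetypes.guess_type(name)[0])
```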
**YAML**
Sample file:
```yaml
test: true
foobar:
- foo
- bar
```
Result:
```bash
$ http --debug httpbin.org/post < example.yml
HTTPie 0.9.9
Requests 2.12.3
Pygments 2.1.3
Python 3.6.2 (default, Jul 17 2017, 16:44:45)
[GCC 4.2.1 Compatible Apple LLVM 8.1.0 (clang-802.0.42)]
/usr/local/Cellar/httpie/0.9.9/libexec/bin/python3.6
Darwin 16.7.0
<Environment {
"colors": 256,
"config": {
"__meta__": {
"about": "HTTPie configuration file",
"help": "https://httpie.org/docs#config",
"httpie": "0.9.9"
},
"default_options": "[]"
},
"config_dir": "/Users/user/.httpie",
"is_windows": false,
"stderr": "<_io.TextIOWrapper name='<stderr>' mode='w' encoding='UTF-8'>",
"stderr_isatty": true,
"stdin": "<_io.TextIOWrapper name='<stdin>' mode='r' encoding='UTF-8'>",
"stdin_encoding": "UTF-8",
"stdin_isatty": false,
"stdout": "<_io.TextIOWrapper name='<stdout>' mode='w' encoding='UTF-8'>",
"stdout_encoding": "UTF-8",
"stdout_isatty": true
}>
>>> requests.request(**{
"allow_redirects": false,
"auth": "None",
"cert": "None",
"data": "test: true\nfoobar:\n - foo\n - bar",
"files": {},
"headers": {
"Accept": "application/json, */*",
"Content-Type": "application/json",
"User-Agent": "HTTPie/0.9.9"
},
"method": "post",
"params": {},
"proxies": {},
"stream": true,
"timeout": 30,
"url": "http://httpbin.org/post",
"verify": true
})
HTTP/1.1 200 OK
Access-Control-Allow-Credentials: true
Access-Control-Allow-Origin: *
Connection: keep-alive
Content-Length: 446
Content-Type: application/json
Date: Fri, 29 Sep 2017 09:52:05 GMT
Server: meinheld/0.6.1
Via: 1.1 vegur
X-Powered-By: Flask
X-Processed-Time: 0.000854969024658
{
"args": {},
"data": "test: true\nfoobar:\n - foo\n - bar",
"files": {},
"form": {},
"headers": {
"Accept": "application/json, */*",
"Accept-Encoding": "gzip, deflate",
"Connection": "close",
"Content-Length": "34",
"Content-Type": "application/json",
"Host": "httpbin.org",
"User-Agent": "HTTPie/0.9.9"
},
"json": null,
"origin": "1.1.1.1",
"url": "http://httpbin.org/post"
}
```
**XML**
Sample file:
```xml
<note>
<to>Tove</to>
<from>Jani</from>
<heading>Reminder</heading>
<body>Don't forget me this weekend!</body>
</note>
```
Result:
```bash
$ http --debug httpbin.org/post < example.xml
HTTPie 0.9.9
Requests 2.12.3
Pygments 2.1.3
Python 3.6.2 (default, Jul 17 2017, 16:44:45)
[GCC 4.2.1 Compatible Apple LLVM 8.1.0 (clang-802.0.42)]
/usr/local/Cellar/httpie/0.9.9/libexec/bin/python3.6
Darwin 16.7.0
<Environment {
"colors": 256,
"config": {
"__meta__": {
"about": "HTTPie configuration file",
"help": "https://httpie.org/docs#config",
"httpie": "0.9.9"
},
"default_options": "[]"
},
"config_dir": "/Users/user/.httpie",
"is_windows": false,
"stderr": "<_io.TextIOWrapper name='<stderr>' mode='w' encoding='UTF-8'>",
"stderr_isatty": true,
"stdin": "<_io.TextIOWrapper name='<stdin>' mode='r' encoding='UTF-8'>",
"stdin_encoding": "UTF-8",
"stdin_isatty": false,
"stdout": "<_io.TextIOWrapper name='<stdout>' mode='w' encoding='UTF-8'>",
"stdout_encoding": "UTF-8",
"stdout_isatty": true
}>
>>> requests.request(**{
"allow_redirects": false,
"auth": "None",
"cert": "None",
"data": "<note>\n <to>Tove</to>\n <from>Jani</from>\n <heading>Reminder</heading>\n <body>Don't forget me this weekend!</body>\n</note>",
"files": {},
"headers": {
"Accept": "application/json, */*",
"Content-Type": "application/json",
"User-Agent": "HTTPie/0.9.9"
},
"method": "post",
"params": {},
"proxies": {},
"stream": true,
"timeout": 30,
"url": "http://httpbin.org/post",
"verify": true
})
HTTP/1.1 200 OK
Access-Control-Allow-Credentials: true
Access-Control-Allow-Origin: *
Connection: keep-alive
Content-Length: 548
Content-Type: application/json
Date: Fri, 29 Sep 2017 09:48:22 GMT
Server: meinheld/0.6.1
Via: 1.1 vegur
X-Powered-By: Flask
X-Processed-Time: 0.00126218795776
{
"args": {},
"data": "<note>\n <to>Tove</to>\n <from>Jani</from>\n <heading>Reminder</heading>\n <body>Don't forget me this weekend!</body>\n</note>",
"files": {},
"form": {},
"headers": {
"Accept": "application/json, */*",
"Accept-Encoding": "gzip, deflate",
"Connection": "close",
"Content-Length": "133",
"Content-Type": "application/json",
"Host": "httpbin.org",
"User-Agent": "HTTPie/0.9.9"
},
"json": null,
"origin": "1.1.1.1",
"url": "http://httpbin.org/post"
}
``` | closed | 2017-09-29T09:53:42Z | 2023-11-07T23:59:52Z | https://github.com/httpie/cli/issues/613 | [
"bug"
] | tobilg | 3 |
LibrePhotos/librephotos | django | 979 | Manual date/time entry not working | With the 23/08/01 pull version, the submit button to modify the time/date field is not working.
I tested with demo2, with the same result. | closed | 2023-08-01T17:29:17Z | 2023-08-12T14:31:31Z | https://github.com/LibrePhotos/librephotos/issues/979 | [
"bug"
] | loulou91 | 1 |
plotly/dash | data-science | 2,419 | allow send_data_frame to send df.to_csv using send_bytes | [pandas df.to_csv](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.to_csv.html) supports encodings other than utf-8 in versions > 1.2 but only if the path_or_buf is a binary file type so if I want to do `dcc.send_data_frame(df.to_csv, "mydf.csv", encoding='utf-8-sig')`, the encoding will still be `utf-8` unless I modify [this line ](https://github.com/plotly/dash/blob/66c7847440ab0b96c02d8ad24e694ea20b242916/components/dash-core-components/dash_core_components_base/express.py#L95)with this monkey patch so the file is sent as bytes:
```
dcc.express._data_frame_senders['to_csv'] = dcc.express.send_bytes
```
It would be great if there was an option so that send_data_frame used bytes if pandas > 1.2 and encoding was one of the kwargs.
Simple example to test:
```
from dash import Dash, dcc, html, Input, Output
import pandas as pd
# uncomment this line so that the specified encoding works
# dcc.express._data_frame_senders['to_csv'] = dcc.express.send_bytes
app = Dash(__name__)
app.layout = html.Div(
[
html.Button("Download CSV", id="btn_csv"),
dcc.Download(id="download-dataframe-csv"),
]
)
df = pd.DataFrame({"a": ['á', 'é', 'å', 'ñ'], "b": [2, 1, 5, 6], "c": ["x", "x", "y", "y"]})
@app.callback(
Output("download-dataframe-csv", "data"),
Input("btn_csv", "n_clicks"),
prevent_initial_call=True,
)
def func(n_clicks):
df.to_csv("code.csv") # the output file is 43 bytes using utf-8
return dcc.send_data_frame(df.to_csv, "mydf.csv", encoding='utf-8-sig') # the output file is 46 bytes using utf-8-sig
if __name__ == "__main__":
app.run_server(debug=True)
```
| open | 2023-02-10T18:06:07Z | 2024-08-13T19:26:10Z | https://github.com/plotly/dash/issues/2419 | [
"feature",
"P3"
] | michaelbabyn | 0 |
pydantic/logfire | pydantic | 393 | pip-compile logfire with opentelemetry-instrumentation depending on setuptools | ### Question
Don't know if this is an issue or not, so I opened it as a question. I'm trying to instrument a FastAPI app. That means you need to pip install logfire[fastapi], which installs logfire and opentelemetry-instrumentation-fastapi. But looking at the requirements that I am building from requirements.in, I see the following warning:
```
# The following packages are considered to be unsafe in a requirements file:
setuptools==73.0.1 \
--hash=sha256:b208925fcb9f7af924ed2dc04708ea89791e24bde0d3020b27df0e116088b34e \
--hash=sha256:d59a3e788ab7e012ab2c4baed1b376da6366883ee20d7a5fc426816e3d7b1193
# via opentelemetry-instrumentation
```
When trying to instrument the app (`logfire.instrument_fastapi(app)`), it gives the error below. Note that I am running the app in Bazel. I know that Bazel's sandbox environment is highly restrictive, and certain packages might expect files that are not properly bundled or accessible in this environment. But the `Lorem ipsum.txt` file referenced in the error is part of test/example data within the jaraco.text package, which should not typically be required at runtime. That said, I do see that the file exists in setuptools: https://github.com/pypa/setuptools/blob/main/setuptools/_vendor/jaraco/text/Lorem%20ipsum.txt.
```
Traceback (most recent call last):
File "/private/var/tmp/_bazel_user/5cf272d3549b214b63c8a304b6dc81fa/execroot/_main/bazel-out/darwin_x86_64-fastbuild/bin/src/chats/entrypoints/app.runfiles/_main/src/chats/entrypoints/app.py", line 20, in <module>
logfire.instrument_fastapi(app)
File "/private/var/tmp/_bazel_user/5cf272d3549b214b63c8a304b6dc81fa/execroot/_main/bazel-out/darwin_x86_64-fastbuild/bin/src/chats/entrypoints/app.runfiles/rules_python~~pip~pip_311_logfire/site-packages/logfire/_internal/main.py", line 867, in instrument_fastapi
from .integrations.fastapi import instrument_fastapi
File "/private/var/tmp/_bazel_user/5cf272d3549b214b63c8a304b6dc81fa/execroot/_main/bazel-out/darwin_x86_64-fastbuild/bin/src/chats/entrypoints/app.runfiles/rules_python~~pip~pip_311_logfire/site-packages/logfire/_internal/integrations/fastapi.py", line 23, in <module>
from opentelemetry.instrumentation.fastapi import FastAPIInstrumentor
File "/private/var/tmp/_bazel_user/5cf272d3549b214b63c8a304b6dc81fa/execroot/_main/bazel-out/darwin_x86_64-fastbuild/bin/src/chats/entrypoints/app.runfiles/rules_python~~pip~pip_311_opentelemetry_instrumentation_fastapi/site-packages/opentelemetry/instrumentation/fastapi/__init__.py", line 199, in <module>
from opentelemetry.instrumentation.instrumentor import BaseInstrumentor
File "/private/var/tmp/_bazel_user/5cf272d3549b214b63c8a304b6dc81fa/execroot/_main/bazel-out/darwin_x86_64-fastbuild/bin/src/chats/entrypoints/app.runfiles/rules_python~~pip~pip_311_opentelemetry_instrumentation/site-packages/opentelemetry/instrumentation/instrumentor.py", line 27, in <module>
from opentelemetry.instrumentation.dependencies import (
File "/private/var/tmp/_bazel_user/5cf272d3549b214b63c8a304b6dc81fa/execroot/_main/bazel-out/darwin_x86_64-fastbuild/bin/src/chats/entrypoints/app.runfiles/rules_python~~pip~pip_311_opentelemetry_instrumentation/site-packages/opentelemetry/instrumentation/dependencies.py", line 4, in <module>
from pkg_resources import (
File "/private/var/tmp/_bazel_user/5cf272d3549b214b63c8a304b6dc81fa/execroot/_main/bazel-out/darwin_x86_64-fastbuild/bin/src/chats/entrypoints/app.runfiles/rules_python~~pip~pip_311_setuptools/site-packages/pkg_resources/__init__.py", line 98, in <module>
from jaraco.text import drop_comment, join_continuation, yield_lines
File "/private/var/tmp/_bazel_user/5cf272d3549b214b63c8a304b6dc81fa/execroot/_main/bazel-out/darwin_x86_64-fastbuild/bin/src/chats/entrypoints/app.runfiles/rules_python~~pip~pip_311_setuptools/site-packages/setuptools/_vendor/jaraco/text/__init__.py", line 231, in <module>
files(__name__).joinpath('Lorem ipsum.txt').read_text(encoding='utf-8')
File "/private/var/tmp/_bazel_user/5cf272d3549b214b63c8a304b6dc81fa/external/rules_python~~python~python_3_11_x86_64-apple-darwin/lib/python3.11/pathlib.py", line 1058, in read_text
with self.open(mode='r', encoding=encoding, errors=errors) as f:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/private/var/tmp/_bazel_user/5cf272d3549b214b63c8a304b6dc81fa/external/rules_python~~python~python_3_11_x86_64-apple-darwin/lib/python3.11/pathlib.py", line 1044, in open
return io.open(self, mode, buffering, encoding, errors, newline)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
FileNotFoundError: [Errno 2] No such file or directory: '/private/var/tmp/_bazel_user/5cf272d3549b214b63c8a304b6dc81fa/execroot/_main/bazel-out/darwin_x86_64-fastbuild/bin/src/chats/entrypoints/app.runfiles/rules_python~~pip~pip_311_setuptools/site-packages/setuptools/_vendor/jaraco/text/Lorem ipsum.txt'
Logfire project URL: https://logfire.pydantic.dev/company/company-app
```
Generally, I feel that the opentelemetry-instrumentation dependency on setuptools is degrading your build stability. | closed | 2024-08-21T14:16:45Z | 2024-08-27T13:08:31Z | https://github.com/pydantic/logfire/issues/393 | [
"Question"
] | robert-moyai | 4 |
Buuntu/fastapi-react | fastapi | 191 | Frontend build failed | The frontend build failed, and I don't know why:
```bash
/app/run.sh: line 2: $'\r': command not found
/app/run.sh: line 3: syntax error near unexpected token `$'in\r''
'app/run.sh: line 3: `case $1 in
``` | open | 2022-06-06T16:23:03Z | 2022-08-02T04:27:43Z | https://github.com/Buuntu/fastapi-react/issues/191 | [] | buaaflyaway | 2 |
Anjok07/ultimatevocalremovergui | pytorch | 730 | error | Last Error Received:
Error Received while processing "gamevoice.wav":
Process Method: VR Architecture
If this error persists, please contact the developers with the error details.
Traceback Error: " File "inference_v5.py", line 611, in main
File "lib_v5\spec_utils.py", line 77, in wave_to_spectrogram_mt
"
NameError: "name 'spec_left' is not defined"
Error Time Stamp [2023-08-10 17:04:29]
| open | 2023-08-10T09:10:40Z | 2023-08-10T09:10:40Z | https://github.com/Anjok07/ultimatevocalremovergui/issues/730 | [] | ValerianMa1 | 0 |
huggingface/transformers | tensorflow | 36,660 | [FEAT] [non-CUDA]: Support alternative implementation for `constraints.positive_definite.check` | ### Feature request
Could there be an alternative implementation for
```
/usr/local/lib/python3.12/dist-packages/transformers/modeling_utils.py:2470: in _init_added_embeddings_weights_with_mean
is_covariance_psd = constraints.positive_definite.check(epsilon * covariance).all()
```
The `torch.linalg.cholesky` kernel only exists for CUDA in PyTorch when built with MAGMA.
### Motivation
To support vision language embedding model (llava model) on vLLM for ROCm.
When I am trying to enable vision_language embedding model support on vLLM for ROCm, I encounter this issue.
```
tests/models/embedding/vision_language/test_llava_next.py:134:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
tests/models/embedding/vision_language/test_llava_next.py:63: in _run_test
hf_model.model.resize_token_embeddings(
/usr/local/lib/python3.12/dist-packages/transformers/modeling_utils.py:2109: in resize_token_embeddings
model_embeds = self._resize_token_embeddings(new_num_tokens, pad_to_multiple_of, mean_resizing)
/usr/local/lib/python3.12/dist-packages/transformers/modeling_utils.py:2134: in _resize_token_embeddings
new_embeddings = self._get_resized_embeddings(
/usr/local/lib/python3.12/dist-packages/transformers/modeling_utils.py:2291: in _get_resized_embeddings
self._init_added_embeddings_weights_with_mean(
/usr/local/lib/python3.12/dist-packages/transformers/modeling_utils.py:2470: in _init_added_embeddings_weights_with_mean
is_covariance_psd = constraints.positive_definite.check(epsilon * covariance).all()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = PositiveDefinite()
value = tensor([[ 8.4661e-14, -9.3146e-17, 5.4274e-16, ..., -1.2541e-16,
8.1008e-16, 2.6355e-16],
[-9.314... [ 2.6355e-16, -5.6042e-16, 5.1984e-16, ..., -1.9993e-16,
-2.7124e-16, 8.5429e-14]], device='cuda:0')
def check(self, value):
sym_check = super().check(value)
if not sym_check.all():
return sym_check
> return torch.linalg.cholesky_ex(value).info.eq(0)
E RuntimeError: Calling torch.linalg.cholesky on a CUDA tensor requires compiling PyTorch with MAGMA. Please use PyTorch built with MAGMA support.
```
the `torch.linalg.cholesky` only exists for CUDA in pytorch.
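For context, the check itself does not strictly need a CUDA Cholesky. As a rough illustration (my own pure-Python sketch, not transformers code — the function name and tolerance are made up), the same "attempt a Cholesky factorization, fail fast on a bad pivot" idea works without MAGMA:

```python
import math

def is_psd(matrix, tol=1e-10):
    """Return True if a symmetric matrix (list of row lists) is positive
    semi-definite, by attempting a Cholesky-style factorization in pure
    Python -- no torch.linalg.cholesky (and hence no MAGMA) involved."""
    n = len(matrix)
    chol = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(chol[i][k] * chol[j][k] for k in range(j))
            if i == j:
                diag = matrix[i][i] - s
                if diag < -tol:  # negative pivot => not PSD
                    return False
                chol[i][i] = math.sqrt(max(diag, 0.0))
            elif chol[j][j] > tol:
                chol[i][j] = (matrix[i][j] - s) / chol[j][j]
            elif abs(matrix[i][j] - s) > tol:
                return False  # zero pivot with a nonzero off-diagonal => not PSD
            # else: leave chol[i][j] = 0.0 (positive semi-definite case)
    return True
```

In `modeling_utils.py` itself, I imagine one could instead move `epsilon * covariance` to CPU before calling `constraints.positive_definite.check`, or use an eigenvalue-based test such as `torch.linalg.eigvalsh`; I have not verified which would be preferable upstream.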
### Your contribution
By helping to test the fix on AMD GPUs and providing feedback. | open | 2025-03-12T09:38:30Z | 2025-03-15T18:19:37Z | https://github.com/huggingface/transformers/issues/36660 | [
"Feature request"
] | tjtanaa | 10 |
aleju/imgaug | deep-learning | 33 | May you add the script to generate 'examples_grid.jpg' image? | Could you add the script used to generate the 'examples_grid.jpg' image?
Thanks! | closed | 2017-05-21T00:18:20Z | 2017-05-21T00:33:54Z | https://github.com/aleju/imgaug/issues/33 | [] | panovr | 1 |
ultralytics/ultralytics | python | 18,754 | Interpretation of Yolo10's and Yolo11's output using OpenVINO | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/ultralytics/ultralytics/discussions) and found no similar questions.
### Question
# Question
I have converted the Yolo10 and Yolo11 nano models to OpenVINO models. When I use the `YOLO` function and pass these models to detect objects in an image, the output is understandable. But if I use OpenVINO's API to run inference with the model, the output is strange.
Consider the following image:

The Yolo10n model finds five cars and Yolo11 finds five cars and two trucks in the image, but the shape of the output is (1,300,6) for Yolo10 and (1,84,8400) for Yolo11.
**First:** Does YOLO10 need NMS?
Moreover, the output data is not meaningful. Consider Yolo10: I guess the third dimension in Yolo10's output should correspond to `(x, y, w, h, confidence score, class number)`, but the output data shows something else:
* The 5th column (which should correspond to the confidence score) is always 1.0.
* The 6th column (which should correspond to the class number) contains 41, 67, and 73 (not 2).
Note that the car class number is 2 in pretrained Yolo models.
**Second:** Could someone please explain the structure of the output of the Yolo10 model when I use OpenVINO's API?
# Code for converting Yolo model to OpenVINO
```
from ultralytics import YOLO
#model = YOLO("yolov10n.pt")
model = YOLO("yolo11n.pt")
result = model.export(format="openvino", device='cpu', dynamic=False, half=False, imgsz=(640,640))
```
# Code for inferring images using Yolo classes with OpenVINO models
```
from ultralytics import YOLO
#model = YOLO("./yolov10n_openvino_model/")
model = YOLO("./yolo11n_openvino_model/")
source = "../car.jpg"
result = model.predict(source, show=True, show_labels=True, show_conf=True, save=True, imgsz=(640,640), conf=0.2, device='cpu')
```
# Code for inferring images using OpenVINO api
```
import cv2
import numpy as np
import openvino as ov
core = ov.Core()
model_path = "./yolov10n_openvino_model/yolov10n.xml"
#model_path = "./yolo11n_openvino_model/yolo11n.xml"
image_path = "../car.jpg"
device_name = "CPU"
# ------------- read model -------------
model = core.read_model(model_path)
# ------------- setup input -------------
# Read input image
image = cv2.imread(image_path)
# Add N dimension
input_tensor = np.expand_dims(image, 0)
# ------------- apply preprocessing -------------
ppp = ov.preprocess.PrePostProcessor(model)
_, h, w, _ = input_tensor.shape
# 1) Set input tensor information:
# - input() provides information about a single model input
# - reuse precision and shape from already available `input_tensor`
# - layout of data is 'NHWC'
ppp.input().tensor() \
.set_shape(input_tensor.shape) \
.set_element_type(ov.Type.u8) \
.set_layout(ov.Layout('NHWC')) # noqa: ECE001, N400
# 2) Adding explicit preprocessing steps:
# - apply linear resize from tensor spatial dims to model spatial dims
ppp.input().preprocess().resize(ov.preprocess.ResizeAlgorithm.RESIZE_LINEAR)
# 3) Here we suppose model has 'NCHW' layout for input
ppp.input().model().set_layout(ov.Layout('NCHW'))
# 4) Set output tensor information:
# - precision of tensor is supposed to be 'f32'
ppp.output().tensor().set_element_type(ov.Type.f32)
# 5) Apply preprocessing modifying the original 'model'
model = ppp.build()
# ------------- Load model to device -------------
compiled_model = core.compile_model(model, device_name) #load model with preprocessing
# ------------- Create infer request and do inference synchronously -------------
results = compiled_model.infer_new_request({0: input_tensor})
# ------------- Process output -------------
predictions = results[0]
print(predictions)
```
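In case it helps the discussion of my first question: this is the greedy NMS I would apply to the decoded boxes if the raw output indeed needs it (my own pure-Python sketch over `(x1, y1, x2, y2)` corner boxes and per-box scores — not an ultralytics or OpenVINO API, and the 0.45 threshold is just my assumption):

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def nms(boxes, scores, iou_thr=0.45):
    """Return indices of boxes kept after greedy non-maximum suppression."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < iou_thr for j in keep):
            keep.append(i)
    return keep
```

With something like this, I would first decode the raw (1, 84, 8400) tensor into candidate boxes and class scores, then filter — assuming that tensor is indeed the pre-NMS head output.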
### Additional
_No response_ | open | 2025-01-18T13:48:21Z | 2025-02-16T14:31:59Z | https://github.com/ultralytics/ultralytics/issues/18754 | [
"question",
"detect",
"exports"
] | nikpayam | 20 |
nltk/nltk | nlp | 2,969 | Testing offline | (I am sorry for bothering you on the issue tracker with something which probably isn't a bug, but I don't have any other functional means of communication with the project.)
While packaging NLTK for openSUSE, I would like to get the tests running. The problem is that our build system (like the build systems of all distributions) is isolated from the Internet, so I need to make it possible to run the test suite without touching the network. So, I have downloaded all of nltk_data and set the NLTK_DATA variable accordingly. Unfortunately, the result is not good:
```
[ 78s] + cd /home/abuild/rpmbuild/BUILD
[ 78s] + cd nltk-3.7
[ 78s] ++ readlink -f ./ntlk_data/
[ 78s] + export NLTK_DATA=/home/abuild/rpmbuild/BUILD/nltk-3.7/ntlk_data
[ 78s] + NLTK_DATA=/home/abuild/rpmbuild/BUILD/nltk-3.7/ntlk_data
[ 78s] ++ '[' -f _current_flavor ']'
[ 78s] ++ cat _current_flavor
[ 78s] + last_flavor=python38
[ 78s] + '[' -z python38 ']'
[ 78s] + '[' python38 '!=' python39 ']'
[ 78s] + '[' -d build ']'
[ 78s] + mv build _build.python38
[ 78s] + '[' -d _build.python39 ']'
[ 78s] + mv _build.python39 build
[ 78s] + echo python39
[ 78s] + python_flavor=python39
[ 78s] + PYTHONPATH=/home/abuild/rpmbuild/BUILDROOT/python-nltk-3.7-0.x86_64/usr/lib/python3.9/site-packages
[ 78s] + PYTHONDONTWRITEBYTECODE=1
[ 78s] + pytest-3.9 --ignore=_build.python39 --ignore=_build.python310 --ignore=_build.python38 -v
[ 79s] ============================= test session starts ==============================
[ 79s] platform linux -- Python 3.9.10, pytest-6.2.5, py-1.11.0, pluggy-1.0.0 -- /usr/bin/python3.9
[ 79s] cachedir: .pytest_cache
[ 79s] rootdir: /home/abuild/rpmbuild/BUILD/nltk-3.7
[ 79s] plugins: cov-3.0.0, mock-3.6.1
[ 95s] collecting ... collected 424 items / 3 errors / 421 selected
[ 95s]
[ 95s] ==================================== ERRORS ====================================
[ 95s] _______________ ERROR collecting nltk/test/unit/test_corpora.py ________________
[ 95s] nltk/corpus/util.py:84: in __load
[ 95s] root = nltk.data.find(f"{self.subdir}/{zip_name}")
[ 95s] nltk/data.py:583: in find
[ 95s] raise LookupError(resource_not_found)
[ 95s] E LookupError:
[ 95s] E **********************************************************************
[ 95s] E Resource ptb not found.
[ 95s] E Please use the NLTK Downloader to obtain the resource:
[ 95s] E
[ 95s] E >>> import nltk
[ 95s] E >>> nltk.download('ptb')
[ 95s] E
[ 95s] E For more information see: https://www.nltk.org/data.html
[ 95s] E
[ 95s] E Attempted to load corpora/ptb.zip/ptb/
[ 95s] E
[ 95s] E Searched in:
[ 95s] E - '/home/abuild/rpmbuild/BUILD/nltk-3.7/ntlk_data'
[ 95s] E - '/home/abuild/nltk_data'
[ 95s] E - '/usr/nltk_data'
[ 95s] E - '/usr/share/nltk_data'
[ 95s] E - '/usr/lib/nltk_data'
[ 95s] E - '/usr/share/nltk_data'
[ 95s] E - '/usr/local/share/nltk_data'
[ 95s] E - '/usr/lib/nltk_data'
[ 95s] E - '/usr/local/lib/nltk_data'
[ 95s] E **********************************************************************
[ 95s]
[ 95s] During handling of the above exception, another exception occurred:
[ 95s] nltk/test/unit/test_corpora.py:186: in <module>
[ 95s] ???
[ 95s] nltk/corpus/util.py:121: in __getattr__
[ 95s] self.__load()
[ 95s] nltk/corpus/util.py:86: in __load
[ 95s] raise e
[ 95s] nltk/corpus/util.py:81: in __load
[ 95s] root = nltk.data.find(f"{self.subdir}/{self.__name}")
[ 95s] nltk/data.py:583: in find
[ 95s] raise LookupError(resource_not_found)
[ 95s] E LookupError:
[ 95s] E **********************************************************************
[ 95s] E Resource ptb not found.
[ 95s] E Please use the NLTK Downloader to obtain the resource:
[ 95s] E
[ 95s] E >>> import nltk
[ 95s] E >>> nltk.download('ptb')
[ 95s] E
[ 95s] E For more information see: https://www.nltk.org/data.html
[ 95s] E
[ 95s] E Attempted to load corpora/ptb
[ 95s] E
[ 95s] E Searched in:
[ 95s] E - '/home/abuild/rpmbuild/BUILD/nltk-3.7/ntlk_data'
[ 95s] E - '/home/abuild/nltk_data'
[ 95s] E - '/usr/nltk_data'
[ 95s] E - '/usr/share/nltk_data'
[ 95s] E - '/usr/lib/nltk_data'
[ 95s] E - '/usr/share/nltk_data'
[ 95s] E - '/usr/local/share/nltk_data'
[ 95s] E - '/usr/lib/nltk_data'
[ 95s] E - '/usr/local/lib/nltk_data'
[ 95s] E **********************************************************************
[ 95s] _______________ ERROR collecting nltk/test/unit/test_nombank.py ________________
[ 95s] nltk/corpus/util.py:84: in __load
[ 95s] root = nltk.data.find(f"{self.subdir}/{zip_name}")
[ 95s] nltk/data.py:583: in find
[ 95s] raise LookupError(resource_not_found)
[ 95s] E LookupError:
[ 95s] E **********************************************************************
[ 95s] E Resource nombank.1.0 not found.
[ 95s] E Please use the NLTK Downloader to obtain the resource:
[ 95s] E
[ 95s] E >>> import nltk
[ 95s] E >>> nltk.download('nombank.1.0')
[ 95s] E
[ 95s] E For more information see: https://www.nltk.org/data.html
[ 95s] E
[ 95s] E Attempted to load corpora/nombank.1.0.zip/nombank.1.0/
[ 95s] E
[ 95s] E Searched in:
[ 95s] E - '/home/abuild/rpmbuild/BUILD/nltk-3.7/ntlk_data'
[ 95s] E - '/home/abuild/nltk_data'
[ 95s] E - '/usr/nltk_data'
[ 95s] E - '/usr/share/nltk_data'
[ 95s] E - '/usr/lib/nltk_data'
[ 95s] E - '/usr/share/nltk_data'
[ 95s] E - '/usr/local/share/nltk_data'
[ 95s] E - '/usr/lib/nltk_data'
[ 95s] E - '/usr/local/lib/nltk_data'
[ 95s] E **********************************************************************
[ 95s]
[ 95s] During handling of the above exception, another exception occurred:
[ 95s] nltk/test/unit/test_nombank.py:10: in <module>
[ 95s] nombank.nouns()
[ 95s] nltk/corpus/util.py:121: in __getattr__
[ 95s] self.__load()
[ 95s] nltk/corpus/util.py:86: in __load
[ 95s] raise e
[ 95s] nltk/corpus/util.py:81: in __load
[ 95s] root = nltk.data.find(f"{self.subdir}/{self.__name}")
[ 95s] nltk/data.py:583: in find
[ 95s] raise LookupError(resource_not_found)
[ 95s] E LookupError:
[ 95s] E **********************************************************************
[ 95s] E Resource nombank.1.0 not found.
[ 95s] E Please use the NLTK Downloader to obtain the resource:
[ 95s] E
[ 95s] E >>> import nltk
[ 95s] E >>> nltk.download('nombank.1.0')
[ 95s] E
[ 95s] E For more information see: https://www.nltk.org/data.html
[ 95s] E
[ 95s] E Attempted to load corpora/nombank.1.0
[ 95s] E
[ 95s] E Searched in:
[ 95s] E - '/home/abuild/rpmbuild/BUILD/nltk-3.7/ntlk_data'
[ 95s] E - '/home/abuild/nltk_data'
[ 95s] E - '/usr/nltk_data'
[ 95s] E - '/usr/share/nltk_data'
[ 95s] E - '/usr/lib/nltk_data'
[ 95s] E - '/usr/share/nltk_data'
[ 95s] E - '/usr/local/share/nltk_data'
[ 95s] E - '/usr/lib/nltk_data'
[ 95s] E - '/usr/local/lib/nltk_data'
[ 95s] E **********************************************************************
[ 95s] _______________ ERROR collecting nltk/test/unit/test_wordnet.py ________________
[ 95s] nltk/corpus/util.py:84: in __load
[ 95s] root = nltk.data.find(f"{self.subdir}/{zip_name}")
[ 95s] nltk/data.py:583: in find
[ 95s] raise LookupError(resource_not_found)
[ 95s] E LookupError:
[ 95s] E **********************************************************************
[ 95s] E Resource wordnet not found.
[ 95s] E Please use the NLTK Downloader to obtain the resource:
[ 95s] E
[ 95s] E >>> import nltk
[ 95s] E >>> nltk.download('wordnet')
[ 95s] E
[ 95s] E For more information see: https://www.nltk.org/data.html
[ 95s] E
[ 95s] E Attempted to load corpora/wordnet.zip/wordnet/
[ 95s] E
[ 95s] E Searched in:
[ 95s] E - '/home/abuild/rpmbuild/BUILD/nltk-3.7/ntlk_data'
[ 95s] E - '/home/abuild/nltk_data'
[ 95s] E - '/usr/nltk_data'
[ 95s] E - '/usr/share/nltk_data'
[ 95s] E - '/usr/lib/nltk_data'
[ 95s] E - '/usr/share/nltk_data'
[ 95s] E - '/usr/local/share/nltk_data'
[ 95s] E - '/usr/lib/nltk_data'
[ 95s] E - '/usr/local/lib/nltk_data'
[ 95s] E **********************************************************************
[ 95s]
[ 95s] During handling of the above exception, another exception occurred:
[ 95s] nltk/test/unit/test_wordnet.py:10: in <module>
[ 95s] wn.ensure_loaded()
[ 95s] nltk/corpus/util.py:121: in __getattr__
[ 95s] self.__load()
[ 95s] nltk/corpus/util.py:86: in __load
[ 95s] raise e
[ 95s] nltk/corpus/util.py:81: in __load
[ 95s] root = nltk.data.find(f"{self.subdir}/{self.__name}")
[ 95s] nltk/data.py:583: in find
[ 95s] raise LookupError(resource_not_found)
[ 95s] E LookupError:
[ 95s] E **********************************************************************
[ 95s] E Resource wordnet not found.
[ 95s] E Please use the NLTK Downloader to obtain the resource:
[ 95s] E
[ 95s] E >>> import nltk
[ 95s] E >>> nltk.download('wordnet')
[ 95s] E
[ 95s] E For more information see: https://www.nltk.org/data.html
[ 95s] E
[ 95s] E Attempted to load corpora/wordnet
[ 95s] E
[ 95s] E Searched in:
[ 95s] E - '/home/abuild/rpmbuild/BUILD/nltk-3.7/ntlk_data'
[ 95s] E - '/home/abuild/nltk_data'
[ 95s] E - '/usr/nltk_data'
[ 95s] E - '/usr/share/nltk_data'
[ 95s] E - '/usr/lib/nltk_data'
[ 95s] E - '/usr/share/nltk_data'
[ 95s] E - '/usr/local/share/nltk_data'
[ 95s] E - '/usr/lib/nltk_data'
[ 95s] E - '/usr/local/lib/nltk_data'
[ 95s] E **********************************************************************
[ 95s] =============================== warnings summary ===============================
[ 95s] nltk/test/unit/test_tokenize.py:22
[ 95s] /home/abuild/rpmbuild/BUILD/nltk-3.7/nltk/test/unit/test_tokenize.py:22: DeprecationWarning:
[ 95s] The StanfordTokenizer will be deprecated in version 3.2.5.
[ 95s] Please use nltk.parse.corenlp.CoreNLPTokenizer instead.'
[ 95s] seg = StanfordSegmenter()
[ 95s]
[ 95s] -- Docs: https://docs.pytest.org/en/stable/warnings.html
[ 95s] =========================== short test summary info ============================
[ 95s] ERROR nltk/test/unit/test_corpora.py - LookupError:
[ 95s] ERROR nltk/test/unit/test_nombank.py - LookupError:
[ 95s] ERROR nltk/test/unit/test_wordnet.py - LookupError:
[ 95s] !!!!!!!!!!!!!!!!!!! Interrupted: 3 errors during collection !!!!!!!!!!!!!!!!!!!!
[ 95s] ======================== 1 warning, 3 errors in 16.17s =========================
[ 95s] error: Bad exit status from /var/tmp/rpm-tmp.xNuAZW (%check)
```
[Complete log](https://github.com/nltk/nltk/files/8358631/_log-python-nltk.txt)
Any idea, what's wrong?
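One workaround I am considering on our side (just my idea, not something the NLTK docs recommend) is to deselect the corpora-dependent modules so that the rest of the suite can run offline:

```shell
# Skip the tests whose corpora (ptb, nombank.1.0, wordnet) we cannot ship:
pytest-3.9 -v \
  --ignore=nltk/test/unit/test_corpora.py \
  --ignore=nltk/test/unit/test_nombank.py \
  --ignore=nltk/test/unit/test_wordnet.py
```

But I would prefer to understand why the data lookup itself fails.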
Thank you for any reply,
Matěj
--
https://matej.ceplovi.cz/blog/, Jabber: mcepl@ceplovi.cz
GPG Finger: 3C76 A027 CA45 AD70 98B5 BC1D 7920 5802 880B C9D8
Las cosas claras y el chocolate espeso.
(Ideas should be clear and chocolate thick.)
-- Spanish proverb | open | 2022-03-27T22:57:40Z | 2022-12-27T18:20:06Z | https://github.com/nltk/nltk/issues/2969 | [] | mcepl | 2 |
AirtestProject/Airtest | automation | 1,234 | macOS 14.5 (23F79): after opening the IDE, the script editing area cannot be edited, shows solid black, and clicking creates a new script. |
**问题分类**
* 测试开发环境AirtestIDE使用
**描述问题bug**
* 打开IDE,脚本编辑区域不能编辑,显示一片黑色,点击会新建脚本
**相关截图**
<img width="644" alt="image" src="https://github.com/user-attachments/assets/db5f794e-3af8-47ad-be7b-d231e39f7639">
**复现步骤**
1. 打开IDE
2. 脚本编辑区显示一片黑色
3. 点击会新建脚本
**预期效果**
* 可以正常编辑
**python 版本:** `Python 3.9.6`
**airtest 版本:** `1.3.4`
**设备:**
- 型号: [iMac,3 GHz 六核Intel Core i5, Radeon Pro 570X 4 GB]
- 系统: [macOS 14.5]
**其他相关环境信息**
* 无
| open | 2024-08-08T01:07:02Z | 2024-11-11T07:27:01Z | https://github.com/AirtestProject/Airtest/issues/1234 | [] | CodeKevin | 3 |
ydataai/ydata-profiling | data-science | 1,688 | plot.histogram.max_bins setting error | ### Current Behaviour
When the number of histogram bins exceeds the `plot.histogram.max_bins` setting,
Exception: `ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()` in Python 3.10
The failing code is: ~/anaconda3/lib/python3.11/site-packages/ydata_profiling/model/summary_algorithms.py:44, in histogram_compute(config, finite_values, n_unique, name, weights)
### Expected Behaviour
The report generates successfully.
### Data Description
~/anaconda3/lib/python3.11/site-packages/ydata_profiling/model/summary_algorithms.py:44, in histogram_compute(config, finite_values, n_unique, name, weights)
### Code that reproduces the bug
```Python
from ydata_profiling.config import Settings
from ydata_profiling.utils.paths import get_config
import numpy as np
import pandas as pd
from ydata_profiling import ProfileReport
data = [[1.0, 999,"", 10, None],
[34.54,3424,None,4,5],
[9548.43,1,"fdsfv",54,876],
[32,43.43,"dsfda",43,12],
[1.0,5454,"cxcc",13,43],
[45.7,43,"fsdfsfsfdsfsdf",1,54],
]
df = pd.DataFrame(np.array(data), columns=["a", "b", "c", "d", "e"])
conf = get_config("config_default.yaml")
conf = Settings().from_file(conf)
conf.plot.histogram.max_bins=2
conf.plot.histogram.bins = 0
profile = ProfileReport(df, config=conf, title="Pandas Profiling Report")
# profile.to_widgets(
profile.to_file("./your_report.html")
```
### pandas-profiling version
v4.12.0
### Dependencies
```Text
*
```
### OS
MacBook (M1 chip)
### Checklist
- [X] There is not yet another bug report for this issue in the [issue tracker](https://github.com/ydataai/pandas-profiling/issues)
- [X] The problem is reproducible from this bug report. [This guide](http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports) can help to craft a minimal bug report.
- [X] The issue has not been resolved by the entries listed under [Common Issues](https://docs.profiling.ydata.ai/latest/support-contribution/contribution_guidelines/). | open | 2025-01-06T10:43:21Z | 2025-01-06T10:43:35Z | https://github.com/ydataai/ydata-profiling/issues/1688 | [
"needs-triage"
] | shiyinhongsama | 0 |
yinkaisheng/Python-UIAutomation-for-Windows | automation | 81 | Trying to use pywinauto and uiautomation together; calling SetActive raises a ctypes type error | In my project I am trying to use pywinauto together with uiautomation for automated testing, but as soon as pywinauto is imported,
calling the SetActive() method from uiautomation raises a type error.
Traceback (most recent call last):
File "D:/python_work/CAS_AutoTest/ut/ut.py", line 41, in <module>
auto.WindowControl(searchDepth=1, AutomationId="myMainWindow", RegexName="Login").SetActive()
File "D:\Python37-32\lib\site-packages\uiautomation\uiautomation.py", line 6907, in SetActive
elif not IsWindowVisible(handle):
File "D:\Python37-32\lib\site-packages\uiautomation\uiautomation.py", line 2124, in IsWindowVisible
return bool(ctypes.windll.user32.IsWindowVisible(ctypes.c_void_p(handle)))
ctypes.ArgumentError: argument 1: <class 'TypeError'>: wrong type | open | 2019-07-06T05:00:57Z | 2019-07-08T01:50:06Z | https://github.com/yinkaisheng/Python-UIAutomation-for-Windows/issues/81 | [
"question"
] | qdwangjianjun | 2 |
plotly/dash | flask | 3,238 | The stringcase package can no longer be installed with setuptools version >=78. | The [stringcase](https://pypi.org/project/stringcase/) package, which has been a hard dependency of Dash since this PR, can no longer be installed using setuptools >=78, because of the following setuptools PR: [#4870](https://github.com/pypa/setuptools/pull/4870). As a result, Dash can no longer be installed with setuptools version >=78.
_Originally posted by @ivan-mingolov-blue-technologies in https://github.com/plotly/dash/issues/3220#issuecomment-2748637195_
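As a temporary workaround on our side (not an official fix — the file name `constraints.txt` is just our convention), we pin setuptools via a constraints file:

```text
# constraints.txt — temporary pin until the stringcase dependency is gone
setuptools<78
```

installed with `pip install -c constraints.txt dash`.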
| open | 2025-03-24T16:07:40Z | 2025-03-24T17:31:05Z | https://github.com/plotly/dash/issues/3238 | [
"bug",
"P1"
] | T4rk1n | 3 |
ultralytics/yolov5 | machine-learning | 12,702 | Why the model has background class ? Predicted as Background ? | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
I am currently working on a project using YOLO v5/v8 for classifying different grades of cocoa beans, and I have encountered some confusion regarding the background class during my training.
As shown in the attached confusion matrix, the model predicts the A class with 100% accuracy; however, the background is also being predicted as the A class. Moreover, class B is predicted as background 10% of the time. I want to see the seeds that were classified as background, but I am not sure how to do that. This is puzzling because I would expect the background class to be distinct from the A, B, and C cocoa bean classes.
Could you please clarify the role of the background class in the training process? Do I need to include the background class in the calculations for True Positives (TP), False Positives (FP), True Negatives (TN), and False Negatives (FN)?
Also, I would like to understand why the model might be misclassifying the background as another class. Is there a common reason for this occurrence, and could you suggest any adjustments or best practices to improve the differentiation between the background and the cocoa bean classes in the model predictions?

### Additional
_No response_ | closed | 2024-02-03T19:05:20Z | 2024-10-20T19:38:52Z | https://github.com/ultralytics/yolov5/issues/12702 | [
"question",
"Stale"
] | itsmefifa | 3 |
ultralytics/yolov5 | pytorch | 13,297 | Facing issues while changing class ID values | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
I'm currently using the Xtreme1 tool for creating classes in the ontology section. I’m facing a couple of challenges and would appreciate any help:
Manual Assignment of Class IDs:
When I create a new class, the Class IDs are automatically assigned in sequential order. However, I need to assign specific Class IDs manually. Is there a way to override the automatic assignment and set the Class ID myself?
Managing Class ID Sequence After Deleting Datasets:
After deleting a dataset, Xtreme1 continues to track the previously generated Class IDs, and new classes are assigned the next available number. How can I reset or fix the Class ID sequence so that I can reuse or specify certain IDs?
Has anyone encountered similar issues, or does anyone know how to configure these settings?
### Additional
If anyone knows the answer, kindly reply here. | open | 2024-09-04T09:31:49Z | 2024-09-05T02:33:51Z | https://github.com/ultralytics/yolov5/issues/13297 | [
"question"
] | RaushanSharma7 | 1 |
Lightning-AI/pytorch-lightning | data-science | 19,673 | LightningModule.train_dataloader not being called | ### Bug description
The `train_dataloader` hook of `LightningModule` is not being called from `Trainer.fit`. I need to put code there that changes the dataloader and requires access to the optimizers, as follows:
```python
class Classifier(LightningModule):
    def __init__(
        self,
        *args,
        **kwargs,
    ):
        super().__init__()
        # model initialized here

    def train_dataloader(self) -> Any:
        dl = self.trainer.datamodule.train_dataloader()
        if not hasattr(self.trainer.datamodule, "batch_size_physical"):
            return dl  # just use the LightningDataModule as is
        # wrap using this function otherwise
        return wrap_data_loader(
            data_loader=dl,
            max_batch_size=self.trainer.datamodule.batch_size_physical,
            optimizer=self.optimizer,
        )
```
### What version are you seeing the problem on?
v2.1, v2.2
### How to reproduce the bug
run the following code. It should print `Hello from train_dataloader in the LightningModule` if the function is being called.
```python
import os
import torch
from lightning.pytorch import LightningDataModule, LightningModule, Trainer
from torch.utils.data import DataLoader, random_split
from torchvision import transforms
from torchvision.datasets import MNIST
class BoringModel(LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(28, 2)

    def forward(self, batch):
        x, y = batch
        return self.layer(x)

    def train_dataloader(self):
        print("Hello from train_dataloader in the LightningModule")
        return super().train_dataloader()

    def training_step(self, batch, batch_idx):
        loss = self(batch).sum()
        self.log("train_loss", loss)
        return {"loss": loss}

    def validation_step(self, batch, batch_idx):
        loss = self(batch).sum()
        self.log("valid_loss", loss)

    def test_step(self, batch, batch_idx):
        loss = self(batch).sum()
        self.log("test_loss", loss)

    def configure_optimizers(self):
        return torch.optim.SGD(self.layer.parameters(), lr=0.1)


class MNISTDataModule(LightningDataModule):
    def __init__(self, data_dir: str = "./"):
        super().__init__()
        self.data_dir = data_dir
        self.transform = transforms.Compose(
            [transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))]
        )

    def prepare_data(self):
        # download
        MNIST(self.data_dir, train=True, download=True)
        MNIST(self.data_dir, train=False, download=True)

    def setup(self, stage: str):
        # Assign train/val datasets for use in dataloaders
        if stage == "fit":
            mnist_full = MNIST(self.data_dir, train=True, transform=self.transform)
            self.mnist_train, self.mnist_val = random_split(
                mnist_full, [55000, 5000], generator=torch.Generator().manual_seed(42)
            )
        # Assign test dataset for use in dataloader(s)
        if stage == "test":
            self.mnist_test = MNIST(
                self.data_dir, train=False, transform=self.transform
            )
        if stage == "predict":
            self.mnist_predict = MNIST(
                self.data_dir, train=False, transform=self.transform
            )

    def train_dataloader(self):
        print("Hello from train_dataloader in the LightningDataModule")
        return DataLoader(self.mnist_train, batch_size=32)

    def val_dataloader(self):
        return DataLoader(self.mnist_val, batch_size=32)

    def test_dataloader(self):
        return DataLoader(self.mnist_test, batch_size=32)

    def predict_dataloader(self):
        return DataLoader(self.mnist_predict, batch_size=32)


def main():
    model = BoringModel()
    trainer = Trainer(
        default_root_dir=os.getcwd(),
        devices=1,
        limit_train_batches=1,
        limit_val_batches=1,
        limit_test_batches=1,
        num_sanity_val_steps=0,
        max_epochs=1,
        enable_model_summary=False,
    )
    datamodule = MNISTDataModule()
    trainer.fit(model, datamodule=datamodule)


if __name__ == "__main__":
    main()
```
### Error messages and logs
```
python boring_snippet.py
GPU available: True (cuda), used: True
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
`Trainer(limit_train_batches=1)` was configured so 1 batch per epoch will be used.
`Trainer(limit_val_batches=1)` was configured so 1 batch will be used.
`Trainer(limit_test_batches=1)` was configured so 1 batch will be used.
You are using a CUDA device ('NVIDIA A100-SXM4-40GB') that has Tensor Cores. To properly utilize them, you should set `torch.set_float32_matmul_precision('medium' | 'high')` which will trade-off precision for performance. For more details, read https://pytorch.org/docs/stable/generated/torch.set_float32_matmul_precision.html#torch.set_float32_matmul_precision
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0,1,2,3,4,5,6,7]
Hello from train_dataloader in the LightningDataModule
/opt/conda/lib/python3.11/site-packages/lightning/pytorch/trainer/connectors/data_connector.py:441: The 'train_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` to `num_workers=255` in the `DataLoader` to improve performance.
/opt/conda/lib/python3.11/site-packages/lightning/pytorch/loops/fit_loop.py:298: The number of training batches (1) is smaller than the logging interval Trainer(log_every_n_steps=50). Set a lower value for log_every_n_steps if you want to see logs for the training epoch.
/opt/conda/lib/python3.11/site-packages/lightning/pytorch/trainer/connectors/data_connector.py:441: The 'val_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` to `num_workers=255` in the `DataLoader` to improve performance.
Epoch 0: 100%|_____________________________________________________________________________________________________________________________________________| 1/1 [00:00<00:00, 7.82it/s, v_num=49]
`Trainer.fit` stopped: `max_epochs=1` reached.
Epoch 0: 100%|_____________________________________________________________________________________________________________________________________________| 1/1 [00:00<00:00, 7.67it/s, v_num=49]
```
### Environment
<details>
<summary>Current environment</summary>
* CUDA:
- GPU:
- NVIDIA A100-SXM4-40GB
- NVIDIA A100-SXM4-40GB
- NVIDIA A100-SXM4-40GB
- NVIDIA A100-SXM4-40GB
- NVIDIA A100-SXM4-40GB
- NVIDIA A100-SXM4-40GB
- NVIDIA A100-SXM4-40GB
- NVIDIA A100-SXM4-40GB
- available: True
- version: 12.1
* Lightning:
- lightning: 2.2.1
- lightning-bolts: 0.7.0
- lightning-utilities: 0.11.0
- pytorch-lightning: 2.2.1
- torch: 2.2.1
- torchaudio: 2.2.1
- torchmetrics: 1.3.2
- torchvision: 0.17.1
* Packages:
- absl-py: 2.1.0
- aiohttp: 3.9.3
- aiohttp-cors: 0.7.0
- aiosignal: 1.3.1
- alembic: 1.13.1
- aniso8601: 9.0.1
- annotated-types: 0.6.0
- antlr4-python3-runtime: 4.9.3
- anyio: 4.3.0
- archspec: 0.2.2
- argon2-cffi: 23.1.0
- argon2-cffi-bindings: 21.2.0
- arrow: 1.3.0
- asciitree: 0.3.3
- asttokens: 2.4.1
- async-lru: 2.0.4
- attrs: 23.2.0
- autodp: 0.2.3.1
- babel: 2.14.0
- bcrypt: 4.1.2
- beautifulsoup4: 4.12.3
- bleach: 6.1.0
- blessed: 1.19.1
- blinker: 1.7.0
- boltons: 23.1.1
- brotli: 1.1.0
- cached-property: 1.5.2
- cachetools: 5.3.3
- certifi: 2024.2.2
- cffi: 1.16.0
- cfgv: 3.4.0
- chardet: 5.2.0
- charset-normalizer: 3.3.2
- chex: 0.1.85
- click: 8.1.7
- cloudpickle: 3.0.0
- colorama: 0.4.6
- colorful: 0.5.6
- colorlog: 6.8.2
- comm: 0.2.2
- conda: 24.1.2
- conda-build: 24.1.2
- conda-index: 0.4.0
- conda-libmamba-solver: 23.12.0
- conda-package-handling: 2.2.0
- conda-package-streaming: 0.9.0
- contourpy: 1.2.0
- cryptography: 42.0.5
- cycler: 0.12.1
- dask: 2024.2.1
- debugpy: 1.8.1
- decorator: 5.1.1
- defusedxml: 0.7.1
- diffprivlib: 0.6.4
- distlib: 0.3.8
- distro: 1.8.0
- dm-tree: 0.1.8
- docker: 7.0.0
- dp-learning-ff: 0.0.9.dev23+g5b7d4b5.d20240319
- entrypoints: 0.4
- equinox: 0.11.3
- etils: 1.7.0
- exceptiongroup: 1.2.0
- executing: 2.0.1
- fasteners: 0.17.3
- fastjsonschema: 2.19.1
- filelock: 3.13.1
- flask: 3.0.2
- flax: 0.8.2
- fonttools: 4.49.0
- fqdn: 1.5.1
- frozenlist: 1.4.1
- fsspec: 2024.3.1
- gast: 0.5.4
- gitdb: 4.0.11
- gitpython: 3.1.42
- gmpy2: 2.1.2
- google-api-core: 2.17.1
- google-auth: 2.28.1
- google-vizier: 0.1.15
- googleapis-common-protos: 1.62.0
- gpustat: 1.1.1
- graphene: 3.3
- graphql-core: 3.2.3
- graphql-relay: 3.2.0
- greenlet: 3.0.3
- grpcio: 1.62.1
- grpcio-tools: 1.62.1
- gunicorn: 21.2.0
- h11: 0.14.0
- h5py: 3.10.0
- httpcore: 1.0.4
- httpx: 0.27.0
- huggingface-hub: 0.21.4
- hydra-core: 1.3.2
- identify: 2.5.35
- idna: 3.6
- importlib-metadata: 7.0.1
- importlib-resources: 6.1.2
- ipykernel: 6.29.3
- ipython: 8.22.2
- ipywidgets: 8.1.2
- isoduration: 20.11.0
- itsdangerous: 2.1.2
- jax: 0.4.25
- jaxlib: 0.4.25
- jaxopt: 0.8.3
- jaxtyping: 0.2.28
- jedi: 0.19.1
- jinja2: 3.1.3
- joblib: 1.3.2
- json5: 0.9.24
- jsonpatch: 1.33
- jsonpointer: 2.4
- jsonschema: 4.21.1
- jsonschema-specifications: 2023.12.1
- jupyter-client: 8.6.0
- jupyter-core: 5.7.1
- jupyter-events: 0.9.0
- jupyter-lsp: 2.2.4
- jupyter-server: 2.13.0
- jupyter-server-mathjax: 0.2.6
- jupyter-server-terminals: 0.5.2
- jupyterlab: 4.1.5
- jupyterlab-git: 0.50.0
- jupyterlab-pygments: 0.3.0
- jupyterlab-server: 2.25.4
- jupyterlab-widgets: 3.0.10
- kiwisolver: 1.4.5
- libarchive-c: 5.0
- libmambapy: 1.5.7
- lightning: 2.2.1
- lightning-bolts: 0.7.0
- lightning-utilities: 0.11.0
- locket: 1.0.0
- mako: 1.3.2
- mamba: 1.5.7
- markdown: 3.5.2
- markdown-it-py: 3.0.0
- markupsafe: 2.1.5
- matplotlib: 3.8.3
- matplotlib-inline: 0.1.6
- mdurl: 0.1.2
- memory-tempfile: 2.2.3
- menuinst: 2.0.2
- mistune: 3.0.2
- ml-dtypes: 0.3.2
- mlflow: 2.11.0
- mlflow-skinny: 2.11.0
- more-itertools: 10.2.0
- mpmath: 1.3.0
- msgpack: 1.0.7
- multidict: 6.0.5
- munkres: 1.1.4
- nbclient: 0.8.0
- nbconvert: 7.16.2
- nbdime: 4.0.1
- nbformat: 5.9.2
- nest-asyncio: 1.6.0
- networkx: 3.2.1
- nodeenv: 1.8.0
- notebook-shim: 0.2.4
- numcodecs: 0.12.1
- numpy: 1.26.4
- nvidia-cublas-cu12: 12.1.3.1
- nvidia-cuda-cupti-cu12: 12.1.105
- nvidia-cuda-nvrtc-cu12: 12.1.105
- nvidia-cuda-runtime-cu12: 12.1.105
- nvidia-cudnn-cu12: 8.9.2.26
- nvidia-cufft-cu12: 11.0.2.54
- nvidia-curand-cu12: 10.3.2.106
- nvidia-cusolver-cu12: 11.4.5.107
- nvidia-cusparse-cu12: 12.1.0.106
- nvidia-ml-py: 12.535.133
- nvidia-nccl-cu12: 2.19.3
- nvidia-nvjitlink-cu12: 12.4.99
- nvidia-nvtx-cu12: 12.1.105
- omegaconf: 2.3.0
- opacus: 1.4.1
- opencensus: 0.11.4
- opencensus-context: 0.1.3
- opt-einsum: 3.3.0
- optax: 0.2.1
- optuna: 3.5.0
- orbax-checkpoint: 0.5.6
- overrides: 7.7.0
- packaging: 24.0
- pandas: 2.2.1
- pandocfilters: 1.5.0
- paramiko: 3.4.0
- parso: 0.8.3
- partd: 1.4.1
- pexpect: 4.9.0
- pickleshare: 0.7.5
- pillow: 9.4.0
- pip: 23.3.2
- pkginfo: 1.10.0
- pkgutil-resolve-name: 1.3.10
- platformdirs: 4.1.0
- pluggy: 1.3.0
- portpicker: 1.6.0
- pre-commit: 3.6.2
- prometheus-client: 0.20.0
- prometheus-flask-exporter: 0.23.0
- prompt-toolkit: 3.0.42
- protobuf: 4.24.4
- psutil: 5.9.8
- psycopg2-binary: 2.9.9
- ptyprocess: 0.7.0
- pure-eval: 0.2.2
- py-spy: 0.3.14
- pyarrow: 15.0.0
- pyasn1: 0.5.1
- pyasn1-modules: 0.3.0
- pycosat: 0.6.6
- pycparser: 2.21
- pydantic: 2.6.3
- pydantic-core: 2.16.3
- pygments: 2.17.2
- pynacl: 1.5.0
- pyparsing: 3.1.2
- pysocks: 1.7.1
- python-dateutil: 2.9.0
- python-dp: 1.1.4
- python-json-logger: 2.0.7
- pytorch-lightning: 2.2.1
- pytz: 2024.1
- pyyaml: 6.0.1
- pyzmq: 25.1.2
- querystring-parser: 1.2.4
- ray: 2.9.3
- referencing: 0.33.0
- regex: 2023.12.25
- requests: 2.31.0
- rfc3339-validator: 0.1.4
- rfc3986-validator: 0.1.1
- rich: 13.7.1
- rpds-py: 0.18.0
- rsa: 4.9
- ruamel.yaml: 0.18.6
- ruamel.yaml.clib: 0.2.8
- ruff: 0.3.3
- safetensors: 0.4.2
- scikit-learn: 1.4.1.post1
- scipy: 1.12.0
- seaborn: 0.13.2
- send2trash: 1.8.2
- setuptools: 68.2.2
- six: 1.16.0
- skorch: 0.15.0
- smart-open: 7.0.1
- smmap: 5.0.0
- sniffio: 1.3.1
- soupsieve: 2.5
- sqlalchemy: 2.0.28
- sqlparse: 0.4.4
- stack-data: 0.6.2
- sympy: 1.12
- tabulate: 0.9.0
- tensorboard: 2.16.2
- tensorboard-data-server: 0.7.0
- tensorstore: 0.1.56
- terminado: 0.18.0
- tfp-nightly: 0.25.0.dev20240318
- threadpoolctl: 3.3.0
- timm: 0.9.16
- tinycss2: 1.2.1
- tokenizers: 0.15.2
- toolz: 0.12.1
- torch: 2.2.1
- torchaudio: 2.2.1
- torchmetrics: 1.3.2
- torchvision: 0.17.1
- tornado: 6.4
- tqdm: 4.66.2
- traitlets: 5.14.1
- transformers: 4.38.2
- triton: 2.2.0
- truststore: 0.8.0
- typeguard: 2.13.3
- types-python-dateutil: 2.8.19.20240106
- typing-extensions: 4.10.0
- typing-utils: 0.1.0
- tzdata: 2024.1
- uri-template: 1.3.0
- urllib3: 2.1.0
- uv: 0.1.22
- virtualenv: 20.25.1
- vit-proto: 0.0.0
- wcwidth: 0.2.13
- webcolors: 1.13
- webencodings: 0.5.1
- websocket-client: 1.7.0
- werkzeug: 3.0.1
- wheel: 0.42.0
- widgetsnbextension: 4.0.10
- wrapt: 1.16.0
- yarl: 1.9.4
- zarr: 2.17.1
- zipp: 3.17.0
- zstandard: 0.22.0
* System:
- OS: Linux
- architecture:
- 64bit
-
- processor: x86_64
- python: 3.11.8
- release: 5.4.0-173-generic
- version: #191-Ubuntu SMP Fri Feb 2 13:55:07 UTC 2024
</details>
### More info
_No response_
cc @carmocca @awaelchli @borda | open | 2024-03-19T22:14:09Z | 2024-03-20T15:58:48Z | https://github.com/Lightning-AI/pytorch-lightning/issues/19673 | [
"question",
"working as intended",
"lightningdatamodule",
"ver: 2.1.x"
] | dwahdany | 2 |
pydantic/FastUI | fastapi | 327 | Refine nestable components | See https://github.com/pydantic/FastUI/pull/308#discussion_r1593712269 and https://github.com/pydantic/FastUI/pull/308 | open | 2024-05-30T14:14:01Z | 2024-05-30T14:14:23Z | https://github.com/pydantic/FastUI/issues/327 | [] | sydney-runkle | 0 |
FlareSolverr/FlareSolverr | api | 795 | Docker-compose specified port is not respected | ### Have you checked our README?
- [X] I have checked the README
### Have you followed our Troubleshooting?
- [X] I have followed your Troubleshooting
### Is there already an issue for your problem?
- [X] I have checked older issues, open and closed
### Have you checked the discussions?
- [X] I have read the Discussions
### Environment
```markdown
- FlareSolverr version: latest
- Last working FlareSolverr version: New setup
- Operating system: TOS 4.19.165 / x86_64 GNU/Linux
- Are you using Docker: [yes/no] Yes
- FlareSolverr User-Agent (see log traces or / endpoint): FlareSolverr User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/113.0.0.0 Safari/537.36
- Are you using a VPN: [yes/no] No
- Are you using a Proxy: [yes/no] No
- Are you using Captcha Solver: [yes/no] No
- If using captcha solver, which one:
- URL to test this issue:
- Startup Logs:
2023-06-05 17:30:17 INFO FlareSolverr 3.2.0
2023-06-05 17:30:17 INFO Testing web browser installation...
2023-06-05 17:30:17 INFO Platform: Linux-4.19.165+-x86_64-with-glibc2.31
2023-06-05 17:30:17 INFO Chrome / Chromium path: /usr/bin/chromium
2023-06-05 17:30:19 INFO Chrome / Chromium major version: 113
2023-06-05 17:30:19 INFO Launching web browser...
2023-06-05 17:30:27 INFO FlareSolverr User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/113.0.0.0 Safari/537.36
2023-06-05 17:30:27 INFO Test successful!
2023-06-05 17:30:27 INFO Serving on http://0.0.0.0:8191
2023-06-05 17:30:43 INFO 192.168.0.151 GET http://192.168.0.10:8191/ 200 OK
```
### Description
Docker-compose contents:
```yaml
---
version: "3.1"
services:
  flaresolverr:
    image: flaresolverr/flaresolverr:latest
    container_name: flaresolverr
    environment:
      - LOG_LEVEL=info
      - LOG_HTML=false
      - CAPTCHA_SOLVER=none
      - TZ=America/Los_Angeles
    network_mode: host
    ports:
      - 5050:5050
    restart: unless-stopped
```
I am using Portainer to store my compose data and manage the data. I am trying to get FlareSolverr to listen on port 5050. I have rebuilt/redeployed the docker image several times with the following port settings:
- 5050:5050
- 5050:8191
- 8191:5050
- "${PORT:-5050}:8191"
- "${PORT:-8191}:5050"
In all instances, the log shows "Serving on http://0.0.0.0:8191" and it can only be reached on that port, not 5050. Am I doing something wrong or is it a bug?
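For reference (an editorial note grounded in Docker's documented behavior, not FlareSolverr internals): when `network_mode: host` is set, the container shares the host's network stack and the `ports:` section is ignored entirely, which matches the symptom above — FlareSolverr keeps listening on its internal default 8191 no matter what mapping is written. A sketch of the same file with host networking dropped, so a 5050 mapping can take effect:

```yaml
version: "3.1"
services:
  flaresolverr:
    image: flaresolverr/flaresolverr:latest
    container_name: flaresolverr
    environment:
      - LOG_LEVEL=info
      - TZ=America/Los_Angeles
    # network_mode: host removed — with host networking, "ports:" is a no-op
    ports:
      - 5050:8191   # host port 5050 -> FlareSolverr's default 8191
    restart: unless-stopped
```

Alternatively, if host networking is required, FlareSolverr's README documents `PORT`/`HOST` environment variables that change the port the service itself binds to.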
### Logged Error Messages
```text
2023-06-05 20:25:12 INFO FlareSolverr 3.2.0
2023-06-05 20:25:12 INFO Testing web browser installation...
2023-06-05 20:25:12 INFO Platform: Linux-4.19.165+-x86_64-with-glibc2.31
2023-06-05 20:25:12 INFO Chrome / Chromium path: /usr/bin/chromium
2023-06-05 20:25:14 INFO Chrome / Chromium major version: 113
2023-06-05 20:25:14 INFO Launching web browser...
2023-06-05 20:25:20 INFO FlareSolverr User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/113.0.0.0 Safari/537.36
2023-06-05 20:25:20 INFO Test successful!
2023-06-05 20:25:20 INFO Serving on http://0.0.0.0:8191
```
### Screenshots
_No response_ | closed | 2023-06-06T03:30:54Z | 2023-06-07T02:45:33Z | https://github.com/FlareSolverr/FlareSolverr/issues/795 | [] | Smokey23 | 3 |
modelscope/data-juicer | streamlit | 451 | [Feat]: Unified LLM Calling Management | ### Search before continuing 先搜索,再继续
- [X] I have searched the Data-Juicer issues and found no similar feature requests. 我已经搜索了 Data-Juicer 的 issue 列表但是没有发现类似的功能需求。
### Description 描述
Currently, some LLM-dependent operators support `vllm`, while others utilize Hugging Face or the OpenAI API for model calling. It is necessary to review and unify these calling capabilities across the codebase.
Furthermore, could we abstract these calling mechanisms, rather than repeating similar code? This would enable unified management and ease the addition of support for more inference engines, such as custom Post APIs, TensorRT, and ONNX.
### Use case 使用场景
_No response_
### Additional 额外信息
_No response_
### Are you willing to submit a PR for this feature? 您是否乐意为此功能提交一个 PR?
- [X] Yes I'd like to help by submitting a PR! 是的!我愿意提供帮助并提交一个PR! | open | 2024-10-16T03:23:38Z | 2024-10-16T08:47:01Z | https://github.com/modelscope/data-juicer/issues/451 | [
"enhancement"
] | drcege | 0 |
koxudaxi/datamodel-code-generator | pydantic | 1,427 | since v0.16.0 `--use-default` is broken when `allOf` is present | **Describe the bug**
In v0.15.0 `--use-default` works as expected. Since `v0.16.0`, this is only the case when no `allOf` is present in the schema.
**To Reproduce**
Example schema:
```json
{
"type": "object",
"title": "Item",
"allOf": [{
"title": "Entity",
"type": "object"
}],
"required": [
"test",
"testarray"
],
"properties": {
"test": {
"type": "string",
"default": "test123"
},
"testarray": {
"title": "test array",
"type": "array",
"items": {
"type": "string"
},
"minItems": 1,
"default": [
"test123"
]
}
}
}
```
Used commandline:
```
$ datamodel-codegen.exe --input "RequiredWithDefaultTest.json" --input-file-type jsonschema --output "testmodel.py" --use-default
```
**Expected behavior**
With `v0.15.0` or `allOf` removed from the schema, the result is:
```python
class Item(BaseModel):
test: Optional[str] = 'test123'
testarray: Optional[List[str]] = Field(['test123'], min_items=1, title='test array')
```
**Actual behavior**
With `v0.16.0` and `allOf` present in the schema, the result is:
```python
class Item(BaseModel):
test: str
testarray: List[str] = Field(..., min_items=1, title='test array')
```
**Version:**
- OS: Windows 10
- Python version: 3.9.5
- datamodel-code-generator version: >= v0.16.0
**Additional context**
It is likely that this is related to #1009 / #1012
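Until the regression is fixed, one hedged workaround (illustrative code, not part of datamodel-code-generator) is to strip `allOf` members that contribute nothing before feeding the schema to the generator — in the example above, the `Entity` member adds no properties or required fields:

```python
def drop_empty_allof(schema: dict) -> dict:
    """Return a copy of `schema` without `allOf` when every member is empty,
    i.e. contributes neither properties nor required fields."""
    out = dict(schema)
    members = out.get("allOf")
    if members and all(
        not m.get("properties") and not m.get("required") for m in members
    ):
        del out["allOf"]
    return out


example = {
    "type": "object",
    "title": "Item",
    "allOf": [{"title": "Entity", "type": "object"}],
    "required": ["test"],
    "properties": {"test": {"type": "string", "default": "test123"}},
}
cleaned = drop_empty_allof(example)
```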
| closed | 2023-07-16T06:21:05Z | 2024-05-11T05:28:23Z | https://github.com/koxudaxi/datamodel-code-generator/issues/1427 | [
"bug"
] | simontaurus | 2 |
qubvel-org/segmentation_models.pytorch | computer-vision | 508 | Grayscale issues | Hi, I have an issue using the library. I tried to do training on grayscale images with binary masks ((0, 255) values, with 255 as foreground). If I specified channels = 1, an exception was raised about a broadcast-operands issue.
If I specify 3 channels and work on the image like an RGB image, I get masks with negative values before training.
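For what it's worth, one common source of both symptoms is feeding the raw (0, 255) mask through the same normalization as the image; a sketch (the `smp` lines are assumptions about the usual segmentation_models_pytorch API and are shown only as comments) is to binarize the mask yourself and tell the model the input has one channel:

```python
def binarize_mask(mask_rows):
    """Map a {0, 255} mask to {0.0, 1.0} floats so BCE-style losses behave."""
    return [[1.0 if px > 127 else 0.0 for px in row] for row in mask_rows]


# Hypothetical model setup (requires torch + segmentation_models_pytorch):
# import segmentation_models_pytorch as smp
# model = smp.Unet("resnet34", in_channels=1, classes=1)  # grayscale input
# ...and route mask tensors through binarize_mask, NOT the image
# normalization (mean/std) transform — that is what produces negative values.
```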
Could you help me, please? | closed | 2021-11-02T16:42:57Z | 2022-03-15T02:01:02Z | https://github.com/qubvel-org/segmentation_models.pytorch/issues/508 | [
"Stale"
] | antonino-tocco | 4 |
pallets-eco/flask-sqlalchemy | flask | 841 | · | · | closed | 2020-06-15T04:28:28Z | 2020-12-05T19:58:24Z | https://github.com/pallets-eco/flask-sqlalchemy/issues/841 | [] | jwjyy | 1 |
wemake-services/django-test-migrations | pytest | 11 | Post migrate signal receiver of the auth contrib gives me ForeignKeyViolation. | **Error message**
```
____________________________________________________ ERROR at teardown of test _____________________________________________________
self = <django.db.backends.postgresql.base.DatabaseWrapper object at 0x7f8187ac3588>
def _commit(self):
if self.connection is not None:
with self.wrap_database_errors:
> return self.connection.commit()
E psycopg2.errors.ForeignKeyViolation: insert or update on table "auth_permission" violates foreign key constraint "auth_permission_content_type_id_2f476e4b_fk_django_co"
E DETAIL: Key (content_type_id)=(385) is not present in table "django_content_type".
../../.pyenv/versions/_/lib/python3.7/site-packages/django/db/backends/base/base.py:236: ForeignKeyViolation
The above exception was the direct cause of the following exception:
self = <django.test.testcases.TransactionTestCase testMethod=__init__>
def _post_teardown(self):
"""Performs any post-test things. This includes:
* Flushing the contents of the database, to leave a clean slate. If
the class has an 'available_apps' attribute, post_migrate isn't fired.
* Force-closing the connection, so the next test gets a clean cursor.
"""
try:
> self._fixture_teardown()
../../.pyenv/versions/_/lib/python3.7/site-packages/django/test/testcases.py:925:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../.pyenv/versions/_/lib/python3.7/site-packages/django/test/testcases.py:960: in _fixture_teardown
inhibit_post_migrate=inhibit_post_migrate)
../../.pyenv/versions/_/lib/python3.7/site-packages/django/core/management/__init__.py:131: in call_command
return command.execute(*args, **defaults)
../../.pyenv/versions/_/lib/python3.7/site-packages/django/core/management/base.py:330: in execute
output = self.handle(*args, **options)
../../.pyenv/versions/_/lib/python3.7/site-packages/django/core/management/commands/flush.py:88: in handle
emit_post_migrate_signal(verbosity, interactive, database)
../../.pyenv/versions/_/lib/python3.7/site-packages/django/core/management/sql.py:53: in emit_post_migrate_signal
**kwargs
../../.pyenv/versions/_/lib/python3.7/site-packages/django/dispatch/dispatcher.py:193: in send
for receiver in self._live_receivers(sender)
../../.pyenv/versions/_/lib/python3.7/site-packages/django/dispatch/dispatcher.py:193: in <listcomp>
for receiver in self._live_receivers(sender)
../../.pyenv/versions/_/lib/python3.7/site-packages/django/contrib/auth/management/__init__.py:83: in create_permissions
Permission.objects.using(using).bulk_create(perms)
../../.pyenv/versions/_/lib/python3.7/site-packages/django/db/models/query.py:449: in bulk_create
obj_without_pk._state.db = self.db
../../.pyenv/versions/_/lib/python3.7/site-packages/django/db/transaction.py:223: in __exit__
connection.commit()
../../.pyenv/versions/_/lib/python3.7/site-packages/django/db/backends/base/base.py:262: in commit
self._commit()
../../.pyenv/versions/_/lib/python3.7/site-packages/django/db/backends/base/base.py:236: in _commit
return self.connection.commit()
../../.pyenv/versions/_/lib/python3.7/site-packages/django/db/utils.py:94: in __exit__
six.reraise(dj_exc_type, dj_exc_value, traceback)
../../.pyenv/versions/_/lib/python3.7/site-packages/django/utils/six.py:685: in reraise
raise value.with_traceback(tb)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <django.db.backends.postgresql.base.DatabaseWrapper object at 0x7f8187ac3588>
def _commit(self):
if self.connection is not None:
with self.wrap_database_errors:
> return self.connection.commit()
E django.db.utils.IntegrityError: insert or update on table "auth_permission" violates foreign key constraint "auth_permission_content_type_id_2f476e4b_fk_django_co"
E DETAIL: Key (content_type_id)=(385) is not present in table "django_content_type".
../../.pyenv/versions/_/lib/python3.7/site-packages/django/db/backends/base/base.py:236: IntegrityError
```
**Workaround**
```python
import pytest

@pytest.fixture(autouse=True)
def _mute_post_migrate_signal():
    from django.db.models.signals import post_migrate

    restore, post_migrate.receivers = post_migrate.receivers, []
    yield
    post_migrate.receivers = restore
``` | closed | 2019-11-29T12:41:04Z | 2020-02-25T14:04:22Z | https://github.com/wemake-services/django-test-migrations/issues/11 | [
"bug",
"help wanted"
] | proofit404 | 1 |
onnx/onnx | deep-learning | 6,594 | [RFC] Do we need a 3.13t version [Python 3.13 (64-bit, freethreaded)]? | # Ask a Question
### Question
Do we need a 3.13t version [Python 3.13 (64-bit, freethreaded)]?
The upcoming pytorch version 2.6 will have experimental support for that:
(https://github.com/pytorch/pytorch/blob/main/RELEASE.md#release-compatibility-matrix)
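For context, a wheel-building or test matrix can tell a free-threaded interpreter apart from a regular one with a small check (a sketch; it returns False on anything before 3.13):

```python
import sysconfig


def is_free_threaded() -> bool:
    """True only on a CPython built with --disable-gil (the 3.13t ABI)."""
    return bool(sysconfig.get_config_var("Py_GIL_DISABLED"))
```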
| open | 2024-12-20T20:36:21Z | 2025-03-07T06:00:19Z | https://github.com/onnx/onnx/issues/6594 | [
"question",
"rfc"
] | andife | 12 |
huggingface/datasets | computer-vision | 6,675 | Allow image model (color conversion) to be specified as part of datasets Image() decode | ### Feature request
Typical torchvision / torch Datasets in image applications apply color conversion in the Dataset portion of the code as part of image decode, separately from the image transform stack. This is true for PIL.Image where convert is usually called in dataset, for native torchvision https://pytorch.org/vision/main/generated/torchvision.io.decode_jpeg.html, and similarly in tensorflow.data pipelines decode_jpeg or https://www.tensorflow.org/api_docs/python/tf/io/decode_and_crop_jpeg have a channels arg that allows controlling the image mode in the decode step.
datasets currently requires this pattern (from [examples](https://huggingface.co/docs/datasets/main/en/image_process)):
```
from torchvision.transforms import Compose, ColorJitter, ToTensor
jitter = Compose(
[
ColorJitter(brightness=0.25, contrast=0.25, saturation=0.25, hue=0.7),
ToTensor(),
]
)
def transforms(examples):
examples["pixel_values"] = [jitter(image.convert("RGB")) for image in examples["image"]]
return examples
```
### Motivation
It would be nice to be able to handle `image.convert("RGB")` (or other modes) in the decode step, before applying torchvision transforms; this would reduce code differences when handling pipelines built on torchvision, webdataset, or hf datasets, without needing to handle image-mode argument passing in two different stages of the pipeline.
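One way to express "mode belongs to decode" without touching the transform stack is a tiny wrapper around whatever decode function a pipeline uses (a sketch with a stand-in image type; with real data, `img` would be a `PIL.Image` and `convert` the call already shown above):

```python
def with_mode(decode_fn, mode="RGB"):
    """Wrap an image decoder so color conversion happens at decode time."""
    def decode(raw):
        img = decode_fn(raw)
        return img if img.mode == mode else img.convert(mode)
    return decode
```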
### Your contribution
Can do a PR with guidance on how mode should be passed / set on the dataset. | closed | 2024-02-16T23:43:20Z | 2024-03-18T15:41:34Z | https://github.com/huggingface/datasets/issues/6675 | [
"enhancement"
] | rwightman | 1 |
pydata/xarray | numpy | 9,784 | open_mfdataset with remote files is broken because of #9687 | ### What happened?
https://github.com/pydata/xarray/pull/9687
This PR broke open_mfdataset with remote files. The ``_normalize_path_list`` helper doesn't identify them properly and recurses into the remote file object.
### What did you expect to happen?
This should continue to work, i.e. exit if p is not a list instead of recursing.
### Minimal Complete Verifiable Example
```Python
from distributed import Client
import s3fs
import xarray as xr
s3 = s3fs.S3FileSystem()
file_list = ['s3://nex-gddp-cmip6/NEX-GDDP-CMIP6/ACCESS-CM2/historical/r1i1p1f1/hurs/hurs_day_ACCESS-CM2_historical_r1i1p1f1_gn_1950.nc']
files = [s3.open(f) for f in file_list]
if __name__ == "__main__":
    client = Client()
    # Load input NetCDF data files
    # TODO: Reduce explicit settings once https://github.com/pydata/xarray/issues/8778 is completed.
    ds = xr.open_mfdataset(
        files,
        engine="h5netcdf",
        combine="nested",
        concat_dim="time",
        data_vars="minimal",
        coords="minimal",
        compat="override",
        parallel=True,
    )
```

cc @headtr1ck @dcherian
### MVCE confirmation
- [x] Minimal example — the example is as focused as reasonably possible to demonstrate the underlying issue in xarray.
- [x] Complete example — the example is self-contained, including all data and the text of any traceback.
- [x] Verifiable example — the example copy & pastes into an IPython prompt or [Binder notebook](https://mybinder.org/v2/gh/pydata/xarray/main?urlpath=lab/tree/doc/examples/blank_template.ipynb), returning the result.
- [x] New issue — a search of GitHub Issues suggests this is not a duplicate.
- [x] Recent environment — the issue occurs with the latest version of xarray and its dependencies.
### Relevant log output
```Python
Traceback (most recent call last):
File "/Users/patrick/Library/Application Support/JetBrains/PyCharm2024.3/scratches/scratch.py", line 19, in <module>
ds = xr.open_mfdataset(
^^^^^^^^^^^^^^^^^^
File "/Users/patrick/mambaforge/envs/dask-dev/lib/python3.11/site-packages/xarray/backends/api.py", line 1539, in open_mfdataset
paths = _find_absolute_paths(paths, engine=engine, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/patrick/mambaforge/envs/dask-dev/lib/python3.11/site-packages/xarray/backends/common.py", line 149, in _find_absolute_paths
return _normalize_path_list(paths)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/patrick/mambaforge/envs/dask-dev/lib/python3.11/site-packages/xarray/backends/common.py", line 140, in _normalize_path_list
return [
^
File "/Users/patrick/mambaforge/envs/dask-dev/lib/python3.11/site-packages/xarray/backends/common.py", line 144, in <listcomp>
else _normalize_path_list(p)
^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/patrick/mambaforge/envs/dask-dev/lib/python3.11/site-packages/xarray/backends/common.py", line 140, in _normalize_path_list
return [
^
File "/Users/patrick/mambaforge/envs/dask-dev/lib/python3.11/site-packages/xarray/backends/common.py", line 144, in <listcomp>
else _normalize_path_list(p)
^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/patrick/mambaforge/envs/dask-dev/lib/python3.11/site-packages/xarray/backends/common.py", line 140, in _normalize_path_list
return [
^
File "/Users/patrick/mambaforge/envs/dask-dev/lib/python3.11/site-packages/xarray/backends/common.py", line 144, in <listcomp>
else _normalize_path_list(p)
^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/patrick/mambaforge/envs/dask-dev/lib/python3.11/site-packages/xarray/backends/common.py", line 140, in _normalize_path_list
return [
^
TypeError: 'int' object is not iterable
```
### Anything else we need to know?
_No response_
### Environment
<details>
INSTALLED VERSIONS
------------------
commit: None
python: 3.11.10 | packaged by conda-forge | (main, Oct 16 2024, 01:26:25) [Clang 17.0.6 ]
python-bits: 64
OS: Darwin
OS-release: 23.4.0
machine: arm64
processor: arm
byteorder: little
LC_ALL: None
LANG: None
LOCALE: ('en_US', 'UTF-8')
libhdf5: 1.14.3
libnetcdf: None
xarray: 2024.10.1.dev51+g864b35a1
pandas: 2.2.3
numpy: 2.0.2
scipy: 1.14.1
netCDF4: None
pydap: None
h5netcdf: None
h5py: 3.12.1
zarr: 2.18.3
cftime: None
nc_time_axis: None
iris: None
bottleneck: 1.4.2
dask: 2024.11.2+23.g709bad03e
distributed: 2024.11.2
matplotlib: None
cartopy: None
seaborn: None
numbagg: None
fsspec: 2024.10.0
cupy: None
pint: None
sparse: 0.15.4
flox: None
numpy_groupies: None
setuptools: 75.3.0
pip: 24.3.1
conda: None
pytest: 8.3.3
mypy: None
IPython: 8.29.0
sphinx: None
None
</details>
| closed | 2024-11-15T15:27:54Z | 2024-11-15T20:19:04Z | https://github.com/pydata/xarray/issues/9784 | [
"bug",
"regression"
] | phofl | 0 |
pytest-dev/pytest-cov | pytest | 416 | Running pytest-cov in parallel | # Summary
People are using `tox -p auto` or `tox -p all` more and more since it exists, (and some are simply using `&` in shell scripts).
But it's failing with `pytest-cov` (https://github.com/nedbat/coveragepy/issues/883, https://github.com/pytest-dev/pytest-cov/issues/356, https://github.com/pytest-dev/pytest-cov/issues/237, https://github.com/pytest-dev/pytest-cov/issues/217).
This is because pytest-cov uses a `coverage combine` step which tries to combine all `.coverage.*` files, mixing files from all of the parallel runs. As some are incomplete, this often leads to sqlite errors, but it also sometimes just mixes the data in strange ways.
A clean fix is to specify a specific coverage file name for each run, so the combine step will search for files with this specific name, avoiding mixing the files.
This can easily be done, for example, in `tox.ini` by using:
```
setenv =
    COVERAGE_FILE=.coverage.{envname}
```
It makes `coverage combine` search for `.coverage.py37.*` files, for example.
I see two strategies:
Either pytest-cov picks a unique coverage file name per run or pytest-cov documents that when used in parallel one should specify a coverage file name to disambiguate the runs.
Do you have a preference? | open | 2020-06-27T20:48:43Z | 2024-09-20T10:02:34Z | https://github.com/pytest-dev/pytest-cov/issues/416 | [] | JulienPalard | 6 |
dmlc/gluon-nlp | numpy | 894 | Avoid hyperlinks to .rst and .md files from the website | Currently rst and markdown files are well-rendered on Github and there are users who browse Github to read these documentations/tutorials. In the meantime these files are converted to html files and hosted on the website. However, hard-coding any hyperlink ending with `.html` will cause a dead-link if the user is reading from Github; Hard-coding any hyperlink ending with `.rst`/`.md` will cause a dead-link if the user is reading from the website.
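One way to avoid the dead links is to rewrite source-extension link targets at deploy time; a rough sketch as a regular expression over markdown links (a real docs build would hook this into the Sphinx/markdown conversion step):

```python
import re

_SRC_LINK = re.compile(r"\((?P<target>[^)\s]+?)\.(?:rst|md)(?P<anchor>#[^)\s]*)?\)")


def rewrite_links(text: str) -> str:
    """Rewrite '(path/doc.rst)' / '(path/doc.md#frag)' link targets to .html."""
    return _SRC_LINK.sub(
        lambda m: "({}.html{})".format(m.group("target"), m.group("anchor") or ""),
        text,
    )
```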
We should avoid using hyperlinks with `.html` in our source code, and automatically replace `.rst`/`.md` to `.html` when they are deployed on the website. | open | 2019-08-23T17:12:43Z | 2019-08-28T19:19:15Z | https://github.com/dmlc/gluon-nlp/issues/894 | [
"bug"
] | eric-haibin-lin | 0 |
ydataai/ydata-profiling | jupyter | 1,439 | [BUG] 🐛 `TypeCheckError` thrown when initialising a report | ### Current Behaviour
Error thrown when initialising a report:
``` txt
TypeCheckError: argument "config_file" (None) did not match any element in the union:
pathlib.Path: is not an instance of pathlib.Path
str: is not an instance of str
```
### Expected Behaviour
No error to be thrown.
### Data Description
Happens on all data.
### Code that reproduces the bug
```Python
from ydata_profiling import ProfileReport
from pycaret.datasets import get_data
data = get_data(dataset="germany", verbose=False)
ProfileReport(data, title="Pandas DataFrame")
```
### pandas-profiling version
v4.5.1
### Dependencies
```Text
pandas
numpy
pycaret
```
### OS
Ubuntu
### Checklist
- [X] There is not yet another bug report for this issue in the [issue tracker](https://github.com/ydataai/pandas-profiling/issues)
- [X] The problem is reproducible from this bug report. [This guide](http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports) can help to craft a minimal bug report.
- [X] The issue has not been resolved by the entries listed under [Common Issues](https://pandas-profiling.ydata.ai/docs/master/pages/support_contrib/common_issues.html). | closed | 2023-09-13T03:03:51Z | 2023-12-19T13:00:48Z | https://github.com/ydataai/ydata-profiling/issues/1439 | [
"bug 🐛"
] | chrimaho | 2 |
davidsandberg/facenet | tensorflow | 1,170 | How to use DBSCAN on multiple embeddings identities? | I have the following setting:
1. A surveillance system takes photos of people's faces (there are a varying number of photos for each person).
2. I run FaceNet for each photo and get a list of embedding vectors for each person (each person is represented by a list of embeddings, not by a single one).
The problem:
I want to cluster observed people using DBSCAN, but I need to guarantee that face embeddings from the same people go to the same cluster (remember we can have multiple photos of the same people, and we already know they must belong to the same cluster).
One solution could be to get a "mean" or average embedding for each person, but I believe this data loss is going to produce bad results.
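For what it's worth, the "mean" option is usually computed on L2-normalized vectors (FaceNet embeddings live on the unit hypersphere, so the centroid is re-normalized after averaging); a dependency-free sketch:

```python
import math


def centroid(embeddings):
    """L2-normalized mean of one person's embedding vectors."""
    dim = len(embeddings[0])
    mean = [sum(vec[i] for vec in embeddings) / len(embeddings) for i in range(dim)]
    norm = math.sqrt(sum(x * x for x in mean)) or 1.0
    return [x / norm for x in mean]
```

Each person then contributes exactly one point to DBSCAN, which also sidesteps the ordering problem of concatenation.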
Another solution could be to concatenate N embeddings (with N constant) in a single vector and pass that 512xN vector to DBSCAN, but the problem with this is that the order in which the embeddings are appended to this vector is going to produce different results.
Has anyone faced this same problem? | open | 2020-09-03T20:04:31Z | 2020-09-03T20:04:31Z | https://github.com/davidsandberg/facenet/issues/1170 | [] | leo7r | 0 |
gradio-app/gradio | data-science | 10,852 | Add local file access example to documentation | - [x] I have searched to see if a similar issue already exists.
**Is your feature request related to a problem? Please describe.**
I lost a lot of time looking for an example on how to include static files in gradio 5.22 and only found the solution to 1) set absolute paths and 2) use `/gradio_api/file=` in a github issue, namely #9763.
**Describe the solution you'd like**
I would like to have a clear minimal example in the [File-Access](https://www.gradio.app/guides/file-access) documentation.
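Pending better docs, a minimal sketch of the two points above (the helper is hypothetical; the `/gradio_api/file=` route and the `allowed_paths` launch argument are the pieces referenced in #9763):

```python
import os


def gradio_file_url(path: str) -> str:
    """Absolute-path URL under which Gradio 5 serves a local file."""
    return f"/gradio_api/file={os.path.abspath(path)}"


# Hypothetical usage (requires gradio; sketch only):
# import gradio as gr
# with gr.Blocks() as demo:
#     gr.HTML(f'<img src="{gradio_file_url("assets/logo.png")}">')
# demo.launch(allowed_paths=[os.path.abspath("assets")])  # whitelist the dir
```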
| closed | 2025-03-21T08:22:45Z | 2025-03-21T10:53:58Z | https://github.com/gradio-app/gradio/issues/10852 | [] | pcschreiber1 | 1 |
amidaware/tacticalrmm | django | 1,876 | After updating to version 0.17.3 of RMM tactical, all hosts were in "overdue" status | After updating to version 0.17.3 of RMM tactical, all hosts were in "overdue" status | closed | 2024-05-23T18:34:35Z | 2024-05-23T19:20:52Z | https://github.com/amidaware/tacticalrmm/issues/1876 | [] | Cleberson-Brandao | 0 |
deeppavlov/DeepPavlov | tensorflow | 811 | Unable to load odqa model. |
from deeppavlov import configs
from deeppavlov.core.commands.infer import build_model
odqa = build_model(configs.odqa.en_odqa_infer_wiki, load_trained=True)
I'm trying to load the model but I'm getting the error below — could you please help?
File "C:\Users\vsolanki\AppData\Local\Programs\Python\Python36\lib\site-packages\deeppavlov\models\vectorizers\hashing_tfidf_vectorizer.py", line 262, in load
FileNotFoundError: HashingTfIdfVectorizer path doesn't exist!
| closed | 2019-04-22T05:22:15Z | 2019-04-22T09:39:34Z | https://github.com/deeppavlov/DeepPavlov/issues/811 | [] | Pem14604 | 1 |
pyro-ppl/numpyro | numpy | 1,357 | How to properly define a Mixture | I am trying to define a mixture model
Something like
$y_i ~\mid~ \hat{y}_{in}, \hat{y}_{out}, \sigma_y, \sigma_{out} \rightsquigarrow (1 - g_i) \, \mathcal{N}(\hat{y}_{in}, \sigma_y) + g_i \, \mathcal{N}(\hat{y}_{out}, \sigma_{out})$
I see there is a `MixtureSameFamily`, which would work in my current example below but is not flexible enough to define different component distributions.
I cannot make the model below work. I get `ValueError: All input arrays must have the same shape.`, and I cannot figure out where it comes from.
```python
def jax_model_outliers(x=None, y=None, sigma_y=None):
## Define weakly informative Normal priors
beta = numpyro.sample("beta", dist.Normal(0.0, 100))
alpha = numpyro.sample("alpha", dist.Normal(0.0, 100))
## Define Bernoulli inlier / outlier flags according to
## a hyperprior fraction of outliers, itself constrained
## to [0,.5] for symmetry
frac_outliers = numpyro.sample('frac_outliers', dist.Uniform(low=0., high=.5))
## variance of outliers
sigma_y_out = numpyro.sample("sigma_y_out", dist.HalfNormal(100))
with numpyro.plate("data", len(y)):
## define the linear model
ypred_in = numpyro.sample("ypred_in",
dist.Normal(beta + alpha * x, sigma_y))
ypred_out = numpyro.sample("ypred_out", dist.Normal(0, sigma_y_out))
is_outlier = numpyro.sample('is_outlier',
dist.Bernoulli(frac_outliers),
infer={'enumerate': 'parallel'})
# Explicit for debugging
mix_ = dist.Categorical(probs=jnp.array([is_outlier, 1 - is_outlier]))
comp_ = dist.Normal(jnp.array([beta + alpha * x, sigma_y, 0]),
jnp.array([sigma_y, sigma_y_out]))
mixture = dist.MixtureSameFamily(mix_, comp_)
# likelihood
numpyro.sample("obs", mixture, obs=y)
```
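For reference, the per-point marginal that `MixtureSameFamily` evaluates once the outlier flag is summed out is just a two-term log-sum-exp; a dependency-free sketch of that quantity, which also shows why locs and scales must pair up one per component (a 3-element loc against a 2-element scale is one plausible source of the shape mismatch above):

```python
import math


def norm_logpdf(y, mu, sigma):
    return -0.5 * math.log(2 * math.pi * sigma * sigma) - (y - mu) ** 2 / (2 * sigma * sigma)


def mixture_loglik(y, mu_in, sigma_in, mu_out, sigma_out, w_out):
    """log[(1 - w_out) * N(y; mu_in, s_in) + w_out * N(y; mu_out, s_out)]."""
    a = math.log(1.0 - w_out) + norm_logpdf(y, mu_in, sigma_in)
    b = math.log(w_out) + norm_logpdf(y, mu_out, sigma_out)
    m = max(a, b)  # stabilized log-sum-exp over the two components
    return m + math.log(math.exp(a - m) + math.exp(b - m))
```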
Thanks for help | closed | 2022-03-09T10:48:59Z | 2022-03-11T09:53:19Z | https://github.com/pyro-ppl/numpyro/issues/1357 | [
"question"
] | mfouesneau | 3 |
chatanywhere/GPT_API_free | api | 225 | OpenAI response: content=None | [chatgpt-2] [INFO] [1714441451.996533728] [chatgpt]: Input message received: Go ahead for 1 m.
[chatgpt-2] [INFO] [1714441452.002544928] [chatgpt]: Chat history updated with {'role': 'user', 'content': 'Go ahead for 1 m.'}
[chatgpt-2] [INFO] [1714441452.006633129] [chatgpt]: Sending messages to OpenAI: [{'role': 'system', 'content': ''}, {'role': 'user', 'content': 'Go ahead for 1 m.'}]
[chatgpt-2] [INFO] [1714441466.389632728] [chatgpt]: OpenAI response: ChatCompletion(id='chatcmpl-wikbJwGK3aIbUzlVph3CqI5iu2RiB', choices=[Choice(finish_reason='function_call', index=0, message=ChatCompletionMessage(content=None, role='assistant', function_call=FunctionCall(arguments='{"angular_x":0,"angular_y":0,"angular_z":0,"duration":10,"linear_x":1,"linear_y":0,"linear_z":0,"robot_name":"turtle1"}', name='publish_cmd_vel')), logprobs=None)], created=1714441464, model='gpt-3.5-turbo-0125', object='chat.completion', usage=CompletionUsage(completion_tokens=51, prompt_tokens=229, total_tokens=280), system_fingerprint=None)
While passing messages to OpenAI, the response comes back with content=None. Curious about this, I checked the usage logs:

They show that all my calls went through successfully, so I then checked my balance:

Does this mean I'm out of credit? But if so, how come the earlier calls went through and were billed yet produced an output of None? Has anyone run into a similar situation?
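For what it's worth, the log above shows `finish_reason='function_call'`, and in the Chat Completions API a message that carries a `function_call` has `content=None` by design — the reply is the call arguments rather than text. A minimal, illustrative way to handle both cases (plain dicts standing in for the SDK's response objects):

```python
import json

def extract_reply(message: dict):
    """Return either the assistant text or the parsed function-call arguments.

    `message` mirrors the shape of an OpenAI chat message; it is a plain dict
    here purely for illustration.
    """
    fc = message.get("function_call")
    if fc is not None:
        # content is None in this case: the model chose to call a function
        return {"type": "function_call",
                "name": fc["name"],
                "arguments": json.loads(fc["arguments"])}
    return {"type": "text", "content": message["content"]}
```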
Evil0ctal/Douyin_TikTok_Download_API | fastapi | 195 | Cookie失效了 | ***Platform where the error occurred?***
Such as: Douyin/TikTok
***The endpoint where the error occurred?***
Such as: API-V1/API-V2/Web APP
***Submitted input value?***
Such as: video link
***Have you tried again?***
Such as: Yes, the error still exists after X time after the error occurred.
***Have you checked the readme or interface documentation for this project?***
Such as: Yes, and it is very sure that the problem is caused by the program.
| closed | 2023-04-10T02:59:00Z | 2023-04-11T11:28:56Z | https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/195 | [
"BUG",
"enhancement"
] | Cestb0n | 2 |
huggingface/pytorch-image-models | pytorch | 960 | [BUG] ViT finetuning eval accuracy is too high running on TPU (bits_and_tpu branch) | Hey,
I've been finetuning ViT on different datasets (CIFAR100, Oxford Pets, etc.) using Google TRC TPUs, specifically a v3 VM on the bits_and_tpu branch, and the finetuning results look odd. On CIFAR100 I am seeing the eval top-1 accuracy reach 94.19 within 17 epochs (one run even got to 94.44); these numbers are closer to JFT-300M results than to ImageNet-21k results. The original ViT paper below reports 93.04 on a setup similar to mine, and the Google Research GitHub repo, also attached below, reports 93.29. Even more surprising to me is that I get the 94.x results when I turn off the image augmentations.

To try and ensure I didn't introduce a bug into the codebase, I cloned a new copy of the repo and performed tests aginst it. I start finetunning with:
` python3 launch_xla.py --num-devices 1 finetune.py ~/tensorflow_datasets --dataset tfds/cifar100:3.0.2 --opt sgd --epochs 1000 --workers 1 --val-split test --mixup 0 --cutmix 0 --opt-eps=1e-08 --train-interpolation=bicubic --warmup-lr=1e-06 --lr 0.004 -b 128 --num-classes 100 --model vit_large_patch32_384`
and my finetune.py file is just a copy of the train script with a change in the way I create the mode, that is I comment out this
```
# model = create_model(
# args.model,
# pretrained=args.pretrained,
# num_classes=args.num_classes,
# drop_rate=args.drop,
# drop_connect_rate=args.drop_connect, # DEPRECATED, use drop_path
# drop_path_rate=args.drop_path,
# drop_block_rate=args.drop_block,
# global_pool=args.gp,
# bn_tf=args.bn_tf,
# bn_momentum=args.bn_momentum,
# bn_eps=args.bn_eps,
# scriptable=args.torchscript,
# checkpoint_path=args.initial_checkpoint)
```
and instead put this `model = timm.create_model(args.model, pretrained=True, num_classes=args.num_classes)`
The full script is below:
```
#!/usr/bin/env python3
""" ImageNet Training Script
This is intended to be a lean and easily modifiable ImageNet training script that reproduces ImageNet
training results with some of the latest networks and training techniques. It favours canonical PyTorch
and standard Python style over trying to be able to 'do it all.' That said, it offers quite a few speed
and training result improvements over the usual PyTorch example scripts. Repurpose as you see fit.
This script was started from an early version of the PyTorch ImageNet example
(https://github.com/pytorch/examples/tree/master/imagenet)
NVIDIA CUDA specific speedups adopted from NVIDIA Apex examples
(https://github.com/NVIDIA/apex/tree/master/examples/imagenet)
Hacked together by / Copyright 2020 Ross Wightman (https://github.com/rwightman)
"""
import argparse
import time
import yaml
import os
import logging
from collections import OrderedDict
from datetime import datetime
from dataclasses import replace
from typing import Tuple
import torch
import torch.nn as nn
import torchvision.utils
from timm.bits import initialize_device, setup_model_and_optimizer, DeviceEnv, Monitor, Tracker,\
TrainState, TrainServices, TrainCfg, CheckpointManager, AccuracyTopK, AvgTensor, distribute_bn
from timm.data import create_dataset, create_transform_v2, create_loader_v2, resolve_data_config,\
PreprocessCfg, AugCfg, MixupCfg, AugMixDataset
from timm.models import create_model, safe_model_name, convert_splitbn_model
from timm.loss import *
from timm.optim import optimizer_kwargs
from timm.scheduler import create_scheduler
from timm.utils import setup_default_logging, random_seed, get_outdir, unwrap_model
import timm
_logger = logging.getLogger('train')
# The first arg parser parses out only the --config argument, this argument is used to
# load a yaml file containing key-values that override the defaults for the main parser below
config_parser = parser = argparse.ArgumentParser(description='Training Config', add_help=False)
parser.add_argument('-c', '--config', default='', type=str, metavar='FILE',
help='YAML config file specifying default arguments')
parser = argparse.ArgumentParser(description='PyTorch ImageNet Training')
# Dataset / Model parameters
parser.add_argument('data_dir', metavar='DIR',
help='path to dataset')
parser.add_argument('--dataset', '-d', metavar='NAME', default='',
help='dataset type (default: ImageFolder/ImageTar if empty)')
parser.add_argument('--train-split', metavar='NAME', default='train',
help='dataset train split (default: train)')
parser.add_argument('--val-split', metavar='NAME', default='validation',
help='dataset validation split (default: validation)')
parser.add_argument('--model', default='resnet50', type=str, metavar='MODEL',
help='Name of model to train (default: "resnet50"')
parser.add_argument('--pretrained', action='store_true', default=False,
help='Start with pretrained version of specified network (if avail)')
parser.add_argument('--initial-checkpoint', default='', type=str, metavar='PATH',
help='Initialize model from this checkpoint (default: none)')
parser.add_argument('--resume', default='', type=str, metavar='PATH',
help='Resume full model and optimizer state from checkpoint (default: none)')
parser.add_argument('--no-resume-opt', action='store_true', default=False,
help='prevent resume of optimizer state when resuming model')
parser.add_argument('--num-classes', type=int, default=None, metavar='N',
help='number of label classes (Model default if None)')
parser.add_argument('--gp', default=None, type=str, metavar='POOL',
help='Global pool type, one of (fast, avg, max, avgmax, avgmaxc). Model default if None.')
parser.add_argument('--img-size', type=int, default=None, metavar='N',
help='Image patch size (default: None => model default)')
parser.add_argument('--input-size', default=None, nargs=3, type=int,
metavar='N N N', help='Input all image dimensions (d h w, e.g. --input-size 3 224 224), uses model default if empty')
parser.add_argument('--crop-pct', default=None, type=float,
metavar='N', help='Input image center crop percent (for validation only)')
parser.add_argument('--mean', type=float, nargs='+', default=None, metavar='MEAN',
help='Override mean pixel value of dataset')
parser.add_argument('--std', type=float, nargs='+', default=None, metavar='STD',
help='Override std deviation of of dataset')
parser.add_argument('--interpolation', default='', type=str, metavar='NAME',
help='Image resize interpolation type (overrides model)')
parser.add_argument('-b', '--batch-size', type=int, default=256, metavar='N',
help='input batch size for training (default: 32)')
parser.add_argument('-vb', '--validation-batch-size', type=int, default=None, metavar='N',
help='validation batch size override (default: None)')
# Optimizer parameters
parser.add_argument('--opt', default='sgd', type=str, metavar='OPTIMIZER',
help='Optimizer (default: "sgd"')
parser.add_argument('--opt-eps', default=None, type=float, metavar='EPSILON',
help='Optimizer Epsilon (default: None, use opt default)')
parser.add_argument('--opt-betas', default=None, type=float, nargs='+', metavar='BETA',
help='Optimizer Betas (default: None, use opt default)')
parser.add_argument('--momentum', type=float, default=0.9, metavar='M',
help='Optimizer momentum (default: 0.9)')
parser.add_argument('--weight-decay', type=float, default=0.0001,
help='weight decay (default: 0.0001)')
parser.add_argument('--clip-grad', type=float, default=None, metavar='NORM',
help='Clip gradient norm (default: None, no clipping)')
parser.add_argument('--clip-mode', type=str, default='norm',
help='Gradient clipping mode. One of ("norm", "value", "agc")')
# Learning rate schedule parameters
parser.add_argument('--sched', default='cosine', type=str, metavar='SCHEDULER',
help='LR scheduler (default: "cosine"')
parser.add_argument('--lr', type=float, default=0.1, metavar='LR',
help='learning rate (default: 0.05)')
parser.add_argument('--lr-noise', type=float, nargs='+', default=None, metavar='pct, pct',
help='learning rate noise on/off epoch percentages')
parser.add_argument('--lr-noise-pct', type=float, default=0.67, metavar='PERCENT',
help='learning rate noise limit percent (default: 0.67)')
parser.add_argument('--lr-noise-std', type=float, default=1.0, metavar='STDDEV',
help='learning rate noise std-dev (default: 1.0)')
parser.add_argument('--lr-cycle-mul', type=float, default=1.0, metavar='MULT',
help='learning rate cycle len multiplier (default: 1.0)')
parser.add_argument('--lr-cycle-decay', type=float, default=0.5, metavar='MULT',
help='amount to decay each learning rate cycle (default: 0.5)')
parser.add_argument('--lr-cycle-limit', type=int, default=1, metavar='N',
help='learning rate cycle limit, cycles enabled if > 1')
parser.add_argument('--lr-k-decay', type=float, default=1.0,
help='learning rate k-decay for cosine/poly (default: 1.0)')
parser.add_argument('--warmup-lr', type=float, default=0.0001, metavar='LR',
help='warmup learning rate (default: 0.0001)')
parser.add_argument('--min-lr', type=float, default=1e-5, metavar='LR',
help='lower lr bound for cyclic schedulers that hit 0 (1e-5)')
parser.add_argument('--epochs', type=int, default=300, metavar='N',
help='number of epochs to train (default: 300)')
parser.add_argument('--epoch-repeats', type=float, default=0., metavar='N',
help='epoch repeat multiplier (number of times to repeat dataset epoch per train epoch).')
parser.add_argument('--start-epoch', default=None, type=int, metavar='N',
help='manual epoch number (useful on restarts)')
parser.add_argument('--decay-epochs', type=float, default=100, metavar='N',
help='epoch interval to decay LR')
parser.add_argument('--warmup-epochs', type=int, default=5, metavar='N',
help='epochs to warmup LR, if scheduler supports')
parser.add_argument('--cooldown-epochs', type=int, default=10, metavar='N',
help='epochs to cooldown LR at min_lr, after cyclic schedule ends')
parser.add_argument('--patience-epochs', type=int, default=10, metavar='N',
help='patience epochs for Plateau LR scheduler (default: 10')
parser.add_argument('--decay-rate', '--dr', type=float, default=0.1, metavar='RATE',
help='LR decay rate (default: 0.1)')
# Augmentation & regularization parameters
parser.add_argument('--num-aug-repeats', type=int, default=3, metavar='N',
help='number of repeated augmentations (default: 3)')
parser.add_argument('--no-aug', action='store_true', default=False,
help='Disable all training augmentation, override other train aug args')
parser.add_argument('--scale', type=float, nargs='+', default=[0.08, 1.0], metavar='PCT',
help='Random resize scale (default: 0.08 1.0)')
parser.add_argument('--ratio', type=float, nargs='+', default=[3./4., 4./3.], metavar='RATIO',
help='Random resize aspect ratio (default: 0.75 1.33)')
parser.add_argument('--hflip', type=float, default=0.5,
help='Horizontal flip training aug probability')
parser.add_argument('--vflip', type=float, default=0.,
help='Vertical flip training aug probability')
parser.add_argument('--color-jitter', type=float, default=0.4, metavar='PCT',
help='Color jitter factor (default: 0.4)')
parser.add_argument('--aa', type=str, default=None, metavar='NAME',
help='Use AutoAugment policy. "v0" or "original". (default: None)'),
parser.add_argument('--aug-splits', type=int, default=0,
help='Number of augmentation splits (default: 0, valid: 0 or >=2)')
parser.add_argument('--jsd-loss', action='store_true', default=False,
help='Enable Jensen-Shannon Divergence + CE loss. Use with `--aug-splits`.')
parser.add_argument('--bce-loss', action='store_true', default=False,
help='Enable BCE loss w/ Mixup/CutMix use.')
parser.add_argument('--bce-target-thresh', type=float, default=None,
help='Threshold for binarizing softened BCE targets (default: None, disabled)')
parser.add_argument('--reprob', type=float, default=0., metavar='PCT',
help='Random erase prob (default: 0.)')
parser.add_argument('--remode', type=str, default='pixel',
help='Random erase mode (default: "pixel")')
parser.add_argument('--recount', type=int, default=1,
help='Random erase count (default: 1)')
parser.add_argument('--resplit', action='store_true', default=False,
help='Do not random erase first (clean) augmentation split')
parser.add_argument('--mixup', type=float, default=0.0,
help='mixup alpha, mixup enabled if > 0. (default: 0.)')
parser.add_argument('--cutmix', type=float, default=0.0,
help='cutmix alpha, cutmix enabled if > 0. (default: 0.)')
parser.add_argument('--cutmix-minmax', type=float, nargs='+', default=None,
help='cutmix min/max ratio, overrides alpha and enables cutmix if set (default: None)')
parser.add_argument('--mixup-prob', type=float, default=1.0,
help='Probability of performing mixup or cutmix when either/both is enabled')
parser.add_argument('--mixup-switch-prob', type=float, default=0.5,
help='Probability of switching to cutmix when both mixup and cutmix enabled')
parser.add_argument('--mixup-mode', type=str, default='batch',
help='How to apply mixup/cutmix params. Per "batch", "pair", or "elem"')
parser.add_argument('--mixup-off-epoch', default=0, type=int, metavar='N',
help='Turn off mixup after this epoch, disabled if 0 (default: 0)')
parser.add_argument('--smoothing', type=float, default=0.1,
help='Label smoothing (default: 0.1)')
parser.add_argument('--train-interpolation', type=str, default='random',
help='Training interpolation (random, bilinear, bicubic default: "random")')
parser.add_argument('--drop', type=float, default=0.0, metavar='PCT',
help='Dropout rate (default: 0.)')
parser.add_argument('--drop-connect', type=float, default=None, metavar='PCT',
help='Drop connect rate, DEPRECATED, use drop-path (default: None)')
parser.add_argument('--drop-path', type=float, default=None, metavar='PCT',
help='Drop path rate (default: None)')
parser.add_argument('--drop-block', type=float, default=None, metavar='PCT',
help='Drop block rate (default: None)')
# Batch norm parameters (only works with gen_efficientnet based models currently)
parser.add_argument('--bn-tf', action='store_true', default=False,
help='Use Tensorflow BatchNorm defaults for models that support it (default: False)')
parser.add_argument('--bn-momentum', type=float, default=None,
help='BatchNorm momentum override (if not None)')
parser.add_argument('--bn-eps', type=float, default=None,
help='BatchNorm epsilon override (if not None)')
parser.add_argument('--sync-bn', action='store_true',
help='Enable NVIDIA Apex or Torch synchronized BatchNorm.')
parser.add_argument('--dist-bn', type=str, default='reduce',
help='Distribute BatchNorm stats between nodes after each epoch ("broadcast", "reduce", or "")')
parser.add_argument('--split-bn', action='store_true',
help='Enable separate BN layers per augmentation split.')
# Model Exponential Moving Average
parser.add_argument('--model-ema', action='store_true', default=False,
help='Enable tracking moving average of model weights')
parser.add_argument('--model-ema-decay', type=float, default=0.9998,
help='decay factor for model weights moving average (default: 0.9998)')
# Misc
parser.add_argument('--seed', type=int, default=42, metavar='S',
help='random seed (default: 42)')
parser.add_argument('--log-interval', type=int, default=50, metavar='N',
help='how many batches to wait before logging training status')
parser.add_argument('--recovery-interval', type=int, default=0, metavar='N',
help='how many batches to wait before writing recovery checkpoint')
parser.add_argument('--checkpoint-hist', type=int, default=10, metavar='N',
help='number of checkpoints to keep (default: 10)')
parser.add_argument('-j', '--workers', type=int, default=4, metavar='N',
help='how many training processes to use (default: 1)')
parser.add_argument('--save-images', action='store_true', default=False,
help='save images of input bathes every log interval for debugging')
parser.add_argument('--amp', action='store_true', default=False,
help='use NVIDIA Apex AMP or Native AMP for mixed precision training')
parser.add_argument('--channels-last', action='store_true', default=False,
help='Use channels_last memory layout')
parser.add_argument('--pin-mem', action='store_true', default=False,
help='Pin CPU memory in DataLoader for more efficient (sometimes) transfer to GPU.')
parser.add_argument('--output', default='', type=str, metavar='PATH',
help='path to output folder (default: none, current dir)')
parser.add_argument('--experiment', default='', type=str, metavar='NAME',
help='name of train experiment, name of sub-folder for output')
parser.add_argument('--eval-metric', default='top1', type=str, metavar='EVAL_METRIC',
help='Best metric (default: "top1"')
parser.add_argument('--tta', type=int, default=0, metavar='N',
help='Test/inference time augmentation (oversampling) factor. 0=None (default: 0)')
parser.add_argument("--local_rank", default=0, type=int)
parser.add_argument('--use-multi-epochs-loader', action='store_true', default=False,
help='use the multi-epochs-loader to save time at the beginning of every epoch')
parser.add_argument('--torchscript', dest='torchscript', action='store_true',
help='convert model torchscript for inference')
parser.add_argument('--force-cpu', action='store_true', default=False,
help='Force CPU to be used even if HW accelerator exists.')
parser.add_argument('--log-wandb', action='store_true', default=False,
help='log training and validation metrics to wandb')
def _parse_args():
# Do we have a config file to parse?
args_config, remaining = config_parser.parse_known_args()
if args_config.config:
with open(args_config.config, 'r') as f:
cfg = yaml.safe_load(f)
parser.set_defaults(**cfg)
# The main arg parser parses the rest of the args, the usual
# defaults will have been overridden if config file specified.
args = parser.parse_args(remaining)
# Cache the args as a text string to save them in the output dir later
args_text = yaml.safe_dump(args.__dict__, default_flow_style=False)
return args, args_text
def main():
setup_default_logging()
args, args_text = _parse_args()
dev_env = initialize_device(force_cpu=args.force_cpu, amp=args.amp, channels_last=args.channels_last)
if dev_env.distributed:
_logger.info('Training in distributed mode with multiple processes, 1 device per process. Process %d, total %d.'
% (dev_env.global_rank, dev_env.world_size))
else:
_logger.info('Training with a single process on 1 device.')
random_seed(args.seed, 0) # Set all random seeds the same for model/state init (mandatory for XLA)
mixup_active = args.mixup > 0 or args.cutmix > 0. or args.cutmix_minmax is not None
assert args.aug_splits == 0 or args.aug_splits > 1, 'A split of 1 makes no sense'
train_state = setup_train_task(args, dev_env, mixup_active)
train_cfg = train_state.train_cfg
# Set random seeds across ranks differently for train
# FIXME perhaps keep the same and just set diff seeds for dataloader worker process? what about TFDS?
random_seed(args.seed, dev_env.global_rank)
data_config, loader_eval, loader_train = setup_data(
args,
unwrap_model(train_state.model).default_cfg,
dev_env,
mixup_active)
# setup checkpoint manager
eval_metric = args.eval_metric
best_metric = None
best_epoch = None
checkpoint_manager = None
output_dir = None
if dev_env.primary:
if args.experiment:
exp_name = args.experiment
else:
exp_name = '-'.join([
datetime.now().strftime("%Y%m%d-%H%M%S"),
safe_model_name(args.model),
str(data_config['input_size'][-1])
])
output_dir = get_outdir(args.output if args.output else './output/train', exp_name)
checkpoint_manager = CheckpointManager(
hparams=vars(args),
checkpoint_dir=output_dir)
try:
for epoch in range(train_state.epoch, train_cfg.num_epochs):
if dev_env.distributed and hasattr(loader_train.sampler, 'set_epoch'):
loader_train.sampler.set_epoch(epoch)
if args.mixup_off_epoch and epoch >= args.mixup_off_epoch:
if loader_train.mixup_enabled:
loader_train.mixup_enabled = False
train_metrics = train_one_epoch(
state=train_state,
services=services,
loader=loader_train,
dev_env=dev_env,
)
if dev_env.distributed and args.dist_bn in ('broadcast', 'reduce'):
if dev_env.primary:
_logger.info("Distributing BatchNorm running means and vars")
distribute_bn(train_state.model, args.dist_bn == 'reduce', dev_env)
eval_metrics = evaluate(
train_state.model,
train_state.eval_loss,
loader_eval,
services.monitor,
dev_env)
if train_state.model_ema is not None:
if dev_env.distributed and args.dist_bn in ('broadcast', 'reduce'):
distribute_bn(train_state.model_ema, args.dist_bn == 'reduce', dev_env)
ema_eval_metrics = evaluate(
train_state.model_ema.module,
train_state.eval_loss,
loader_eval,
services.monitor,
dev_env,
phase_suffix='EMA')
eval_metrics = ema_eval_metrics
if train_state.lr_scheduler is not None:
# step LR for next epoch
train_state.lr_scheduler.step(epoch + 1, eval_metrics[eval_metric])
if services.monitor is not None:
services.monitor.write_summary(
index=epoch,
results=dict(train=train_metrics, eval=eval_metrics))
if checkpoint_manager is not None:
# save proper checkpoint with eval metric
best_checkpoint = checkpoint_manager.save_checkpoint(train_state, eval_metrics)
best_metric, best_epoch = best_checkpoint.sort_key, best_checkpoint.epoch
train_state = replace(train_state, epoch=epoch + 1)
except KeyboardInterrupt:
pass
if best_metric is not None:
_logger.info('*** Best metric: {0} (epoch {1})'.format(best_metric, best_epoch))
def setup_train_task(args, dev_env: DeviceEnv, mixup_active: bool):
# model = create_model(
# args.model,
# pretrained=args.pretrained,
# num_classes=args.num_classes,
# drop_rate=args.drop,
# drop_connect_rate=args.drop_connect, # DEPRECATED, use drop_path
# drop_path_rate=args.drop_path,
# drop_block_rate=args.drop_block,
# global_pool=args.gp,
# bn_tf=args.bn_tf,
# bn_momentum=args.bn_momentum,
# bn_eps=args.bn_eps,
# scriptable=args.torchscript,
# checkpoint_path=args.initial_checkpoint)
model = timm.create_model(args.model, pretrained=True, num_classes=args.num_classes)
if args.num_classes is None:
assert hasattr(model, 'num_classes'), 'Model must have `num_classes` attr if not set on cmd line/config.'
args.num_classes = model.num_classes
# FIXME move into updater?
lr_scheduler, num_epochs = create_scheduler(args, train_state.updater.optimizer)
if lr_scheduler is not None and train_state.epoch > 0:
lr_scheduler.step(train_state.epoch)
# setup loss function
if args.jsd_loss:
assert args.aug_splits > 1 # JSD only valid with aug splits set
train_loss_fn = JsdCrossEntropy(num_splits=args.aug_splits, smoothing=args.smoothing)
elif mixup_active:
# smoothing is handled with mixup target transform
if args.bce_loss:
train_loss_fn = BinaryCrossEntropy(target_threshold=args.bce_target_thresh)
else:
train_loss_fn = SoftTargetCrossEntropy()
elif args.smoothing:
if args.bce_loss:
train_loss_fn = BinaryCrossEntropy(smoothing=args.smoothing, target_threshold=args.bce_target_thresh)
else:
train_loss_fn = LabelSmoothingCrossEntropy(smoothing=args.smoothing)
else:
train_loss_fn = nn.CrossEntropyLoss()
eval_loss_fn = nn.CrossEntropyLoss()
dev_env.to_device(train_loss_fn, eval_loss_fn)
if dev_env.primary:
_logger.info('Scheduled epochs: {}'.format(num_epochs))
train_cfg = TrainCfg(
num_epochs=num_epochs,
log_interval=args.log_interval,
recovery_interval=args.recovery_interval,
)
train_state = replace(
train_state,
lr_scheduler=lr_scheduler,
train_loss=train_loss_fn,
eval_loss=eval_loss_fn,
train_cfg=train_cfg,
)
return train_state
def setup_data(args, default_cfg, dev_env: DeviceEnv, mixup_active: bool):
data_config = resolve_data_config(vars(args), default_cfg=default_cfg, verbose=dev_env.primary)
# create the train and eval datasets
dataset_train = create_dataset(
args.dataset,
root=args.data_dir, split=args.train_split, is_training=True,
batch_size=args.batch_size, repeats=args.epoch_repeats)
dataset_eval = create_dataset(
args.dataset,
root=args.data_dir, split=args.val_split, is_training=False, batch_size=args.batch_size)
# setup mixup / cutmix
mixup_cfg = None
if mixup_active:
mixup_cfg = MixupCfg(
prob=args.mixup_prob, switch_prob=args.mixup_switch_prob, mode=args.mixup_mode,
mixup_alpha=args.mixup, cutmix_alpha=args.cutmix, cutmix_minmax=args.cutmix_minmax,
label_smoothing=args.smoothing, num_classes=args.num_classes)
# wrap dataset in AugMix helper
if args.aug_splits > 1:
dataset_train = AugMixDataset(dataset_train, num_splits=args.aug_splits)
# create data loaders w/ augmentation pipeiine
train_interpolation = args.train_interpolation
if args.no_aug or not train_interpolation:
train_interpolation = data_config['interpolation']
if args.no_aug:
train_aug_cfg = None
else:
train_aug_cfg = AugCfg(
re_prob=args.reprob,
re_mode=args.remode,
re_count=args.recount,
ratio_range=args.ratio)
dataset_eval.transform = create_transform_v2(
cfg=eval_pp_cfg, is_training=False, normalize=normalize_in_transform)
eval_workers = args.workers
if 'tfds' in args.dataset:
# FIXME reduce validation issues when using TFDS w/ workers and distributed training
eval_workers = min(2, args.workers)
loader_eval = create_loader_v2(
dataset_eval,
batch_size=args.validation_batch_size or args.batch_size,
is_training=False,
normalize=not normalize_in_transform,
pp_cfg=eval_pp_cfg,
num_workers=eval_workers,
pin_memory=args.pin_mem,
)
return data_config, loader_eval, loader_train
def train_one_epoch(
state: TrainState,
services: TrainServices,
loader,
dev_env: DeviceEnv,
):
tracker = Tracker()
loss_meter = AvgTensor() # FIXME move loss meter into task specific TaskMetric
state.model.train()
state.updater.reset() # zero-grad
step_end_idx = len(loader) - 1
tracker.mark_iter()
for step_idx, (sample, target) in enumerate(loader):
tracker.mark_iter_data_end()
# FIXME move forward + loss into model 'task' wrapper
with dev_env.autocast():
output = state.model(sample)
loss = state.train_loss(output, target)
state.updater.apply(loss)
tracker.mark_iter_step_end()
state.updater.after_step(
after_train_step,
state,
services,
dev_env,
step_idx,
step_end_idx,
tracker,
loss_meter,
(output, target, loss),
)
tracker.mark_iter()
# end for
if hasattr(state.updater.optimizer, 'sync_lookahead'):
state.updater.optimizer.sync_lookahead()
return OrderedDict([('loss', loss_meter.compute().item())])
def after_train_step(
state: TrainState,
services: TrainServices,
dev_env: DeviceEnv,
step_idx: int,
step_end_idx: int,
tracker: Tracker,
loss_meter: AvgTensor,
tensors: Tuple[torch.Tensor, ...],
):
"""
After the core loss / backward / gradient apply step, we perform all non-gradient related
activities here including updating meters, metrics, performing logging, and writing checkpoints.
Many / most of these operations require tensors to be moved to CPU, they should not be done
every step and for XLA use they should be done via the optimizer step_closure.
"""
loss_avg = loss_meter.compute()
if services.monitor is not None:
lr_avg = state.updater.get_average_lr()
services.monitor.log_step(
'Train',
step=step_idx,
step_end=step_end_idx,
epoch=state.epoch,
loss=loss_avg.item(),
rate=tracker.get_avg_iter_rate(global_batch_size),
lr=lr_avg,
)
if services.checkpoint is not None and cfg.recovery_interval and (
end_step or (step_idx + 1) % cfg.recovery_interval == 0):
services.checkpoint.save_recovery(state.epoch, batch_idx=step_idx)
if state.lr_scheduler is not None:
# FIXME perform scheduler update here or via updater after_step call?
state.lr_scheduler.step_update(num_updates=state.step_count_global)
def evaluate(
model: nn.Module,
loss_fn: nn.Module,
loader,
logger: Monitor,
dev_env: DeviceEnv,
phase_suffix: str = '',
log_interval: int = 10,
):
tracker = Tracker()
losses_m = AvgTensor()
accuracy_m = AccuracyTopK() # FIXME move loss and accuracy modules into task specific TaskMetric obj
model.eval()
end_idx = len(loader) - 1
tracker.mark_iter()
with torch.no_grad():
for step_idx, (sample, target) in enumerate(loader):
tracker.mark_iter_data_end()
last_step = step_idx == end_idx
with dev_env.autocast():
output = model(sample)
if isinstance(output, (tuple, list)):
output = output[0]
loss = loss_fn(output, target)
# FIXME, explictly marking step for XLA use since I'm not using the parallel xm loader
# need to investigate whether parallel loader wrapper is helpful on tpu-vm or only use for 2-vm setup.
if dev_env.type_xla:
dev_env.mark_step()
elif dev_env.type_cuda:
dev_env.synchronize()
# FIXME uncommenting this fixes race btw model `output`/`loss` and loss_m/accuracy_m meter input
# for PyTorch XLA GPU use.
# This issue does not exist for normal PyTorch w/ GPU (CUDA) or PyTorch XLA w/ TPU.
# loss.item()
tracker.mark_iter_step_end()
losses_m.update(loss, output.size(0))
accuracy_m.update(output, target)
if last_step or step_idx % log_interval == 0:
top1, top5 = accuracy_m.compute().values()
loss_avg = losses_m.compute()
logger.log_step(
'Eval',
step=step_idx,
step_end=end_idx,
loss=loss_avg.item(),
top1=top1.item(),
top5=top5.item(),
phase_suffix=phase_suffix,
)
tracker.mark_iter()
top1, top5 = accuracy_m.compute().values()
results = OrderedDict([('loss', losses_m.compute().item()), ('top1', top1.item()), ('top5', top5.item())])
return results
def _mp_entry(*args):
main()
if __name__ == '__main__':
main()
```
Here is a summary of the above output (I stopped the run once I saw the accuracy was too high)
```
epoch,train_loss,eval_loss,eval_top1,eval_top5
0,4.732754230499268,4.729395389556885,0.8700000047683716,4.989999771118164
1,3.210913896560669,1.154198408126831,85.3699951171875,97.27999877929688
2,1.6976295709609985,0.4715765118598938,90.56999969482422,98.86000061035156
3,1.5128341913223267,0.3998292088508606,91.66999816894531,99.20999908447266
4,1.4772536754608154,0.370217889547348,92.33999633789062,99.32999420166016
5,1.4140307903289795,0.3580523431301117,92.80999755859375,99.36000061035156
6,1.390270709991455,0.34456485509872437,93.0199966430664,99.37999725341797
7,1.3623195886611938,0.3357977569103241,93.36000061035156,99.39999389648438
8,1.3307034969329834,0.33426693081855774,93.14999389648438,99.43999481201172
9,1.307023048400879,0.3217673897743225,93.47000122070312,99.45999908447266
10,1.3035824298858643,0.32201898097991943,93.66999816894531,99.48999786376953
11,1.2851903438568115,0.329518586397171,93.41999816894531,99.38999938964844
12,1.2727124691009521,0.32014748454093933,93.66999816894531,99.43999481201172
13,1.2688237428665161,0.31492725014686584,93.88999938964844,99.45999908447266
14,1.2594046592712402,0.3136151432991028,93.95999908447266,99.44999694824219
15,1.2442022562026978,0.3131980299949646,93.65999603271484,99.45999908447266
16,1.2306550741195679,0.3129279613494873,93.72999572753906,99.41999816894531
17,1.2250698804855347,0.31124258041381836,94.19999694824219,99.47000122070312
18,1.2192376852035522,0.3087320327758789,94.15999603271484,99.50999450683594
19,1.2128868103027344,0.3063335418701172,94.15999603271484,99.5
20,1.1995835304260254,0.307146817445755,94.06999969482422,99.43000030517578
21,1.2054955959320068,0.30594122409820557,94.08999633789062,99.5
```
And here is a graph of a similar run with slightly different hyperparams which I let run for longer (it reached 94.44!!!)

I've made sure to start a clean machine for this, with a fresh download of cifar100 from TFDS, and of course, a fresh clone of the codebase.
The above results also make me completely doubt the results I have been getting for my own models that use this codebase/pretrained models. I am working now on trying to reproduce this on a GPU, but I don't have access to the same amount of compute so this is going to be more challenging.
Am I somehow missing something or doing something wrong in the fine-tuning script? Could these be real results? Or do you think there is some bug in the XLA/TPU side of things?
Do you have any recommendations as to where I should start looking for a solution?
Thanks,
Eliahu
| closed | 2021-11-07T18:14:35Z | 2021-11-11T21:02:23Z | https://github.com/huggingface/pytorch-image-models/issues/960 | [
"bug"
] | eliahuhorwitz | 6 |
redis/redis-om-python | pydantic | 468 | mypy errors | Hello,
When I run mypy on my project, I get some errors. Can you help me?
Regards,
# Example
```python
import os
from datetime import datetime
from redis_om import EmbeddedJsonModel
from redis_om import Field
from redis_om import get_redis_connection
from redis_om import JsonModel
class Message(EmbeddedJsonModel):
content: str = Field(index=True)
id: str = Field(index=True)
class Conversation(JsonModel):
messages: list[Message] = Field(index=True)
class Meta:
database = get_redis_connection(
url="myredis",
decode_responses=True,
)
```
# mypy
## Conf file
```
[mypy]
plugins = sqlalchemy.ext.mypy.plugin
ignore_missing_imports = True
warn_return_any = True
warn_unused_configs = True
follow_imports = normal
show_column_numbers = True
pretty = False
strict = True
```
## Command
`mypy .`
## Output
```
error: Class cannot subclass "EmbeddedJsonModel" (has type "Any") [misc]
error: Class cannot subclass "JsonModel" (has type "Any") [misc]
```
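For what it's worth, under `strict = True` this particular message comes from mypy's `disallow_subclassing_any` check: the `redis_om` imports resolve to `Any` because mypy doesn't pick up the package's type information. A possible workaround while keeping strict mode everywhere else is a per-module override in the mypy config (sketch; the section name is assumed to match the package):

```ini
[mypy-redis_om.*]
disallow_subclassing_any = False
```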
# Package Version
```
redis-om==0.1.2
- hiredis [required: >=2.0.0,<3.0.0, installed: 2.1.1]
- redis [required: >=3.5.3,<5.0.0, installed: 4.4.2]
- types-redis [required: >=3.5.9,<5.0.0, installed: 4.4.0.3]
mypy-extensions [required: >=0.4.3, installed: 0.4.3]
mypy [required: >=0.780, installed: 0.991]
``` | open | 2023-01-28T13:44:26Z | 2023-04-30T07:26:22Z | https://github.com/redis/redis-om-python/issues/468 | [
"maintenance"
] | tyki6 | 2 |
microsoft/nni | deep-learning | 5,501 | Error occurs when pip install nni[SMAC] | **Describe the issue**:
When I run `pip install nni[SMAC]`, an error occurs; the output is as follows. I don't know how to solve it. Thanks for your help!
```
Collecting ConfigSpaceNNI>=0.4.7.3
Downloading http://mirrors.aliyun.com/pypi/packages/35/c7/e3b8b1d662498a92fa2913d9c7c2134b4831820c8a13de962b987c0acb18/ConfigSpaceNNI-0.4.7.3.tar.gz (108 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 108.5/108.5 kB 2.6 MB/s eta 0:00:00
Preparing metadata (setup.py) ... error
error: subprocess-exited-with-error
× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [40 lines of output]
/home/lvqinyi/miniconda3/envs/sunze/lib/python3.9/site-packages/setuptools/_distutils/extension.py:134: UserWarning: Unknown Extension options: 'compiler_directives'
warnings.warn(msg)
/home/lvqinyi/miniconda3/envs/sunze/lib/python3.9/site-packages/setuptools/dist.py:770: UserWarning: Usage of dash-separated 'description-file' will not be supported in future versions. Please use the underscore name 'description_file' instead
warnings.warn(
/home/lvqinyi/miniconda3/envs/sunze/lib/python3.9/site-packages/setuptools/installer.py:27: SetuptoolsDeprecationWarning: setuptools.installer is deprecated. Requirements should be satisfied by a PEP 517 installer.
warnings.warn(
WARNING: The repository located at mirrors.aliyun.com is not a trusted or secure host and is being ignored. If this repository is available via HTTPS we recommend you use HTTPS instead, otherwise you may silence this warning and allow it anyway with '--trusted-host mirrors.aliyun.com'.
ERROR: Could not find a version that satisfies the requirement Cython (from versions: none)
ERROR: No matching distribution found for Cython
Traceback (most recent call last):
File "/home/lvqinyi/miniconda3/envs/sunze/lib/python3.9/site-packages/setuptools/installer.py", line 82, in fetch_build_egg
subprocess.check_call(cmd)
File "/home/lvqinyi/miniconda3/envs/sunze/lib/python3.9/subprocess.py", line 373, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['/home/lvqinyi/miniconda3/envs/sunze/bin/python', '-m', 'pip', '--disable-pip-version-check', 'wheel', '--no-deps', '-w', '/tmp/tmp8spnbdat', '--quiet', 'Cython']' returned non-zero exit status 1.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "<pip-setuptools-caller>", line 34, in <module>
File "/tmp/pip-install-tzzzixvz/configspacenni_126c6bee502a4fe7b51e4ef98928bc8c/setup.py", line 56, in <module>
setup(
File "/home/lvqinyi/miniconda3/envs/sunze/lib/python3.9/site-packages/setuptools/__init__.py", line 86, in setup
_install_setup_requires(attrs)
File "/home/lvqinyi/miniconda3/envs/sunze/lib/python3.9/site-packages/setuptools/__init__.py", line 80, in _install_setup_requires
dist.fetch_build_eggs(dist.setup_requires)
File "/home/lvqinyi/miniconda3/envs/sunze/lib/python3.9/site-packages/setuptools/dist.py", line 874, in fetch_build_eggs
resolved_dists = pkg_resources.working_set.resolve(
File "/home/lvqinyi/miniconda3/envs/sunze/lib/python3.9/site-packages/pkg_resources/__init__.py", line 789, in resolve
dist = best[req.key] = env.best_match(
File "/home/lvqinyi/miniconda3/envs/sunze/lib/python3.9/site-packages/pkg_resources/__init__.py", line 1075, in best_match
return self.obtain(req, installer)
File "/home/lvqinyi/miniconda3/envs/sunze/lib/python3.9/site-packages/pkg_resources/__init__.py", line 1087, in obtain
return installer(requirement)
File "/home/lvqinyi/miniconda3/envs/sunze/lib/python3.9/site-packages/setuptools/dist.py", line 944, in fetch_build_egg
return fetch_build_egg(self, req)
File "/home/lvqinyi/miniconda3/envs/sunze/lib/python3.9/site-packages/setuptools/installer.py", line 84, in fetch_build_egg
raise DistutilsError(str(e)) from e
distutils.errors.DistutilsError: Command '['/home/lvqinyi/miniconda3/envs/sunze/bin/python', '-m', 'pip', '--disable-pip-version-check', 'wheel', '--no-deps', '-w', '/tmp/tmp8spnbdat', '--quiet', 'Cython']' returned non-zero exit status 1.
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
```
**Environment**:
- NNI version: 2.10
- Training service (local|remote|pai|aml|etc): local
- Client OS: Ubuntu 20.04
- Python version: 3.9.13
- PyTorch version: 1.12.0
- Is conda/virtualenv/venv used?: conda used
- Is running in Docker?: No
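For what it's worth, the traceback shows two stacked problems: the `ConfigSpaceNNI` sdist needs Cython at build time, and pip refuses to fetch it from the plain-HTTP Aliyun mirror. A workaround that may get past both (illustrative commands; whether to trust the mirror is your call) is to pre-install Cython and pass `--trusted-host` explicitly:

```shell
pip install --trusted-host mirrors.aliyun.com Cython
pip install --trusted-host mirrors.aliyun.com "nni[SMAC]"
```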
| closed | 2023-04-03T14:36:06Z | 2023-04-04T07:14:56Z | https://github.com/microsoft/nni/issues/5501 | [] | sunze992 | 0 |
chainer/chainer | numpy | 7,676 | TypeError: incompatible array types are mixed in the forward input (LinearFunction). Actual: <class 'numpy.ndarray'>, <class 'cupy.core.core.ndarray'>, <class 'cupy.core.core.ndarray'> | * Conditions
```
python -c 'import chainer; chainer.print_runtime_info()'
```
python -c 'import chainer; chainer.print_runtime_info()'
Platform: Linux-4.18.0-25-generic-x86_64-with-debian-buster-sid
Chainer: 6.1.0
NumPy: 1.16.4
CuPy:
CuPy Version : 6.1.0
CUDA Root : /usr/local/cuda-10.1
CUDA Build Version : 10010
CUDA Driver Version : 10010
CUDA Runtime Version : 10010
cuDNN Build Version : 7500
cuDNN Version : 7500
NCCL Build Version : 2402
NCCL Runtime Version : 2402
iDeep: Not Available
* Code to reproduce
```python
x, t = test[np.random.randint(len(test))]
predict = model.predictor(x[None]).array
predict = predict[0][0]
if predict >= 0:
    print('Predicted Poisonous, Actual ' + ['Edible', 'Poisonous'][t[0]])
else:
    print('Predicted Edible, Actual ' + ['Edible', 'Poisonous'][t[0]])
```
* Error messages, stack traces, or logs
```
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-16-44343da90da8> in <module>()
1 x, t = test[np.random.randint(len(test))]
2
----> 3 predict = model.predictor(x[None]).array
4 predict = predict[0][0]
5
.../anaconda3/envs/.../lib/python3.7/site-packages/chainer/link.py in __call__(self, *args, **kwargs)
292 # forward is implemented in the child classes
293 forward = self.forward # type: ignore
--> 294 out = forward(*args, **kwargs)
295
296 # Call forward_postprocess hook
.../anaconda3/envs/.../lib/python3.7/site-packages/chainer/sequential.py in forward(self, *x)
211 for layer in self._layers:
212 if isinstance(x, tuple):
--> 213 x = layer(*x)
214 else:
215 x = layer(x)
.../anaconda3/envs/.../lib/python3.7/site-packages/chainer/link.py in __call__(self, *args, **kwargs)
292 # forward is implemented in the child classes
293 forward = self.forward # type: ignore
--> 294 out = forward(*args, **kwargs)
295
296 # Call forward_postprocess hook
.../anaconda3/envs/.../lib/python3.7/site-packages/chainer/sequential.py in forward(self, *x)
211 for layer in self._layers:
212 if isinstance(x, tuple):
--> 213 x = layer(*x)
214 else:
215 x = layer(x)
.../anaconda3/envs/.../lib/python3.7/site-packages/chainer/link.py in __call__(self, *args, **kwargs)
292 # forward is implemented in the child classes
293 forward = self.forward # type: ignore
--> 294 out = forward(*args, **kwargs)
295
296 # Call forward_postprocess hook
.../anaconda3/envs/.../lib/python3.7/site-packages/chainer/links/connection/linear.py in forward(self, x, n_batch_axes)
153 in_size = utils.size_of_shape(x.shape[n_batch_axes:])
154 self._initialize_params(in_size)
--> 155 return linear.linear(x, self.W, self.b, n_batch_axes=n_batch_axes)
.../anaconda3/envs/.../lib/python3.7/site-packages/chainer/functions/connection/linear.py in linear(x, W, b, n_batch_axes)
303 args = x, W, b
304
--> 305 y, = LinearFunction().apply(args)
306 if n_batch_axes > 1:
307 y = y.reshape(batch_shape + (-1,))
.../anaconda3/envs/.../lib/python3.7/site-packages/chainer/function_node.py in apply(self, inputs)
287 self.chainerx_device = chainerx_device
288
--> 289 utils._check_arrays_forward_compatible(in_data, self.label)
290
291 is_debug = chainer.is_debug()
.../anaconda3/envs/.../lib/python3.7/site-packages/chainer/utils/__init__.py in _check_arrays_forward_compatible(arrays, label)
91 'Actual: {}'.format(
92 ' ({})'.format(label) if label is not None else '',
---> 93 ', '.join(str(type(a)) for a in arrays)))
94
95
TypeError: incompatible array types are mixed in the forward input (LinearFunction).
Actual: <class 'numpy.ndarray'>, <class 'cupy.core.core.ndarray'>, <class 'cupy.core.core.ndarray'>
```
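In case it helps future readers: the final error line shows the first input to `LinearFunction` is still a `numpy.ndarray` while the weights `W` and `b` are CuPy arrays, i.e. `x` was never moved to the GPU. A hedged sketch of the usual fix (not runnable as-is; it assumes the existing `model` and `test` objects and a GPU-resident model):

```python
# Link.xp is numpy for a CPU model and cupy for a GPU model,
# so converting the input through it works in both cases.
xp = model.predictor.xp
x, t = test[np.random.randint(len(test))]
predict = model.predictor(xp.asarray(x)[None]).array
```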
Any help is appreciated~ | closed | 2019-07-02T09:30:35Z | 2019-07-03T02:25:16Z | https://github.com/chainer/chainer/issues/7676 | [] | BenoitKAO | 1 |
plotly/dash | plotly | 3,207 | Editshape - overwriting the behavior of the editable properly in shape definition | Thank you so much for helping improve the quality of Dash!
We do our best to catch bugs during the release process, but we rely on your help to find the ones that slip through.
**Context**
In DCC Graph if `'edits': {'shapePosition':True}` is defined - it overwrites the editable property of the shapes when defining the shapes. Is that the expected behavior?
The shapes are defined as following (I was hoping to have two shapes as non-moveable / editable and two shapes to be moveable):
```python
if command_issued is not None:
fig.add_shape(dict(type='line', x0=command_issued, x1=command_issued, y0=0, y1=1, yref='paper', xref='x', line_color="blue", line_width=1.5, line_dash="dash", editable=True, opacity=0.75,
layer="between",
label=dict(text=f"Command Issue Time", textangle=0, xanchor="left", )))
if limit_reached is not None:
fig.add_shape(dict(type='line', x0=limit_reached, x1=limit_reached, y0=0, y1=1, yref='paper', xref='x', line_color="red", line_width=1.5, line_dash="dash", editable=True, opacity=0.75,
layer="between",
label=dict(text=f"Power Limit Reach Time", textangle=0, xanchor="left", )))
fig.add_shape(dict(type='line', x0=0, x1=1, y0=active_power_limit / 100, y1=active_power_limit / 100, yref='y', xref='paper',
line_color="green", line_width=1.0, line_dash="dash", editable=False, opacity=0.75,
layer="between",
label=dict(text=f"Active Power Limit ({active_power_limit:0.2f})%", textangle=0, )))
fig.add_shape(type="rect",editable=False,
x0=0, y0=active_power_limit / 100 - 0.05, x1=1, y1=active_power_limit / 100 + 0.05,xref='paper',
line=dict(
color="yellow",
width=1,
),
fillcolor="yellow",opacity=0.2,
)
```
- replace the result of `pip list | grep dash` below
```
dash 2.18
```
**Expected behavior**
The expected behavior is that if the `editable` property is defined on a shape, it should be respected: edit mode should only allow the user to move the shapes the developer marked as moveable when defining them.
| closed | 2025-03-11T05:13:02Z | 2025-03-11T05:15:39Z | https://github.com/plotly/dash/issues/3207 | [] | sssaha | 1 |
DistrictDataLabs/yellowbrick | matplotlib | 738 | Update ROCAUC to aid interpretation | Whenever I use a ROC plot I have to refresh myself about what it means. In particular - what do the axis labels _mean_ and where are the thresholds on the plot. It doesn't help that wikipedia's https://en.wikipedia.org/wiki/Receiver_operating_characteristic page has a heap of formulas and their confusion matrix example has different conventions relative the sklearn's.
I'd like clearer descriptions for the axis labels and a guide to interpreting the thresholds that'll give me a point on the curve that I might choose (which is very likely not to be the 0.5 default threshold in sklearn). I suggest an example below.
Here's the current plot in 0.9.1. I've used the standard cancer dataset with 1 feature and a default LogisticRegression:

Here is my suggestion for a more interpretable plot for discussion (feel very free to push back, maybe I added too much!):

My suggestions are:
* Add formula annotations to the x and y axis
* Add "0" and "1" to the labels along with the human-readable class names (personally I work for False or True classes and only think on the human-readable names after)
* Added 3 increasing-size circles to mark the points on each curve closest to decision thresholds for 0.25, 0.5 (the sklearn default) and 0.75 to give me an idea of which threshold I might want to choose
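To make the threshold markers concrete, here is the arithmetic they would encode, as a tiny self-contained sketch (toy data, plain Python, not yellowbrick code): the (FPR, TPR) point a classifier lands on when you predict positive for scores at or above a given threshold.

```python
def roc_point(y_true, scores, threshold):
    """(FPR, TPR) of the classifier obtained by thresholding `scores`."""
    tp = sum(1 for y, s in zip(y_true, scores) if y == 1 and s >= threshold)
    fn = sum(1 for y, s in zip(y_true, scores) if y == 1 and s < threshold)
    fp = sum(1 for y, s in zip(y_true, scores) if y == 0 and s >= threshold)
    tn = sum(1 for y, s in zip(y_true, scores) if y == 0 and s < threshold)
    return fp / (fp + tn), tp / (tp + fn)  # (x, y) = (FPR, TPR)

y = [0, 0, 0, 1, 1, 1]
s = [0.1, 0.4, 0.6, 0.35, 0.8, 0.9]
for t in (0.25, 0.5, 0.75):
    print(t, roc_point(y, s, t))
```

Lowering the threshold moves the marker up and to the right along the curve, which is exactly the trade-off the circles are meant to show.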
I don't actually like the increasing-size circles but I'm not sure how to better introduce this idea.
This idea is built out of this lovely blog post: https://lukeoakdenrayner.wordpress.com/2018/01/07/the-philosophical-argument-for-using-roc-curves/
In the blog post colour was used but with multiple curves that'll get messy really quickly so I figured avoiding that might be better.
I was introduced to this post after my talk at PyDataAmsterdam last year (which included yb): https://twitter.com/ianozsvald/status/1000373609888706560
Apologies for leaving this suggestion for so long, I wrote prototype code after PyDataAmsterdam last May, then got distracted, then lost it!
Thoughts? | open | 2019-02-10T22:58:11Z | 2019-03-12T13:01:18Z | https://github.com/DistrictDataLabs/yellowbrick/issues/738 | [
"type: feature",
"priority: medium"
] | ianozsvald | 9 |
coqui-ai/TTS | deep-learning | 3,992 | Finetune XTTS for new languages | Hello everyone, below is my code for fine-tuning XTTS for a new language. It works well in my case with over 100 hours of audio.
https://github.com/nguyenhoanganh2002/XTTSv2-Finetuning-for-New-Languages | closed | 2024-09-08T08:18:10Z | 2025-01-25T12:14:49Z | https://github.com/coqui-ai/TTS/issues/3992 | [
"wontfix",
"feature request"
] | anhnh2002 | 25 |
jupyter/docker-stacks | jupyter | 1,309 | Please, provide me with DockerHub access | @parente I created this issue to kindly ask you for additional permissions on DockerHub and Read the Docs.
These permissions will sometimes make maintenance of this project much easier.
My username is `mathbunnyru` in both places. | closed | 2021-05-19T10:01:51Z | 2022-11-09T17:36:11Z | https://github.com/jupyter/docker-stacks/issues/1309 | [] | mathbunnyru | 18 |
SYSTRAN/faster-whisper | deep-learning | 553 | Implementation with Large-v3 but with Batching | I saw a large-v3 implementation with faster_whisper (https://github.com/guillaumekln/faster-whisper/issues/547) but it's quite slow.
Large-v3 is very fast with batching as shown here --- https://huggingface.co/openai/whisper-large-v3
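For reference, the batching in the linked model card goes through the 🤗 `pipeline` API, roughly like this (hedged sketch, untested here; it downloads the full model and assumes a CUDA device is available):

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="openai/whisper-large-v3",
    batch_size=16,             # the batching is where the speed-up comes from
    return_timestamps="word",  # word-level timestamps
    device="cuda:0",
)
result = asr("audio.mp3")
```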
Batching speeds up the transcription process by a lot. The only reason I wish to use faster_whisper is because it provides things like SRT output, verbose mode, and word-level transcription. | closed | 2023-11-08T15:25:53Z | 2023-11-13T06:28:59Z | https://github.com/SYSTRAN/faster-whisper/issues/553 | [] | souvikqb | 9 |
gradio-app/gradio | data-visualization | 10,289 | Certain linting tools enforce a style of import. (Warning: Cannot statically find a gradio demo called demo. Reload work may fail.) | ### Describe the bug
Certain linting tools enforce a style of import.
Like:
```python
from gradio.blocks import Blocks
```
Following the previous patterns:
```python
patterns = [
f"with gr\\.Blocks\\(.*\\) as {demo_name}",
f"{demo_name} = gr\\.Blocks",
f"{demo_name} = gr\\.Interface",
f"{demo_name} = gr\\.ChatInterface",
f"{demo_name} = gr\\.TabbedInterface",
]
```
We need this format
```python
import gradio as gr
with gr.Blocks(...)
>> Watching: xxx
```
But certain linting tools enforce a style of import.
```python
from gradio.blocks import Blocks
with Blocks(...)
>> Warning: Cannot statically find a gradio demo called demo. Reload work may fail.
Watching: xxx
```
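For illustration only (this is not the actual patch from the PR), the pattern list could be loosened by making the `gr.` prefix optional, which makes the direct-import style above statically detectable:

```python
import re

demo_name = "demo"
patterns = [
    rf"with (?:gr\.)?Blocks\(.*\) as {demo_name}",
    rf"{demo_name} = (?:gr\.)?(?:Blocks|Interface|ChatInterface|TabbedInterface)",
]

src = "from gradio.blocks import Blocks\nwith Blocks(title='x') as demo:\n    pass\n"
print(any(re.search(p, src) for p in patterns))  # → True
```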
### Pull request
Here the pull request: https://github.com/gradio-app/gradio/pull/10290
### Have you searched existing issues? 🔎
- [X] I have searched and found no existing issues
### Reproduction
```python
from gradio.blocks import Blocks
with Blocks(...)
>> Warning: Cannot statically find a gradio demo called demo. Reload work may fail.
Watching: xxx
```
### Screenshot
_No response_
### Logs
_No response_
### System Info
```shell
Gradio Environment Information:
------------------------------
Operating System: Darwin
gradio version: 5.9.1
gradio_client version: 1.5.2
------------------------------------------------
gradio dependencies in your environment:
aiofiles: 23.2.1
anyio: 4.8.0
audioop-lts is not installed.
fastapi: 0.115.6
ffmpy: 0.5.0
gradio-client==1.5.2 is not installed.
httpx: 0.28.1
huggingface-hub: 0.27.0
jinja2: 3.1.5
markupsafe: 2.1.5
numpy: 2.2.1
orjson: 3.10.13
packaging: 24.2
pandas: 2.2.3
pillow: 11.1.0
pydantic: 2.10.4
pydub: 0.25.1
python-multipart: 0.0.20
pyyaml: 6.0.2
ruff: 0.8.6
safehttpx: 0.1.6
semantic-version: 2.10.0
starlette: 0.41.3
tomlkit: 0.13.2
typer: 0.15.1
typing-extensions: 4.12.2
urllib3: 2.3.0
uvicorn: 0.34.0
authlib; extra == 'oauth' is not installed.
itsdangerous; extra == 'oauth' is not installed.
gradio_client dependencies in your environment:
fsspec: 2024.12.0
httpx: 0.28.1
huggingface-hub: 0.27.0
packaging: 24.2
typing-extensions: 4.12.2
websockets: 14.1
```
### Severity
I can work around it | closed | 2025-01-05T21:03:57Z | 2025-01-06T21:04:44Z | https://github.com/gradio-app/gradio/issues/10289 | [
"bug"
] | YanSte | 0 |
huggingface/diffusers | pytorch | 10,520 | Sana 4K: RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! | ### Describe the bug
Inference not working with quantization
### Reproduction
Use the sample code from here
https://github.com/NVlabs/Sana/blob/main/asset/docs/8bit_sana.md#quantization
Replace model with Efficient-Large-Model/Sana_1600M_4Kpx_BF16_diffusers
and dtype torch.bfloat16
### Logs
```shell
(venv) C:\ai1\diffuser_t2i>python Sana_4K-Quant.py
`low_cpu_mem_usage` was None, now default to True since model is quantized.
Loading checkpoint shards: 100%|████████████████████████████████████| 2/2 [00:28<00:00, 14.45s/it]
Expected types for text_encoder: ['AutoModelForCausalLM'], got Gemma2Model.
Loading pipeline components...: 100%|███████████████████████████████| 5/5 [00:15<00:00, 3.17s/it]
The 'batch_size' argument of HybridCache is deprecated and will be removed in v4.49. Use the more precisely named 'max_batch_size' argument instead.
The 'batch_size' attribute of HybridCache is deprecated and will be removed in v4.49. Use the more precisely named 'self.max_batch_size' attribute instead.
C:\ai1\diffuser_t2i\venv\lib\site-packages\bitsandbytes\autograd\_functions.py:315: UserWarning: MatMul8bitLt: inputs will be cast from torch.bfloat16 to float16 during quantization
warnings.warn(f"MatMul8bitLt: inputs will be cast from {A.dtype} to float16 during quantization")
0%| | 0/20 [00:00<?, ?it/s]
Traceback (most recent call last):
File "C:\ai1\diffuser_t2i\Sana_4K-Quant.py", line 30, in <module>
image = pipeline(prompt).images[0]
File "C:\ai1\diffuser_t2i\venv\lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
File "C:\ai1\diffuser_t2i\venv\lib\site-packages\diffusers\pipelines\sana\pipeline_sana.py", line 882, in __call__
noise_pred = self.transformer(
File "C:\ai1\diffuser_t2i\venv\lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "C:\ai1\diffuser_t2i\venv\lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
File "C:\ai1\diffuser_t2i\venv\lib\site-packages\diffusers\models\transformers\sana_transformer.py", line 414, in forward
hidden_states = self.patch_embed(hidden_states)
File "C:\ai1\diffuser_t2i\venv\lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "C:\ai1\diffuser_t2i\venv\lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
File "C:\ai1\diffuser_t2i\venv\lib\site-packages\diffusers\models\embeddings.py", line 569, in forward
return (latent + pos_embed).to(latent.dtype)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
```
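Not a confirmed diagnosis, but the last frame shows `latent + pos_embed` mixing a CUDA tensor with a CPU one, i.e. part of the transformer's patch-embedding state never made it onto the GPU. One hedged thing to try (sketch; the quantized components may refuse an explicit move, in which case try the non-quantized sub-modules individually):

```python
# Move everything explicitly rather than relying on per-component placement.
pipeline.to("cuda")
# If the quantized text encoder rejects .to(), try the rest individually:
# pipeline.transformer.to("cuda")
# pipeline.vae.to("cuda")
image = pipeline(prompt).images[0]
```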
### System Info
```
python 3.10.11
accelerate 1.2.0.dev0
aiofiles 23.2.1
annotated-types 0.7.0
anyio 4.7.0
bitsandbytes 0.45.0
certifi 2024.12.14
charset-normalizer 3.4.1
click 8.1.8
colorama 0.4.6
diffusers 0.33.0.dev0
einops 0.8.0
exceptiongroup 1.2.2
fastapi 0.115.6
ffmpy 0.5.0
filelock 3.16.1
fsspec 2024.12.0
gguf 0.13.0
gradio 5.9.1
gradio_client 1.5.2
h11 0.14.0
httpcore 1.0.7
httpx 0.28.1
huggingface-hub 0.25.2
idna 3.10
imageio 2.36.1
imageio-ffmpeg 0.5.1
importlib_metadata 8.5.0
Jinja2 3.1.5
markdown-it-py 3.0.0
MarkupSafe 2.1.5
mdurl 0.1.2
mpmath 1.3.0
networkx 3.4.2
ninja 1.11.1.3
numpy 2.2.1
opencv-python 4.10.0.84
optimum-quanto 0.2.6.dev0
orjson 3.10.13
packaging 24.2
pandas 2.2.3
patch-conv 0.0.1b0
pillow 11.1.0
pip 23.0.1
protobuf 5.29.2
psutil 6.1.1
pydantic 2.10.4
pydantic_core 2.27.2
pydub 0.25.1
Pygments 2.18.0
python-dateutil 2.9.0.post0
python-multipart 0.0.20
pytz 2024.2
PyYAML 6.0.2
regex 2024.11.6
requests 2.32.3
rich 13.9.4
ruff 0.8.6
safehttpx 0.1.6
safetensors 0.5.0
semantic-version 2.10.0
sentencepiece 0.2.0
setuptools 65.5.0
shellingham 1.5.4
six 1.17.0
sniffio 1.3.1
starlette 0.41.3
sympy 1.13.1
tokenizers 0.21.0
tomlkit 0.13.2
torch 2.5.1+cu124
torchao 0.7.0
torchvision 0.20.1+cu124
tqdm 4.67.1
transformers 4.47.1
typer 0.15.1
typing_extensions 4.12.2
tzdata 2024.2
urllib3 2.3.0
uvicorn 0.34.0
websockets 14.1
wheel 0.45.1
zipp 3.21.0
```
### Who can help?
_No response_ | closed | 2025-01-10T09:03:32Z | 2025-01-16T18:09:43Z | https://github.com/huggingface/diffusers/issues/10520 | [
"bug"
] | nitinmukesh | 3 |
Lightning-AI/LitServe | api | 310 | Pyright issues [Name mismatches] | ## 🐛 Bug Report
### Description
When creating a new notebook via Lightning and copying the example code from the README, Pyright shows issues with the abstract methods of the `LitAPI` base class.
The error occurs due to a parameter name mismatch in method overrides:
```
Method "setup" overrides class "LitAPI" in an incompatible manner Parameter 2 name mismatch: base parameter is named "devices", override parameter is named "device" Pyright[reportIncompatibleMethodOverride]
```

### Steps to Reproduce
1. Open a new notebook in Lightning Studio.
2. Copy the example code from the README.
3. Observe the Pyright issues that appear for the `LitAPI` base class methods.
### Expected Behavior
No Pyright issues should be present. Although this won't block users, it can cause confusion, especially for those new to the framework.
### Environment
- Lightning Studio
- Pyright
### Additional Context
I’d be happy to work on resolving this issue (Updating the README should do the job?)
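For concreteness, what Pyright flags is only the parameter name, so the README example would pass `reportIncompatibleMethodOverride` once the override uses the name the base class declares (sketch, following the error message's claim that the base parameter is `devices`; the helper name is made up):

```python
class SimpleLitAPI(ls.LitAPI):
    def setup(self, devices):           # was `device`; must match the base class
        self.model = load_model(devices)
```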
| closed | 2024-10-01T07:53:03Z | 2024-10-01T18:28:40Z | https://github.com/Lightning-AI/LitServe/issues/310 | [
"bug",
"help wanted"
] | grumpyp | 2 |
adbar/trafilatura | web-scraping | 568 | Content failed to be extracted | The contact section of https://www.mozilla.org/en-US/privacy/ gets missed out

This is my code
```python
extract(
    web_content,
    include_formatting=True,
    include_tables=True,
    include_comments=False,
    include_links=False,
    output_format="xml",
    favor_recall=True,
    config=config,
)
```
What I've noticed is that the contact section is placed inside a footer tag separate from the main footer.
Can anything be done besides ignoring footers altogether?
| closed | 2024-04-20T13:12:47Z | 2024-04-22T16:18:30Z | https://github.com/adbar/trafilatura/issues/568 | [] | alroythalus | 1 |
RayVentura/ShortGPT | automation | 41 | Multiple versions of a package installing repeatedly during requirements.txt run | Hi,
Was looking to test this, so I set up a Python 3.10 venv and started installing the dependencies.
I am getting the following during the install:

It's been at it for about 30 minutes now. Any reason this might be the case?
It finally seems to have bombed out here:

It has continued with installation but is still painfully slow.
Thanks!
| closed | 2023-07-24T14:58:58Z | 2023-07-24T16:01:59Z | https://github.com/RayVentura/ShortGPT/issues/41 | [] | Vermath | 0 |
deezer/spleeter | tensorflow | 378 | Package 'spleeter-gpu' requires a different Python | ERROR: Package 'spleeter-gpu' requires a different Python: 3.8.2 not in '>=3.6, <3.8' | closed | 2020-05-19T00:02:05Z | 2020-05-19T08:43:53Z | https://github.com/deezer/spleeter/issues/378 | [
"bug",
"invalid"
] | Matheart | 0 |
pydantic/FastUI | fastapi | 150 | Support srcdoc attribute in iframe component. | Do you think it would be reasonable to add [```srcdoc```](https://www.w3schools.com/tags/att_iframe_srcdoc.asp) attribute support to the iframe component? This would enable embedding arbitrary html. The [```sandbox```](https://www.w3schools.com/tags/att_iframe_sandbox.asp) attribute might go along with this in order to enable scripts.
For context, I have been looking into doing some data visualisation in FastUI but this could also be useful for embedding reports, for example from [MultiQC](https://multiqc.info/).
```python
# Use it in FastUI like:
c.Iframe(srcdoc='<p>FastUI is neat</p>', src='https://pydantic.dev', width='100%', height=400),
``` | open | 2024-01-14T10:03:01Z | 2024-02-09T06:56:27Z | https://github.com/pydantic/FastUI/issues/150 | [
"help wanted"
] | AaronNHart | 1 |
dgtlmoon/changedetection.io | web-scraping | 2,022 | [feature] Dynamic URL's | I have a website where I want to monitor bookings.
The api looks like:
https://my.api.com/v1/f/availability?date=2023-12-01
I'd like to monitor the next 120 days, starting from today.
I could make use of a jinja template and create 120 different watches.
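For illustration (this is not an existing changedetection.io feature, just what such a generator script might look like), the 120 dated URLs are a few lines of Python:

```python
from datetime import date, timedelta

base = "https://my.api.com/v1/f/availability?date={}"
urls = [base.format(date.today() + timedelta(days=i)) for i in range(120)]

print(len(urls))  # → 120
print(urls[0])    # today's date in ISO form, e.g. ...?date=2023-12-01
```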
Is it possible to specify this in a template (or even some basic JS) that I can define:
This is the script, it outputs 120 urls, monitor all of these | closed | 2023-12-01T17:48:29Z | 2023-12-01T17:50:43Z | https://github.com/dgtlmoon/changedetection.io/issues/2022 | [
"enhancement"
] | PaulWoitaschek | 0 |
PrefectHQ/prefect | data-science | 16,792 | Can't set config on aws credentials block | ### Bug summary
Hello 👋
It seems like there is a bug in both the interface and the API for the `config` option in the AWS credentials block; it can't be set using either one:

It only says `object`, but I can't write anywhere.
When using the API:
```python
def create_aws_credentials_block(overwrite: bool = False):
# uses env vars to get the credentials
parameters = AwsClientParameters(
config={
"read_timeout": 900,
"connect_timeout": 900,
"max_pool_connections": 100,
"region_name": "us-west-2",
"retries": {"max_attempts": 10, "mode": "standard"},
}
)
credentials = AwsCredentials(aws_client_parameters=parameters)
credentials.save("default", overwrite=overwrite)
```
The block is created correctly, but the `config` of the client parameters ends up empty.
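One thing that may be worth ruling out (a hedged guess, not a verified fix): whether the `config` field is meant to hold a `botocore` `Config` object rather than a plain dict. botocore accepts exactly these settings as keyword arguments:

```python
from botocore.config import Config
from prefect_aws import AwsClientParameters, AwsCredentials

parameters = AwsClientParameters(
    config=Config(
        read_timeout=900,
        connect_timeout=900,
        max_pool_connections=100,
        region_name="us-west-2",
        retries={"max_attempts": 10, "mode": "standard"},
    )
)
AwsCredentials(aws_client_parameters=parameters).save("default", overwrite=True)
```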
### Version info
```Text
Version: 3.1.13
API version: 0.8.4
Python version: 3.12.3
Git commit: 16e85ce3
Built: Fri, Jan 17, 2025 8:46 AM
OS/Arch: linux/x86_64
Profile: staging
Server type: server
Pydantic version: 2.10.5
Integrations:
prefect-sqlalchemy: 0.5.2
prefect-slack: 0.3.1
prefect-aws: 0.5.3
```
### Additional context
_No response_ | open | 2025-01-21T09:23:06Z | 2025-01-21T09:23:06Z | https://github.com/PrefectHQ/prefect/issues/16792 | [
"bug"
] | obendidi | 0 |
2noise/ChatTTS | python | 779 | ChatTTS 0.2.0 is not compatible with the model on ModelScope | `_load()` checks the model files. There is a hash map in the 0.2.0 source code:
https://github.com/2noise/ChatTTS/blob/main/ChatTTS/res/sha256_map.json
```json
{
"sha256_asset_Decoder_pt": "9964e36e840f0e3a748c5f716fe6de6490d2135a5f5155f4a642d51860e2ec38",
"sha256_asset_DVAE_full_pt": "553eb75763511e23f3e5f86303e2163c5ca775489d637fb635d979c8ae58bbe5",
"sha256_asset_Embed_safetensors": "2ff0be7134934155741b643b74e32fb6bf3eec41257984459b2ed60cdb4c48b0",
"sha256_asset_Vocos_pt": "09a670eda1c08b740013679c7a90ebb7f1a97646ea7673069a6838e6b51d6c58",
"sha256_asset_gpt_config_json": "0aaa1ecd96c49ad4f473459eb1982fa7ad79fa5de08cde2781bf6ad1f9a0c236",
"sha256_asset_gpt_model_safetensors": "cd0806fd971f52f6a22c923ec64982b305e817bcc41ca83417fcf9141b984a0f",
"sha256_asset_tokenizer_special_tokens_map_json": "bd0ac9d9bb1657996b5c5fbcaa7d80f8de530d01a283da97f89deae5b1b8d011",
"sha256_asset_tokenizer_tokenizer_config_json": "43e9d658b554fa5ee8d8e1d763349323bfef1ed7a89c0794220ab8861387d421",
"sha256_asset_tokenizer_tokenizer_json": "843838a64e121e23e774cc75874c6fe862198d9f7dd43747914633a8fd89c20e"
}
```
However, I can't find the `asset/DVAE_full.pt` in modelscope: https://modelscope.cn/models/pzc163/chatTTS/files

There exists the file `asset/DVAE_full.pt` in huggingface: https://huggingface.co/2Noise/ChatTTS/tree/main/asset

| closed | 2024-10-10T18:50:24Z | 2024-10-16T13:09:18Z | https://github.com/2noise/ChatTTS/issues/779 | [
"documentation"
] | codingl2k1 | 5 |
open-mmlab/mmdetection | pytorch | 11,948 | scores are nan after finetuning GroundingDINO | **Notice**
There are several common situations in the reimplementation issues as below
1. Reimplement a model in the model zoo using the provided configs
2. Reimplement a model in the model zoo on other dataset (e.g., custom datasets)
3. Reimplement a custom model but all the components are implemented in MMDetection
4. Reimplement a custom model with new modules implemented by yourself
There are several things to do for different cases as below.
- For case 1 & 3, please follow the steps in the following sections thus we could help to quick identify the issue.
- For case 2 & 4, please understand that we are not able to do much help here because we usually do not know the full code and the users should be responsible to the code they write.
- One suggestion for case 2 & 4 is that the users should first check whether the bug lies in the self-implemented code or the original code. For example, users can first make sure that the same model runs well on supported datasets. If you still need help, please describe what you have done and what you obtain in the issue, and follow the steps in the following sections and try as clear as possible so that we can better help you.
**Checklist**
1. I have searched related issues but cannot get the expected help.
2. The issue has not been fixed in the latest version.
**Describe the issue**
A clear and concise description of the problem you encountered and what you have done.
**Reproduction**
1. What command or script did you run?
```none
A placeholder for the command.
```
2. What config did you run?
```none
A placeholder for the config.
```
3. Did you make any modifications on the code or config? Did you understand what you have modified?
4. What dataset did you use?
**Environment**
1. Please run `python mmdet/utils/collect_env.py` to collect necessary environment information and paste it here.
2. You may add additional information that may be helpful for locating the problem, such as
1. How you installed PyTorch \[e.g., pip, conda, source\]
2. Other environment variables that may be related (such as `$PATH`, `$LD_LIBRARY_PATH`, `$PYTHONPATH`, etc.)
**Results**
If applicable, paste the related results here, e.g., what you expect and what you get.
```none
A placeholder for results comparison
```
**Issue fix**
If you have already identified the reason, you can provide the information here. If you are willing to create a PR to fix it, please also leave a comment here and that would be much appreciated!
| open | 2024-09-07T08:45:39Z | 2024-11-23T00:25:01Z | https://github.com/open-mmlab/mmdetection/issues/11948 | [
"reimplementation"
] | simranbajaj06 | 2 |
ipyflow/ipyflow | jupyter | 104 | test coverage for comm handlers | open | 2022-06-19T16:53:01Z | 2022-06-19T16:53:01Z | https://github.com/ipyflow/ipyflow/issues/104 | [] | smacke | 0 | |
TencentARC/GFPGAN | deep-learning | 251 | 10x speed up inferences with MobileStyleGAN | Do you foresee any issue with leveraging MobileStyleGAN as a drop-in replacement for StyleGAN2?
It's almost 10x faster
[MobileStyleGAN Github](https://github.com/bes-dev/MobileStyleGAN.pytorch)
[Side by side comparison video](https://www.youtube.com/watch?v=_yrOA4YIuj4) | open | 2022-09-04T20:13:05Z | 2025-03-07T07:16:49Z | https://github.com/TencentARC/GFPGAN/issues/251 | [] | vlordier | 4 |
voxel51/fiftyone | data-science | 4,800 | [BUG] Compute Similiarity not working on Windows 11 | ### Describe the problem
I'm trying to remove duplicate images in my dataset, but I encountered this issue.
### Code to reproduce issue
```python
import fiftyone.brain as fob

# `dataset` and `clip_embeddings` were created earlier (embeddings precomputed)
fob.compute_similarity(
    dataset,
    model="clip-vit-base32-torch",  # store model's name for future use
    embeddings=clip_embeddings,  # precomputed image embeddings
    brain_key="img_sim",
)
```
### System information
- **OS Platform and Distribution** (e.g., Linux Ubuntu 22.04): Windows 11 Pro
- **Python version** (`python --version`): 3.9.13
- **FiftyOne version** (`fiftyone --version`): 0.25.1
- **FiftyOne installed from** (pip or source): pip
### Other info/logs

### Willingness to contribute
The FiftyOne Community encourages bug fix contributions. Would you or another
member of your organization be willing to contribute a fix for this bug to the
FiftyOne codebase?
- [ ] Yes. I can contribute a fix for this bug independently
- [ ] Yes. I would be willing to contribute a fix for this bug with guidance
from the FiftyOne community
- [x] No. I cannot contribute a bug fix at this time
| open | 2024-09-15T10:47:52Z | 2024-09-15T10:47:52Z | https://github.com/voxel51/fiftyone/issues/4800 | [
"bug"
] | DarknessVN-1 | 0 |
horovod/horovod | pytorch | 3,501 | Is there a problem with ProcessSetTable Finalize when elastic? | Background:
Suppose there are currently 4 ranks on 4 machines.
Due to the failure of machine 1, rank 1 exits directly, and the final shutdown logic is never executed.
The remaining machines will then perform the elastic shutdown and call the `process_set_table.Finalize` function. This function uses an allgather to determine whether the process set needs to be removed, but at this point rank 1 has already exited, so the allgather operation should theoretically leave the remaining processes in an abnormal state, meaning the shutdown cannot complete normally and elasticity cannot work.
@maxhgerlach | closed | 2022-04-02T08:23:35Z | 2022-05-12T03:16:01Z | https://github.com/horovod/horovod/issues/3501 | [
"bug"
] | Richie-yan | 1 |
tqdm/tqdm | pandas | 1,204 | Bar update after finish showing number of iterations | When I use the update method of the progress bar, after the iteration finishes, the number of iterations is shown instead of the updated metric. As a fix I reduced the number I update with by one on each iteration, but the result stays the same.
Normally the progress bar advances by 1 with each iteration. In my case this led to a misestimation of the total execution time by up to a factor of 10 (10 times too fast). Because I open files in the loop in Python, I therefore decided that the file size should be used as an estimator of the speed (with a maximum error of a factor of 3 in both directions). But when the iteration is finished (after the last update, but before calling close) the progress bar shows the number of iterations as done but the total file size as the target.
```python
from gc import collect
from tqdm import tqdm
import numpy as np

# progress bar that scales with opened file sizes in bytes
p_bar = tqdm(dataframe.itertuples(index=True),
             unit='B', unit_scale=True, unit_divisor=1_024,
             total=np.sum([x.stat().st_size for x in files], dtype='int64'))
for line in p_bar:
    collect()
    p_bar.set_description("Describe current action")
    ...  # do the current calculation
    file_size = full_path.stat().st_size if full_path is not None else 0
    p_bar.update(file_size - 1)  # try fix by subtracting 1
p_bar.close()
```
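For reference, a stripped-down, self-contained version of this byte-based pattern (hypothetical file sizes in place of the real files; bar output captured in a buffer):

```python
import io
from tqdm import tqdm

sizes = [3 * 1024, 5 * 1024, 2 * 1024]  # hypothetical per-file sizes in bytes
buf = io.StringIO()  # capture the bar's output instead of printing to stderr
p_bar = tqdm(total=sum(sizes), unit='B', unit_scale=True,
             unit_divisor=1_024, file=buf)
for size in sizes:
    # ... per-file work would happen here ...
    p_bar.update(size)  # advance by the file's size instead of by 1
p_bar.close()
print(p_bar.n)  # 10240 -- the counter matches the byte total after the last update
```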
In my current project the shown progress after the finish is: 52.0/2.03G [40:27<28218765:47:25, 46.7s/B]
at 0 %. | closed | 2021-06-24T20:25:25Z | 2021-07-15T19:58:08Z | https://github.com/tqdm/tqdm/issues/1204 | [] | sehHeiden | 0 |
zappa/Zappa | django | 656 | [Migrated] Incorrect IAM permissions for DynamoDB | Originally from: https://github.com/Miserlou/Zappa/issues/1662 by [tk421](https://github.com/tk421)
<!--- Provide a general summary of the issue in the Title above -->
## Dynamo DB Incorrect permissions
When deploying a Zappa application based on this [post](https://serverlessblog.com/example), with the following zappa_settings.json:
```
{
"dev": {
"app_function": "blog.app",
"aws_region": "ap-southeast-2",
"profile_name": "default",
"project_name": "serverless-blog",
"runtime": "python2.7",
"s3_bucket": "taromba-sb"
}
}
```
and running `zappa deploy`, it starts the deployment but eventually fails with the following error:
> Error: Warning! Status check on the deployed lambda failed. A GET request to '/' yielded a 502 response code.
Zappa creates an IAM policy called _zappa_permissions_ that contains the following code for DynamoDB:
```
{
"Effect": "Allow",
"Action": [
"dynamodb:*"
],
"Resource": "arn:aws:dynamodb:*:*:*"
},
```
And those permissions do not allow executing the ListTables action, which is needed in the deployment process.
Python 2.7
## Expected Behavior
After running zappa deploy, the deployment should be successful.
## Actual Behavior
```
% zappa update
(python-slugify 1.2.6 (/home/tk421/code/serverless.blog/env/lib/python2.7/site-packages), Requirement.parse('python-slugify==1.2.4'), set([u'zappa']))
Calling update for stage dev..
Downloading and installing dependencies..
Packaging project as zip.
Uploading serverless-blog-dev-1539819658.zip (8.6MiB)..
100%|| 9.02M/9.02M [01:16<00:00, 117KB/s]
Updating Lambda function code..
Updating Lambda function configuration..
Uploading serverless-blog-dev-template-1539819740.json (1.6KiB)..
100%|█| 1.66K/1.66K [00:00<00:00, 14.1KB/s]
Deploying API Gateway..
Scheduling..
Unscheduled serverless-blog-dev-zappa-keep-warm-handler.keep_warm_callback.
Scheduled serverless-blog-dev-zappa-keep-warm-handler.keep_warm_callback with expression rate(4 minutes)!
Error: Warning! Status check on the deployed lambda failed. A GET request to '/' yielded a 502 response code.
```
zappa tail
```
[1539819427320] An error occurred (AccessDeniedException) when calling the ListTables operation: User: arn:aws:sts::808777168163:assumed-role/serverless-blog-dev-ZappaLambdaExecutionRole/serverless-blog-dev is not authorized to perform: dynamodb:ListTables on resource: *: ClientError
Traceback (most recent call last):
File "/var/task/handler.py", line 580, in lambda_handler
return LambdaHandler.lambda_handler(event, context)
File "/var/task/handler.py", line 245, in lambda_handler
handler = cls()
File "/var/task/handler.py", line 139, in __init__
self.app_module = importlib.import_module(self.settings.APP_MODULE)
File "/usr/lib64/python2.7/importlib/__init__.py", line 37, in import_module
__import__(name)
File "/var/task/blog.py", line 12, in <module>
dyn_storage = DynamoDBStorage(region_name='us-east-1')
File "/var/task/flask_blogging/dynamodbstorage.py", line 22, in __init__
self._create_all_tables()
File "/var/task/flask_blogging/dynamodbstorage.py", line 195, in _create_all_tables
response = self._client.list_tables()
File "/var/runtime/botocore/client.py", line 314, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/var/runtime/botocore/client.py", line 612, in _make_api_call
raise error_class(parsed_response, operation_name)
ClientError: An error occurred (AccessDeniedException) when calling the ListTables operation: User: arn:aws:sts::808777168163:assumed-role/serverless-blog-dev-ZappaLambdaExecutionRole/serverless-blog-dev is not authorized to perform: dynamodb:ListTables on resource: *
```
## Possible Fix
Make sure that zappa_permissions contains the correct values. Broader permissions work, but they get overridden by Zappa all the time - it would be best to tailor those permissions to what is actually needed.
```
{
"Effect": "Allow",
"Action": [
"dynamodb:*"
],
"Resource": "*"
},
```
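For reference, a more tailored sketch (hypothetical table name) would grant the account-level `ListTables` action, which only accepts a wildcard resource, while keeping table-level operations scoped:

```json
[
    {
        "Effect": "Allow",
        "Action": ["dynamodb:ListTables"],
        "Resource": "*"
    },
    {
        "Effect": "Allow",
        "Action": ["dynamodb:GetItem", "dynamodb:PutItem", "dynamodb:Query", "dynamodb:Scan"],
        "Resource": "arn:aws:dynamodb:*:*:table/my-blog-table"
    }
]
```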
## Steps to Reproduce
1. Configure AWS CLI and confirm that you can interact with AWS
2. Zappa version 0.47.0
3. git clone https://bitbucket.org/manageacloud/serverless-test.git
4. virtualenv env (tested with python 2.7)
5. source env/bin/activate
6. pip install -r requirements.txt
7. zappa deploy dev
## Your Environment
* Zappa version used: 0.47.0
* Operating System and Python version: Ubuntu Xenial
* The output of `pip freeze`:
argcomplete==1.9.3
blinker==1.4
boto3==1.9.23
botocore==1.12.23
certifi==2018.10.15
cfn-flip==1.0.3
chardet==3.0.4
Click==7.0
docutils==0.14
durationpy==0.5
Flask==1.0.2
Flask-Blogging==1.1.0
Flask-Caching==1.4.0
Flask-FileUpload==0.5.0
Flask-Login==0.4.1
Flask-LoginManager==1.1.6
Flask-Principal==0.4.0
Flask-WTF==0.14.2
future==0.16.0
futures==3.2.0
hjson==3.0.1
idna==2.7
itsdangerous==0.24
Jinja2==2.10
jmespath==0.9.3
kappa==0.6.0
lambda-packages==0.20.0
Markdown==3.0.1
MarkupSafe==1.0
pkg-resources==0.0.0
placebo==0.8.2
python-dateutil==2.7.3
python-slugify==1.2.6
PyYAML==3.13
requests==2.19.1
s3transfer==0.1.13
shortuuid==0.5.0
six==1.11.0
SQLAlchemy==1.2.12
toml==0.10.0
tqdm==4.19.1
troposphere==2.3.3
Unidecode==1.0.22
urllib3==1.23
Werkzeug==0.14.1
wsgi-request-logger==0.4.6
WTForms==2.2.1
zappa==0.47.0
* Link to your project (optional):
* Your `zappa_settings.py`:
{
"dev": {
"app_function": "blog.app",
"aws_region": "ap-southeast-2",
"profile_name": "default",
"project_name": "serverless-blog",
"runtime": "python2.7",
"s3_bucket": "taromba-sb"
}
}
| closed | 2021-02-20T12:32:31Z | 2024-04-13T17:36:35Z | https://github.com/zappa/Zappa/issues/656 | [
"no-activity",
"auto-closed"
] | jneves | 2 |
ultralytics/yolov5 | machine-learning | 13,065 | Model size is doubled when exporting model to onnx/torchscript |
### Bug
My YOLOv5 model size was 92 MB. After exporting to ONNX, the ONNX model file is 184 MB. Why is that?
| closed | 2024-06-03T13:10:11Z | 2024-10-20T19:47:12Z | https://github.com/ultralytics/yolov5/issues/13065 | [
"bug",
"Stale"
] | nkhlS141 | 3 |
miLibris/flask-rest-jsonapi | sqlalchemy | 56 | include multiple related resources | This issue is probably similar to [#29](https://github.com/miLibris/flask-rest-jsonapi/issues/29), only the suggested answer doesn't solve it.
in a request like this:
api/author/123/?include=articles.comments,articles.ratings
only ratings are included (or only comments, depending on what comes last), never both. | closed | 2017-07-13T14:03:48Z | 2017-10-11T09:25:40Z | https://github.com/miLibris/flask-rest-jsonapi/issues/56 | [] | tzimme | 4 |
modoboa/modoboa | django | 2,148 | False positive DNSBL reports because of spamcop.net | # Impacted versions
All.
# Steps to reproduce
Just have any domain in modoboa.
# Current behavior
Getting a lot of service emails:
```
Modoboa detected that domain example.com is listed by the following DNSBL providers:
bl.spamcop.net: example2.com (1.1.1.1) for example.com
The domain's reputation will be affected and there is a chance that emails coming from it are considered as spam. You should contact those providers and ask them to unlist detected IP address(es).
```
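For context, a DNSBL check like the one behind these reports resolves a DNS name built from the reversed IP octets plus the provider zone; a minimal sketch (hypothetical IP):

```python
def dnsbl_query_name(ip: str, zone: str = "bl.spamcop.net") -> str:
    """Build the DNS name a DNSBL lookup would resolve for an IPv4 address."""
    octets = ip.split(".")
    return ".".join(reversed(octets)) + "." + zone

print(dnsbl_query_name("192.0.2.10"))  # 10.2.0.192.bl.spamcop.net
```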
# Expected behavior
The spamcop.net domain has expired and the service is not working anymore, so those reports should be considered false positives.
| closed | 2021-01-31T16:37:26Z | 2021-02-01T14:39:41Z | https://github.com/modoboa/modoboa/issues/2148 | [] | phpony | 3 |
huggingface/datasets | tensorflow | 6,539 | 'Repo card metadata block was not found' when loading a pragmeval dataset | ### Describe the bug
I can't load dataset subsets of 'pragmeval'.
The funny thing is I ran the dataset author's [colab notebook](https://colab.research.google.com/drive/1sg--LF4z7XR1wxAOfp0-3d4J6kQ9nj_A?usp=sharing) and it works just fine. I tried to install exactly the same packages that are installed on colab using poetry, so my environment info only differs from the one from colab in linux version - I still get the same bug outside colab.
### Steps to reproduce the bug
Install dependencies with poetry
pyproject.toml
```
[tool.poetry]
name = "project"
version = "0.1.0"
description = ""
authors = []
[tool.poetry.dependencies]
python = "^3.10"
datasets = "2.16.0"
pandas = "1.5.3"
pyarrow = "10.0.1"
huggingface-hub = "0.19.4"
fsspec = "2023.6.0"
[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
```
`poetry run python -c "import datasets; print(datasets.get_dataset_config_names('pragmeval'))`
prints ['default']
### Expected behavior
The command should print
```
['emergent',
'emobank-arousal',
'emobank-dominance',
'emobank-valence',
'gum',
'mrda',
'pdtb',
'persuasiveness-claimtype',
'persuasiveness-eloquence',
'persuasiveness-premisetype',
'persuasiveness-relevance',
'persuasiveness-specificity',
'persuasiveness-strength',
'sarcasm',
'squinky-formality',
'squinky-implicature',
'squinky-informativeness',
'stac',
'switchboard',
'verifiability']
```
### Environment info
- `datasets` version: 2.16.0
- Platform: Linux-6.2.0-37-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- `huggingface_hub` version: 0.19.4
- PyArrow version: 10.0.1
- Pandas version: 1.5.3
- `fsspec` version: 2023.6.0 | open | 2023-12-28T14:18:25Z | 2023-12-28T14:18:37Z | https://github.com/huggingface/datasets/issues/6539 | [] | lambdaofgod | 0 |
cupy/cupy | numpy | 8,255 | `test_sos_freqz_against_mp` test trying to import nonexisting local `mpsig` module | The following test fails for me in my build, and I'm confused as to how it was ever intended to work:
https://github.com/cupy/cupy/blob/028889eef3ba110829d677726348aa0f75aadb4e/tests/cupyx_tests/scipy_tests/signal_tests/test_filter_design.py#L642-L648
There is no `mpsig` submodule, and there is no definition of `zpkfreqz` or `butter_lp` anywhere in the code.
Maybe it was intended to be part of https://github.com/cupy/cupy/pull/7537 by @ev-br but wasn't staged into a commit? | closed | 2024-03-24T22:22:22Z | 2024-03-27T10:15:42Z | https://github.com/cupy/cupy/issues/8255 | [
"cat:test",
"prio:medium"
] | Micket | 2 |
holoviz/panel | matplotlib | 6,926 | VideoStream from CCTV | When I use the `VideoStream` class to capture network camera information, it can only capture the video stream of a local camera. If I want to use a remote network camera, such as a video stream transmitted over the RTMP or RTSP protocol, how should I display it in Panel?
Using OpenCV I can achieve the effect I want:
```python
import cv2

cap = cv2.VideoCapture("rtsp://cctv_url")
ret, frame = cap.read()
while ret:
    ret, frame = cap.read()
    cv2.imshow("frame", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cv2.destroyAllWindows()
cap.release()
```
Is it possible to use Panel's `VideoStream` to achieve the above effect? | open | 2024-06-16T16:43:55Z | 2024-06-17T01:24:30Z | https://github.com/holoviz/panel/issues/6926 | [] | lankoestee | 2 |
dsdanielpark/Bard-API | nlp | 54 | cannot import | When trying to import the package, I got the following error
ImportError: urllib3 v2.0 only supports OpenSSL 1.1.1+, currently the 'ssl' module is compiled with LibreSSL 2.8.3. See: https://github.com/urllib3/urllib3/issues/2168
Please advise which urllib3 version to use. | closed | 2023-06-05T16:35:38Z | 2023-06-07T12:14:48Z | https://github.com/dsdanielpark/Bard-API/issues/54 | [] | todo | 2 |
scikit-optimize/scikit-optimize | scikit-learn | 949 | gp_minimize recomputing points provided in x0 and y0 | When I try to resume an optimization by feeding in points for x0 and y0, it is re-evaluating all of the values. I've created a simple example below. In my real world case this is a huge problem because my objective function takes 10 minutes to evaluate.
```python
import numpy as np
np.random.seed(237)

import matplotlib.pyplot as plt
from skopt.plots import plot_gaussian_process
from skopt import gp_minimize

noise_level = 0.1

def f(x, noise_level=noise_level):
    return np.sin(5 * x[0]) * (1 - np.tanh(x[0] ** 2)) \
        + np.random.randn() * noise_level

res0 = gp_minimize(f,                   # the function to minimize
                   [(-2.0, 2.0)],       # the bounds on each dimension of x
                   acq_func="EI",       # the acquisition function
                   n_calls=15,          # the number of evaluations of f
                   n_random_starts=5,   # the number of random initialization points
                   noise=0.1**2,        # the noise level (optional)
                   random_state=1234,   # the random seed
                   verbose=True)

res1 = gp_minimize(f,                   # the function to minimize
                   [(-2.0, 2.0)],       # the bounds on each dimension of x
                   acq_func="EI",       # the acquisition function
                   n_calls=30,          # the number of evaluations of f
                   n_random_starts=5,   # the number of random initialization points
                   noise=0.1**2,        # the noise level (optional)
                   random_state=1234,   # the random seed
                   verbose=True,
                   x0=res0.x_iters,     # previously evaluated points
                   y0=res0.func_vals)   # ... and their function values
```
Output from the second optimization:
Iteration No: 1 started. Evaluating function at random point.
Iteration No: 1 ended. Evaluation done at random point.
Time taken: 0.0010
Function value obtained: -0.0996
Current minimum: -0.2521
Iteration No: 2 started. Evaluating function at random point.
Iteration No: 2 ended. Evaluation done at random point.
Time taken: 0.0000
Function value obtained: -0.1450
Current minimum: -0.2521
Iteration No: 3 started. Evaluating function at random point.
Iteration No: 3 ended. Evaluation done at random point.
Time taken: 0.0010
Function value obtained: -0.1118
Current minimum: -0.2521
Iteration No: 4 started. Evaluating function at random point.
Iteration No: 4 ended. Evaluation done at random point.
Time taken: 0.0000
Function value obtained: 0.5426
Current minimum: -0.2521
Iteration No: 5 started. Evaluating function at random point.
Iteration No: 5 ended. Evaluation done at random point.
Time taken: 0.0000
Function value obtained: -0.2395
Current minimum: -0.2521
Iteration No: 6 started. Evaluating function at random point.
Iteration No: 6 ended. Evaluation done at random point.
Time taken: 0.1875
Function value obtained: 0.0643
Current minimum: -0.2521
Iteration No: 7 started. Evaluating function at random point.
Iteration No: 7 ended. Evaluation done at random point.
Time taken: 0.2384
Function value obtained: -0.0501
Current minimum: -0.2521
Iteration No: 8 started. Evaluating function at random point.
Iteration No: 8 ended. Evaluation done at random point.
Time taken: 0.1346
Function value obtained: -0.0720
Current minimum: -0.2521
Iteration No: 9 started. Evaluating function at random point.
Iteration No: 9 ended. Evaluation done at random point.
Time taken: 0.1930
Function value obtained: -0.2140
Current minimum: -0.2521
Iteration No: 10 started. Evaluating function at random point.
Iteration No: 10 ended. Evaluation done at random point.
Time taken: 0.1695
Function value obtained: -0.1941
Current minimum: -0.2521
Iteration No: 11 started. Evaluating function at random point.
Iteration No: 11 ended. Evaluation done at random point.
Time taken: 0.1755
Function value obtained: -0.0377
Current minimum: -0.2521
Iteration No: 12 started. Evaluating function at random point.
Iteration No: 12 ended. Evaluation done at random point.
Time taken: 0.1795
Function value obtained: -0.1251
Current minimum: -0.2521
Iteration No: 13 started. Evaluating function at random point.
Iteration No: 13 ended. Evaluation done at random point.
Time taken: 0.2404
Function value obtained: -0.3051
Current minimum: -0.3051
Iteration No: 14 started. Evaluating function at random point.
Iteration No: 14 ended. Evaluation done at random point.
Time taken: 0.1601
Function value obtained: -0.2705
Current minimum: -0.3051
Iteration No: 15 started. Evaluating function at random point.
Iteration No: 15 ended. Evaluation done at random point.
Time taken: 0.1721
Function value obtained: -0.3876
Current minimum: -0.3876
Iteration No: 16 started. Evaluating function at random point.
Iteration No: 16 ended. Evaluation done at random point.
Time taken: 0.1516
Function value obtained: -0.3551
Current minimum: -0.3876
Iteration No: 17 started. Evaluating function at random point.
Iteration No: 17 ended. Evaluation done at random point.
Time taken: 0.1626
Function value obtained: -0.4590
Current minimum: -0.4590
Iteration No: 18 started. Evaluating function at random point.
Iteration No: 18 ended. Evaluation done at random point.
Time taken: 0.1629
Function value obtained: -0.4612
Current minimum: -0.4612
Iteration No: 19 started. Evaluating function at random point.
Iteration No: 19 ended. Evaluation done at random point.
Time taken: 0.2000
Function value obtained: -0.4717
Current minimum: -0.4717
Iteration No: 20 started. Evaluating function at random point.
Iteration No: 20 ended. Evaluation done at random point.
Time taken: 0.1920
Function value obtained: -0.4204
Current minimum: -0.4717
Iteration No: 21 started. Searching for the next optimal point.
Iteration No: 21 ended. Search finished for the next optimal point.
Time taken: 0.1990
Function value obtained: -0.2853
Current minimum: -0.4717
Iteration No: 22 started. Searching for the next optimal point.
Iteration No: 22 ended. Search finished for the next optimal point.
Time taken: 0.1810
Function value obtained: -0.5237
Current minimum: -0.5237
Iteration No: 23 started. Searching for the next optimal point.
Iteration No: 23 ended. Search finished for the next optimal point.
Time taken: 0.2155
Function value obtained: -0.2704
Current minimum: -0.5237
Iteration No: 24 started. Searching for the next optimal point.
Iteration No: 24 ended. Search finished for the next optimal point.
Time taken: 0.2304
Function value obtained: -0.4851
Current minimum: -0.5237
Iteration No: 25 started. Searching for the next optimal point.
Iteration No: 25 ended. Search finished for the next optimal point.
Time taken: 0.2334
Function value obtained: -0.5179
Current minimum: -0.5237
Iteration No: 26 started. Searching for the next optimal point.
Iteration No: 26 ended. Search finished for the next optimal point.
Time taken: 0.1855
Function value obtained: -0.3991
Current minimum: -0.5237
Iteration No: 27 started. Searching for the next optimal point.
Iteration No: 27 ended. Search finished for the next optimal point.
Time taken: 0.1820
Function value obtained: -0.3546
Current minimum: -0.5237
Iteration No: 28 started. Searching for the next optimal point.
Iteration No: 28 ended. Search finished for the next optimal point.
Time taken: 0.1925
Function value obtained: -0.4528
Current minimum: -0.5237
Iteration No: 29 started. Searching for the next optimal point.
Iteration No: 29 ended. Search finished for the next optimal point.
Time taken: 0.1942
Function value obtained: -0.4484
Current minimum: -0.5237
Iteration No: 30 started. Searching for the next optimal point.
Iteration No: 30 ended. Search finished for the next optimal point.
Time taken: 0.3196
Function value obtained: -0.3383
Current minimum: -0.5237
Iteration No: 31 ended. Search finished for the next optimal point.
Time taken: 0.5141
Function value obtained: -0.3841
Current minimum: -0.5237 | open | 2020-09-11T19:32:56Z | 2021-02-13T20:38:12Z | https://github.com/scikit-optimize/scikit-optimize/issues/949 | [] | brightsmall | 5 |
jmcnamara/XlsxWriter | pandas | 996 | lock some cell which i want to | ### Question
I use `add_format({"locked": True})` and `add_format({"locked": False})` to set the lock format for cells. After calling `worksheet.protect()`, I can update the `locked=False` cells, while the `locked=True` cells are locked, as expected.
My question is: why are the blank cells also protected, and how can I unlock all the blank cells (cells without any written value)?
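A minimal self-contained sketch of the setup described above (assuming the `xlsxwriter` package, writing to an in-memory buffer). Blank cells carry no explicit cell format, so they fall back to the sheet default, which is locked:

```python
import io
import xlsxwriter

buf = io.BytesIO()
workbook = xlsxwriter.Workbook(buf, {"in_memory": True})
worksheet = workbook.add_worksheet()

unlocked = workbook.add_format({"locked": False})
locked = workbook.add_format({"locked": True})  # same as the default state

worksheet.write("A1", "editable", unlocked)   # stays editable after protect()
worksheet.write("A2", "read-only", locked)    # locked once the sheet is protected
worksheet.protect()  # "locked" only takes effect once the sheet is protected

workbook.close()
print(len(buf.getvalue()) > 0)  # True -- a valid workbook was produced
```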
Could anybody give me a hand! | closed | 2023-06-26T06:49:35Z | 2023-06-26T09:28:04Z | https://github.com/jmcnamara/XlsxWriter/issues/996 | [
"question"
] | wy329 | 1 |
MilesCranmer/PySR | scikit-learn | 169 | [Feature] My wild idea | I have been thinking about this idea of mine; it might sound stupid:
I have a circuit (called randles circuit), whose impedance is defined by the function:
```
def randles(p, f):
    s = 1j * 2 * np.pi * f
    Rs = p[0]
    Cdl = p[1]
    Rct = p[2]
    Wct = p[3]
    Zct = Rct + Wct
    Ydl = s * Cdl + 1 / Zct
    Z = Rs + 1 / Ydl
    return Z
```
p represents the parameters, f the frequencies, and 1j the imaginary unit, so the output Z of the randles function is a complex vector in C^n, but it could also be written in R^2n by concatenating the real and the imaginary parts.
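The C^n to R^2n rewrite mentioned above can be sketched as follows (self-contained; the parameter values and frequency grid are hypothetical):

```python
import numpy as np

def randles_Z(p, f):
    s = 1j * 2 * np.pi * f
    Rs, Cdl, Rct, Wct = p
    Ydl = s * Cdl + 1 / (Rct + Wct)
    return Rs + 1 / Ydl

f = np.logspace(0, 4, 5)                    # hypothetical frequency grid [Hz]
Z = randles_Z([10.0, 1e-5, 100.0, 5.0], f)  # hypothetical parameter values
XY = np.column_stack([Z.real, Z.imag])      # C^n -> R^(n x 2)
print(XY.shape)  # (5, 2)
```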
Then I also have some experimental data Zexpt which is also complex and the frequencies f which is real. My loss function is the weighted nonlinear least squares where the weights can be the inverse of the squared absolute value of the impedance
Now I was wondering _if it is possible_ to use symbolic regression to approach this type of problem in such a way that I search the space of combinations of Rs, Cdl, Rct, Wct to fit any arbitrary impedance data or maybe obtain a set of polynomials in f that approximate the impedance. I tried to do the latter but the results were not encouraging.
Thanks
```
model = PySRRegressor(
    niterations=40,
    binary_operators=["plus", "mult", "-", "/"],
    unary_operators=["inv(x) = 1/x"],
    model_selection="accuracy",
    populations=300,
    variable_names=list(names),
    # loss="loss(x, y) = sum(1/abs(y)^2 * (x-y)^2)",
)
``` | open | 2022-07-29T10:46:02Z | 2023-04-20T06:05:49Z | https://github.com/MilesCranmer/PySR/issues/169 | [
"enhancement"
] | richinex | 0 |
piskvorky/gensim | data-science | 2,669 | word2vec doc-comment example of KeyedVectors usage broken | The usage example in the word2vec.py doc-comment regarding `KeyedVectors` uses inconsistent paths and thus doesn't work.
https://github.com/RaRe-Technologies/gensim/blob/e859c11f6f57bf3c883a718a9ab7067ac0c2d4cf/gensim/models/word2vec.py#L73
https://github.com/RaRe-Technologies/gensim/blob/e859c11f6f57bf3c883a718a9ab7067ac0c2d4cf/gensim/models/word2vec.py#L76
If vectors were saved to a tmpfile-path based on the filename `'wordvectors.kv'`, they need to be loaded from that same path, not from some other local-directory file named 'model.wv'.
(Also, in my opinion the use of `get_tmpfile()` adds unnecessary extra complexity to this example. People usually **don't** want their models in a "temp" directory, which some systems will occasionally delete, so the examples might as well do the simplest possible thing: store in the current working directory with simple string filenames. The example code above this is also confused, because it creates a temp-file path, but then doesn't actually use it, choosing to do the simple & right thing with a local file instead.) | open | 2019-11-05T17:17:55Z | 2020-04-17T12:28:33Z | https://github.com/piskvorky/gensim/issues/2669 | [
"documentation",
"difficulty easy",
"good first issue"
] | gojomo | 6 |
open-mmlab/mmdetection | pytorch | 11,235 | faster rcnn BCE loss during the RPN the dimension of pred is 1,not 2.Help help! | I want to modify the BCE loss of faster rcnn,Previously, when entering the BCE loss during the RPN phase, the dimension of pred was 2, with a prospect score and a background score.But now I don't know why, there is only one dimension left in Pred. Request to resolve, thank you very much.
<img width="855" alt="7e73c2e3b7dc71eb77c28b58d7456e1" src="https://github.com/open-mmlab/mmdetection/assets/59356865/d92f7e44-481c-4273-9a21-0a57f9108b33">
<img width="1040" alt="e93fc01288c62601eb37015d663b396" src="https://github.com/open-mmlab/mmdetection/assets/59356865/cd05a752-339e-4280-b539-8d93aa65bbff">
| open | 2023-12-01T03:15:55Z | 2023-12-01T03:16:19Z | https://github.com/open-mmlab/mmdetection/issues/11235 | [] | lllsgq | 0 |
darrenburns/posting | rest-api | 30 | Soften up python dependancies...? | Good afternoon!
I'm looking into packaging Posting for Fedora, but I'm hitting a wall regarding the package requiring very specific package versions. For example:
click-default-group ( == 1.2.4)
pydantic (==2.7.3)
textual (==0.72)
textual[syntax] (==0.72)
xdg-base-dirs (==6.0.1)
Would it be possible to relax those requirements a bit? I guess I could alter the .toml to soften the requirements a bit but I wanted to see beforehand if you'd be willing to look at it.
Thank you! | closed | 2024-07-11T20:04:06Z | 2024-07-12T07:59:59Z | https://github.com/darrenburns/posting/issues/30 | [] | farchord | 1 |
zappa/Zappa | flask | 936 | [Migrated] Document acceptance of either .yml or .yaml | Originally from: https://github.com/Miserlou/Zappa/issues/2204 by [vshih](https://github.com/vshih)
<!--
Before you submit this PR, please make sure that you meet these criteria:
* Did you read the [contributing guide](https://github.com/Miserlou/Zappa/#contributing)?
* If this is a non-trivial commit, did you **open a ticket** for discussion?
* Did you **put the URL for that ticket in a comment** in the code?
* If you made a new function, did you **write a good docstring** for it?
* Did you avoid putting "_" in front of your new function for no reason?
* Did you write a test for your new code?
* Did the Travis build pass?
* Did you improve (or at least not significantly reduce) the amount of code test coverage?
* Did you **make sure this code actually works on Lambda**, as well as locally?
* Did you test this code with all of **Python 3.6**, **Python 3.7** and **Python 3.8** ?
* Does this commit ONLY relate to the issue at hand, without your linter shitting all over the code?
If so, awesome! If not, please try to fix those issues before submitting your Pull Request.
Thank you for your contribution!
-->
## Description
<!-- Please describe the changes included in this PR -->
## GitHub Issues
<!-- Proposed changes should be discussed in an issue before submitting a PR. -->
<!-- Link to relevant tickets here. -->
| closed | 2021-02-20T13:24:45Z | 2022-07-16T05:00:19Z | https://github.com/zappa/Zappa/issues/936 | [] | jneves | 1 |
dask/dask | pandas | 11,126 | add a api load dataset from [huggingface datasets] | - https://docs.dask.org/en/latest/bag-api.html | closed | 2024-05-17T11:42:33Z | 2024-05-21T01:42:13Z | https://github.com/dask/dask/issues/11126 | [
"needs info"
] | simplew2011 | 4 |
sqlalchemy/alembic | sqlalchemy | 586 | Alembic upgrade/downgrade one engine | Hello.
I need to perform alembic upgrade/downgrade only on one specific db_engine.
I have an Alembic environment that was initialized from the multidb template. At the moment I have 4 db_engines listed in alembic.ini.
I want to be able to upgrade/downgrade only a specific db_engine. For example:
`alembic upgrade *db_engine1* head`
That means that I'll run migration only on db_engine1, not on db_engine2. | closed | 2019-07-10T16:16:49Z | 2019-07-17T22:35:50Z | https://github.com/sqlalchemy/alembic/issues/586 | [
"question"
] | anthony0bondar | 2 |
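One common workaround with the multidb template is to pass a custom argument such as `alembic -x engines=db_engine1 upgrade head` and filter the configured engines inside `env.py`. The `-x` flag and `context.get_x_argument(as_dictionary=True)` are real Alembic features; the engine names and the exact wiring into the multidb loop below are illustrative. The selection logic itself is plain Python:

```python
from typing import Any, Dict

def select_engines(configured: Dict[str, Any], x_args: Dict[str, str]) -> Dict[str, Any]:
    """Return only the engines named in `-x engines=a,b`; all of them otherwise.

    In a multidb env.py you would obtain x_args via
    context.get_x_argument(as_dictionary=True) and iterate over the
    mapping returned here instead of over every configured engine.
    """
    wanted = x_args.get("engines")
    if not wanted:
        return dict(configured)
    names = {name.strip() for name in wanted.split(",")}
    unknown = names - configured.keys()
    if unknown:
        raise ValueError(f"unknown engines: {sorted(unknown)}")
    return {name: cfg for name, cfg in configured.items() if name in names}

engines = {"db_engine1": object(), "db_engine2": object()}
only_one = select_engines(engines, {"engines": "db_engine1"})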
pydantic/pydantic-ai | pydantic | 944 | Feature Request: provide an easy way to include your (versioned) API docs in LLM contexts | Claude Sonnet and some other LLMs don't seem to have the pydantic-ai API in their weights yet. They will always lag recent API/SDK releases, so providing a way to manually include the docs in an LLM context might be helpful, either programmatically or via interactive tools like NotebookLM.
Perhaps the build could aggregate the API docs into a unicode file, using a standard format and naming convention, for inclusion as a release asset. | open | 2025-02-19T12:16:23Z | 2025-03-01T18:31:52Z | https://github.com/pydantic/pydantic-ai/issues/944 | [
"documentation"
] | jb747 | 5 |
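A minimal sketch of the aggregation step the request describes: concatenating a versioned docs tree into a single text file suitable for pasting into an LLM context. The paths and the `## file:` header format are assumptions for illustration, not part of any pydantic-ai build:

```python
from pathlib import Path

def bundle_docs(docs_dir: Path, out_file: Path, version: str) -> int:
    """Concatenate every Markdown file under docs_dir into one file.

    Each file is prefixed with a '## file: <relative path>' header so an
    LLM can tell the sources apart. Returns the number of files bundled.
    """
    parts = [f"# API docs bundle (version {version})\n"]
    files = sorted(docs_dir.rglob("*.md"))
    for path in files:
        rel = path.relative_to(docs_dir)
        parts.append(f"\n## file: {rel}\n\n{path.read_text(encoding='utf-8')}\n")
    out_file.write_text("".join(parts), encoding="utf-8")
    return len(files)
```

Shipping the resulting file as a release asset would let users drop one artifact into NotebookLM or a prompt instead of scraping the docs site.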
sinaptik-ai/pandas-ai | data-science | 691 | Empty Graph or Chart | ### 🚀 The feature
Sometimes, while generating a graph or chart, the filtered dataframe is empty, which renders an empty chart.
Before returning the data points for the x-axis and y-axis, we can check whether the dataframe is empty.
If it is empty, a text message like 'no such value in data present' can be rendered instead.

### Motivation, pitch
Improves user experience. No one wants to see an empty graph.
### Alternatives
_No response_
### Additional context
_No response_ | closed | 2023-10-26T14:13:32Z | 2024-06-01T00:20:41Z | https://github.com/sinaptik-ai/pandas-ai/issues/691 | [] | shwetabhattad-TU | 4 |
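The check described in that request is tiny. Here is a library-agnostic sketch; the real change would live in PandasAI's generated plotting code, where the pandas spelling of the same test is simply `if df.empty:` before calling `plot()`, and the message text is the one proposed above:

```python
from typing import Sequence

NO_DATA_MESSAGE = "no such value in data present"

def chart_or_message(x: Sequence, y: Sequence) -> str:
    """Return a chart description when there are points to draw,
    otherwise the user-facing fallback message from the request."""
    if len(x) == 0 or len(y) == 0:
        return NO_DATA_MESSAGE
    return f"chart with {len(x)} points"
```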
sktime/sktime | scikit-learn | 7,373 | [ENH] Support Individual Sequence Training in GaussianHMM | **Is your feature request related to a problem? Please describe.**
GaussianHMM in sktime currently doesn't support training on multiple sequences individually (panel inputs). While this functionality exists in hmmlearn, users of sktime cannot train their HMM models one sequence at a time.
**Describe the solution you'd like**
Add support for training GaussianHMM on multiple sequences individually, similar to hmmlearn's implementation. This would allow users to train the model one sequence at a time instead of requiring training on the entire time sequence at once. The fit method should accept parameters `X` and `lengths`, allowing usage like `model.fit(X, lengths)`.
**Describe alternatives you've considered**
Currently, the only alternative is to train on the entire time sequence at once, which may not be suitable for all use cases.
**Additional context**
This feature would align sktime's GaussianHMM implementation more closely with hmmlearn's capabilities and provide more flexibility in how users can train their models.
Franz commented that it would be interesting to see whether `skchange` has support for panels (multiple time series).
"feature request",
"interfacing algorithms",
"module:detection",
"enhancement"
] | Tony911029 | 1 |
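For reference, hmmlearn's convention (which the request asks sktime to mirror) is a single concatenated sample array plus a `lengths` vector, e.g. `GaussianHMM(n_components=2).fit(X, lengths)`. The helper below shows just the sequence bookkeeping that contract implies, with no hmmlearn dependency:

```python
from typing import List, Sequence

def split_sequences(X: Sequence, lengths: Sequence[int]) -> List[Sequence]:
    """Split a concatenated sample array back into individual sequences.

    Mirrors hmmlearn's contract: X stacks all sequences along axis 0 and
    lengths gives the number of samples in each; sum(lengths) must equal
    len(X).
    """
    if sum(lengths) != len(X):
        raise ValueError("lengths must sum to the number of samples in X")
    out, start = [], 0
    for n in lengths:
        out.append(X[start:start + n])
        start += n
    return out

# Two sequences of lengths 3 and 2, each sample a 1-feature row.
X = [[0.1], [0.2], [0.3], [1.0], [1.1]]
seqs = split_sequences(X, [3, 2])
```

A panel-aware sktime `fit` could accept the same `(X, lengths)` pair and run the per-sequence E-step over each element of `seqs`.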
globaleaks/globaleaks-whistleblowing-software | sqlalchemy | 4,348 | In checkbox questionnaires, rearrangement of selections cannot be saved. | ### What version of GlobaLeaks are you using?
5.0.32
### What browser(s) are you seeing the problem on?
Chrome
### What operating system(s) are you seeing the problem on?
Windows
### Describe the issue
When trying to rearrange the order of selections in a checkbox question, saving does not persist the new order.
### Proposed solution
_No response_ | closed | 2024-12-06T06:58:42Z | 2024-12-06T16:10:40Z | https://github.com/globaleaks/globaleaks-whistleblowing-software/issues/4348 | [
"T: Bug",
"C: Client"
] | elbill | 1 |
collerek/ormar | pydantic | 548 | Json filter support and get multi objects. | For example:
```
# books: Dict[str, str] = ormar.Json()

# proposed: filter on a JSON field
filter(json_extract(Publisher.books, "$.name") == "Hello")

# proposed: fetch two related models with one query
pbs, books = await MultiObjects(Publisher, Book).filter(Publisher.id == Book.publisher_id)
# or
pbs, books = await MultiObjects(Publisher, Book).all()
``` | open | 2022-01-27T03:29:23Z | 2022-01-27T03:29:23Z | https://github.com/collerek/ormar/issues/548 | [
"enhancement"
] | ponytailer | 0 |
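Until ormar grows a JSON filter, the SQL the first snippet gestures at already works in SQLite's JSON support (built into recent Python `sqlite3` builds). This stand-alone sketch, with made-up table and column names, shows the `json_extract` predicate such a feature would compile down to:

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE publishers (id INTEGER PRIMARY KEY, books TEXT)")
conn.executemany(
    "INSERT INTO publishers (books) VALUES (?)",
    [(json.dumps({"name": "Hello"}),), (json.dumps({"name": "Other"}),)],
)
# The predicate a Json-aware ormar filter would need to emit:
rows = conn.execute(
    "SELECT id FROM publishers WHERE json_extract(books, '$.name') = ?",
    ("Hello",),
).fetchall()
```

Today the same effect is reachable in ormar only via a raw-SQL escape hatch; a first-class filter would generate this predicate from the model field.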
freqtrade/freqtrade | python | 10,600 | freqtrade UI "Visualize result" shows "Dataprovider was not initialized with a pairlist provider." | When I run a backtest for a strategy that calls `self.dp.current_whitelist()` in `populate_indicators`,
the freqtrade UI cannot show the "Visualize result" view correctly.
The toast message is:
"Bot: freqtrade
Dataprovider was not initialized with a pairlist provider."
It works when `self.dp.current_whitelist()` is not called in the strategy's `populate_indicators` function. | closed | 2024-08-31T14:40:59Z | 2025-03-23T11:58:57Z | https://github.com/freqtrade/freqtrade/issues/10600 | [
"Question"
] | SeverusHuang-HLF | 6 |
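The report implies the dataprovider behaves differently depending on how the strategy is run. As a defensive workaround (hedged: `current_whitelist()` is a real `DataProvider` method, but the fallback list and the stub classes below are illustrative), a strategy can wrap the call and fall back to a static pair list when the provider raises:

```python
def whitelist_or_fallback(dp, fallback):
    """Return dp.current_whitelist() when the dataprovider can supply it,
    otherwise the given static fallback (e.g. the strategy's configured
    pairs). Swallows the 'not initialized with a pairlist provider'
    error that the UI otherwise surfaces as a toast."""
    try:
        return dp.current_whitelist()
    except Exception:
        return list(fallback)

# Stubs standing in for the two dataprovider states seen in the report.
class _BrokenDP:
    def current_whitelist(self):
        raise RuntimeError("Dataprovider was not initialized with a pairlist provider.")

class _GoodDP:
    def current_whitelist(self):
        return ["BTC/USDT", "ETH/USDT"]

pairs_ok = whitelist_or_fallback(_GoodDP(), ["BTC/USDT"])
pairs_fallback = whitelist_or_fallback(_BrokenDP(), ["BTC/USDT"])
```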