repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
jina-ai/clip-as-service | pytorch | 693 | Can we use it in tensorflow 1.15 or 1.13? | closed | 2022-04-24T07:04:57Z | 2022-04-25T05:29:33Z | https://github.com/jina-ai/clip-as-service/issues/693 | [] | username123062 | 8 | |
holoviz/panel | jupyter | 7,237 | `ModuleNotFoundError` in Panel Pyodide Site under `1.5.0rc1` | #### ALL software version info
```
panel==1.5.0rc1
```
#### Description of expected behavior and the observed behavior
I was trying to see if the `1.5.0rc1` release of @philippjfr really did solve my previous issue with `panel convert`:
- https://github.com/holoviz/panel/issues/7231#event-14172571936
Unfortunately, the MWE of this issue is now stuck at `ModuleNotFoundError: No module named 'bw2io'`... but only in the "Executing Code" stage - _after_ successfully installing the `bw2io` package.
#### Complete, minimal, self-contained example code that reproduces the issue
See MWE in https://github.com/holoviz/panel/issues/7231#event-14172571936
#### Stack traceback and/or browser JavaScript console output
```
pyodide.asm.js:10 Uncaught (in promise) PythonError: Traceback (most recent call last):
  File "/lib/python312.zip/_pyodide/_base.py", line 596, in eval_code_async
    await CodeRunner(
  File "/lib/python312.zip/_pyodide/_base.py", line 412, in run_async
    await coroutine
  File "<exec>", line 13, in <module>
ModuleNotFoundError: No module named 'bw2io'
    at new_error (pyodide.asm.js:10:9965)
    at pyodide.asm.wasm:0x16dc69
    at pyodide.asm.wasm:0x177492
    at _PyEM_TrampolineCall_JS (pyodide.asm.js:10:124082)
    at pyodide.asm.wasm:0x1c2c4c
    at pyodide.asm.wasm:0x2c79ac
    at pyodide.asm.wasm:0x20a621
    at pyodide.asm.wasm:0x1c3339
    at pyodide.asm.wasm:0x1c3648
    at pyodide.asm.wasm:0x1c36c6
    at pyodide.asm.wasm:0x29e6fa
    at pyodide.asm.wasm:0x2a4cf1
    at pyodide.asm.wasm:0x1c3806
    at pyodide.asm.wasm:0x1c346f
    at pyodide.asm.wasm:0x176af6
    at callPyObjectKwargs (pyodide.asm.js:10:62217)
    at Module.callPyObjectMaybePromising (pyodide.asm.js:10:63465)
    at wrapper (pyodide.asm.js:10:27093)
    at Kn.e.port1.onmessage (pyodide.asm.js:10:100342)
```
#### Screenshots or screencasts of the bug in action
<img width="1728" alt="Screenshot 2024-09-08 at 06 59 22" src="https://github.com/user-attachments/assets/c9027067-2bab-4094-98b5-705f7a5a6d3e">
- [x] I may be interested in making a pull request to address this
| closed | 2024-09-08T05:14:17Z | 2024-09-08T05:34:53Z | https://github.com/holoviz/panel/issues/7237 | [] | michaelweinold | 1 |
OpenInterpreter/open-interpreter | python | 1,552 | the `DISPLAY` environment variable can be undefined | ### Describe the bug
The code assumes it is always defined, but often it is not (e.g., when logging in via `ssh`).
### Reproduce
```
(base) pasquale@host:~$ interpreter
Open Interpreter 1.0.0
Copyright (C) 2024 Open Interpreter Team
Licensed under GNU AGPL v3.0
A modern command-line assistant.
Usage: i [prompt]
or: interpreter [options]
Documentation: docs.openinterpreter.com
Run 'interpreter --help' for all options
> [sample instruction]
Traceback (most recent call last):
  File "/home/../miniconda3/lib/python3.12/site-packages/interpreter/tools/computer.py", line 14, in <module>
    import pyautogui
  File "/home/../miniconda3/lib/python3.12/site-packages/pyautogui/__init__.py", line 246, in <module>
    import mouseinfo
  File "/home/../miniconda3/lib/python3.12/site-packages/mouseinfo/__init__.py", line 223, in <module>
    _display = Display(os.environ['DISPLAY'])
                       ~~~~~~~~~~^^^^^^^^^^^
  File "<frozen os>", line 714, in __getitem__
KeyError: 'DISPLAY'
Failed to import pyautogui. Computer tool will not work.
```
### Expected behavior
no `KeyError: 'DISPLAY'`
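A defensive lookup avoids the crash. A minimal sketch of the pattern (illustrative only, with a hypothetical `x_display` helper; not the project's actual fix):

```python
import os

def x_display(environ=os.environ):
    """Return the X display name, or None when DISPLAY is unset (e.g. ssh sessions)."""
    return environ.get("DISPLAY")

# GUI-only imports such as pyautogui could then be guarded instead of crashing:
if x_display() is None:
    print("DISPLAY not set; disabling the computer tool instead of raising KeyError")
```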
### Screenshots
_No response_
### Open Interpreter version
latest @development
### Python version
3.12.7
### Operating System name and version
GNU/Linux aarch64
### Additional context
_No response_ | open | 2024-12-08T23:44:34Z | 2024-12-08T23:48:54Z | https://github.com/OpenInterpreter/open-interpreter/issues/1552 | [] | pminervini | 0 |
thtrieu/darkflow | tensorflow | 438 | Images in subdirectory ./train/image | In my dataset, the images live in subdirectories (for instance 00, 01, 02) instead of directly in the images folder, and the same is true of the ./train/annotations folder. I can't trace where in the code to make the changes. Will I have to write a batch script to flatten them? Kindly help. Thank you. | open | 2017-11-22T06:54:33Z | 2017-11-22T06:55:33Z | https://github.com/thtrieu/darkflow/issues/438 | [] | nerdykamil | 0 |
keras-team/autokeras | tensorflow | 1,364 | How to read the best model architecture? | Where can I find the architecture of the best model saved and how can I visualize it? | closed | 2020-10-05T12:29:28Z | 2022-01-31T13:32:59Z | https://github.com/keras-team/autokeras/issues/1364 | [
"wontfix"
] | AntonioDomenech | 10 |
agronholm/anyio | asyncio | 237 | failure in start after `task_status.started` delivers exception differently between trio and asyncio | ```python
import anyio


async def main():
    def task_fn(*, task_status):
        task_status.started("hello")

    result = None
    try:
        async with anyio.create_task_group() as tg:
            result = await tg.start(task_fn)
    except TypeError:
        pass

    assert result is None


anyio.run(main, backend="trio") # passes
anyio.run(main, backend="asyncio") # fails
``` | closed | 2021-03-11T23:35:54Z | 2021-03-13T21:39:55Z | https://github.com/agronholm/anyio/issues/237 | [] | graingert | 0 |
kymatio/kymatio | numpy | 362 | Include examples in test suite | We've had these break several times, but we don't know about it since these are not tested as part of the CI. They should be tested in some way to make sure everything runs, at least prior to a release.
The biggest issue here is the 3D example, which takes several hours to finish, even on a GPU. Perhaps there is some way of simplifying it? | closed | 2019-03-02T19:49:27Z | 2019-07-23T14:17:22Z | https://github.com/kymatio/kymatio/issues/362 | [
"tests"
] | janden | 4 |
apify/crawlee-python | automation | 403 | Evaluate the efficiency of opening new Playwright tabs versus windows | Try to experiment with [PlaywrightBrowserController](https://github.com/apify/crawlee-python/blob/master/src/crawlee/browsers/playwright_browser_controller.py) to determine whether opening new Playwright pages in tabs offers better performance compared to opening them in separate windows (current state). | open | 2024-08-06T07:47:10Z | 2024-08-06T08:31:05Z | https://github.com/apify/crawlee-python/issues/403 | [
"t-tooling",
"solutioning"
] | vdusek | 1 |
iperov/DeepFaceLab | deep-learning | 581 | Can't override model trainer settings in Ubuntu | On Ubuntu, when pressing Enter at the option to override settings before training, the program skips user input and begins training with the first-run settings instead. The Windows release seems to work fine, though. | closed | 2020-01-26T15:26:04Z | 2020-01-30T14:00:02Z | https://github.com/iperov/DeepFaceLab/issues/581 | [] | youmebangbang | 5 |
pytest-dev/pytest-html | pytest | 850 | In the Linux environment, the test report generated using pytest-html 4.1.1 does not contain any test case data. | With pytest-html 4.1.1, the same test cases produce pass/fail status and logs in the Windows environment, but the generated report contains no test case data in the Linux environment.


| open | 2024-11-28T06:32:59Z | 2024-11-28T08:15:47Z | https://github.com/pytest-dev/pytest-html/issues/850 | [
"needs more info"
] | chengxiang1997 | 1 |
developmentseed/lonboard | data-visualization | 645 | Test with latest deck.gl version | @vgeorge said that the current main branch isn't working. I should test it while ensuring the latest dependencies are installed. | closed | 2024-09-24T13:40:40Z | 2024-09-24T15:12:42Z | https://github.com/developmentseed/lonboard/issues/645 | [] | kylebarron | 0 |
wkentaro/labelme | computer-vision | 845 | Importing VOC | How do I import annotations in Pascal VOC Format? | closed | 2021-03-14T22:33:55Z | 2021-11-20T23:00:10Z | https://github.com/wkentaro/labelme/issues/845 | [] | Zumbalamambo | 1 |
encode/apistar | api | 266 | Default example not working | ```
apistar new .
./app.py
./tests.py
```
```
apistar run
* Running on http://127.0.0.1:8080/ (Press CTRL+C to quit)
* Restarting with stat
* Debugger is active!
* Debugger PIN: 170-045-591
127.0.0.1 - - [30/Aug/2017 19:12:00] "GET / HTTP/1.1" 500 -
Traceback (most recent call last):
  File "/usr/local/lib/python3.5/dist-packages/apistar/frameworks/wsgi.py", line 115, in __call__
    response = self.http_injector.run(self.exception_handler, state=state)
  File "/usr/local/lib/python3.5/dist-packages/apistar/components/dependency.py", line 110, in run
    ret = step.func(**kwargs)
  File "/usr/local/lib/python3.5/dist-packages/apistar/frameworks/wsgi.py", line 112, in __call__
    response = self.http_injector.run(handler, state=state)
  File "/usr/local/lib/python3.5/dist-packages/apistar/components/dependency.py", line 110, in run
    ret = step.func(**kwargs)
  File "/usr/local/lib/python3.5/dist-packages/apistar/components/dependency.py", line 386, in empty
    return query_params.get(name)
AttributeError: 'QueryParams' object has no attribute 'get'
```
run tests
```
apistar test
============================================ test session starts =============================================
platform linux -- Python 3.5.2, pytest-3.2.1, py-1.4.34, pluggy-0.4.0
rootdir: /tmp/prueb/asdasd, inifile:
collected 2 items
tests.py .F
================================================== FAILURES ==================================================
_____________________________________________ test_http_request ______________________________________________
    def test_http_request():
        """
        Testing a view, using the test client.
        """
        client = TestClient(app)
>       response = client.get('http://localhost/')

tests.py:18:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/lib/python3/dist-packages/requests/sessions.py:480: in get
    return self.request('GET', url, **kwargs)
/usr/local/lib/python3.5/dist-packages/apistar/test.py:249: in request
    return super().request(method, url, **kwargs)
/usr/lib/python3/dist-packages/requests/sessions.py:468: in request
    resp = self.send(prep, **send_kwargs)
/usr/lib/python3/dist-packages/requests/sessions.py:576: in send
    r = adapter.send(request, **kwargs)
/usr/local/lib/python3.5/dist-packages/apistar/test.py:113: in send
    wsgi_response = self.app(environ, start_response)
/usr/local/lib/python3.5/dist-packages/apistar/frameworks/wsgi.py:115: in __call__
    response = self.http_injector.run(self.exception_handler, state=state)
/usr/local/lib/python3.5/dist-packages/apistar/components/dependency.py:110: in run
    ret = step.func(**kwargs)
/usr/local/lib/python3.5/dist-packages/apistar/frameworks/wsgi.py:112: in __call__
    response = self.http_injector.run(handler, state=state)
/usr/local/lib/python3.5/dist-packages/apistar/components/dependency.py:110: in run
    ret = step.func(**kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <apistar.components.dependency.HTTPResolver object at 0x7ffb20512780>, name = 'name', kwargs = {}
query_params = QueryParams([])

    def empty(self,
              name: ParamName,
              kwargs: KeywordArgs,
              query_params: http.QueryParams) -> str:
        """
        Handles unannotated parameters for HTTP requests.

        These types use either a matched URL keyword argument, or else
        a query parameter.

        Args:
            name: The name of the parameter.
            kwargs: The URL keyword arguments, as returned by the router.
            query_params: The query parameters of the incoming HTTP request.

        Returns:
            The value that should be used for the handler function.
        """
        if name in kwargs:
            return kwargs[name]
>       return query_params.get(name)
E       AttributeError: 'QueryParams' object has no attribute 'get'

/usr/local/lib/python3.5/dist-packages/apistar/components/dependency.py:386: AttributeError
```
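For context, the resolver in the failure above is doing a dict-style fallback lookup. Its intended behavior can be sketched with a plain stdlib dict standing in for `QueryParams` (illustrative only; not apistar code):

```python
def resolve_param(name, url_kwargs, query_params):
    # Matched URL keyword arguments take priority; otherwise fall back
    # to a query parameter (None when absent, mirroring dict.get).
    if name in url_kwargs:
        return url_kwargs[name]
    return query_params.get(name)

print(resolve_param("name", {}, {}))  # None: the '/' request carries no 'name'
```

The `AttributeError` indicates that the installed `QueryParams` class does not expose a `get` method the way a dict does.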
app.py
```python
from apistar import Include, Route
from apistar.frameworks.wsgi import WSGIApp as App
from apistar.handlers import docs_urls, static_urls
def welcome(name=None):
    if name is None:
        return {'message': 'Welcome to API Star!'}
    return {'message': 'Welcome to API Star, %s!' % name}


routes = [
    Route('/', 'GET', welcome),
    Include('/docs', docs_urls),
    Include('/static', static_urls)
]

app = App(routes=routes)

if __name__ == '__main__':
    app.main()
``` | closed | 2017-08-30T17:13:42Z | 2018-04-11T12:44:57Z | https://github.com/encode/apistar/issues/266 | [] | agalera | 7 |
axnsan12/drf-yasg | rest-api | 370 | Why do types inside an array of objects all show `null`? | I am using `@swagger_auto_schema` for generating the documentation of my project, I need to show that the response gives a list of dicts, which I do the following:
```
@swagger_auto_schema(
tags=["<tag>"],
operation_id="<id>",
operation_description="<description>.",
responses={200: openapi.Response(
description="<description>",
schema=openapi.Schema(
type=openapi.TYPE_OBJECT,
properties={"objects": openapi.Schema(
type=openapi.TYPE_ARRAY,
items=openapi.Items(
type=openapi.TYPE_OBJECT,
properties={
"id": openapi.TYPE_STRING,
"prop1": openapi.TYPE_STRING,
"prop2": openapi.TYPE_INTEGER
}
)
)}
)
)}
)
```
I load it up in swagger or redoc, and it shows that it is an array of dict elements, with the correct keys of the dict, but all the types show `null`.
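If (and this is my guess, not confirmed by the drf-yasg docs quoted here) `properties` expects each value to be a schema object rather than a bare type constant like `openapi.TYPE_STRING`, then the OpenAPI output being aimed for would give every property an explicit type. A hand-written sketch of that target fragment as a plain Python dict (illustrative; not generated by drf-yasg):

```python
objects_schema = {
    "type": "array",
    "items": {
        "type": "object",
        "properties": {
            # Each property carries its own explicit type, which is
            # what swagger/redoc render in place of `null`.
            "id": {"type": "string"},
            "prop1": {"type": "string"},
            "prop2": {"type": "integer"},
        },
    },
}

print(objects_schema["items"]["properties"]["prop2"]["type"])  # integer
```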
I am using custom django implementation for making it all async, so for documentation I didn't use the whole drf_yasg pacakge as I have no proper urlpatterns, viewsets, etc..., so I just wrap all of my functions with `@swagger_auto_schema`, and then generate the swagger object myself with:
```
schemas = []
for route in http_routes:
    pattern = route.pattern
    consumer = route.callback
    method_list = [getattr(consumer, func) for func in dir(consumer) if callable(getattr(consumer, func))]
    for method in method_list:
        try:
            if method._swagger_auto_schema:
                schemas.append({
                    "pattern": pattern,
                    "schema": method._swagger_auto_schema
                })
        except Exception:
            continue

paths = {}
for schema in schemas:
    pattern = str(schema["pattern"])
    schema = schema["schema"]
    method = schema["operation_description"].split(":")[0].lower()
    print("schema.responses: ", schema["responses"])
    responses = schema["responses"]  # Already bundled as Response:Schemas
    operation = openapi.Operation(
        operation_id=schema["operation_id"],
        responses=openapi.Responses(responses),  # Responses takes a dict
        parameters=[],
        consumes=["application/json"],
        produces=["application/json"],
        summary="",
        description=schema["operation_description"].split(":")[-1],
        tags=schema["tags"],
        security=None
    )
    paths[f"{pattern} [{method}]"] = openapi.PathItem(**{method: operation})

swagger = openapi.Swagger(
    info=openapi.Info(
        title="",
        description="",
        terms_of_service="",
        contact=openapi.Contact(email=""),
        licence=openapi.License(name=""),
        default_version="v.0.1"),
    _prefix="/",
    consumes=["application/json"],
    produces=["application/json"],
    paths=openapi.Paths(paths)
)
```
Once I understand more, I can rewrite the decorator for my own needs, such as providing methods, since I don't use any viewsets, etc.
I saw that https://drf-swagger.readthedocs.io/en/1.2.1/custom_spec.html has the line:
> if the view is a list view (as defined by is_list_view()), the response schema is wrapped in an array
but I cannot find the specific part of this project that actually does the wrapping. That seems like a possible way to do it, and then I wouldn't need to use `openapi.TYPE_ARRAY`.
I am also aware of being able to do this with a serializer set to `many=True`, but the databases I am using don't use models or serializers. I could generate a serializer without a class, but then, with the way I generate the swagger object, I would need a way to directly convert a serializer into an openapi.Schema object, and I can't find out how to do that either.
So if there is anyone that can suggest a way to solve this problem, please let me know.
| closed | 2019-05-21T07:58:42Z | 2019-05-22T01:32:15Z | https://github.com/axnsan12/drf-yasg/issues/370 | [] | ghost | 1 |
ijl/orjson | numpy | 278 | Wheels for musllinux_1_1_armv7l ? | Is it possible to build and publish wheels for musl armv7?
It is a quite popular architecture for OpenWrt builds and compiling it on tiny routers is almost impossible. | closed | 2022-07-07T06:53:18Z | 2022-07-29T22:47:24Z | https://github.com/ijl/orjson/issues/278 | [] | devbis | 8 |
quokkaproject/quokka | flask | 318 | google site tools unearthed some errors for me. | Good day ladies and gentlemen. While browsing Google site tools, it showed me several errors under the hood that I never noticed. First, when trying to access http://www.sid.dontexist.dynu.com/articles.xml I get a 502 bad gateway, which just means there was an error with it. Here is what the log has to say immediately after accessing (or trying to access) it:
```
--------------------------------------------------------------------------------
INFO in before_request [./quokka/ext/before_request.py:9]:
## Called only once, when the first request comes in

Traceback (most recent call last):
  File "/home/quokka/quokkaenv/local/lib/python2.7/site-packages/werkzeug/wsgi.py", line 650, in __call__
    return app(environ, start_response)
  File "/home/quokka/quokkaenv/local/lib/python2.7/site-packages/flask/app.py", line 1836, in __call__
    return self.wsgi_app(environ, start_response)
  File "/home/quokka/quokkaenv/local/lib/python2.7/site-packages/flask/app.py", line 1820, in wsgi_app
    response = self.make_response(self.handle_exception(e))
  File "/home/quokka/quokkaenv/local/lib/python2.7/site-packages/flask/app.py", line 1403, in handle_exception
    reraise(exc_type, exc_value, tb)
  File "/home/quokka/quokkaenv/local/lib/python2.7/site-packages/flask/app.py", line 1817, in wsgi_app
    response = self.full_dispatch_request()
  File "/home/quokka/quokkaenv/local/lib/python2.7/site-packages/flask/app.py", line 1477, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/home/quokka/quokkaenv/local/lib/python2.7/site-packages/flask/app.py", line 1381, in handle_user_exception
    reraise(exc_type, exc_value, tb)
  File "/home/quokka/quokkaenv/local/lib/python2.7/site-packages/flask/app.py", line 1475, in full_dispatch_request
    rv = self.dispatch_request()
  File "/home/quokka/quokkaenv/local/lib/python2.7/site-packages/flask/app.py", line 1461, in dispatch_request
    return self.view_functions[rule.endpoint](**req.view_args)
  File "/home/quokka/quokkaenv/local/lib/python2.7/site-packages/flask/views.py", line 84, in view
    return self.dispatch_request(*args, **kwargs)
  File "/home/quokka/quokkaenv/local/lib/python2.7/site-packages/flask/views.py", line 149, in dispatch_request
    return meth(*args, **kwargs)
  File "./quokka/core/views.py", line 529, in get
    return self.make_rss(feed_name, contents)
  File "./quokka/core/views.py", line 433, in make_rss
    content.title + content.get_absolute_url()
UnicodeEncodeError: 'ascii' codec can't encode character u'\u2019' in position 17: ordinal not in range(128)
[pid: 2644|app: 0|req: 1/1] 192.168.1.2 () {46 vars in 1160 bytes} [Sat Nov 14 13:37:20 2015] GET /articles.xml => generated 0 bytes in 365 msecs (HTTP/1.1 500) 0 headers in 0 bytes (0 switches on core 0)
root@sid:/var/log/uwsgi/app#
```
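For context, the `UnicodeEncodeError` above is the classic Python 2 implicit-ASCII problem: the feed title contains U+2019 (a right single quotation mark), which the default `ascii` codec cannot represent. A minimal stdlib sketch of the failure and the usual remedy, explicit UTF-8 (illustrative only; not Quokka's actual fix):

```python
title = "What\u2019s new"  # contains U+2019, the character named in the log

# The 'ascii' codec rejects any non-ASCII character:
try:
    title.encode("ascii")
    raise AssertionError("expected a UnicodeEncodeError")
except UnicodeEncodeError as exc:
    print(exc.encoding, exc.start)  # ascii 4

# Encoding explicitly as UTF-8 (what an RSS feed should emit) succeeds:
data = title.encode("utf-8")
assert data.decode("utf-8") == title
```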
Lastly, I am getting a lot of 404 errors if I move categories around. It's as if Google indexes me at a certain point, and then when it goes back to find content that has since moved, it just gets a 404 rather than finding the new location. Is this normal? I'm thinking it is, and that once I stop moving categories around (I like my stuff organized) it will re-index one day and then be fine. Thanks for reading, have a great day.
| closed | 2015-11-14T19:22:55Z | 2017-01-09T22:29:22Z | https://github.com/quokkaproject/quokka/issues/318 | [] | eurabilis | 1 |
alteryx/featuretools | data-science | 2,486 | Add AverageCountPerUnique, CountryCodeToContinent, FileExtension, FirstLastTimeDelta, SavgolFilter, CumulativeTimeSinceLastFalse, CumulativeTimeSinceLastTrue, PercentChange, PercentUnique primitives | closed | 2023-02-13T22:03:02Z | 2023-04-04T22:02:01Z | https://github.com/alteryx/featuretools/issues/2486 | [] | gsheni | 0 | |
albumentations-team/albumentations | machine-learning | 1,770 | [Documentation] Add to documentation about HFHub load / save functionality | closed | 2024-06-03T16:42:53Z | 2024-06-19T03:27:03Z | https://github.com/albumentations-team/albumentations/issues/1770 | [
"documentation"
] | ternaus | 1 | |
pallets/flask | python | 4,590 | No link in docs to mailing list or Discord server | The documentation mentions — but doesn't link to — the mailing list and Discord server.
I'm referring to this short section of the docs:
https://flask.palletsprojects.com/en/2.1.x/becomingbig/#discuss-with-the-community
The text makes plain that there are both a mailing list and a Discord server, but fails to provide links.
> The Flask developers keep the framework accessible to users with codebases big and small. If you find an obstacle in your way, caused by Flask, don’t hesitate to contact the developers on the mailing list or Discord server. The best way for the Flask and Flask extension developers to improve the tools for larger applications is getting feedback from users. | closed | 2022-05-09T18:16:59Z | 2022-05-26T00:06:04Z | https://github.com/pallets/flask/issues/4590 | [
"docs"
] | smontanaro | 3 |
Yorko/mlcourse.ai | data-science | 673 | Jupyter kernel error | Hi,
When I open the jupyter notebooks for the course, I am getting the following kernel error:
"RuntimeError: Permissions assignment failed for secure file: '/notebooks/home/.local/share/jupyter/runtime/kernel-c1c99a70-5225-4507-b438-c1b9697b5473.json'.Got '33279' instead of '600'"
Tried opening the notebooks with a preinstalled Anaconda and it works fine.
Possible reason for failure: https://discourse.jupyter.org/t/jupyter-core-4-6-2-release-with-insure-mode-option/3300 | closed | 2020-09-13T07:35:20Z | 2020-10-31T08:48:29Z | https://github.com/Yorko/mlcourse.ai/issues/673 | [] | matemik | 0 |
keras-team/autokeras | tensorflow | 1,226 | Is it possible to re-train a few trials from the latest trial or best model? | In resource-restricted situations, keeping the metadata of every trial strains the HDD.
I was wondering whether it would be possible to split the process of finding the best model into several parts, even in a resource-restricted environment.
What I imagine is: after running a few trials, remove all metadata except the most recent, then load that and run a few more trials.
What do you think?
Or is it already possible to do this? | closed | 2020-07-08T05:56:33Z | 2020-09-13T09:28:02Z | https://github.com/keras-team/autokeras/issues/1226 | [
"wontfix"
] | toohsk | 1 |
databricks/koalas | pandas | 2,139 | why kdf.head() is much slower than sdf.show()? | ```python
sdf = read_csv('backflow.csv')
kdf = sdf.to_koalas()
# run time 75ms
sdf.show(5)
# run time 53s
kdf.head()
```


kdf.head() is much slower than sdf.show(). Is there any way to speed it up in Koalas?
| closed | 2021-04-07T02:05:27Z | 2021-05-13T03:14:42Z | https://github.com/databricks/koalas/issues/2139 | [
"question"
] | RainFung | 3 |
python-gino/gino | asyncio | 375 | Select some fields with where | * GINO version: 0.7.1
* Python version: 3.6
* asyncpg version: 1.15.0
* aiocontextvars version:
* PostgreSQL version:
### Description
I need to select some fields with `select()` and a `where` clause.
### What I Did
```
denomination = await Denomination. \
    join(SettingsCurrency). \
    join(Currency). \
    select('value'). \
    where(and_(Currency.code == 'USD',
               SettingsCurrency.settings_game_id.in_([s.id for s in settings_game]))).gino.all()
```
sql:
```
SELECT games_denomination.id, games_denomination.settings_currency_id, games_denomination.value, games_denomination."default", games_settingscurrency.id, games_settingscurrency.settings_game_id, games_settingscurrency.currency_id, games_currency.id, games_currency.name, games_currency.code, games_currency.is_active
FROM games_denomination JOIN games_settingscurrency ON games_settingscurrency.id = games_denomination.settings_currency_id JOIN games_currency ON games_currency.id = games_settingscurrency.currency_id
WHERE value AND games_currency.code = $1 AND games_settingscurrency.settings_game_id IN ($2, $3, $4, $5, $6, $7, $8, $9, $10, $11, $12, $13, $14)
```
error:
asyncpg.exceptions.DatatypeMismatchError: argument of AND must be type boolean, not type double precision
| closed | 2018-10-29T14:13:47Z | 2018-10-30T09:01:59Z | https://github.com/python-gino/gino/issues/375 | [
"question"
] | yarara | 5 |
Farama-Foundation/PettingZoo | api | 784 | License not updated in setup.py | Please update your license in setup.py, as it is not visible on PyPI.org. | closed | 2022-09-16T10:22:41Z | 2022-09-16T10:37:33Z | https://github.com/Farama-Foundation/PettingZoo/issues/784 | [] | shaktisd | 1 |
LAION-AI/Open-Assistant | machine-learning | 3,734 | Dear ladies and gentlemen,
I click on the "create a chat" button but it does not work at all. Could you please solve this problem and help me?
Best regards
Ehsan Pazooki | closed | 2023-11-23T09:10:24Z | 2023-11-23T13:52:37Z | https://github.com/LAION-AI/Open-Assistant/issues/3734 | [] | epz1371 | 1 | |
ymcui/Chinese-BERT-wwm | tensorflow | 123 | Problem loading the model | Hello, I ran into the following problem when loading the model:
Python 3.6.8 (v3.6.8:3c6b436a57, Dec 24 2018, 02:04:31)
[GCC 4.2.1 Compatible Apple LLVM 6.0 (clang-600.0.57)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import transformers as tfs
To use data.metrics please install scikit-learn. See https://scikit-learn.org/stable/index.html
>>> tfs.__version__
'2.2.2'
>>> tfs.BertModel.from_pretrained("chinese_rbt3_L-3_H-768_A-12")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/transformers/modeling_utils.py", line 367, in from_pretrained
    pretrained_model_name_or_path))
OSError: Error no file named ['pytorch_model.bin', 'tf_model.h5', 'model.ckpt.index'] found in directory chinese_rbt3_L-3_H-768_A-12 or `from_tf` set to False
>>> config = BertConfig.from_json_file('./chinese_rbt3_L-3_H-768_A-12/config.json')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
NameError: name 'BertConfig' is not defined
>>> config = tfs.BertConfig.from_json_file('./chinese_rbt3_L-3_H-768_A-12/config.json')
>>> tfs.BertModel.from_pretrained("chinese_rbt3_L-3_H-768_A-12/model.ckpt.index",from_tf=True,config=config)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/transformers/modeling_utils.py", line 418, in from_pretrained
    model = cls.load_tf_weights(model, config, resolved_archive_file[:-6]) # Remove the '.index'
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/transformers/modeling_bert.py", line 116, in load_tf_weights_in_bert
    assert pointer.shape == array.shape
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/torch/nn/modules/module.py", line 576, in __getattr__
    type(self).__name__, name))
AttributeError: 'BertModel' object has no attribute 'shape'
The first time I followed the steps in the README and found it did not work. The second time I checked the transformers API and reloading still failed. I noticed the code takes the PyTorch loading path by default, so I suspect this is a transformers bug. Did you manage to load the TF version with transformers 2.2.2? | closed | 2020-06-01T09:52:56Z | 2020-06-02T05:08:19Z | https://github.com/ymcui/Chinese-BERT-wwm/issues/123 | [] | white-wolf-tech | 5 |
thtrieu/darkflow | tensorflow | 1,070 | Custom objects not seen by darkflow | Hi everyone,
first of all, since my question may seem a bit stupid, I apologize in advance: I'm a neophyte to YOLO.
Let's get to the point: I have 9 GB of videos recorded by an IR camera, and my task is to build a neural network capable of recognizing certain details in them.
Starting from YOLO, I wrote the architecture of the ANN and successfully trained it on a few screenshots of the videos labelled with labelImg: the problem is that, even if I give the net an already-seen example, it can't recognize my custom objects.
Where am I going wrong? Labelling? Training set?
Many thanks,
bi94 | open | 2019-08-13T06:07:15Z | 2019-11-15T17:14:25Z | https://github.com/thtrieu/darkflow/issues/1070 | [] | bi94 | 2 |
dask/dask | scikit-learn | 11,252 | Local memory explodes on isin() | When doing a Series.isin() with PyArrow strings, local memory just explodes.
It just works (with version 2023.9.1) when it is of type "object".
A workaround is to disable the string conversion (`dask.config.set({"dataframe.convert-string": False})`), but not ideal. Any ideas why this happens now?
**Minimal Complete Verifiable Example**:
```python
import dask.dataframe as dd
import random
import string
test = dd.from_dict(
    {
        "id": [''.join(random.choices(string.ascii_uppercase + string.digits, k=35)) for _ in range(1000000)],
    },
    npartitions=1
)
users = [''.join(random.choices(string.ascii_uppercase + string.digits, k=35)) for _ in range(5000000)]
test[test.id.isin(users)].compute()
```
**Environment**:
- Dask version: 2024.2.1
- Python version: 3.10
- Operating System: Linux
- Install method (conda, pip, source): pip
| open | 2024-07-25T11:38:17Z | 2024-07-25T12:37:42Z | https://github.com/dask/dask/issues/11252 | [
"dataframe",
"upstream"
] | manschoe | 1 |
python-visualization/folium | data-visualization | 1,573 | Performance hit when FeatureGroup is created with show=False. | Thanks for the great library - fun to use, and beautiful!
Not sure whether this was best put in the bug or feature request category, so to be safe I chose the latter.
I've run into the following performance issue:
The following code renders the map very fast:
```
import folium
m = folium.Map(location=[35.11567262307692,-89.97423444615382], zoom_start=12, tiles='Stamen Terrain')
for i in range(200):
    feature_group = folium.FeatureGroup(i, show=True)
    feature_group.add_to(m)
folium.LayerControl().add_to(m)
m
```
However, if show is set to False, it takes much longer to render.
```
import folium
m = folium.Map(location=[35.11567262307692,-89.97423444615382], zoom_start=12, tiles='Stamen Terrain')
for i in range(200):
    feature_group = folium.FeatureGroup(i, show=False)
    feature_group.add_to(m)
folium.LayerControl().add_to(m)
m
```
The problem is, I don't want these layers shown by default. So I'm wondering if there's a way to render the map, and then automatically uncheck the layers (I know that in these examples, the layers are "empty", but that is just to illustrate that the performance issue has nothing to do with the "content" of the layers).
Failing that, are there any other ways to deal with this performance issue?
Thanks very much for any guidance! | closed | 2022-02-20T18:17:16Z | 2023-05-18T08:52:05Z | https://github.com/python-visualization/folium/issues/1573 | [
"bug"
] | spacediver99 | 7 |
robinhood/faust | asyncio | 118 | KafkaError upon rebalance | Got the following error:
```
KafkaError aiokafka.consumer.group_coordinator in _send_sync_group_request
```
Looks like the faust workers were not able to rebalance the consumer group post the above error.
The error was thrown here: https://github.com/aio-libs/aiokafka/blob/v0.4.1/aiokafka/consumer/group_coordinator.py#L1217-L1221 | open | 2018-07-18T23:45:09Z | 2018-07-31T14:59:43Z | https://github.com/robinhood/faust/issues/118 | [
"Issue Type: Bug",
"Component: Transport",
"Priority: Critical",
"Status: Need Verification"
] | vineetgoel | 1 |
autogluon/autogluon | data-science | 3,915 | [BUG] Unable to work with Autogluon Object Detection | **Bug Report Checklist**
<!-- Please ensure at least one of the following to help the developers troubleshoot the problem: -->
- [x] I provided code that demonstrates a minimal reproducible example. <!-- Ideal, especially via source install -->
- [x] I confirmed bug exists on the latest mainline of AutoGluon via source install. <!-- Preferred -->
- [x] I confirmed bug exists on the latest stable version of AutoGluon. <!-- Unnecessary if prior items are checked -->
**Describe the bug**
Unable to work with Autogluon in Kaggle env
**Expected behavior**
Code able to run without any error
**To Reproduce**
```python
!pip install autogluon
from autogluon.multimodal import MultiModalPredictor

predictor = MultiModalPredictor(label=label_col).fit(
    train_data=train_data,
    time_limit=120
)
```
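The final `OSError` (`undefined symbol` in `libtorchaudio.so`) usually points to a torch / torchaudio pairing whose builds do not match. As a hedged aside (a generic compatibility probe, not an AutoGluon-specific fix), the installed versions of the suspect packages can be listed like this:

```python
# Generic probe: report installed versions of the packages involved in the
# ABI mismatch; torchaudio wheels are built against one specific torch build.
from importlib.metadata import version, PackageNotFoundError

def installed_versions(*packages):
    out = {}
    for name in packages:
        try:
            out[name] = version(name)
        except PackageNotFoundError:
            out[name] = None  # not installed in this environment
    return out

print(installed_versions("torch", "torchaudio", "numpy"))
```

Comparing the reported pair against the official torch/torchaudio compatibility table would then show whether the environment mixes incompatible builds.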
OSError Traceback (most recent call last)
Cell In[10], line 1
----> 1 from autogluon.multimodal import MultiModalPredictor
3 predictor = MultiModalPredictor(label=label_col).fit(
4 train_data=train_data,
5 time_limit=120
6 )
File /opt/conda/lib/python3.10/site-packages/autogluon/multimodal/__init__.py:6
3 except ImportError:
4 pass
----> 6 from . import constants, data, learners, models, optimization, predictor, problem_types, utils
7 from .predictor import MultiModalPredictor
8 from .utils import download
File /opt/conda/lib/python3.10/site-packages/autogluon/multimodal/data/__init__.py:2
1 from . import collator, infer_types, randaug, utils
----> 2 from .datamodule import BaseDataModule
3 from .dataset import BaseDataset
4 from .dataset_mmlab import MultiImageMixDataset
File /opt/conda/lib/python3.10/site-packages/autogluon/multimodal/data/datamodule.py:4
1 from typing import Dict, List, Optional, Union
3 import pandas as pd
----> 4 from lightning.pytorch import LightningDataModule
5 from torch.utils.data import DataLoader, Dataset
7 from ..constants import PREDICT, TEST, TRAIN, VALIDATE
File /opt/conda/lib/python3.10/site-packages/lightning/__init__.py:25
23 from lightning.fabric.fabric import Fabric # noqa: E402
24 from lightning.fabric.utilities.seed import seed_everything # noqa: E402
---> 25 from lightning.pytorch.callbacks import Callback # noqa: E402
26 from lightning.pytorch.core import LightningDataModule, LightningModule # noqa: E402
27 from lightning.pytorch.trainer import Trainer # noqa: E402
File /opt/conda/lib/python3.10/site-packages/lightning/pytorch/__init__.py:26
23 _logger.propagate = False
25 from lightning.fabric.utilities.seed import seed_everything # noqa: E402
---> 26 from lightning.pytorch.callbacks import Callback # noqa: E402
27 from lightning.pytorch.core import LightningDataModule, LightningModule # noqa: E402
28 from lightning.pytorch.trainer import Trainer # noqa: E402
File /opt/conda/lib/python3.10/site-packages/lightning/pytorch/callbacks/__init__.py:14
1 # Copyright The Lightning AI team.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
(...)
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
---> 14 from lightning.pytorch.callbacks.batch_size_finder import BatchSizeFinder
15 from lightning.pytorch.callbacks.callback import Callback
16 from lightning.pytorch.callbacks.checkpoint import Checkpoint
File /opt/conda/lib/python3.10/site-packages/lightning/pytorch/callbacks/batch_size_finder.py:24
21 from typing import Optional
23 import lightning.pytorch as pl
---> 24 from lightning.pytorch.callbacks.callback import Callback
25 from lightning.pytorch.tuner.batch_size_scaling import _scale_batch_size
26 from lightning.pytorch.utilities.exceptions import _TunerExitException, MisconfigurationException
File /opt/conda/lib/python3.10/site-packages/lightning/pytorch/callbacks/callback.py:22
19 from torch.optim import Optimizer
21 import lightning.pytorch as pl
---> 22 from lightning.pytorch.utilities.types import STEP_OUTPUT
25 class Callback:
26 r"""Abstract base class used to build new callbacks.
27
28 Subclass this class and override any of the relevant hooks
29
30 """
File /opt/conda/lib/python3.10/site-packages/lightning/pytorch/utilities/types.py:40
38 from torch import Tensor
39 from torch.optim import Optimizer
---> 40 from torchmetrics import Metric
41 from typing_extensions import NotRequired, Required
43 from lightning.fabric.utilities.types import _TORCH_LRSCHEDULER, LRScheduler, ProcessGroup, ReduceLROnPlateau
File /opt/conda/lib/python3.10/site-packages/torchmetrics/__init__.py:14
11 _PACKAGE_ROOT = os.path.dirname(__file__)
12 _PROJECT_ROOT = os.path.dirname(_PACKAGE_ROOT)
---> 14 from torchmetrics import functional # noqa: E402
15 from torchmetrics.aggregation import ( # noqa: E402
16 CatMetric,
17 MaxMetric,
(...)
22 SumMetric,
23 )
24 from torchmetrics.audio._deprecated import _PermutationInvariantTraining as PermutationInvariantTraining # noqa: E402
File /opt/conda/lib/python3.10/site-packages/torchmetrics/functional/__init__.py:14
1 # Copyright The Lightning team.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
(...)
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
---> 14 from torchmetrics.functional.audio._deprecated import _permutation_invariant_training as permutation_invariant_training
15 from torchmetrics.functional.audio._deprecated import _pit_permutate as pit_permutate
16 from torchmetrics.functional.audio._deprecated import (
17 _scale_invariant_signal_distortion_ratio as scale_invariant_signal_distortion_ratio,
18 )
File /opt/conda/lib/python3.10/site-packages/torchmetrics/functional/audio/__init__.py:14
1 # Copyright The Lightning team.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
(...)
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
---> 14 from torchmetrics.functional.audio.pit import permutation_invariant_training, pit_permutate
15 from torchmetrics.functional.audio.sdr import (
16 scale_invariant_signal_distortion_ratio,
17 signal_distortion_ratio,
18 source_aggregated_signal_distortion_ratio,
19 )
20 from torchmetrics.functional.audio.snr import (
21 complex_scale_invariant_signal_noise_ratio,
22 scale_invariant_signal_noise_ratio,
23 signal_noise_ratio,
24 )
File /opt/conda/lib/python3.10/site-packages/torchmetrics/functional/audio/pit.py:22
19 from torch import Tensor
20 from typing_extensions import Literal
---> 22 from torchmetrics.utilities import rank_zero_warn
23 from torchmetrics.utilities.imports import _SCIPY_AVAILABLE
25 # _ps_dict: cache of permutations
26 # it's necessary to cache it, otherwise it will consume a large amount of time
File /opt/conda/lib/python3.10/site-packages/torchmetrics/utilities/__init__.py:14
1 # Copyright The Lightning team.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
(...)
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
---> 14 from torchmetrics.utilities.checks import check_forward_full_state_property
15 from torchmetrics.utilities.distributed import class_reduce, reduce
16 from torchmetrics.utilities.prints import rank_zero_debug, rank_zero_info, rank_zero_warn
File /opt/conda/lib/python3.10/site-packages/torchmetrics/utilities/checks.py:25
22 import torch
23 from torch import Tensor
---> 25 from torchmetrics.metric import Metric
26 from torchmetrics.utilities.data import select_topk, to_onehot
27 from torchmetrics.utilities.enums import DataType
File /opt/conda/lib/python3.10/site-packages/torchmetrics/metric.py:30
27 from torch import Tensor
28 from torch.nn import Module
---> 30 from torchmetrics.utilities.data import (
31 _flatten,
32 _squeeze_if_scalar,
33 dim_zero_cat,
34 dim_zero_max,
35 dim_zero_mean,
36 dim_zero_min,
37 dim_zero_sum,
38 )
39 from torchmetrics.utilities.distributed import gather_all_tensors
40 from torchmetrics.utilities.exceptions import TorchMetricsUserError
File /opt/conda/lib/python3.10/site-packages/torchmetrics/utilities/data.py:22
19 from torch import Tensor
21 from torchmetrics.utilities.exceptions import TorchMetricsUserWarning
---> 22 from torchmetrics.utilities.imports import _TORCH_GREATER_EQUAL_1_12, _XLA_AVAILABLE
23 from torchmetrics.utilities.prints import rank_zero_warn
25 METRIC_EPS = 1e-6
File /opt/conda/lib/python3.10/site-packages/torchmetrics/utilities/imports.py:50
48 _GAMMATONE_AVAILABEL: bool = package_available("gammatone")
49 _TORCHAUDIO_AVAILABEL: bool = package_available("torchaudio")
---> 50 _TORCHAUDIO_GREATER_EQUAL_0_10: Optional[bool] = compare_version("torchaudio", operator.ge, "0.10.0")
51 _SACREBLEU_AVAILABLE: bool = package_available("sacrebleu")
52 _REGEX_AVAILABLE: bool = package_available("regex")
File /opt/conda/lib/python3.10/site-packages/lightning_utilities/core/imports.py:77, in compare_version(package, op, version, use_base_version)
68 """Compare package version with some requirements.
69
70 >>> compare_version("torch", operator.ge, "0.1")
(...)
74
75 """
76 try:
---> 77 pkg = importlib.import_module(package)
78 except (ImportError, pkg_resources.DistributionNotFound):
79 return False
File /opt/conda/lib/python3.10/importlib/__init__.py:126, in import_module(name, package)
124 break
125 level += 1
--> 126 return _bootstrap._gcd_import(name[level:], package, level)
File /opt/conda/lib/python3.10/site-packages/torchaudio/__init__.py:1
----> 1 from . import ( # noqa: F401
2 _extension,
3 compliance,
4 datasets,
5 functional,
6 io,
7 kaldi_io,
8 models,
9 pipelines,
10 sox_effects,
11 transforms,
12 utils,
13 )
14 from ._backend.common import AudioMetaData # noqa
16 try:
File /opt/conda/lib/python3.10/site-packages/torchaudio/_extension/__init__.py:45
43 _IS_ALIGN_AVAILABLE = False
44 if _IS_TORCHAUDIO_EXT_AVAILABLE:
---> 45 _load_lib("libtorchaudio")
47 import torchaudio.lib._torchaudio # noqa
49 _check_cuda_version()
File /opt/conda/lib/python3.10/site-packages/torchaudio/_extension/utils.py:64, in _load_lib(lib)
62 if not path.exists():
63 return False
---> 64 torch.ops.load_library(path)
65 torch.classes.load_library(path)
66 return True
File /opt/conda/lib/python3.10/site-packages/torch/_ops.py:643, in _Ops.load_library(self, path)
638 path = _utils_internal.resolve_library_path(path)
639 with dl_open_guard():
640 # Import the shared library into the process, thus running its
641 # static (global) initialization code in order to register custom
642 # operators with the JIT.
--> 643 ctypes.CDLL(path)
644 self.loaded_libraries.add(path)
File /opt/conda/lib/python3.10/ctypes/__init__.py:374, in CDLL.__init__(self, name, mode, handle, use_errno, use_last_error, winmode)
371 self._FuncPtr = _FuncPtr
373 if handle is None:
--> 374 self._handle = _dlopen(self._name, mode)
375 else:
376 self._handle = handle
OSError: /opt/conda/lib/python3.10/site-packages/torchaudio/lib/libtorchaudio.so: undefined symbol: _ZN3c10ltERKNS_6SymIntEi
```python
INSTALLED VERSIONS
------------------
date : 2024-02-12
time : 18:42:30.368085
python : 3.10.13.final.0
OS : Linux
OS-release : 5.15.133+
Version : #1 SMP Tue Dec 19 13:14:11 UTC 2023
machine : x86_64
processor : x86_64
num_cores : 4
cpu_ram_mb : 32110.140625
cuda version : None
num_gpus : 0
gpu_ram_mb : []
avail_disk_size_mb : 19933
accelerate : 0.21.0
async-timeout : 4.0.3
autogluon : 1.0.0
autogluon.common : 1.0.0
autogluon.core : 1.0.0
autogluon.features : 1.0.0
autogluon.multimodal : 1.0.0
autogluon.tabular : 1.0.0
autogluon.timeseries : 1.0.0
boto3 : 1.26.100
catboost : 1.2.2
defusedxml : 0.7.1
evaluate : 0.4.1
fastai : 2.7.13
gluonts : 0.14.4
hyperopt : 0.2.7
imodels : None
jinja2 : 3.1.2
joblib : 1.3.2
jsonschema : 4.17.3
lightgbm : 4.1.0
lightning : 2.0.9.post0
matplotlib : None
mlforecast : 0.10.0
networkx : 3.2.1
nlpaug : 1.1.11
nltk : 3.8.1
nptyping : 2.4.1
numpy : 1.26.3
nvidia-ml-py3 : 7.352.0
omegaconf : 2.2.3
onnxruntime-gpu : None
openmim : 0.3.9
orjson : 3.9.10
pandas : 2.1.4
Pillow : 10.2.0
psutil : 5.9.7
PyMuPDF : None
pytesseract : 0.3.10
pytorch-lightning : 2.0.9.post0
pytorch-metric-learning: 1.7.3
ray : 2.6.3
requests : 2.31.0
scikit-image : 0.20.0
scikit-learn : 1.4.0
scikit-learn-intelex : 2024.1.0
scipy : 1.11.4
seqeval : 1.2.2
setuptools : 69.0.3
skl2onnx : None
statsforecast : 1.4.0
statsmodels : 0.14.1
tabpfn : None
tensorboard : 2.15.1
text-unidecode : 1.3
timm : 0.9.12
torch : 2.0.1
torchmetrics : 1.1.2
torchvision : 0.15.2
tqdm : 4.66.1
transformers : 4.31.0
utilsforecast : 0.0.10
vowpalwabbit : 9.9.0
xgboost : 2.0.3
```
| closed | 2024-02-12T18:44:11Z | 2024-06-27T10:17:14Z | https://github.com/autogluon/autogluon/issues/3915 | [
"bug: unconfirmed",
"Needs Triage"
] | GDGauravDutta | 2 |
deepfakes/faceswap | machine-learning | 1,091 | Document memory usage requirements | **Is your feature request related to a problem? Please describe.**
I'm trying to understand what box to get so I can start experimenting with faceswap. I see a good offer for a 16GB RAM machine (I already have a GPU with 16GB integrated memory), but I don't know if this will allow me to work with decent-size videos, e.g. will it let me process 720p videos?
**Describe the solution you'd like**
Update the documentation with some indication of the memory requirements users should expect. This is related to #5
Thank you. | closed | 2020-11-29T15:28:01Z | 2020-12-02T13:17:04Z | https://github.com/deepfakes/faceswap/issues/1091 | [] | akostadinov | 1 |
aio-libs/aiomysql | sqlalchemy | 997 | Connection compression support | ### Is your feature request related to a problem?
Hello,
MySQL client applications from 8.0.18+ support the flags `--compression-algorithms` and `--zstd-compression-level` to manage connection compression, in addition to the older/deprecated `--compress` flag, which uses zlib compression. https://dev.mysql.com/doc/refman/8.4/en/connection-compression-control.html
This is important in cloud environments where cross-AZ bandwidth is expensive, and compression can improve data bandwidth by 50%.
Other Python libraries support `compress`. It would be very helpful to see support of these modern options and keep pace with MySQL clients.
### Describe the solution you'd like
Please offer support for MySQL's standard compression options.
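As a purely hypothetical sketch of what the requested knobs might look like (these parameter names do not exist in aiomysql today; they simply mirror the MySQL client flags named above):

```python
# Hypothetical interface sketch -- aiomysql has no such parameters yet.
# The names mirror MySQL's --compression-algorithms / --zstd-compression-level.
def compression_connect_kwargs(algorithms=("zstd", "zlib"), zstd_level=3):
    """Build the extra connect() kwargs such a feature might accept."""
    if not 1 <= zstd_level <= 22:  # zstd's documented level range in MySQL
        raise ValueError("zstd compression level must be in 1..22")
    return {
        "compression_algorithms": ",".join(algorithms),
        "zstd_compression_level": zstd_level,
    }
```

A caller would then pass these through to `aiomysql.connect(...)` once support lands.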
### Describe alternatives you've considered
The alternative is to leave connection functionality incomplete.
### Additional context
_No response_
### Code of Conduct
- [x] I agree to follow the aio-libs Code of Conduct | open | 2025-01-16T21:30:34Z | 2025-01-16T21:30:34Z | https://github.com/aio-libs/aiomysql/issues/997 | [
"enhancement"
] | davidegreenwald | 0 |
ploomber/ploomber | jupyter | 656 | add a few sections to the FAQ | These are a few things we've been constantly asked:
* how to have tasks output a variable number of products - answer: set a folder as output
* conditionals in pipelines - answer: `import_tasks_from`, partial build or add branching logic inside a task
* hiding code cells in HTML/PDF outputs | open | 2022-03-18T02:43:08Z | 2022-03-18T14:10:49Z | https://github.com/ploomber/ploomber/issues/656 | [] | edublancas | 0 |
modoboa/modoboa | django | 2,206 | On CentOS7 OpenDKIM can't start on boot | OpenDKIM can't start on CentOS7 boot. It can be started by running sudo systemctl start opendkim or sudo -u opendkim systemctl start opendkim.
I have tried everything including file/folder permissions, set selinux to permissive, verified the "After" setting in /usr/lib/systemd/system, changed settings in /etc/opendkim.conf, etc that I can find on the web in last few days but not work.
Please help. Any idea or any way I can trace the issue?
SELinux reported these issues when I do systemctl start opendkim but SELinux is in permissive mode
----------------------------------------------------------------------------------------------------
SELinux is preventing /usr/sbin/opendkim from name_connect access on the tcp_socket port 3306.
SELinux is preventing /usr/sbin/opendkim from name_bind access on the tcp_socket port 12345.
The systemctl status output on system boot
---------------------------------------------
opendkim.service - DomainKeys Identified Mail (DKIM) Milter
Loaded: loaded (/usr/lib/systemd/system/opendkim.service; enabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Fri 2021-03-26 11:19:59 CDT; 6min ago
Docs: man:opendkim(8)
man:opendkim.conf(5)
man:opendkim-genkey(8)
man:opendkim-genzone(8)
man:opendkim-testadsp(8)
man:opendkim-testkey
http://www.opendkim.org/docs.html
Process: 704 ExecStart=/usr/sbin/opendkim $OPTIONS (code=exited, status=78)
Mar 26 11:19:57 mail.em01_centos7.com systemd[1]: Starting DomainKeys Identified Mail (DKIM) Milter...
Mar 26 11:19:59 mail.em01_centos7.com systemd[1]: opendkim.service: control process exited, code=exited status=78
Mar 26 11:19:59 mail.em01_centos7.com systemd[1]: Failed to start DomainKeys Identified Mail (DKIM) Milter.
Mar 26 11:19:59 mail.em01_centos7.com systemd[1]: Unit opendkim.service entered failed state.
Mar 26 11:19:59 mail.em01_centos7.com systemd[1]: opendkim.service failed.
The /usr/lib/systemd/system/opendkim.service
-----------------------------------------------
[Unit]
Description=DomainKeys Identified Mail (DKIM) Milter
Documentation=man:opendkim(8) man:opendkim.conf(5) man:opendkim-genkey(8) man:opendkim-genzone(8) man:opendkim-testadsp(8) man:opendkim-testkey http://www.opendkim.org/docs.html
After=network.target nss-lookup.target syslog.target mysqld.service
[Service]
Type=forking
PIDFile=/var/run/opendkim/opendkim.pid
EnvironmentFile=/etc/sysconfig/opendkim
ExecStart=/usr/sbin/opendkim $OPTIONS
ExecReload=/bin/kill -USR1 $MAINPID
User=opendkim
Group=opendkim
[Install]
WantedBy=multi-user.target
The /etc/opendkim.conf major settings
----------------------------------------
Syslog yes
SyslogSuccess Yes
LogWhy Yes
LogResults Yes
UMask 007
KeyTable dsn:mysql://opendkim:xxxxxxxxxxxxxxx@127.0.0.1/modoboa/table=dkim?keycol=id?datacol=domain_name,selector,private_key_path
SigningTable dsn:mysql://opendkim:xxxxxxxxxxxxxxx@127.0.0.1/modoboa/table=dkim?keycol=domain_name?datacol=id
SubDomains yes
Canonicalization relaxed/relaxed
Socket inet:12345@127.0.0.1
PidFile /var/run/opendkim/opendkim.pid
UserID opendkim
ExternalIgnoreList /etc/opendkim.hosts
InternalHosts /etc/opendkim.hosts
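For what it's worth, one direction sometimes suggested when a service starts fine by hand but fails with status=78 at boot (a hedged sketch, not a confirmed fix for this report): since the KeyTable/SigningTable above use MySQL DSNs, a drop-in that orders opendkim after the network is fully online and after MySQL could be tested.

```ini
# Hypothetical drop-in: /etc/systemd/system/opendkim.service.d/override.conf
# A sketch to try, not a confirmed fix for this report.
[Unit]
Wants=network-online.target
After=network-online.target mysqld.service
```

Run `systemctl daemon-reload` and reboot to test the ordering change.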
| closed | 2021-03-26T17:03:29Z | 2021-05-10T14:01:41Z | https://github.com/modoboa/modoboa/issues/2206 | [] | etmpoon | 2 |
InstaPy/InstaPy | automation | 6,624 | Get a saved post information | Hi everyone
I'm trying to create an Instagram bot. I want to download my saved posts, but I don't know how to do that.
Does **instapy** have the ability to work with saved posts? | open | 2022-06-23T12:10:47Z | 2022-06-23T12:10:47Z | https://github.com/InstaPy/InstaPy/issues/6624 | [] | shakibm83 | 0 |
iterative/dvc | data-science | 9,697 | add: Cached output(s) outside of DVC project | # Bug Report
## add: Cached output(s) outside of DVC project
## Description
I followed this: https://dvc.org/doc/start/data-management/data-versioning#tracking-data
When I use `dvc add` to start tracking the dataset file, this happens:
```
$ dvc add data/data.xml
ERROR: Cached output(s) outside of DVC project: data\1.txt. See <https://dvc.org/doc/user-guide/pipelines/external-dependencies-and-outputs> for more info.
```
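The check behind this message can be illustrated with a sketch (this is not DVC's actual implementation): an output's resolved path must live inside the project root, so a path that resolves somewhere else, for example onto a different drive or through a symlink, triggers the error.

```python
from pathlib import Path

# Sketch of a "path inside project?" containment check, not DVC's real code.
def is_inside_project(root: str, path: str) -> bool:
    root_p, path_p = Path(root).resolve(), Path(path).resolve()
    try:
        path_p.relative_to(root_p)  # raises ValueError when outside root
        return True
    except ValueError:
        return False
```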
### Reproduce
1. dvc init
2. Copy dataset.zip to the directory
3. dvc add dataset.zip
### Expected
### Environment information
DVC version: 3.2.4
------------------
Platform: Python 3.10.11 on Windows-10-10.0.19044-SP0
Subprojects:
Supports:
azure (adlfs = 2023.4.0, knack = 0.10.1, azure-identity = 1.13.0),
gdrive (pydrive2 = 1.16.0),
gs (gcsfs = 2023.6.0),
hdfs (fsspec = 2023.6.0, pyarrow = 12.0.1),
http (aiohttp = 3.8.4, aiohttp-retry = 2.8.3),
https (aiohttp = 3.8.4, aiohttp-retry = 2.8.3),
oss (ossfs = 2021.8.0),
s3 (s3fs = 2023.6.0, boto3 = 1.26.161),
ssh (sshfs = 2023.4.1),
webdav (webdav4 = 0.9.8),
webdavs (webdav4 = 0.9.8),
webhdfs (fsspec = 2023.6.0)
Config:
Global: C:\Users\Administrator\AppData\Local\iterative\dvc
System: C:\ProgramData\iterative\dvc
Cache types: <https://error.dvc.org/no-dvc-cache>
Caches: local
Remotes: None
Workspace directory: NTFS on E:\
Repo: dvc, git
Repo.site_cache_dir: C:\ProgramData\iterative\dvc\Cache\repo\e9c957124aa961a326533f79cd946a9e
| closed | 2023-07-04T09:40:06Z | 2023-10-06T16:08:04Z | https://github.com/iterative/dvc/issues/9697 | [] | auroraRag | 1 |
qubvel-org/segmentation_models.pytorch | computer-vision | 679 | backbone about SegNeXt | Hi, Thanks for your great work! Are there plans to support SegNeXt? | closed | 2022-10-25T09:07:25Z | 2022-11-17T07:41:14Z | https://github.com/qubvel-org/segmentation_models.pytorch/issues/679 | [] | kamen007 | 0 |
coqui-ai/TTS | pytorch | 2,903 | [Bug] UserWarning: Failed to initialize NumPy: module compiled against API version 0x10 but this version of numpy is 0xf. | ### Describe the bug
Hi!
I am getting a warning that NumPy failed to initialize, plus a related `NumPy` runtime error, when running any command from this TTS library.
Which versions of numpy am I supposed to use?
Thank you so much for your help!
### To Reproduce
The command `tts --model_name tts_models/en/ek1/tacotron2 --list_speaker_idxs` gives the warning and error described below.
### Expected behavior
List the speakers for the downloaded model
### Logs
```shell
Running: tts --model_name tts_models/en/ek1/tacotron2 --list_speaker_idxs
C:\...\lib\site-packages\torchaudio\compliance\kaldi.py:22: UserWarning: Failed to initialize NumPy: module compiled against API version 0x10 but this version of numpy is 0xf . Check the section C-API incompatibility at the Troubleshooting ImportError section at https://numpy.org/devdocs/user/troubleshooting-importerror.html#c-api-incompatibility for indications on how to solve this problem . (Triggered internally at ..\torch\csrc\utils\tensor_numpy.cpp:84.)
(The above warning occurs at every tts command that I run)
// log for loading the model has been left out.
File "C:\Users\...\scoop\apps\python\current\Scripts\tts.exe\__main__.py", line 7, in <module>
File "C:\Users\...\lib\site-packages\TTS\bin\synthesize.py", line 396, in main
synthesizer = Synthesizer(
File "C:\Users\...\lib\site-packages\TTS\utils\synthesizer.py", line 95, in __init__
self._load_vocoder(vocoder_checkpoint, vocoder_config, use_cuda)
File "C:\Users\...\lib\site-packages\TTS\utils\synthesizer.py", line 221, in _load_vocoder
self.vocoder_model.load_checkpoint(self.vocoder_config, model_file, eval=True)
File "C:\Users\...\lib\site-packages\TTS\vocoder\models\wavegrad.py", line 235, in load_checkpoint
self.compute_noise_level(betas)
File "C:\Users\...\lib\site-packages\TTS\vocoder\models\wavegrad.py", line 168, in compute_noise_level
self.beta = torch.tensor(beta.astype(np.float32))
**RuntimeError: Could not infer dtype of numpy.float32**
```
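An aside on reading the warning (this is general NumPy C-API behavior, not specific to this TTS release): the two versions in the message are hex numbers, and the comparison direction tells you which side is too old.

```python
# The extension (torchaudio) was compiled against NumPy C-API 0x10 (= 16),
# while the installed NumPy only provides 0xf (= 15): the installed NumPy
# is older than the build expects.
compiled_against = 0x10
installed = 0xF
assert compiled_against == 16 and installed == 15
assert compiled_against > installed
```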
### Environment
```shell
I have the following dependencies:
- TTS==0.16.0
- torch==1.13.1
- torchaudio==0.13.1
- numba==0.56.4
- numpy==1.23.5
- Python==3.10.7
```
### Additional context
Windows 11 (64-bit) | closed | 2023-08-29T09:25:47Z | 2023-10-30T09:55:02Z | https://github.com/coqui-ai/TTS/issues/2903 | [
"bug",
"wontfix"
] | JET2001 | 5 |
plotly/dash | plotly | 2,444 | change behavior of grouping in go.Scatter x axes | When plotting a go.Scatter with many values in the x axis, the points suddenly get grouped, so for the same visible point, multiple values are represented.

How can I represent all datapoints separately, independently of how big my x axis is?
Thank you! | open | 2023-03-06T07:42:21Z | 2024-08-13T19:28:32Z | https://github.com/plotly/dash/issues/2444 | [
"feature",
"P3"
] | asicoderOfficial | 5 |
flasgger/flasgger | api | 602 | Flasgger support for parameter type: file | Hello, is there any way to use flasgger to have a parameter of type `file`?
For example here is the yml :
```yaml
Get The Audio Transcription
---
tags:
  - My TAG API
consumes:
  - multipart/form-data
parameters:
  - name: audiofile
    in: formData
    type: file
    required: true
    description: The audio file to upload
responses:
  200:
    description: OK
    schema:
      key: file
```
Here is how the swagger looks with this yaml
<img width="1447" alt="Screen Shot 2023-12-15 at 00 56 42" src="https://github.com/flasgger/flasgger/assets/18285060/7585336b-dce7-4704-995a-dbb71c4defa5">
However, any time I try to use it I see the following error:
```
Traceback (most recent call last):
  File "/Users/danilomurbach/Library/Python/3.9/lib/python/site-packages/flask/app.py", line 2529, in wsgi_app
    response = self.full_dispatch_request()
  File "/Users/danilomurbach/Library/Python/3.9/lib/python/site-packages/flask/app.py", line 1825, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/Users/danilomurbach/Library/Python/3.9/lib/python/site-packages/flask/app.py", line 1821, in full_dispatch_request
    rv = self.preprocess_request()
  File "/Users/danilomurbach/Library/Python/3.9/lib/python/site-packages/flask/app.py", line 2313, in preprocess_request
    rv = self.ensure_sync(before_func)()
  File "/Users/danilomurbach/Library/Python/3.9/lib/python/site-packages/flasgger/base.py", line 677, in before_request
    type=self.SCHEMA_TYPES[
KeyError: 'file'
```
Here is my pip freeze:
```
flasgger==0.9.5
Flask==2.2.5
Flask-Cors==3.0.10
Flask-RESTful==0.3.9
flask-restplus==0.13.0
Flask-SQLAlchemy==3.0.5
flask-swagger==0.2.14
jsonify==0.5
jsonschema==4.17.3
Werkzeug==3.0.1
```
| closed | 2023-12-15T03:59:41Z | 2024-02-08T19:30:47Z | https://github.com/flasgger/flasgger/issues/602 | [] | DaniloMurbach | 3 |
CorentinJ/Real-Time-Voice-Cloning | python | 1,313 | I noticed you created an issue with the repo URL in the title but didn't provide any details. Is there a specific problem you’re facing, or do you need help with something? | I noticed you created an issue with the repo URL in the title but didn't provide any details. Is there a specific problem you’re facing, or do you need help with something?
_Originally posted by @fastfingertips in https://github.com/muaaz-ur-habibi/G-Scraper/issues/3#issuecomment-2337059052_ | open | 2024-09-09T14:34:08Z | 2024-09-09T14:34:08Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1313 | [] | OAIUR | 0 |
ultralytics/yolov5 | deep-learning | 12,418 | Folder YOLOv5 does not appear in the directory after its installation. | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
Hi everybody!
I am new to the YOLOv5 tool. I have followed the steps indicated on the webpage for installing YOLOv5 on my laptop.

And the Docker image was installed properly, as you can see from the container in the Docker Desktop application.

However, when I checked whether the folder was created in my local root directory, no "Yolov5" folder appeared. I have followed similar steps for other Docker images such as CVAT, where you can see that the folder was created properly.

And CVAT folder contains the typical structure of a docker

Is there any step that I did not follow properly? Do I need to do something else to finish the installation of YOLOv5?
### Additional
_No response_ | closed | 2023-11-23T07:31:59Z | 2024-10-20T19:32:20Z | https://github.com/ultralytics/yolov5/issues/12418 | [
"question"
] | frl93 | 8 |
adamerose/PandasGUI | pandas | 131 | Crash on import: TypeError: 'int' object is not subscriptable | **Describe the bug**
When I try to run the example code on a Jupyter Notebook:
```python
import pandas as pd
from pandasgui import show
df = pd.DataFrame(([[1, 2, 3], [4, 5, 6], [7, 8, 9]]), columns=['a', 'b', 'c'])
show(df)
```

I get the following error: `TypeError: 'int' object is not subscriptable`
**Full stack trace of the error**
```python
TypeError Traceback (most recent call last)
<ipython-input-15-c5c5d8976c89> in <module>()
1 import pandas as pd
----> 2 from pandasgui import show
3 df = pd.DataFrame(([[1, 2, 3], [4, 5, 6], [7, 8, 9]]), columns=['a', 'b', 'c'])
4 show(df)
5 frames
/usr/local/lib/python3.7/dist-packages/pandasgui/__init__.py in <module>()
13
14 # Imports
---> 15 from pandasgui.gui import show
16
17 __all__ = ["show", "__version__"]
/usr/local/lib/python3.7/dist-packages/pandasgui/gui.py in <module>()
13 from pandasgui.store import PandasGuiStore, PandasGuiDataFrameStore
14 from pandasgui.utility import fix_ipython, fix_pyqt, as_dict, delete_datasets, resize_widget
---> 15 from pandasgui.widgets.dataframe_explorer import DataFrameExplorer
16 from pandasgui.widgets.grapher import schemas
17 from pandasgui.widgets.dragger import BooleanArg
/usr/local/lib/python3.7/dist-packages/pandasgui/widgets/dataframe_explorer.py in <module>()
7 from pandasgui.utility import nunique
8 from pandasgui.widgets.dataframe_viewer import DataFrameViewer
----> 9 from pandasgui.widgets.grapher import Grapher
10 from pandasgui.widgets.reshaper import Reshaper
11 from pandasgui.widgets.filter_viewer import FilterViewer
/usr/local/lib/python3.7/dist-packages/pandasgui/widgets/grapher.py in <module>()
14 from pandasgui.store import PandasGuiStore, PandasGuiDataFrameStore, HistoryItem, SETTINGS_STORE
15
---> 16 from pandasgui.widgets.plotly_viewer import PlotlyViewer, plotly_markers
17 from pandasgui.utility import flatten_df, flatten_iter, kwargs_string, nunique, unique, eval_title
18 from pandasgui.widgets.dragger import Dragger, ColumnArg, Schema, BooleanArg
/usr/local/lib/python3.7/dist-packages/pandasgui/widgets/plotly_viewer.py in <module>()
38 # Available symbol names for a given version of Plotly
39 _extended_symbols = SymbolValidator().values[0::2][1::3]
---> 40 plotly_markers = [symbol for symbol in _extended_symbols if symbol[-3:] != "dot"]
41
42
/usr/local/lib/python3.7/dist-packages/pandasgui/widgets/plotly_viewer.py in <listcomp>(.0)
38 # Available symbol names for a given version of Plotly
39 _extended_symbols = SymbolValidator().values[0::2][1::3]
---> 40 plotly_markers = [symbol for symbol in _extended_symbols if symbol[-3:] != "dot"]
41
42
TypeError: 'int' object is not subscriptable
```
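For context, the failing line slices `SymbolValidator().values` and then indexes `symbol[-3:]`; with the old `plotly==4.4.1` listed below, the sliced values include plain integers, which cannot be sliced. A minimal reproduction of the failure mode with illustrative data (not plotly's real values list):

```python
# Illustrative data only -- not plotly's actual SymbolValidator().values.
# Any int reached by the slice triggers exactly this TypeError.
_extended_symbols = [0, "circle", 100, "circle-open"]
try:
    [symbol for symbol in _extended_symbols if symbol[-3:] != "dot"]
except TypeError as err:
    print(err)  # 'int' object is not subscriptable
```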
**Environment**
OS: Unknown
Python: eg. 3.7.10
IDE: Google Colab
**Package versions**
TO GET ALL RELEVANT PACKAGE VERSIONS, RUN THIS COMMAND IN BASH AND PASTE THE OUTPUT
pip freeze | grep -i "pyqt\|pandasgui\|plotly\|ipython\|jupyter\|notebook"
EXAMPLE OUTPUT
```
ipython==5.5.0
ipython-genutils==0.2.0
ipython-sql==0.3.9
jupyter==1.0.0
jupyter-client==5.3.5
jupyter-console==5.2.0
jupyter-core==4.7.1
jupyterlab-pygments==0.1.2
jupyterlab-widgets==1.0.0
notebook==5.3.1
pandasgui==0.2.10.1
plotly==4.4.1
PyQt5==5.15.4
PyQt5-Qt5==5.15.2
PyQt5-sip==12.8.1
PyQtWebEngine==5.15.4
PyQtWebEngine-Qt5==5.15.2
```
| closed | 2021-04-27T14:54:20Z | 2021-04-29T05:55:54Z | https://github.com/adamerose/PandasGUI/issues/131 | [
"bug"
] | ioanpier | 3 |
JohnSnowLabs/nlu | streamlit | 19 | Remove the hard dependency on the pyspark | Right now, the `nlu` package has a hard dependency on `pyspark`, making it hard to use with the Databricks runtime or other compatible Spark runtimes. Instead, this package should either rely on an implicit dependency completely, or use something like the [findspark package](https://github.com/minrk/findspark), as done [here](https://github.com/holdenk/spark-testing-base/blob/master/python/sparktestingbase/utils.py).
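A sketch of the lazy/implicit-dependency pattern being suggested (the findspark fallback mirrors spark-testing-base; the function name here is illustrative):

```python
# Sketch: import pyspark only when needed, falling back to findspark to
# locate a local Spark installation (pattern used by spark-testing-base).
def import_pyspark():
    """Import pyspark lazily, locating a local Spark install if needed."""
    try:
        import pyspark  # present when pyspark is pip-installed
    except ImportError:
        import findspark     # resolves Spark via SPARK_HOME
        findspark.init()     # puts pyspark onto sys.path
        import pyspark
    return pyspark
```

Calling such a helper only where Spark is actually used would let the package install cleanly on runtimes that already ship their own Spark.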
P.S. the spark-nlp package itself doesn't depend on the pyspark | closed | 2020-11-20T09:04:12Z | 2020-12-15T07:43:04Z | https://github.com/JohnSnowLabs/nlu/issues/19 | [
"enhancement"
] | alexott | 3 |
yinkaisheng/Python-UIAutomation-for-Windows | automation | 177 | Can not move cursor. ButtonControl's BoundingRectangle is (0,0,0,0)[0x0]. SearchProperties: {Name: '关闭', ControlType: ButtonControl} | When I executed a demo for Notepad, I got an error as below:

error info: Can not move cursor. ButtonControl's BoundingRectangle is (0,0,0,0)[0x0]. SearchProperties: {Name: '关闭', ControlType: ButtonControl} | open | 2021-08-18T02:14:11Z | 2024-04-18T02:33:34Z | https://github.com/yinkaisheng/Python-UIAutomation-for-Windows/issues/177 | [
"question"
] | corei99 | 4 |
graphql-python/graphene | graphql | 1,151 | I would like my enum input values to be the enum instance instead of the enum values | Is there a way for me to do this?
Here is some example code.
```python
from enum import Enum, auto
from graphene import Enum as GQLEnum, ObjectType, Schema, String
from graphene.relay import ClientIDMutation
from graphene.test import Client
class EnumThing(Enum):
    a = auto()
    b = auto()

GQLEnumThing = GQLEnum.from_enum(EnumThing)

class TestMut(ClientIDMutation):
    class Input:
        enumthing = GQLEnumThing(required=True)

    enumtype = String()

    @classmethod
    def mutate_and_get_payload(cls, root, info, enumthing, client_mutation_id=None):
        print("enumthing is", repr(enumthing), type(enumthing))
        return cls(enumtype=type(enumthing).__name__)

class Mutations(ObjectType):
    testmut = TestMut.Field()

schema = Schema(mutation=Mutations, auto_camelcase=False)
client = Client(schema)

mutation = '''
mutation whatever {
  testmut(input: {enumthing: a}) {
    enumtype
  }
}
'''

print(client.execute(mutation))
```
When I run this, I get the following output:
```
enumthing is 1 <class 'int'>
{'data': OrderedDict([('testmut', OrderedDict([('enumtype', 'int')]))])}
```
Instead of getting the integer `1` passed to my mutation function, I would like to have `EnumThing.a` passed, which is an instance of `EnumThing`. I haven't figured out where in graphene this translation of the literal `a` to the value `1` is actually happening (I would expect an access of the `.value` attribute on the enum somewhere).
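For illustration, a standalone sketch (plain Python, no Graphene involved) of coercing the raw wire value back into the enum member; `Enum(value)` performs a by-value lookup:

```python
from enum import Enum, auto


class EnumThing(Enum):
    a = auto()
    b = auto()


def coerce_enum(value):
    """Return the EnumThing member for value, accepting members or raw values."""
    if isinstance(value, EnumThing):
        return value
    return EnumThing(value)  # by-value lookup: 1 -> EnumThing.a


print(coerce_enum(1))  # → EnumThing.a
```

In the mutation above, calling `EnumThing(enumthing)` at the top of `mutate_and_get_payload` is the one-line version of this workaround.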
Why? Because I don't really care about the integer `1` -- that's just something generated by Python. If I log the value of this enum, I want to see that it's an `EnumThing.a`, not the integer `1`. If I pass this thing to the rest of my codebase, which is expecting it to be an instance of `EnumThing`, it breaks. So I end up converting it *back* to the instance from the integer that Graphene gave me. | closed | 2020-03-09T15:43:07Z | 2020-10-21T08:45:13Z | https://github.com/graphql-python/graphene/issues/1151 | [
"wontfix",
"scheduled_for_v3"
] | radix | 5 |
Johnserf-Seed/TikTokDownload | api | 56 | Could you add batch downloading of one's own liked and favorited videos? | Could you add batch downloading of one's own liked and favorited videos? | closed | 2021-09-18T15:13:19Z | 2022-03-02T03:04:29Z | https://github.com/Johnserf-Seed/TikTokDownload/issues/56 | [
"需求建议(enhancement)",
"额外求助(help wanted)"
] | qiumomo | 2 |
pytest-dev/pytest-django | pytest | 1,050 | Not able to reset django form field initial value | I have a django form with a field defined like this:
```python
myField = forms.CharField(
    widget=forms.HiddenInput(),
    initial=f"2023-{settings.FIELD_BASE}",
)
```
I'm writing a test where I'd like to use `@pytest.mark.parametrize` to test different values of `FIELD_BASE`. Something like:
```python
@pytest.mark.parametrize("test_base", ["aa", "bb", "cc"])
def test_bases(self, client, settings, test_base):
    settings.FIELD_BASE = test_base
    response = client.get(reverse(someview))
    assert f'name="myField" value="2023-{test_base}"' in response.content.decode()
```
This does not work. My hunch is that it's because `initial` is set when the module is loaded at the beginning of the test run. So I tried reloading, adding the two lines after the settings update:
```python
settings.FIELD_BASE = test_base
import importlib
importlib.reload(importlib.import_module("myapp.forms"))
```
Still no luck.
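For what it's worth, a minimal standalone illustration (no Django required; the class names are made up) of why the module-level `initial` stays frozen while a deferred version tracks the settings override:

```python
class Settings:
    FIELD_BASE = "aa"


settings = Settings()


class FrozenForm:
    # evaluated once, at class-definition time (like a module-level initial=)
    initial = f"2023-{settings.FIELD_BASE}"


class DeferredForm:
    @property
    def initial(self):
        # evaluated on every access, so it sees later overrides
        return f"2023-{settings.FIELD_BASE}"


settings.FIELD_BASE = "bb"
print(FrozenForm.initial)      # → 2023-aa (stale)
print(DeferredForm().initial)  # → 2023-bb
```

In Django terms (my reading, not tested against the reporter's app), this is why computing `initial` in the form's `__init__`, or passing a callable as `initial=`, makes the parametrized test behave: the value is read per request instead of at import time.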
Any thoughts on how to handle this? | open | 2023-03-10T17:12:54Z | 2023-04-05T07:40:03Z | https://github.com/pytest-dev/pytest-django/issues/1050 | [] | truthdoug | 1 |
remsky/Kokoro-FastAPI | fastapi | 252 | segmentation fault on simultaneous requests for some output formats | **Describe the bug**
Unless the chosen `output_format` is `"pcm"` or `"aac"`, making simultaneous requests causes the server process to crash with a segmentation fault.
**Screenshots or console output**
```
$ ./start-gpu.sh
Resolved 140 packages in 337ms
Built kokoro-fastapi @ file:///home/syn/Kokoro-FastAPI
Prepared 1 package in 836ms
Uninstalled 1 package in 0.39ms
Installed 1 package in 0.80ms
~ kokoro-fastapi==0.1.4 (from file:///home/syn/Kokoro-FastAPI)
11:28:49 AM | INFO | main:57 | Loading TTS model and voice packs...
11:28:49 AM | INFO | model_manager:38 | Initializing Kokoro V1 on cuda
11:28:49 AM | DEBUG | paths:101 | Searching for model in path: /home/syn/Kokoro-FastAPI/api/src/models
11:28:49 AM | INFO | kokoro_v1:45 | Loading Kokoro model on cuda
11:28:49 AM | INFO | kokoro_v1:46 | Config path: /home/syn/Kokoro-FastAPI/api/src/models/v1_0/config.json
11:28:49 AM | INFO | kokoro_v1:47 | Model path: /home/syn/Kokoro-FastAPI/api/src/models/v1_0/kokoro-v1_0.pth
/home/syn/Kokoro-FastAPI/.venv/lib/python3.10/site-packages/torch/nn/modules/rnn.py:123: UserWarning: dropout option adds dropout after all but last recurrent layer, so non-zero dropout expects num_layers greater than 1, but got dropout=0.2 and num_layers=1
warnings.warn(
/home/syn/Kokoro-FastAPI/.venv/lib/python3.10/site-packages/torch/nn/utils/weight_norm.py:143: FutureWarning: `torch.nn.utils.weight_norm` is deprecated in favor of `torch.nn.utils.parametrizations.weight_norm`.
WeightNorm.apply(module, name, dim)
11:28:51 AM | DEBUG | paths:153 | Scanning for voices in path: /home/syn/Kokoro-FastAPI/api/src/voices/v1_0
11:28:51 AM | DEBUG | paths:131 | Searching for voice in path: /home/syn/Kokoro-FastAPI/api/src/voices/v1_0
11:28:51 AM | DEBUG | model_manager:77 | Using default voice 'af_heart' for warmup
11:28:51 AM | INFO | kokoro_v1:78 | Creating new pipeline for language code: a
11:28:52 AM | DEBUG | kokoro_v1:250 | Generating audio for text with lang_code 'a': 'Warmup text for initialization.'
11:28:53 AM | DEBUG | kokoro_v1:257 | Got audio chunk with shape: torch.Size([57600])
11:28:53 AM | INFO | model_manager:84 | Warmup completed in 3457ms
11:28:53 AM | INFO | main:106 |
░░░░░░░░░░░░░░░░░░░░░░░░
╔═╗┌─┐┌─┐┌┬┐
╠╣ ├─┤└─┐ │
╚ ┴ ┴└─┘ ┴
╦╔═┌─┐┬┌─┌─┐
╠╩╗│ │├┴┐│ │
╩ ╩└─┘┴ ┴└─┘
░░░░░░░░░░░░░░░░░░░░░░░░
Model warmed up on cuda: kokoro_v1
CUDA: True
67 voice packs loaded
Beta Web Player: http://0.0.0.0:8880/web/
or http://localhost:8880/web/
░░░░░░░░░░░░░░░░░░░░░░░░
11:28:58 AM | INFO | openai_compatible:69 | Created global TTSService instance
11:28:58 AM | DEBUG | paths:153 | Scanning for voices in path: /home/syn/Kokoro-FastAPI/api/src/voices/v1_0
11:28:58 AM | DEBUG | paths:153 | Scanning for voices in path: /home/syn/Kokoro-FastAPI/api/src/voices/v1_0
11:28:58 AM | DEBUG | paths:153 | Scanning for voices in path: /home/syn/Kokoro-FastAPI/api/src/voices/v1_0
11:28:58 AM | DEBUG | paths:153 | Scanning for voices in path: /home/syn/Kokoro-FastAPI/api/src/voices/v1_0
11:28:58 AM | INFO | openai_compatible:149 | Starting audio generation with lang_code: None
11:28:58 AM | DEBUG | paths:131 | Searching for voice in path: /home/syn/Kokoro-FastAPI/api/src/voices/v1_0
11:28:58 AM | INFO | openai_compatible:149 | Starting audio generation with lang_code: None
11:28:58 AM | DEBUG | paths:131 | Searching for voice in path: /home/syn/Kokoro-FastAPI/api/src/voices/v1_0
11:28:58 AM | DEBUG | tts_service:201 | Using single voice path: /home/syn/Kokoro-FastAPI/api/src/voices/v1_0/af_heart.pt
11:28:58 AM | DEBUG | tts_service:269 | Using voice path: /home/syn/Kokoro-FastAPI/api/src/voices/v1_0/af_heart.pt
11:28:58 AM | INFO | tts_service:273 | Using lang_code 'a' for voice 'af_heart' in audio stream
11:28:58 AM | INFO | text_processor:131 | Starting smart split for 143 chars
11:28:58 AM | DEBUG | text_processor:54 | Total processing took 16.79ms for chunk: 'Despite its lightweight architecture, it delivers ...'
11:28:58 AM | INFO | text_processor:259 | Yielding final chunk 1: 'Despite its lightweight architecture, it delivers ...' (152 tokens)
11:28:58 AM | DEBUG | tts_service:201 | Using single voice path: /home/syn/Kokoro-FastAPI/api/src/voices/v1_0/af_heart.pt
11:28:58 AM | DEBUG | tts_service:269 | Using voice path: /home/syn/Kokoro-FastAPI/api/src/voices/v1_0/af_heart.pt
11:28:58 AM | INFO | tts_service:273 | Using lang_code 'a' for voice 'af_heart' in audio stream
11:28:58 AM | INFO | text_processor:131 | Starting smart split for 143 chars
11:28:58 AM | DEBUG | text_processor:54 | Total processing took 0.81ms for chunk: 'Despite its lightweight architecture, it delivers ...'
11:28:58 AM | INFO | text_processor:259 | Yielding final chunk 1: 'Despite its lightweight architecture, it delivers ...' (152 tokens)
11:28:58 AM | DEBUG | kokoro_v1:250 | Generating audio for text with lang_code 'a': 'Despite its lightweight architecture, it delivers comparable quality to larger models while being si...'
11:28:58 AM | DEBUG | kokoro_v1:257 | Got audio chunk with shape: torch.Size([220200])
11:28:58 AM | DEBUG | kokoro_v1:250 | Generating audio for text with lang_code 'a': 'Despite its lightweight architecture, it delivers comparable quality to larger models while being si...'
11:28:59 AM | DEBUG | kokoro_v1:257 | Got audio chunk with shape: torch.Size([220200])
11:28:59 AM | INFO | text_processor:265 | Split completed in 381.69ms, produced 1 chunks
11:28:59 AM | INFO | text_processor:265 | Split completed in 361.99ms, produced 1 chunks
11:28:59 AM | DEBUG | paths:153 | Scanning for voices in path: /home/syn/Kokoro-FastAPI/api/src/voices/v1_0
11:28:59 AM | DEBUG | paths:153 | Scanning for voices in path: /home/syn/Kokoro-FastAPI/api/src/voices/v1_0
11:28:59 AM | DEBUG | paths:153 | Scanning for voices in path: /home/syn/Kokoro-FastAPI/api/src/voices/v1_0
11:28:59 AM | DEBUG | paths:153 | Scanning for voices in path: /home/syn/Kokoro-FastAPI/api/src/voices/v1_0
11:28:59 AM | INFO | openai_compatible:149 | Starting audio generation with lang_code: None
11:28:59 AM | DEBUG | paths:131 | Searching for voice in path: /home/syn/Kokoro-FastAPI/api/src/voices/v1_0
11:28:59 AM | INFO | openai_compatible:149 | Starting audio generation with lang_code: None
11:28:59 AM | DEBUG | paths:131 | Searching for voice in path: /home/syn/Kokoro-FastAPI/api/src/voices/v1_0
11:28:59 AM | DEBUG | tts_service:201 | Using single voice path: /home/syn/Kokoro-FastAPI/api/src/voices/v1_0/af_heart.pt
11:28:59 AM | DEBUG | tts_service:269 | Using voice path: /home/syn/Kokoro-FastAPI/api/src/voices/v1_0/af_heart.pt
11:28:59 AM | INFO | tts_service:273 | Using lang_code 'a' for voice 'af_heart' in audio stream
11:28:59 AM | INFO | text_processor:131 | Starting smart split for 143 chars
11:28:59 AM | DEBUG | text_processor:54 | Total processing took 0.86ms for chunk: 'Despite its lightweight architecture, it delivers ...'
11:28:59 AM | INFO | text_processor:259 | Yielding final chunk 1: 'Despite its lightweight architecture, it delivers ...' (152 tokens)
11:28:59 AM | DEBUG | tts_service:201 | Using single voice path: /home/syn/Kokoro-FastAPI/api/src/voices/v1_0/af_heart.pt
11:28:59 AM | DEBUG | tts_service:269 | Using voice path: /home/syn/Kokoro-FastAPI/api/src/voices/v1_0/af_heart.pt
11:28:59 AM | INFO | tts_service:273 | Using lang_code 'a' for voice 'af_heart' in audio stream
11:28:59 AM | INFO | text_processor:131 | Starting smart split for 143 chars
11:28:59 AM | DEBUG | text_processor:54 | Total processing took 0.83ms for chunk: 'Despite its lightweight architecture, it delivers ...'
11:28:59 AM | INFO | text_processor:259 | Yielding final chunk 1: 'Despite its lightweight architecture, it delivers ...' (152 tokens)
11:28:59 AM | DEBUG | kokoro_v1:250 | Generating audio for text with lang_code 'a': 'Despite its lightweight architecture, it delivers comparable quality to larger models while being si...'
11:28:59 AM | DEBUG | kokoro_v1:257 | Got audio chunk with shape: torch.Size([220200])
11:28:59 AM | DEBUG | kokoro_v1:250 | Generating audio for text with lang_code 'a': 'Despite its lightweight architecture, it delivers comparable quality to larger models while being si...'
```
load-testing script (grafana k6):
```javascript
import http from 'k6/http'
import { check, fail } from 'k6';

export default async function main() {
  const res = http.post(`http://localhost:8880/v1/audio/speech`,
    JSON.stringify({
      "response_format": "wav",
      "input": "Despite its lightweight architecture, it delivers comparable quality to larger models while being significantly faster and more cost-efficient."
    }),
    { headers: { 'content-type': 'application/json' } }
  );
}

export const options = {
  discardResponseBodies: true,
  scenarios: {
    contacts: {
      executor: 'shared-iterations',
      iterations: 10000,
      vus: 2
    },
  }
}
```
dmesg output:
```
[ 7047.107649] uvicorn[66218]: segfault at 0 ip 00007f43a152e5ed sp 00007ffd18252ec0 error 6 in pyio.cpython-310-x86_64-linux-gnu.so[7f43a1528000+d000] likely on CPU 0 (core 0, socket 0)
[ 7047.107676] Code: 0f 84 0f 06 00 00 48 63 fb e8 af 9d ff ff 48 89 44 24 38 49 89 c7 48 85 c0 0f 84 18 08 00 00 48 8b 7d 28 48 8b 05 53 a9 00 00 <48> 83 07 01 48 89 7c 24 40 48 39 47 08 0f 85 38 05 00 00 4c 8b 5f
```
**Python faulthandler output, running with PYTHONFAULTHANDLER=1**
```
Fatal Python error: Segmentation fault
Thread 0x00007ad2abe006c0 (most recent call first):
File "/home/syn/.local/share/uv/python/cpython-3.10.16-linux-x86_64-gnu/lib/python3.10/concurrent/futures/thread.py", line 81 in _worker
File "/home/syn/.local/share/uv/python/cpython-3.10.16-linux-x86_64-gnu/lib/python3.10/threading.py", line 953 in run
File "/home/syn/.local/share/uv/python/cpython-3.10.16-linux-x86_64-gnu/lib/python3.10/threading.py", line 1016 in _bootstrap_inner
File "/home/syn/.local/share/uv/python/cpython-3.10.16-linux-x86_64-gnu/lib/python3.10/threading.py", line 973 in _bootstrap
Thread 0x00007ad2b30006c0 (most recent call first):
File "/home/syn/.local/share/uv/python/cpython-3.10.16-linux-x86_64-gnu/lib/python3.10/concurrent/futures/thread.py", line 81 in _worker
File "/home/syn/.local/share/uv/python/cpython-3.10.16-linux-x86_64-gnu/lib/python3.10/threading.py", line 953 in run
File "/home/syn/.local/share/uv/python/cpython-3.10.16-linux-x86_64-gnu/lib/python3.10/threading.py", line 1016 in _bootstrap_inner
File "/home/syn/.local/share/uv/python/cpython-3.10.16-linux-x86_64-gnu/lib/python3.10/threading.py", line 973 in _bootstrap
Thread 0x00007ad2d3e006c0 (most recent call first):
File "/home/syn/.local/share/uv/python/cpython-3.10.16-linux-x86_64-gnu/lib/python3.10/concurrent/futures/thread.py", line 81 in _worker
File "/home/syn/.local/share/uv/python/cpython-3.10.16-linux-x86_64-gnu/lib/python3.10/threading.py", line 953 in run
File "/home/syn/.local/share/uv/python/cpython-3.10.16-linux-x86_64-gnu/lib/python3.10/threading.py", line 1016 in _bootstrap_inner
File "/home/syn/.local/share/uv/python/cpython-3.10.16-linux-x86_64-gnu/lib/python3.10/threading.py", line 973 in _bootstrap
Current thread 0x00007ad3da6ff740 (most recent call first):
Garbage-collecting
File "/home/syn/Kokoro-FastAPI/.venv/lib/python3.10/site-packages/thinc/model.py", line 310 in __call__
File "/home/syn/Kokoro-FastAPI/.venv/lib/python3.10/site-packages/thinc/layers/chain.py", line 54 in forward
File "/home/syn/Kokoro-FastAPI/.venv/lib/python3.10/site-packages/thinc/model.py", line 310 in __call__
File "/home/syn/Kokoro-FastAPI/.venv/lib/python3.10/site-packages/thinc/layers/chain.py", line 54 in forward
File "/home/syn/Kokoro-FastAPI/.venv/lib/python3.10/site-packages/thinc/model.py", line 310 in __call__
File "/home/syn/Kokoro-FastAPI/.venv/lib/python3.10/site-packages/thinc/layers/chain.py", line 54 in forward
File "/home/syn/Kokoro-FastAPI/.venv/lib/python3.10/site-packages/thinc/model.py", line 310 in __call__
File "/home/syn/Kokoro-FastAPI/.venv/lib/python3.10/site-packages/thinc/layers/residual.py", line 41 in forward
File "/home/syn/Kokoro-FastAPI/.venv/lib/python3.10/site-packages/thinc/model.py", line 310 in __call__
File "/home/syn/Kokoro-FastAPI/.venv/lib/python3.10/site-packages/thinc/layers/chain.py", line 54 in forward
File "/home/syn/Kokoro-FastAPI/.venv/lib/python3.10/site-packages/thinc/model.py", line 310 in __call__
File "/home/syn/Kokoro-FastAPI/.venv/lib/python3.10/site-packages/thinc/layers/with_array.py", line 77 in _list_forward
File "/home/syn/Kokoro-FastAPI/.venv/lib/python3.10/site-packages/thinc/layers/with_array.py", line 42 in forward
File "/home/syn/Kokoro-FastAPI/.venv/lib/python3.10/site-packages/thinc/model.py", line 310 in __call__
File "/home/syn/Kokoro-FastAPI/.venv/lib/python3.10/site-packages/thinc/layers/chain.py", line 54 in forward
File "/home/syn/Kokoro-FastAPI/.venv/lib/python3.10/site-packages/thinc/model.py", line 334 in predict
File "/home/syn/Kokoro-FastAPI/.venv/lib/python3.10/site-packages/spacy/pipeline/tok2vec.py", line 126 in predict
File "/home/syn/Kokoro-FastAPI/.venv/lib/python3.10/site-packages/spacy/language.py", line 1049 in __call__
File "/home/syn/Kokoro-FastAPI/.venv/lib/python3.10/site-packages/misaki/en.py", line 537 in tokenize
File "/home/syn/Kokoro-FastAPI/.venv/lib/python3.10/site-packages/misaki/en.py", line 644 in __call__
File "/home/syn/Kokoro-FastAPI/.venv/lib/python3.10/site-packages/kokoro/pipeline.py", line 358 in __call__
File "/home/syn/Kokoro-FastAPI/api/src/inference/kokoro_v1.py", line 253 in generate
File "/home/syn/Kokoro-FastAPI/api/src/inference/model_manager.py", line 143 in generate
File "/home/syn/Kokoro-FastAPI/api/src/services/tts_service.py", line 93 in _process_chunk
File "/home/syn/Kokoro-FastAPI/api/src/services/tts_service.py", line 282 in generate_audio_stream
File "/home/syn/Kokoro-FastAPI/api/src/routers/openai_compatible.py", line 150 in stream_audio_chunks
File "/home/syn/Kokoro-FastAPI/api/src/routers/openai_compatible.py", line 266 in single_output
File "/home/syn/Kokoro-FastAPI/.venv/lib/python3.10/site-packages/starlette/responses.py", line 244 in stream_response
File "/home/syn/Kokoro-FastAPI/.venv/lib/python3.10/site-packages/starlette/responses.py", line 255 in wrap
File "/home/syn/.local/share/uv/python/cpython-3.10.16-linux-x86_64-gnu/lib/python3.10/asyncio/events.py", line 80 in _run
File "/home/syn/.local/share/uv/python/cpython-3.10.16-linux-x86_64-gnu/lib/python3.10/asyncio/base_events.py", line 1909 in _run_once
File "/home/syn/.local/share/uv/python/cpython-3.10.16-linux-x86_64-gnu/lib/python3.10/asyncio/base_events.py", line 603 in run_forever
File "/home/syn/.local/share/uv/python/cpython-3.10.16-linux-x86_64-gnu/lib/python3.10/asyncio/base_events.py", line 636 in run_until_complete
File "/home/syn/.local/share/uv/python/cpython-3.10.16-linux-x86_64-gnu/lib/python3.10/asyncio/runners.py", line 44 in run
File "/home/syn/Kokoro-FastAPI/.venv/lib/python3.10/site-packages/uvicorn/server.py", line 66 in run
File "/home/syn/Kokoro-FastAPI/.venv/lib/python3.10/site-packages/uvicorn/main.py", line 579 in run
File "/home/syn/Kokoro-FastAPI/.venv/lib/python3.10/site-packages/uvicorn/main.py", line 412 in main
File "/home/syn/Kokoro-FastAPI/.venv/lib/python3.10/site-packages/click/core.py", line 788 in invoke
File "/home/syn/Kokoro-FastAPI/.venv/lib/python3.10/site-packages/click/core.py", line 1443 in invoke
File "/home/syn/Kokoro-FastAPI/.venv/lib/python3.10/site-packages/click/core.py", line 1082 in main
File "/home/syn/Kokoro-FastAPI/.venv/lib/python3.10/site-packages/click/core.py", line 1161 in __call__
File "/home/syn/Kokoro-FastAPI/.venv/bin/uvicorn", line 10 in <module>
Extension modules: numpy.core._multiarray_umath, numpy.core._multiarray_tests, numpy.linalg._umath_linalg, numpy.fft._pocketfft_internal, numpy.random._common, numpy.random.bit_generator, numpy.random._bounded_integers, numpy.random._mt19937, numpy.random.mtrand, numpy.random._philox, numpy.random._pcg64, numpy.random._sfc64, numpy.random._generator, torch._C, torch._C._dynamo.autograd_compiler, torch._C._dynamo.eval_frame, torch._C._dynamo.guards, torch._C._dynamo.utils, torch._C._fft, torch._C._linalg, torch._C._nested, torch._C._nn, torch._C._sparse, torch._C._special, psutil._psutil_linux, psutil._psutil_posix, scipy._lib._ccallback_c, scipy.signal._sigtools, scipy.linalg._fblas, scipy.linalg._flapack, scipy.linalg.cython_lapack, scipy.linalg._cythonized_array_utils, scipy.linalg._solve_toeplitz, scipy.linalg._decomp_lu_cython, scipy.linalg._matfuncs_sqrtm_triu, scipy.linalg.cython_blas, scipy.linalg._matfuncs_expm, scipy.linalg._decomp_update, scipy.sparse._sparsetools, _csparsetools, scipy.sparse._csparsetools, scipy.sparse.linalg._dsolve._superlu, scipy.sparse.linalg._eigen.arpack._arpack, scipy.sparse.linalg._propack._spropack, scipy.sparse.linalg._propack._dpropack, scipy.sparse.linalg._propack._cpropack, scipy.sparse.linalg._propack._zpropack, scipy.sparse.csgraph._tools, scipy.sparse.csgraph._shortest_path, scipy.sparse.csgraph._traversal, scipy.sparse.csgraph._min_spanning_tree, scipy.sparse.csgraph._flow, scipy.sparse.csgraph._matching, scipy.sparse.csgraph._reordering, scipy.special._ufuncs_cxx, scipy.special._ufuncs, scipy.special._specfun, scipy.special._comb, scipy.special._ellip_harm_2, scipy._lib._uarray._uarray, scipy.signal._max_len_seq_inner, scipy.signal._upfirdn_apply, scipy.signal._spline, scipy.spatial._ckdtree, scipy._lib.messagestream, scipy.spatial._qhull, scipy.spatial._voronoi, scipy.spatial._distance_wrap, scipy.spatial._hausdorff, scipy.spatial.transform._rotation, scipy.interpolate._fitpack, scipy.interpolate._dfitpack, 
scipy.optimize._group_columns, scipy.optimize._trlib._trlib, scipy.optimize._lbfgsb, _moduleTNC, scipy.optimize._moduleTNC, scipy.optimize._cobyla, scipy.optimize._slsqp, scipy.optimize._minpack, scipy.optimize._lsq.givens_elimination, scipy.optimize._zeros, scipy.optimize._highs.cython.src._highs_wrapper, scipy.optimize._highs._highs_wrapper, scipy.optimize._highs.cython.src._highs_constants, scipy.optimize._highs._highs_constants, scipy.linalg._interpolative, scipy.optimize._bglu_dense, scipy.optimize._lsap, scipy.optimize._direct, scipy.interpolate._bspl, scipy.interpolate._ppoly, scipy.interpolate.interpnd, scipy.interpolate._rbfinterp_pythran, scipy.interpolate._rgi_cython, scipy.ndimage._nd_image, _ni_label, scipy.ndimage._ni_label, scipy.signal._sosfilt, scipy.signal._spectral, scipy.integrate._odepack, scipy.integrate._quadpack, scipy.integrate._vode, scipy.integrate._dop, scipy.integrate._lsoda, scipy.special.cython_special, scipy.stats._stats, scipy.stats._biasedurn, scipy.stats._levy_stable.levyst, scipy.stats._stats_pythran, scipy.stats._ansari_swilk_statistics, scipy.stats._sobol, scipy.stats._qmc_cy, scipy.stats._mvn, scipy.stats._rcont.rcont, scipy.stats._unuran.unuran_wrapper, scipy.signal._peak_finding_utils, charset_normalizer.md, requests.packages.charset_normalizer.md, requests.packages.chardet.md, yaml._yaml, markupsafe._speedups, PIL._imaging, av._core, av.logging, av.bytesource, av.buffer, av.audio.format, av.error, av.dictionary, av.container.pyio, av.utils, av.option, av.descriptor, av.format, av.stream, av.container.streams, av.sidedata.motionvectors, av.sidedata.sidedata, av.opaque, av.packet, av.container.input, av.container.output, av.container.core, av.codec.context, av.video.format, av.video.reformatter, av.plane, av.video.plane, av.video.frame, av.video.stream, av.codec.hwaccel, av.codec.codec, av.frame, av.audio.layout, av.audio.plane, av.audio.frame, av.audio.stream, av.filter.pad, av.filter.link, av.filter.context, 
av.filter.graph, av.filter.filter, av.audio.resampler, av.filter.loudnorm, av.audio.codeccontext, av.audio.fifo, av.bitstream, av.video.codeccontext, srsly.ujson.ujson, srsly.msgpack._epoch, srsly.msgpack._packer, srsly.msgpack._unpacker, blis.cy, thinc.backends.cblas, cymem.cymem, preshed.maps, blis.py, thinc.backends.linalg, murmurhash.mrmr, thinc.backends.numpy_ops, thinc.layers.premap_ids, thinc.layers.sparselinear, spacy.symbols, preshed.bloom, spacy.strings, spacy.attrs, spacy.parts_of_speech, spacy.morphology, spacy.lexeme, spacy.tokens.morphanalysis, spacy.tokens.token, spacy.tokens.span, spacy.tokens.span_group, spacy.tokens._retokenize, spacy.tokens.doc, spacy.vectors, spacy.vocab, spacy.training.align, spacy.training.alignment_array, spacy.pipeline._parser_internals.nonproj, spacy.training.example, spacy.training.gold_io, spacy.matcher.levenshtein, spacy.matcher.matcher, spacy.matcher.dependencymatcher, spacy.matcher.phrasematcher, spacy.tokenizer, spacy.pipeline.pipe, spacy.pipeline.trainable_pipe, spacy.pipeline._parser_internals.stateclass, spacy.pipeline._parser_internals.transition_system, spacy.kb.kb, spacy.kb.candidate, spacy.kb.kb_in_memory, spacy.ml.parser_model, thinc.extra.search, spacy.pipeline._parser_internals._beam_utils, spacy.pipeline.transition_parser, spacy.pipeline._parser_internals.arc_eager, spacy.pipeline.dep_parser, spacy.pipeline._edit_tree_internals.edit_trees, spacy.pipeline.tagger, spacy.pipeline.morphologizer, spacy.pipeline._parser_internals.ner, spacy.pipeline.ner, spacy.pipeline.senter, spacy.pipeline.sentencizer, regex._regex, scipy.io.matlab._mio_utils, scipy.io.matlab._streams, scipy.io.matlab._mio5_utils, _cffi_backend, curated_tokenizers._bbpe, curated_tokenizers._spp, curated_tokenizers._wordpiece, fugashi.fugashi (total: 237)
```
The last portion of the segfault trace differs between runs (maybe depending on the output format chosen), but this trace is what allowed me to catch the "unless PCM" part of the issue.
**Branch / Deployment used**
* v0.2.3, v0.2.0, master.
* Does not crash on v0.1.4
**Operating System**
* Nvidia L4 GPU on Google Cloud
* Ubuntu LTS 24.04
* Nvidia 550.144.03 or 535.230.02
* CUDA: 11.8 or 12.2 or 12.8
| closed | 2025-03-20T11:33:45Z | 2025-03-21T10:00:59Z | https://github.com/remsky/Kokoro-FastAPI/issues/252 | [] | synchrone | 2 |
CPJKU/madmom | numpy | 241 | ffmpeg.py does not check for unicode strings | In ``madmom.audio.ffmpeg.py``, there are several lines where ``isinstance(infile, str):`` is used. We should replace this by ``isinstance(infile, (str, unicode)):`` to also support python 2.7's unicode string type. | closed | 2016-12-22T08:08:53Z | 2017-08-06T16:50:23Z | https://github.com/CPJKU/madmom/issues/241 | [] | flokadillo | 3 |
allure-framework/allure-python | pytest | 767 | parametrized test in allure testops in different cases |
There are parameterized tests, each of which has its own identifier in TestOps.
I need each of them to appear in TestOps under its own ID.
How should I decorate the tests, and which decorator methods should I use, so that the tests are separated when uploaded to TestOps?
There is example code in Java for how to do this, but I haven't found an analogue in Python.
#### What is the current behavior?
After testing, my one parametrized test uploads to TestOps in different cases.
#### Please tell us about your environment:
- Allure TestOps: 4.19.0
- Test framework: pytest@3.0
#### Other information
| closed | 2023-09-21T12:39:21Z | 2023-09-26T06:28:13Z | https://github.com/allure-framework/allure-python/issues/767 | [] | SergyBud | 1 |
junyanz/pytorch-CycleGAN-and-pix2pix | computer-vision | 865 | loss plots | when i open the http://localhost:8097 ,I can not find loss plots ,how can i see it?thank you!
| open | 2019-12-04T12:03:33Z | 2019-12-04T18:20:21Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/865 | [] | liwanjie1020 | 1 |
benbusby/whoogle-search | flask | 365 | [FEATURE] Option "People also ask" sections in results |
**Describe the feature you'd like to see added**
Add a configuration option to disable the "People also ask" section.
If the section is disabled, show more search results instead.
Also add +/- buttons for the section, collapsed by default.
**Additional context**

| closed | 2021-06-20T07:26:11Z | 2021-06-23T23:05:08Z | https://github.com/benbusby/whoogle-search/issues/365 | [
"enhancement"
] | baek-sang | 1 |
robinhood/faust | asyncio | 62 | Faust breaks if input messages have extra fields that we don't specify in our model | Faust breaks if input messages have extra fields that we don't specify in our model. This means if our upstream adds an extra field, our app breaks. | closed | 2018-01-23T00:56:34Z | 2018-07-31T14:39:13Z | https://github.com/robinhood/faust/issues/62 | [] | danielko-robinhood | 3 |
Guovin/iptv-api | api | 688 | 内网源添加白名单 | 问题:环境docker lite版,我做了feiyang/allinone内网源,在/docker/iptv_api/config/config.ini内部给了参数subscribe_urls = http://192.168.1.5:35455/tv.m3u,然后输出/docker/iptv_api/output/result.m3u显示频道为空。
看到有类似问题,是去/docker/iptv_api/config/demo.txt添加白名单
CCTV源链接是这样的:http://192.168.1.5:35455/ysptp/cctv1.m3u8
测试白名单添加
CCTV-1,http://192.168.1.5$!
CCTV-2,http://192.168.1.5:35455/tv.m3u$!
CCTV-3,http://192.168.1.5:35455/itv/6000000001000022313.m3u8?cdn=wasusyt$!
CCTV-4,http://192.168.1.5:35455$!
The output result.m3u is still empty.
Could someone please tell me how these entries should be added? | closed | 2024-12-15T11:38:45Z | 2024-12-18T14:11:37Z | https://github.com/Guovin/iptv-api/issues/688 | [
"question"
] | claudecaicai | 8 |
junyanz/pytorch-CycleGAN-and-pix2pix | computer-vision | 979 | Inconsistent result | Hello! I've been doing experiments with CycleGAN regarding the number of images needed to produce decent results for my translation task.
The problem is that two identical training runs (same hyperparameters, architecture, image size, dataset, etc.) yield very different results, i.e. it is difficult to reproduce the same result over and over again. As an example, I got very good results using 2500 images in one run but poor results the second time I tried with 2500 images. How can this best be explained?
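One contributing factor worth ruling out (an assumption on my side, not a diagnosis): unseeded randomness. A sketch of pinning the usual PyTorch seeds, which reduces, but does not eliminate, GAN training variance:

```python
import random

import numpy as np
import torch


def seed_everything(seed: int = 42) -> None:
    """Pin the common RNGs so two runs start from the same state."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    torch.backends.cudnn.deterministic = True  # trade speed for determinism
    torch.backends.cudnn.benchmark = False
```

Even fully seeded, GAN objectives are unstable enough that run-to-run quality differences on a few thousand images are normal; averaging over several runs is the safer comparison.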
Thanks a lot in advance. | closed | 2020-04-06T11:16:54Z | 2020-04-07T07:03:21Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/979 | [] | fransonf | 2 |
openapi-generators/openapi-python-client | fastapi | 347 | Recursive reference | **Describe the bug**
So... I have this:
```python
from typing import List, Optional

from pydantic import BaseModel


class FolderChildren(BaseModel):
    """
    Folder schema to output from GET folders/{folder_id}/children method.
    """

    id: int
    name: str
    n_docs: int
    children: Optional[List["FolderChildren"]] = None


FolderChildren.update_forward_refs()
```
And the error is:
```bash
Warning(s) encountered while generating. Client was generated, but some pieces may be missing
WARNING parsing GET /api/v1/folders/{folder_id}/children/ within hierarchy.
Cannot parse response for status code 200, response will be ommitted from generated client
Reference(ref='#/components/schemas/FolderChildren')
Unable to parse this part of your OpenAPI document:
invalid data in items of array children
Reference(ref='#/components/schemas/FolderChildren')
```
I believe the problem is the self-reference. Am I wrong?
**Expected behavior**
Not having the error message.
**OpenAPI Spec File**
Can't give you this.
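Since the original spec can't be shared, here is a hypothetical minimal schema of the self-referencing shape involved (reconstructed from the model shown above, not the reporter's document):

```yaml
components:
  schemas:
    FolderChildren:
      type: object
      properties:
        id: {type: integer}
        name: {type: string}
        n_docs: {type: integer}
        children:
          type: array
          items:
            $ref: '#/components/schemas/FolderChildren'
```

The warning points at exactly this pattern: a `$ref` inside `items` that resolves back to the schema being defined.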
**Desktop (please complete the following information):**
- OS: Ubuntu 20.04
- Python Version: 3.6.9
- openapi-python-client version 0.8.0
| closed | 2021-03-12T17:02:45Z | 2021-03-12T17:39:15Z | https://github.com/openapi-generators/openapi-python-client/issues/347 | [
"🐞bug"
] | Kludex | 1 |
explosion/spaCy | deep-learning | 13,275 | Spacy french NER transformer based model fr_dep_news_trf not working |
Hello, we want to use spacy to do NER extraction for french texts. The transformer based model fr_dep_news_trf seems to be broken. The list of entities is always empty.
## How to reproduce the behaviour
<!-- Include a code example or the steps that led to the problem. Please try to be as specific as possible. -->
We create a minimum example to reproduce the issue with google colab
https://colab.research.google.com/drive/1mngC0EBDOP3SAngeTeNRdK2d3EP2Mc-v?authuser=0#scrollTo=eXeJRQvflErl
```
import spacy
doc = nlp("Bonjour, Emmanuel. Bonjour, monsieur. Donc voilà, je fais plein de choses. Biologie, c'est du pire veau, museau, lentilles, c'est voilà. Donc la pièce est bouchée au sep, c'est pareil. Je fais une sauce au sep avec la crème. Ah, ça doit être pas mal aussi. C'est pas mal aussi. Alors on va prendre un petit pot de quoi ? On a le Beaujolais, on a le Saint-Joseph, le Trois-Hermitages. Ah non, je suis une fille du Beaujolais, moi. Merci. Alors attends, je pousse.")
for w in doc.ents:
print(w.text,w.label_)
```
the model doesn't detect anything.
## Your Environment
<!-- Include details of your environment. You can also type `python -m spacy info --markdown` and copy-paste the result here.-->
It's the default colab environment
## Info about spaCy
- **spaCy version:** 3.6.1
- **Platform:** Linux-6.1.58+-x86_64-with-glibc2.35
- **Python version:** 3.10.12
- **Pipelines:** fr_dep_news_trf (3.6.1), fr_core_news_lg (3.6.0), en_core_web_sm (3.6.0)
| closed | 2024-01-25T22:51:42Z | 2024-01-26T08:54:07Z | https://github.com/explosion/spaCy/issues/13275 | [
"lang / fr",
"feat / transformer"
] | zmy1116 | 1 |
ufoym/deepo | tensorflow | 47 | Spyder (or any other IDE) support | Hello,
This docker image makes developing models so much easier! Is there any way to access the ML libraries installed with this docker image through an IDE such Spyder?
The jupyter notebook access is nice, but being able to run code in Spyder would make debugging a little bit easier. | closed | 2018-07-31T17:31:56Z | 2022-01-29T01:45:40Z | https://github.com/ufoym/deepo/issues/47 | [
"stale"
] | sophia-wright-blue | 4 |
autogluon/autogluon | data-science | 4,328 | Using custom model.hf_text.checkpoint_name | ## Description
In Autogluon multimodal, you can specify a text model on Huggingface (say for the sake of the example roberta-base). If I fine-tuned roberta-base using the Transformers library but did not publish to Huggingface, can I still train on that backbone by specifying the path in the model.hf_text.checkpoint_name hyperparameter?
| open | 2024-07-18T00:47:39Z | 2024-07-18T00:47:39Z | https://github.com/autogluon/autogluon/issues/4328 | [
"enhancement"
] | zkalson | 0 |
recommenders-team/recommenders | machine-learning | 1,943 | [BUG] Review GeoIMC movielens | ### Description
<!--- Describe your issue/bug/request in detail -->
When installing with `pip install .[all]`, there is an error that pymanot is not installed.
After installing the latest version of pymaopt: 2.1.1,
We got an error:
```
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
Input In [1], in <cell line: 10>()
8 from recommenders.datasets import movielens
9 from recommenders.models.geoimc.geoimc_data import ML_100K
---> 10 from recommenders.models.geoimc.geoimc_algorithm import IMCProblem
11 from recommenders.models.geoimc.geoimc_predict import Inferer
12 from recommenders.evaluation.python_evaluation import (
13 rmse, mae
14 )
File ~/MS/recommenders/recommenders/models/geoimc/geoimc_algorithm.py:14, in <module>
12 from pymanopt import Problem
13 from pymanopt.manifolds import Stiefel, Product, SymmetricPositiveDefinite
---> 14 from pymanopt.solvers import ConjugateGradient
15 from pymanopt.solvers.linesearch import LineSearchBackTracking
18 class IMCProblem(object):
ModuleNotFoundError: No module named 'pymanopt.solvers'
```
### In which platform does it happen?
<!--- Describe the platform where the issue is happening (use a list if needed) -->
<!--- For example: -->
<!--- * Azure Data Science Virtual Machine. -->
<!--- * Azure Databricks. -->
<!--- * Other platforms. -->
### How do we replicate the issue?
<!--- Please be specific as possible (use a list if needed). -->
<!--- For example: -->
<!--- * Create a conda environment for pyspark -->
<!--- * Run unit test `test_sar_pyspark.py` with `pytest -m 'spark'` -->
<!--- * ... -->
### Expected behavior (i.e. solution)
<!--- For example: -->
<!--- * The tests for SAR PySpark should pass successfully. -->
### Other Comments
| open | 2023-06-19T06:27:18Z | 2023-06-22T11:00:21Z | https://github.com/recommenders-team/recommenders/issues/1943 | [
"bug"
] | miguelgfierro | 1 |
supabase/supabase-py | flask | 717 | Test failures on Python 3.12 | # Bug report
## Describe the bug
Tests are broken against Python 3.12.
```AttributeError: module 'pkgutil' has no attribute 'ImpImporter'. Did you mean: 'zipimporter'?```
## To Reproduce
Run test script in a python 3.12 environment.
## Expected behavior
Tests should not fail.
## Logs
```bash
ERROR: invocation failed (exit code 1), logfile: /Users/harish/Workspaces/oss/supabase/supabase-py/.tox/py312/log/py312-3.log
========================================================================== log start ===========================================================================
ERROR: Exception:
Traceback (most recent call last):
File "/Users/harish/Workspaces/oss/supabase/supabase-py/.tox/py312/lib/python3.12/site-packages/pip/_internal/cli/base_command.py", line 167, in exc_logging_wrapper
status = run_func(*args)
^^^^^^^^^^^^^^^
File "/Users/harish/Workspaces/oss/supabase/supabase-py/.tox/py312/lib/python3.12/site-packages/pip/_internal/cli/req_command.py", line 247, in wrapper
return func(self, options, args)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/harish/Workspaces/oss/supabase/supabase-py/.tox/py312/lib/python3.12/site-packages/pip/_internal/commands/install.py", line 315, in run
session = self.get_default_session(options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/harish/Workspaces/oss/supabase/supabase-py/.tox/py312/lib/python3.12/site-packages/pip/_internal/cli/req_command.py", line 98, in get_default_session
self._session = self.enter_context(self._build_session(options))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/harish/Workspaces/oss/supabase/supabase-py/.tox/py312/lib/python3.12/site-packages/pip/_internal/cli/req_command.py", line 125, in _build_session
session = PipSession(
^^^^^^^^^^^
File "/Users/harish/Workspaces/oss/supabase/supabase-py/.tox/py312/lib/python3.12/site-packages/pip/_internal/network/session.py", line 343, in __init__
self.headers["User-Agent"] = user_agent()
^^^^^^^^^^^^
File "/Users/harish/Workspaces/oss/supabase/supabase-py/.tox/py312/lib/python3.12/site-packages/pip/_internal/network/session.py", line 175, in user_agent
setuptools_dist = get_default_environment().get_distribution("setuptools")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/harish/Workspaces/oss/supabase/supabase-py/.tox/py312/lib/python3.12/site-packages/pip/_internal/metadata/importlib/_envs.py", line 180, in get_distribution
return next(matches, None)
^^^^^^^^^^^^^^^^^^^
File "/Users/harish/Workspaces/oss/supabase/supabase-py/.tox/py312/lib/python3.12/site-packages/pip/_internal/metadata/importlib/_envs.py", line 175, in <genexpr>
matches = (
^
File "/Users/harish/Workspaces/oss/supabase/supabase-py/.tox/py312/lib/python3.12/site-packages/pip/_internal/metadata/base.py", line 594, in iter_all_distributions
for dist in self._iter_distributions():
File "/Users/harish/Workspaces/oss/supabase/supabase-py/.tox/py312/lib/python3.12/site-packages/pip/_internal/metadata/importlib/_envs.py", line 168, in _iter_distributions
for dist in finder.find_eggs(location):
File "/Users/harish/Workspaces/oss/supabase/supabase-py/.tox/py312/lib/python3.12/site-packages/pip/_internal/metadata/importlib/_envs.py", line 136, in find_eggs
yield from self._find_eggs_in_dir(location)
File "/Users/harish/Workspaces/oss/supabase/supabase-py/.tox/py312/lib/python3.12/site-packages/pip/_internal/metadata/importlib/_envs.py", line 103, in _find_eggs_in_dir
from pip._vendor.pkg_resources import find_distributions
File "/Users/harish/Workspaces/oss/supabase/supabase-py/.tox/py312/lib/python3.12/site-packages/pip/_vendor/pkg_resources/__init__.py", line 2164, in <module>
register_finder(pkgutil.ImpImporter, find_on_path)
^^^^^^^^^^^^^^^^^^^
AttributeError: module 'pkgutil' has no attribute 'ImpImporter'. Did you mean: 'zipimporter'?
Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "/Users/harish/Workspaces/oss/supabase/supabase-py/.tox/py312/lib/python3.12/site-packages/pip/__main__.py", line 31, in <module>
sys.exit(_main())
^^^^^^^
File "/Users/harish/Workspaces/oss/supabase/supabase-py/.tox/py312/lib/python3.12/site-packages/pip/_internal/cli/main.py", line 70, in main
return command.main(cmd_args)
^^^^^^^^^^^^^^^^^^^^^^
File "/Users/harish/Workspaces/oss/supabase/supabase-py/.tox/py312/lib/python3.12/site-packages/pip/_internal/cli/base_command.py", line 101, in main
return self._main(args)
^^^^^^^^^^^^^^^^
File "/Users/harish/Workspaces/oss/supabase/supabase-py/.tox/py312/lib/python3.12/site-packages/pip/_internal/cli/base_command.py", line 223, in _main
self.handle_pip_version_check(options)
File "/Users/harish/Workspaces/oss/supabase/supabase-py/.tox/py312/lib/python3.12/site-packages/pip/_internal/cli/req_command.py", line 179, in handle_pip_version_check
session = self._build_session(
^^^^^^^^^^^^^^^^^^^^
File "/Users/harish/Workspaces/oss/supabase/supabase-py/.tox/py312/lib/python3.12/site-packages/pip/_internal/cli/req_command.py", line 125, in _build_session
session = PipSession(
^^^^^^^^^^^
File "/Users/harish/Workspaces/oss/supabase/supabase-py/.tox/py312/lib/python3.12/site-packages/pip/_internal/network/session.py", line 343, in __init__
self.headers["User-Agent"] = user_agent()
^^^^^^^^^^^^
File "/Users/harish/Workspaces/oss/supabase/supabase-py/.tox/py312/lib/python3.12/site-packages/pip/_internal/network/session.py", line 175, in user_agent
setuptools_dist = get_default_environment().get_distribution("setuptools")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/harish/Workspaces/oss/supabase/supabase-py/.tox/py312/lib/python3.12/site-packages/pip/_internal/metadata/importlib/_envs.py", line 180, in get_distribution
return next(matches, None)
^^^^^^^^^^^^^^^^^^^
File "/Users/harish/Workspaces/oss/supabase/supabase-py/.tox/py312/lib/python3.12/site-packages/pip/_internal/metadata/importlib/_envs.py", line 175, in <genexpr>
matches = (
^
File "/Users/harish/Workspaces/oss/supabase/supabase-py/.tox/py312/lib/python3.12/site-packages/pip/_internal/metadata/base.py", line 594, in iter_all_distributions
for dist in self._iter_distributions():
File "/Users/harish/Workspaces/oss/supabase/supabase-py/.tox/py312/lib/python3.12/site-packages/pip/_internal/metadata/importlib/_envs.py", line 168, in _iter_distributions
for dist in finder.find_eggs(location):
File "/Users/harish/Workspaces/oss/supabase/supabase-py/.tox/py312/lib/python3.12/site-packages/pip/_internal/metadata/importlib/_envs.py", line 136, in find_eggs
yield from self._find_eggs_in_dir(location)
File "/Users/harish/Workspaces/oss/supabase/supabase-py/.tox/py312/lib/python3.12/site-packages/pip/_internal/metadata/importlib/_envs.py", line 103, in _find_eggs_in_dir
from pip._vendor.pkg_resources import find_distributions
File "/Users/harish/Workspaces/oss/supabase/supabase-py/.tox/py312/lib/python3.12/site-packages/pip/_vendor/pkg_resources/__init__.py", line 2164, in <module>
register_finder(pkgutil.ImpImporter, find_on_path)
^^^^^^^^^^^^^^^^^^^
AttributeError: module 'pkgutil' has no attribute 'ImpImporter'. Did you mean: 'zipimporter'?
```
## System information
- OS: macOS
## Additional context
Launched tests via a `tox` script. (see https://github.com/supabase-community/supabase-py/issues/696)
| closed | 2024-03-03T05:12:43Z | 2024-03-23T13:24:45Z | https://github.com/supabase/supabase-py/issues/717 | [
"bug"
] | tinvaan | 2 |
danimtb/dasshio | dash | 29 | Dasshio broken in 65.0. | This config worked perfectly in 64.3 and never registers a button press in 65.0.
I have done a rebuild, stop and start, but not an uninstall yet. Log looks good for dasshio.
Maybe the new entity ID stuff?
```
starting version 3.2.4
WARNING: No route found for IPv6 destination :: (no default route?). This affects only IPv6
2018-03-10 18:15:55,107 | INFO | Reading config file: /data/options.json
2018-03-10 18:15:55,113 | INFO | Starting sniffing...
```
```
{
"timeout": 8,
"buttons": [
{
"name": "Hammermill",
"address": "00:fc:8b:54:41:d9",
"domain": "light",
"service": "toggle",
"service_data": "{\"entity_id\": \"light.bathroom_2, light.bathroom_1\"}"
}
]
}
``` | closed | 2018-03-11T00:43:23Z | 2018-05-07T16:41:38Z | https://github.com/danimtb/dasshio/issues/29 | [] | mattlward | 7 |
deezer/spleeter | deep-learning | 635 | [Bug] DDL error from SSL | - [ ] I didn't find a similar issue already open.
- [ ] I read the documentation (README AND Wiki)
- [ ] I have installed FFMpeg
- [ ] My problem is related to Spleeter only, not a derivative product (such as Webapplication, or GUI provided by others)
## Description
I'm getting a DLL error after following the installation instructions by order.
## Step to reproduce
<!-- Indicates clearly steps to reproduce the behavior: -->
1. Installed using the following:
'
conda install -c conda-forge ffmpeg libsndfile
pip install spleeter
'
2. Run Jupyter notebook from visual studio code, use the command
'!spleeter separate -o ./output ./tmp/audio_example.mp3'
3. Got error
'
Traceback (most recent call last):
File "c:\programdata\anaconda3\lib\runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "c:\programdata\anaconda3\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:\ProgramData\Anaconda3\Scripts\spleeter.exe\__main__.py", line 7, in <module>
File "c:\programdata\anaconda3\lib\site-packages\spleeter\__main__.py", line 256, in entrypoint
spleeter()
File "c:\programdata\anaconda3\lib\site-packages\typer\main.py", line 214, in __call__
return get_command(self)(*args, **kwargs)
File "c:\programdata\anaconda3\lib\site-packages\click\core.py", line 829, in __call__
return self.main(*args, **kwargs)
File "c:\programdata\anaconda3\lib\site-packages\click\core.py", line 782, in main
rv = self.invoke(ctx)
File "c:\programdata\anaconda3\lib\site-packages\click\core.py", line 1259, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "c:\programdata\anaconda3\lib\site-packages\click\core.py", line 1066, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "c:\programdata\anaconda3\lib\site-packages\click\core.py", line 610, in invoke
return callback(*args, **kwargs)
File "c:\programdata\anaconda3\lib\site-packages\typer\main.py", line 497, in wrapper
return callback(**use_params) # type: ignore
File "c:\programdata\anaconda3\lib\site-packages\spleeter\__main__.py", line 137, in separate
synchronous=False,
File "c:\programdata\anaconda3\lib\site-packages\spleeter\separator.py", line 382, in separate_to_file
sources = self.separate(waveform, audio_descriptor)
File "c:\programdata\anaconda3\lib\site-packages\spleeter\separator.py", line 325, in separate
return self._separate_librosa(waveform, audio_descriptor)
File "c:\programdata\anaconda3\lib\site-packages\spleeter\separator.py", line 269, in _separate_librosa
sess = self._get_session()
File "c:\programdata\anaconda3\lib\site-packages\spleeter\separator.py", line 240, in _get_session
provider = ModelProvider.default()
File "c:\programdata\anaconda3\lib\site-packages\spleeter\model\provider\__init__.py", line 93, in default
from .github import GithubModelProvider
File "c:\programdata\anaconda3\lib\site-packages\spleeter\model\provider\github.py", line 28, in <module>
import httpx
File "c:\programdata\anaconda3\lib\site-packages\httpx\__init__.py", line 2, in <module>
from ._api import delete, get, head, options, patch, post, put, request, stream
File "c:\programdata\anaconda3\lib\site-packages\httpx\_api.py", line 3, in <module>
from ._client import Client, StreamContextManager
File "c:\programdata\anaconda3\lib\site-packages\httpx\_client.py", line 7, in <module>
import httpcore
File "c:\programdata\anaconda3\lib\site-packages\httpcore\__init__.py", line 2, in <module>
from ._async.connection_pool import AsyncConnectionPool
File "c:\programdata\anaconda3\lib\site-packages\httpcore\_async\connection_pool.py", line 2, in <module>
from ssl import SSLContext
File "c:\programdata\anaconda3\lib\ssl.py", line 98, in <module>
import _ssl # if we can't import it, let the error propagate
ImportError: DLL load failed: The specified module could not be found.
'
## Output
'
Traceback (most recent call last):
File "c:\programdata\anaconda3\lib\runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "c:\programdata\anaconda3\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:\ProgramData\Anaconda3\Scripts\spleeter.exe\__main__.py", line 7, in <module>
File "c:\programdata\anaconda3\lib\site-packages\spleeter\__main__.py", line 256, in entrypoint
spleeter()
File "c:\programdata\anaconda3\lib\site-packages\typer\main.py", line 214, in __call__
return get_command(self)(*args, **kwargs)
File "c:\programdata\anaconda3\lib\site-packages\click\core.py", line 829, in __call__
return self.main(*args, **kwargs)
File "c:\programdata\anaconda3\lib\site-packages\click\core.py", line 782, in main
rv = self.invoke(ctx)
File "c:\programdata\anaconda3\lib\site-packages\click\core.py", line 1259, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "c:\programdata\anaconda3\lib\site-packages\click\core.py", line 1066, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "c:\programdata\anaconda3\lib\site-packages\click\core.py", line 610, in invoke
return callback(*args, **kwargs)
File "c:\programdata\anaconda3\lib\site-packages\typer\main.py", line 497, in wrapper
return callback(**use_params) # type: ignore
File "c:\programdata\anaconda3\lib\site-packages\spleeter\__main__.py", line 137, in separate
synchronous=False,
File "c:\programdata\anaconda3\lib\site-packages\spleeter\separator.py", line 382, in separate_to_file
sources = self.separate(waveform, audio_descriptor)
File "c:\programdata\anaconda3\lib\site-packages\spleeter\separator.py", line 325, in separate
return self._separate_librosa(waveform, audio_descriptor)
File "c:\programdata\anaconda3\lib\site-packages\spleeter\separator.py", line 269, in _separate_librosa
sess = self._get_session()
File "c:\programdata\anaconda3\lib\site-packages\spleeter\separator.py", line 240, in _get_session
provider = ModelProvider.default()
File "c:\programdata\anaconda3\lib\site-packages\spleeter\model\provider\__init__.py", line 93, in default
from .github import GithubModelProvider
File "c:\programdata\anaconda3\lib\site-packages\spleeter\model\provider\github.py", line 28, in <module>
import httpx
File "c:\programdata\anaconda3\lib\site-packages\httpx\__init__.py", line 2, in <module>
from ._api import delete, get, head, options, patch, post, put, request, stream
File "c:\programdata\anaconda3\lib\site-packages\httpx\_api.py", line 3, in <module>
from ._client import Client, StreamContextManager
File "c:\programdata\anaconda3\lib\site-packages\httpx\_client.py", line 7, in <module>
import httpcore
File "c:\programdata\anaconda3\lib\site-packages\httpcore\__init__.py", line 2, in <module>
from ._async.connection_pool import AsyncConnectionPool
File "c:\programdata\anaconda3\lib\site-packages\httpcore\_async\connection_pool.py", line 2, in <module>
from ssl import SSLContext
File "c:\programdata\anaconda3\lib\ssl.py", line 98, in <module>
import _ssl # if we can't import it, let the error propagate
ImportError: DLL load failed: The specified module could not be found.
'
## Environment
<!-- Fill the following table -->
| | |
| ----------------- | ------------------------------- |
| OS | Windows 10 |
| Installation type | pip |
| RAM available | 16GB |
| Hardware spec | CPU + GPU |
## Additional context
At the other (different) closed issues involving DLLs I haven't found direct solution to the problem. | open | 2021-06-28T06:50:06Z | 2021-07-16T09:12:19Z | https://github.com/deezer/spleeter/issues/635 | [
"bug",
"invalid"
] | gitDawn | 0 |
ultrafunkamsterdam/undetected-chromedriver | automation | 1,371 | Two problems prevent browser from starting | 1. The notice will freeze browser.

2.The Preferences has been write too many rubbish data, so the browser wouldn't start


| open | 2023-06-28T08:15:21Z | 2023-06-28T08:15:45Z | https://github.com/ultrafunkamsterdam/undetected-chromedriver/issues/1371 | [] | zhaodice | 0 |
pytest-dev/pytest-mock | pytest | 191 | Serious performance issues from 1.12 and up | The addition of this line (https://github.com/pytest-dev/pytest-mock/commit/6587f795fc7b46053ffbec3f20c60414ae486cf7) that checks if one is inside a context manager caused our test suite to run over 3x slower (5 minutes to 17 minutes).
I understand the desire to report a more useful error message when a user tries to use it as a context manager, but the stack inspection on every patch causes a slowdown that makes pytest-mock unusable for us, especially since our test suite has hundreds of test files. Our observations indicate that the performance degradation is even more pronounced when using pytest-xdist.
One can observe the performance impacts in a simple test example:
In a test.py file:
```
def test_foo(mocker):
mocker.patch('random.randint')
```
Install `pytest-repeat` and `pytest-mock==1.12.0`
```
pytest test.py --count 1000
```
total runtime = 4.45s
Install `pytest-mock==1.11.2`
```
pytest test.py --count 1000
```
total runtime = 2.28s
Now do the same experiment with `pytest-xdist`
with 1.12:
```
pytest test.py --count 1000 -n 4
```
total runtime = 6.67s
with 1.11.2:
```
pytest test.py --count 1000 -n 4
```
total runtime = 3.18s
Although this test case doesn't have performance impacts as pronounced as our test suite, which has hundreds of files and tests thousands of python files, it is still upwards of a 2x slowdown.
Can this check be removed altogether? IMO it is not worth it to slow down a test suite this much for a more useful error message in a few rare circumstances. Would it be possible to instead try to catch the unhelpful error message and then re-raise it as something helpful instead if this check is really aiding in the UX of the library? Thanks for your consideration | closed | 2020-05-29T18:25:31Z | 2020-06-04T14:35:52Z | https://github.com/pytest-dev/pytest-mock/issues/191 | [] | wesleykendall | 3 |
ipython/ipython | jupyter | 13,938 | History search / tab completion does not behave as expected. | Hello
I am on ipython 8.9.0, using python 3.11.0. This is a fresh install on a completely clean Ubuntu 22.04, using pyenv as the virtual environment manager. This is a setup that I have used elsewhere without (until now) any need for additional setings.
On (all of) my systems, I always set bash's history navigation to work in [this way](https://stackoverflow.com/questions/3535192/matlab-like-command-history-retrieval-in-unix-command-line) and the same works within ipython without any extra settings.
With the configuration mentioned above, when I start ipython and try to navigate in history, the search works but instead of bringing back the whole line, marking it as "active", it brings back the rest of the line but shaded. For example:

To "activate" that line, I have to press "End"...which is a bit annoying.
But perhaps more importrantly, tab completion does not work. So, as I stand there looking at that line, taping "Tab" will not make the rest of the line "active" (in the way I described hapenning with the "End" key)
Any ideas what might be going on here?
(I don't remember if I had the libreadline-dev package installed when I install python 3.11 using pyenv, could this be a factor in this?) | closed | 2023-02-09T12:42:18Z | 2023-05-15T12:14:18Z | https://github.com/ipython/ipython/issues/13938 | [
"documentation",
"tab-completion",
"autosuggestions"
] | aanastasiou | 16 |
proplot-dev/proplot | data-visualization | 142 | Cannot use an already registered cmap that have a dot '.' in its name | https://github.com/lukelbd/proplot/blob/3f065778edce6e07b109e436b504238c7194f04f/proplot/styletools.py#L2212
It is necessary to check if a given cmap name is is that of an already registered
colormap before trying to load it from disk.
All cmocean colormaps are registered with a dot in their name, and therefore wen cannot load them through proplot wrappers.
````python
import cmocean
import numpy as np
import proplot as pplt
fig, axs = pplt.subplots()
axs.contourf(np.arange(5*4).reshape(4,5), cmap='cmo.thermal')
```` | closed | 2020-04-21T09:39:19Z | 2020-05-09T21:22:09Z | https://github.com/proplot-dev/proplot/issues/142 | [
"bug"
] | stefraynaud | 2 |
coqui-ai/TTS | pytorch | 2,908 | [Bug] Strange wav sound from short text | ### Describe the bug
I am just a simple user who wants to use coqui for doing text to speech. So I entered
`tts --model_name tts_models/en/ljspeech/tacotron2-DDC --out_path out.wav --text 'Another way is'`
and get a very interesting sound lasting1:56. She kind of starts singing. Different text seems to work fine.
And if this is the wrong place to ask this question please let me know where I should ask,
### To Reproduce
tts --model_name tts_models/en/ljspeech/tacotron2-DDC --out_path out.wav --text 'Another way is'
### Expected behavior
A wav file of one or two seconds length saying "Another way is"
### Logs
```shell
vega> tts --model_name tts_models/en/ljspeech/tacotron2-DDC --out_path out.wav --text 'Another way is'
> tts_models/en/ljspeech/tacotron2-DDC is already downloaded.
> vocoder_models/en/ljspeech/hifigan_v2 is already downloaded.
> Using model: Tacotron2
> Setting up Audio Processor...
| > sample_rate:22050
| > resample:False
| > num_mels:80
| > log_func:np.log
| > min_level_db:-100
| > frame_shift_ms:None
| > frame_length_ms:None
| > ref_level_db:20
| > fft_size:1024
| > power:1.5
| > preemphasis:0.0
| > griffin_lim_iters:60
| > signal_norm:False
| > symmetric_norm:True
| > mel_fmin:0
| > mel_fmax:8000.0
| > pitch_fmin:1.0
| > pitch_fmax:640.0
| > spec_gain:1.0
| > stft_pad_mode:reflect
| > max_norm:4.0
| > clip_norm:True
| > do_trim_silence:True
| > trim_db:60
| > do_sound_norm:False
| > do_amp_to_db_linear:True
| > do_amp_to_db_mel:True
| > do_rms_norm:False
| > db_level:None
| > stats_path:None
| > base:2.718281828459045
| > hop_length:256
| > win_length:1024
> Model's reduction rate `r` is set to: 1
> Vocoder Model: hifigan
> Setting up Audio Processor...
| > sample_rate:22050
| > resample:False
| > num_mels:80
| > log_func:np.log
| > min_level_db:-100
| > frame_shift_ms:None
| > frame_length_ms:None
| > ref_level_db:20
| > fft_size:1024
| > power:1.5
| > preemphasis:0.0
| > griffin_lim_iters:60
| > signal_norm:False
| > symmetric_norm:True
| > mel_fmin:0
| > mel_fmax:8000.0
| > pitch_fmin:1.0
| > pitch_fmax:640.0
| > spec_gain:1.0
| > stft_pad_mode:reflect
| > max_norm:4.0
| > clip_norm:True
| > do_trim_silence:False
| > trim_db:60
| > do_sound_norm:False
| > do_amp_to_db_linear:True
| > do_amp_to_db_mel:True
| > do_rms_norm:False
| > db_level:None
| > stats_path:None
| > base:2.718281828459045
| > hop_length:256
| > win_length:1024
> Generator Model: hifigan_generator
> Discriminator Model: hifigan_discriminator
Removing weight norm...
> Text: Another way is
> Text splitted to sentences.
['Another way is']
> Decoder stopped with `max_decoder_steps` 10000
> Processing time: 72.94576692581177
> Real-time factor: 0.6252348480556914
> Saving output to out.wav
```
```
### Environment
shell
{
"CUDA": {
"GPU": [
"NVIDIA GeForce GTX 750 Ti"
],
"available": true,
"version": "11.8"
},
"Packages": {
"PyTorch_debug": false,
"PyTorch_version": "2.0.1+cu118",
"TTS": "0.16.5",
"numpy": "1.22.0"
},
"System": {
"OS": "Windows",
"architecture": [
"64bit",
"WindowsPE"
],
"processor": "Intel64 Family 6 Model 60 Stepping 3, GenuineIntel",
"python": "3.9.13",
"version": "10.0.19045"
}
}
```
### Additional context
_No response_ | closed | 2023-08-31T21:34:37Z | 2023-09-04T09:32:50Z | https://github.com/coqui-ai/TTS/issues/2908 | [
"bug"
] | Hk1020 | 4 |
pyg-team/pytorch_geometric | pytorch | 9,359 | Implementation of "Node Similarity Measures" | ### 🚀 The feature, motivation and pitch
I recently came across the paper [A Survey on Oversmoothing in Graph Neural Networks](https://arxiv.org/abs/2303.10993) and thougt that having a ready-to-use implementation of the **Dirichlet energy** and the **Mean Average Distance** would make diagnosing oversmoothing much easier.
### Alternatives
I saw that `PairNorm` has been implemented but I think it'd be nice to be able to quantitatively examine the behavior of node similarity (at least for academic & research purposes).
### Additional context
If this is something people would find useful if it was included in `pytorch_geometric` I could work on this.
**Edit**: Corrected paper link | open | 2024-05-25T13:54:46Z | 2024-06-06T07:35:45Z | https://github.com/pyg-team/pytorch_geometric/issues/9359 | [
"feature"
] | lettlini | 4 |
reloadware/reloadium | flask | 74 | Pycharm plugin 0.9.0 not support for Python 3.10.6 On M2 | **Describe the bug**
When I run the orange button, occurs error:
It seems like your platform or Python version are not supported yet.
Windows, Linux, macOS and Python 64 bit >= 3.7 (>= 3.9 for M1) <= 3.10 are currently supported.
**Desktop (please complete the following information):**
- OS: MacOS
- OS version: 12.5.1
- M1 chip: M2
- Reloadium package version: None
- PyCharm plugin version: 0.9.0
- Editor: pycharm
- Python Version: 3.10.6
| closed | 2022-11-29T06:56:26Z | 2022-11-29T23:09:58Z | https://github.com/reloadware/reloadium/issues/74 | [] | endimirion | 1 |
graphistry/pygraphistry | pandas | 485 | Get graph result | Currently, getting a table result is doable
* We should support getting paired node & edge dataframes back, ideally as graph objects:
```python
import graphistry
def query_graph(query: str, named_params: json) -> graphistry.Plottable:
edges_df, src_col, dst_col = ...
nodes_df, node_col = ...
return (graphistry
.edges(edges_df, src_col, dst_col)
.nodes(nodes_df, node_col))
```
* If only nodes or only edges are available, it's ok to specify only one of `nodes()` or `edges()`
* If a type/label is known, it should be given column name 'type'
* If a value type like float vs datetime is known, it should be matched to the corresponding pandas/arrow types
* A few sample queries should be provided
| closed | 2023-05-13T03:07:00Z | 2023-06-15T01:22:26Z | https://github.com/graphistry/pygraphistry/issues/485 | [
"enhancement",
"p3",
"neptune"
] | lmeyerov | 2 |
huggingface/datasets | computer-vision | 6,478 | How to load data from lakefs | My dataset is stored on the company's lakefs server. How can I write code to load the dataset? It would be great if I could provide code examples or provide some references
| closed | 2023-12-06T09:04:11Z | 2024-07-03T19:13:57Z | https://github.com/huggingface/datasets/issues/6478 | [] | d710055071 | 3 |
dolevf/graphw00f | graphql | 5 | Create an Attack Surface Matrix Document for AWS AppSync | Graphw00f 1.0.8 has a new AWS AppSync fingerprint signature. It will be useful to create an attack surface matrix markdown file under `docs/` for it to list the type of security features it offers and whether its vulnerable by default to GraphQL-ish things. | closed | 2022-03-25T13:17:46Z | 2022-05-08T01:28:36Z | https://github.com/dolevf/graphw00f/issues/5 | [
"documentation",
"good first issue"
] | dolevf | 1 |
deepspeedai/DeepSpeed | pytorch | 7,012 | nv-ds-chat CI test failure | The Nightly CI for https://github.com/deepspeedai/DeepSpeed/actions/runs/13230975186 failed.
| closed | 2025-02-07T00:26:40Z | 2025-02-10T22:30:14Z | https://github.com/deepspeedai/DeepSpeed/issues/7012 | [
"ci-failure"
] | github-actions[bot] | 0 |
PokeAPI/pokeapi | graphql | 883 | Some requests keep throwing 403: Forbidden | <!--
Thanks for contributing to the PokéAPI project. To make sure we're effective, please check the following:
- Make sure your issue hasn't already been submitted on the issues tab. (It has search functionality!)
- If your issue is one of outdated API data, please note that we get our data from [veekun](https://github.com/veekun/pokedex/). If they are not up to date either, please look for or create an issue there. Otherwise, feel free to create an issue here.
- Provide a clear description of the issue.
- Provide a clear description of the steps to reproduce.
- Provide a clear description of the expected behavior.
Thank you!
-->
Steps to Reproduce:
1. Use the API
2. Some request will start returning 403 (at the moment for me - https://pokeapi.co/api/v2/pokemon-species/frosmoth/)
3. Now for a while it will be returning the error no matter what...
This has only started happening recently. I tried setting the `user-agent` header and making the calls with `no-cache`. Yet once a call fails, the endpoint just keeps returning the error for a while. Any suggestions on how to avoid this issue?
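For now I'm working around it client-side with a retry plus exponential backoff; a sketch (the delays and the User-Agent string are arbitrary choices, not anything the API requires):

```python
# Sketch: retry a GET with exponential backoff when the API sporadically
# returns 403 or a 5xx. Uses only the standard library.
import time
import urllib.error
import urllib.request


def backoff_delays(base: float = 1.0, factor: float = 2.0, retries: int = 4):
    """Return the delay schedule in seconds, e.g. [1, 2, 4, 8]."""
    return [base * factor ** i for i in range(retries)]


def get_with_retry(url: str, retries: int = 4):
    """Fetch `url`, retrying on 403/5xx; raises the last HTTPError if all fail."""
    req = urllib.request.Request(url, headers={"User-Agent": "my-pokedex-app/1.0"})
    last_err = None
    for delay in backoff_delays(retries=retries):
        try:
            return urllib.request.urlopen(req, timeout=10)
        except urllib.error.HTTPError as err:
            if err.code not in (403, 500, 502, 503):
                raise  # other errors are real, don't mask them
            last_err = err
            time.sleep(delay)
    raise last_err
```

It papers over the symptom, but obviously doesn't explain why the 403s start in the first place.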
| closed | 2023-05-18T14:55:22Z | 2023-05-19T09:38:52Z | https://github.com/PokeAPI/pokeapi/issues/883 | [] | PauliusRap | 1 |
sammchardy/python-binance | api | 1,399 | Binance is upgrading futures websocket | Binance just announced that they are upgrading futures websocket:
https://binance-docs.github.io/apidocs/futures/en/#change-log
> Binance Future is doing Websocket Service upgrade and the upgrade impacts the following:
>
> Before upgrade:
>
> The websocket server will send a ping frame every 3 minutes. If the websocket server does not receive a pong frame back from the connection within a 10 minute period, the connection will be disconnected. Unsolicited pong frames are allowed.
>
> After upgrade:
>
> Websocket server will send a ping frame every 3 minutes.
> If the websocket server does not receive a pong frame back from the connection within a 10 minute period, the connection will be disconnected.
> When you receive a ping, you must send a pong with a copy of ping's payload as soon as possible.
> Unsolicited pong frames are allowed, but will not prevent disconnection. It is recommended that the payload for these pong frames are empty.
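If I read the changelog right, for clients that manage the socket themselves the new rule boils down to the following (a sketch of the payload handling only; actual frame I/O is left to whatever websocket library is in use, which typically answers pings automatically):

```python
# Sketch of the new pong rule from the changelog: a pong replying to a
# server ping must echo the ping's payload, while unsolicited heartbeat
# pongs should carry an empty payload.

def pong_payload(ping_payload: bytes) -> bytes:
    """Reply to a server ping with a copy of its payload."""
    return bytes(ping_payload)


def unsolicited_pong_payload() -> bytes:
    """Pongs sent on our own initiative are recommended to be empty."""
    return b""
```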
Will this affect the current version (1.0.19)? | closed | 2024-02-09T17:03:00Z | 2024-02-23T10:40:21Z | https://github.com/sammchardy/python-binance/issues/1399 | [] | tsunamilx | 2 |
ageitgey/face_recognition | machine-learning | 1,050 | facial features find incorrect | * face_recognition version: last
* Python version: 3.6
* Operating System: ubuntu16.04
### Description
I use face_recognition.face_landmarks to find the mouth features, but it does not return the correct area when the mouth is opened wide. How can I solve this?
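As a workaround I'm considering computing the mouth region from the landmark points myself, with some padding; a sketch (the points are plain `(x, y)` tuples, as returned under the `top_lip`/`bottom_lip` keys, and the padding value is arbitrary):

```python
# Sketch: padded bounding box around the lip landmarks, so a wide-open
# mouth is still fully covered. `top_lip` and `bottom_lip` are lists of
# (x, y) tuples as returned by face_recognition.face_landmarks().

def mouth_bbox(top_lip, bottom_lip, pad: int = 10):
    """Return (left, top, right, bottom) around all lip points, expanded by `pad`."""
    points = list(top_lip) + list(bottom_lip)
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (min(xs) - pad, min(ys) - pad, max(xs) + pad, max(ys) + pad)
```

Is there a better built-in way to get a mouth region that tracks an open mouth?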
| open | 2020-02-11T06:29:45Z | 2020-02-11T06:29:45Z | https://github.com/ageitgey/face_recognition/issues/1050 | [] | S534435877 | 0 |
cobrateam/splinter | automation | 842 | splinter 0.15.0 release | closed | 2020-11-05T16:06:31Z | 2021-07-10T18:04:27Z | https://github.com/cobrateam/splinter/issues/842 | [] | andrewsmedina | 1 | |
rio-labs/rio | data-visualization | 92 | Ripple Effect Exceeds `Card` Borders | ### Describe the bug
The ripple effect on the `Card` component currently extends beyond the borders of the `Card`, including the corners. This behavior is visually inconsistent and should be confined within the card's boundaries.
### Experienced Behavior
On clicking the `Card`, the ripple effect works but also ripples outside of the card, including the corners.
### Expected Behavior
The ripple effect should only be applied within the card's borders, respecting the card's corner radius.
### Screenshots/Videos
https://github.com/rio-labs/rio/assets/41641225/00467da4-f11f-447b-b5b9-f1611ac7d008
### Operating System
Windows, MacOS, Linux
### What browsers are you seeing the problem on?
Chrome, Firefox, Safari, Edge, Internet Explorer
### Browser version
_No response_
### What device are you using?
Desktop, Mobile
### Additional context
_No response_ | closed | 2024-07-07T14:43:12Z | 2024-07-07T15:05:23Z | https://github.com/rio-labs/rio/issues/92 | [
"bug",
"layout rework"
] | Sn3llius | 0 |
mirumee/ariadne-codegen | graphql | 252 | Can we generate client with Sync and ASync function | I am using the codegen to create a Python client; my schema has subscriptions, queries, and mutations.
The subscriptions need to be async, but at the same time the queries and mutations need to be sync.
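A workaround sketch I'm considering (assuming the generated query/mutation methods are coroutine functions; the `get_user` method name here is hypothetical, not from the generated client):

```python
# Sketch: expose an async method as a sync call by driving it with
# asyncio.run(), while leaving subscriptions fully async.
import asyncio
from functools import wraps


def make_sync(async_fn):
    """Wrap a coroutine function so it can be called like a plain function."""
    @wraps(async_fn)
    def wrapper(*args, **kwargs):
        return asyncio.run(async_fn(*args, **kwargs))
    return wrapper


async def get_user(user_id: int) -> dict:  # stand-in for a generated query method
    return {"id": user_id}


get_user_sync = make_sync(get_user)
```

The obvious caveat is that `asyncio.run()` fails if called from inside an already-running event loop, so this only works from purely synchronous code.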
Is there a way to do it? | open | 2023-12-15T09:49:47Z | 2024-05-16T04:27:12Z | https://github.com/mirumee/ariadne-codegen/issues/252 | [] | imadmoussa1 | 3
deeppavlov/DeepPavlov | nlp | 1,000 | Add readme to the 'examples' folder | We need a description of the examples. | closed | 2019-09-17T13:44:04Z | 2019-09-24T10:50:03Z | https://github.com/deeppavlov/DeepPavlov/issues/1000 | [] | DeepPavlovAdmin | 1
pytorch/pytorch | machine-learning | 149,422 | Pip-installed pytorch limits threads to 1 when setting GOMP_CPU_AFFINITY (likely due to bundled GOMP) | ### 🐛 Describe the bug
Pip-installed pytorch limits threads to 1 when GOMP_CPU_AFFINITY is set, while a pytorch build from source does not have this problem. The pip-installed pytorch uses a bundled GOMP.
Here is a C++ test case that reproduces it.
```
#include <stdio.h>
#include <omp.h>
#include <torch/torch.h>
int main() {
printf("omp_get_max_threads %d\n", omp_get_max_threads());
printf("at::get_num_threads %d\n", at::get_num_threads());
return 0;
}
```
compile command:
```
g++ -I<PYTHON_INSTALL_DIR>/site-packages/torch/include/torch/csrc/api/include/ -I<PYTHON_INSTALL_DIR>/site-packages/torch/include/ -fopenmp test.cpp -o test.o -L<PYTHON_INSTALL_DIR>/site-packages/torch/lib -ltorch -ltorch_cpu -lc10 -D_GLIBCXX_USE_CXX11_ABI=0
```
the result with pip install pytorch

the result with pytorch build from source code

### Versions
Collecting environment information...
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 12.3.0-1ubuntu1~22.04) 12.3.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.10.16 | packaged by conda-forge | (main, Dec 5 2024, 14:16:10) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-5.15.47+prerelease6469.7-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 224
On-line CPU(s) list: 0-223
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8480+
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 56
Socket(s): 2
Stepping: 8
CPU max MHz: 3800.0000
CPU min MHz: 800.0000
BogoMIPS: 4000.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr avx512_fp16 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 5.3 MiB (112 instances)
L1i cache: 3.5 MiB (112 instances)
L2 cache: 224 MiB (112 instances)
L3 cache: 210 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-55,112-167
NUMA node1 CPU(s): 56-111,168-223
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.6.0
[pip3] triton==3.2.0
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] torch 2.6.0 pypi_0 pypi
[conda] triton 3.2.0 pypi_0 pypi
cc @seemethere @malfet @osalpekar @atalman | open | 2025-03-18T19:04:32Z | 2025-03-21T02:25:30Z | https://github.com/pytorch/pytorch/issues/149422 | [
"module: binaries",
"triaged"
] | yuchengliu1 | 4 |
healthchecks/healthchecks | django | 608 | Adding project doesn't work | Hi,
I installed healthchecks v1.25.0 as a Docker instance from https://hub.docker.com/r/linuxserver/healthchecks. I can't add a new project; the result is a Server Error (500) in the browser. I tried sqlite and postgres as the database, with the same result. With a PostgreSQL server I see this error message:
```
ERROR: null value in column "owner_id" violates not-null constraint
DETAIL: Failing row contains (2, 5be737bc-16f6-4f99-9899-4f3500230e62, testproject, , , null, 1233456, null, f).
STATEMENT: INSERT INTO "accounts_project" ("code", "name", "owner_id", "api_key", "api_key_readonly", "badge_key", "ping_key", "show_slugs") VALUES ('5be737bc-16f6-4f99-9899-4f3500230e62'::uuid, E'testproject', NULL, E'', E'', E'1233456', NULL, false) RETURNING "accounts_project"."id"
```
The owner_id is missing from the SQL statement. In Healthchecks I see a dash as the owner on the Add Project page; it looks like it should be the current username, as shown in the Default Project.
Regards
Ralf | closed | 2022-02-11T18:36:13Z | 2022-02-14T14:17:36Z | https://github.com/healthchecks/healthchecks/issues/608 | [] | rafaelorafaelo | 5 |
iperov/DeepFaceLab | deep-learning | 941 | Error using Xseg trainer | **Please help, i have no idea what this means, bad installation perhaps??**
**I thought tensorflow was already included**
**Following error when using Xseg trainer:**
Running trainer.
Model first run.
Choose one or several GPU idxs (separated by comma).
[CPU] : CPU
[0] : GeForce GTX 1080 Ti
[0] Which GPU indexes to choose? : 0
0
[h] Face type ( h/mf/f/wf/head ?:help ) :
h
[4] Batch_size ( 2-16 ?:help ) : 2
2
Traceback (most recent call last):
File "C:\Deepfake\DeepFaceLab\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 58, in <module>
from tensorflow.python.pywrap_tensorflow_internal import *
File "C:\Deepfake\DeepFaceLab\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 28, in <module>
_pywrap_tensorflow_internal = swig_import_helper()
File "C:\Deepfake\DeepFaceLab\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 24, in swig_import_helper
_mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
File "imp.py", line 243, in load_module
File "imp.py", line 343, in load_dynamic
ImportError: DLL load failed: The paging file is too small for this operation to complete.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "multiprocessing\spawn.py", line 105, in spawn_main
File "multiprocessing\spawn.py", line 115, in _main
File "C:\Deepfake\DeepFaceLab\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\core\leras\initializers\__init__.py", line 2, in <module>
from tensorflow.python.ops import init_ops
File "C:\Deepfake\DeepFaceLab\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\__init__.py", line 24, in <module>
from tensorflow.python import pywrap_tensorflow # pylint: disable=unused-import
File "C:\Deepfake\DeepFaceLab\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\__init__.py", line 49, in <module>
from tensorflow.python import pywrap_tensorflow
File "C:\Deepfake\DeepFaceLab\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 74, in <module>
raise ImportError(msg)
ImportError: Traceback (most recent call last):
File "C:\Deepfake\DeepFaceLab\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 58, in <module>
from tensorflow.python.pywrap_tensorflow_internal import *
File "C:\Deepfake\DeepFaceLab\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 28, in <module>
_pywrap_tensorflow_internal = swig_import_helper()
File "C:\Deepfake\DeepFaceLab\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 24, in swig_import_helper
_mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
File "imp.py", line 243, in load_module
File "imp.py", line 343, in load_dynamic
ImportError: DLL load failed: The paging file is too small for this operation to complete.
Failed to load the native TensorFlow runtime.
See https://www.tensorflow.org/install/errors
for some common reasons and solutions. Include the entire stack trace
above this error message when asking for help.
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "multiprocessing\spawn.py", line 105, in spawn_main
File "multiprocessing\spawn.py", line 115, in _main
File "C:\Deepfake\DeepFaceLab\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\core\leras\__init__.py", line 1, in <module>
from .nn import nn
File "C:\Deepfake\DeepFaceLab\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\core\leras\nn.py", line 26, in <module>
from core.interact import interact as io
File "C:\Deepfake\DeepFaceLab\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\core\interact\__init__.py", line 1, in <module>
from .interact import interact
File "C:\Deepfake\DeepFaceLab\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\core\interact\interact.py", line 9, in <module>
import cv2
File "C:\Deepfake\DeepFaceLab\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\cv2\__init__.py", line 3, in <module>
from .cv2 import *
ImportError: DLL load failed: The paging file is too small for this operation to complete.
File "imp.py", line 243, in load_module
File "imp.py", line 343, in load_dynamic
ImportError: DLL load failed: The paging file is too small for this operation to complete.
Failed to load the native TensorFlow runtime.
See https://www.tensorflow.org/install/errors
for some common reasons and solutions. Include the entire stack trace
above this error message when asking for help. from tensorflow.python.ops import init_ops File "C:\Deepfake\DeepFaceLab\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\__init__.py", line 24, in <module>
from tensorflow.python import pywrap_tensorflow # pylint: disable=unused-import
File "C:\Deepfake\DeepFaceLab\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\__init__.py", line 24, in <module>
from tensorflow.python import pywrap_tensorflow # pylint: disable=unused-import File "C:\Deepfake\DeepFaceLab\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\__init__.py", line 49, in <module>
from tensorflow.python import pywrap_tensorflow # pylint: disable=unused-importfrom tensorflow.python import pywrap_tensorflow File "C:\Deepfake\DeepFaceLab\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\__init__.py", line 49, in <module>
File "C:\Deepfake\DeepFaceLab\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\__init__.py", line 49, in <module>
File "C:\Deepfake\DeepFaceLab\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 74, in <module>
from tensorflow.python import pywrap_tensorflow
from tensorflow.python import pywrap_tensorflow File "C:\Deepfake\DeepFaceLab\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 74, in <module>
raise ImportError(msg) File "C:\Deepfake\DeepFaceLab\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 74, in <module>
raise ImportError(msg) ImportError
raise ImportError(msg): ImportError
: ImportErrorTraceback (most recent call last):
File "C:\Deepfake\DeepFaceLab\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 58, in <module>
from tensorflow.python.pywrap_tensorflow_internal import *
File "C:\Deepfake\DeepFaceLab\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 28, in <module>
_pywrap_tensorflow_internal = swig_import_helper()
File "C:\Deepfake\DeepFaceLab\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 24, in swig_import_helper
_mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
File "imp.py", line 243, in load_module
File "imp.py", line 343, in load_dynamic
ImportError: DLL load failed: The paging file is too small for this operation to complete.
Failed to load the native TensorFlow runtime.
See https://www.tensorflow.org/install/errors
for some common reasons and solutions. Include the entire stack trace
above this error message when asking for help.: Traceback (most recent call last):
File "C:\Deepfake\DeepFaceLab\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 58, in <module>
from tensorflow.python.pywrap_tensorflow_internal import *
File "C:\Deepfake\DeepFaceLab\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 28, in <module>
_pywrap_tensorflow_internal = swig_import_helper()
File "C:\Deepfake\DeepFaceLab\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 24, in swig_import_helper
_mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
File "imp.py", line 243, in load_module
File "imp.py", line 343, in load_dynamic
ImportError: DLL load failed: The paging file is too small for this operation to complete.
Failed to load the native TensorFlow runtime.
See https://www.tensorflow.org/install/errors
for some common reasons and solutions. Include the entire stack trace
above this error message when asking for help.
Traceback (most recent call last):
File "C:\Deepfake\DeepFaceLab\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 58, in <module>
from tensorflow.python.pywrap_tensorflow_internal import *
File "C:\Deepfake\DeepFaceLab\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 28, in <module>
_pywrap_tensorflow_internal = swig_import_helper()
File "C:\Deepfake\DeepFaceLab\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 24, in swig_import_helper
_mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
File "imp.py", line 243, in load_module
File "imp.py", line 343, in load_dynamic
ImportError: DLL load failed: The paging file is too small for this operation to complete.
Failed to load the native TensorFlow runtime.
See https://www.tensorflow.org/install/errors
for some common reasons and solutions. Include the entire stack trace
above this error message when asking for help. | open | 2020-11-05T11:05:05Z | 2023-06-08T21:38:09Z | https://github.com/iperov/DeepFaceLab/issues/941 | [] | LukeU123 | 3 |
Gozargah/Marzban | api | 1,531 | Marzban does not let me change the Xray API port | **Describe the bug**
I tried to enable Xray API feature according to this guide: https://xtls.github.io/en/config/api.html#apiobject. This is my API object:
```
"api": {
"tag": "api",
"listen": "127.0.0.1:7761",
"services": [
"HandlerService",
"LoggerService",
"StatsService"
]
}
```
The problem is that Marzban uses a random port for the Xray API and does not let me change it, so I cannot access the Xray API on my chosen port. I urgently need this feature for my automated setup. If it is not possible to set a custom Xray API port, then how can I find out which port Marzban assigned? Don't confuse the Xray API with the Marzban API.
The problem is not reproducible using plain Xray.
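For context while this is unresolved: one conceivable stopgap is to scrape the randomly assigned port out of `netstat`-style output. A rough sketch in Python (the sample lines, port and process name below are made up for illustration, not actual Marzban output):

```python
import re

# Hypothetical `netstat -plnt`-style output; the xray line, port and PID are invented.
NETSTAT_OUTPUT = """\
tcp  0  0 127.0.0.1:62050  0.0.0.0:*  LISTEN  1234/xray
tcp  0  0 0.0.0.0:8000     0.0.0.0:*  LISTEN  1200/gunicorn
"""

def find_xray_api_port(netstat_text):
    """Return the local port of the first LISTENing line whose process mentions xray."""
    for line in netstat_text.splitlines():
        if "LISTEN" in line and "xray" in line:
            match = re.search(r":(\d+)\s", line)
            if match:
                return int(match.group(1))
    return None

print(find_xray_api_port(NETSTAT_OUTPUT))  # 62050 in this made-up sample
```

This is obviously fragile compared to a fixed, configurable port, which is the actual feature request.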
**To Reproduce**
Steps to reproduce the behavior:
1. Add the provided API object into your Xray config.
2. Using CLI, try to get the server stats from Xray API: `./$XRAY_PATH api stats --server="127.0.0.1:7761"`
3. See error
**Expected behavior**
Xray API is accessible through the specified port.
**Machine details (please complete the following information):**
- OS: Ubuntu 22.04
- Python version: 3.10.12
- Nodejs version: (not installed)
- Browser: Opera
**Additional context**
Using `sudo netstat -plnt`, we can actually find out which port the Xray API listens on, but because the port is randomized, this is hard to use in automated environments. | closed | 2024-12-20T10:52:08Z | 2024-12-20T13:19:03Z | https://github.com/Gozargah/Marzban/issues/1531 | [] | TenderDen | 1
healthchecks/healthchecks | django | 355 | 2FA support: U2F | Add support for 2FA using U2F security keys. | closed | 2020-04-06T07:20:23Z | 2021-01-18T17:41:14Z | https://github.com/healthchecks/healthchecks/issues/355 | [] | cuu508 | 2 |
holoviz/panel | plotly | 7,219 | build-docs fails because of missing xserver | I was trying to build the docs by running `build-docs`. I get
```bash
Successfully converted examples/gallery/streaming_videostream.ipynb to pyodide-worker target and wrote output to streaming_videostream.html.
/home/jovyan/repos/private/panel/.pixi/envs/docs/lib/python3.11/site-packages/pyvista/plotting/plotter.py:159: UserWarning:
This system does not appear to be running an xserver.
PyVista will likely segfault when rendering.
Try starting a virtual frame buffer with xvfb, or using
``pyvista.start_xvfb()``
warnings.warn(
2024-09-01 04:06:18.005 ( 1.726s) [ 7F029815D740]vtkXOpenGLRenderWindow.:456 ERR| vtkXOpenGLRenderWindow (0x5579fc8fd490): bad X server connection. DISPLAY=
ERROR:root:bad X server connection. DISPLAY=
Traceback (most recent call last):
File "/home/jovyan/repos/private/panel/.pixi/envs/docs/bin/panel", line 8, in <module>
sys.exit(main())
^^^^^^
File "/home/jovyan/repos/private/panel/panel/command/__init__.py", line 101, in main
ret = Convert(parser).invoke(args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jovyan/repos/private/panel/panel/command/convert.py", line 113, in invoke
convert_apps(
File "/home/jovyan/repos/private/panel/panel/io/convert.py", line 583, in convert_apps
files = _convert_process_pool(
^^^^^^^^^^^^^^^^^^^^^^
File "/home/jovyan/repos/private/panel/panel/io/convert.py", line 483, in _convert_process_pool
result = future.result()
^^^^^^^^^^^^^^^
File "/home/jovyan/repos/private/panel/.pixi/envs/docs/lib/python3.11/concurrent/futures/_base.py", line 449, in result
return self.__get_result()
^^^^^^^^^^^^^^^^^^^
File "/home/jovyan/repos/private/panel/.pixi/envs/docs/lib/python3.11/concurrent/futures/_base.py", line 401, in __get_result
raise self._exception
concurrent.futures.process.BrokenProcessPool: A process in the process pool was terminated abruptly while the future was running or pending.
```
I'm running on linux inside a docker container on a JupyterHub. Probably something needs to be installed or configured? But what.
A solution or workaround for me to be able to build the docs and work with them would be highly appreciated. | closed | 2024-09-01T04:14:49Z | 2024-09-09T10:32:49Z | https://github.com/holoviz/panel/issues/7219 | [
"type: docs"
] | MarcSkovMadsen | 2 |
rougier/from-python-to-numpy | numpy | 2 | Introduction chapter | To be written | closed | 2016-12-12T10:14:17Z | 2016-12-22T16:17:54Z | https://github.com/rougier/from-python-to-numpy/issues/2 | [
"Done",
"Needs review"
] | rougier | 0 |
modelscope/modelscope | nlp | 619 | Trying to load a model from a local path, but it is downloaded from ModelScope into .cache every time | I am trying to load the model from a local path, but every time it is downloaded from ModelScope into .cache.
Even if I cp the model from .cache to a specified path and try to load from that path, it still loads from .cache; and if the model is not in .cache, it still goes and pulls it from ModelScope instead of loading it locally.
Model page: https://www.modelscope.cn/models/damo/speech_paraformer-large-contextual_asr_nat-zh-cn-16k-common-vocab8404/summary | closed | 2023-11-06T09:25:23Z | 2024-06-24T01:52:06Z | https://github.com/modelscope/modelscope/issues/619 | [
"Stale"
] | HUAWilliam | 5 |
matterport/Mask_RCNN | tensorflow | 2,141 | Annotation with multiple classes | Hi
I am trying to train a model to recognize what color a certain type of object contains.
My problem is that any of these objects may contain zero or more colors, and while right now I am limiting the search to only two colors, in the future this number may increase.
And since the number of possible combinations increases exponentially with the number of colors, it seems to me that having one class for each of those combinations is not the best approach.
Ideally, I would use one class for each color (or two per color, for example "red" and "not_red", of which exactly one must be selected), but this would mean that:
1. The model must be able to assign more to one class to each pixel, and potentially the number of simultaneous classes is different for each pixel.
2. The variable class_ids is no longer a 1D array (each mask has one class), but a 2D array (each mask can have more than one class).
While I have no idea if [1] would be a problem, I'm pretty sure [2] would be a problem since the code assumes that the variable class_ids returned by Dataset.load_mask() is a 1D array.
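To make point [2] concrete, here is a rough sketch (plain Python; the class names and shapes are purely illustrative, not actual Mask_RCNN API) of the current 1D contract versus a 2D multi-hot contract:

```python
# Multi-hot encoding sketch: each mask gets a row of 0/1 flags, one per color,
# so a mask can carry zero, one, or several color labels at once.
COLOR_CLASSES = ["red", "green", "blue"]

def multi_hot(colors):
    """Encode a set of color names for one mask as a multi-hot row."""
    return [1 if c in colors else 0 for c in COLOR_CLASSES]

# Current Dataset.load_mask() contract: one class id per mask (a 1D array).
class_ids_1d = [1, 3]

# Multi-label contract: one row per mask (a 2D array), any number of 1s per row.
class_ids_2d = [
    multi_hot({"red"}),          # mask 0: red only
    multi_hot({"red", "blue"}),  # mask 1: two colors at once
    multi_hot(set()),            # mask 2: no color at all
]
print(class_ids_2d)  # [[1, 0, 0], [1, 0, 1], [0, 0, 0]]
```

The open question is whether the loss and all the code paths that consume class_ids could accept this shape, which is exactly what point [2] doubts.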
I also thought about training multiple models, one for each class, and then somehow merge their outputs, but I am not sure on how I should proceed in this case.
What would be my best option?
Is this even possible, or should I just use one class for each combination of colors while excluding the least likely combinations? | open | 2020-04-23T14:36:44Z | 2020-05-01T12:42:41Z | https://github.com/matterport/Mask_RCNN/issues/2141 | [] | Woyahdrem | 4 |
aiogram/aiogram | asyncio | 1,376 | Sequential calls | ### aiogram version
3.x
### Problem
I ran into a problem: after calling good_result_menu, I need to immediately send the "start menu", but there is no functionality for that.
The only option is to explicitly pass dji into the function.
It would be nice to have a way to make sequential calls.
### Possible solution
```python
@router.message(id='start_menu', SomeFilter())
async def start_menu(some_dji):
    ...

@router.message(SomeAnotherFilter())
async def another_menu(some_dji):
    await bot.reccurent_update(id='start_menu')
```
### Alternatives
_No response_
### Code example
_No response_
### Additional information
_No response_ | closed | 2023-11-26T09:43:55Z | 2023-11-26T16:54:39Z | https://github.com/aiogram/aiogram/issues/1376 | [] | Naturalov | 3 |
deepfakes/faceswap | machine-learning | 897 | extract problem, I provide as you ask |
[crash_report.2019.10.08.100039837644.log](https://github.com/deepfakes/faceswap/files/3701451/crash_report.2019.10.08.100039837644.log)
| closed | 2019-10-08T08:27:32Z | 2019-10-08T08:35:29Z | https://github.com/deepfakes/faceswap/issues/897 | [] | rifardo | 1 |
hbldh/bleak | asyncio | 834 | AttributeError in `__enter__` when instantiating BleakClient | * bleak version: 0.14.12
* Python version: 3.9.12
* Operating System: Windows
* BlueZ version (`bluetoothctl -v`) in case of Linux:
### Description
- AttributeError in `__enter__` when instantiating BleakClient
- Tends to happen intermittently, possibly when a previous instance wasn't GC'd yet or something.
- Will stop occurring after about 5 seconds
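Since the error clears on its own after roughly 5 seconds, the workaround that suggests itself is a timed retry around client construction. A generic sketch (plain Python, not tied to any particular bleak API; the retry parameters are guesses):

```python
import time

def retry(make_client, attempts=5, delay=1.0, exc=AttributeError):
    """Call make_client() until it succeeds, sleeping between failed attempts."""
    for attempt in range(attempts):
        try:
            return make_client()
        except exc:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the original error
            time.sleep(delay)

# e.g. client = retry(lambda: BleakClient(address))
```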
### What I Did

| closed | 2022-05-24T23:23:03Z | 2022-07-09T15:00:01Z | https://github.com/hbldh/bleak/issues/834 | [] | labishrestha | 1 |
SYSTRAN/faster-whisper | deep-learning | 844 | Limited GPU Utilization with NVIDIA RTX 4000 Ada Gen | I am experiencing limited GPU utilization with the NVIDIA RTX 4000 Ada Gen card while running on Windows 10 1809
CPU: AMD EPYC 3251 8-Core Processor 2.5 GHz
RAM: 32GB
GPU: NVIDIA RTX 4000 Ada Gen 20 GB
CUDA Toolkit Version: 12.3
GPU Driver Version: 546.12
Python code:
```
import os
import time

from faster_whisper import WhisperModel

device = 'cuda'
compute_type = 'int8_float16'
model_size = 'medium.en'

print("Loading model...")
start_time = time.time()
model = WhisperModel(model_size, device=device,
                     compute_type=compute_type)
end_time = time.time()
execution_time = end_time - start_time
print(f"Model loading time: {execution_time:.2f} seconds")

folder_path = r"C:\Users\XYZ\Downloads\AI voice"
max_new_tokens = 10
beam_size = 10
total_processing_time = 0.0  # was used below without being initialized

for filename in os.listdir(folder_path):
    if filename.endswith((".mp3", ".m4a", ".mp4", ".wav")):
        file_path = os.path.join(folder_path, filename)
        print(f"Transcribing file: {file_path}")
        start_time = time.time()
        segments, _ = model.transcribe(file_path,
                                       beam_size=beam_size,
                                       max_new_tokens=max_new_tokens,
                                       word_timestamps=False,
                                       prepend_punctuations="",
                                       append_punctuations="",
                                       language="en",
                                       condition_on_previous_text=False)
        for segment in segments:
            print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))
        end_time = time.time()
        execution_time = end_time - start_time
        print(f"Execution time: {execution_time:.2f} seconds")
        total_processing_time += execution_time
```
While running my code, I'm only observing around 10% GPU utilization.
<img width="696" alt="image" src="https://github.com/SYSTRAN/faster-whisper/assets/139377980/2752a782-4a21-4502-9473-44c5c3f56107">
However, the same code achieves 100% utilization on an NVIDIA GeForce RTX 4070.
<img width="814" alt="image" src="https://github.com/SYSTRAN/faster-whisper/assets/139377980/282bf758-a41a-431a-aa9c-7cd38f112c23">
| open | 2024-05-17T04:23:21Z | 2024-05-31T10:48:59Z | https://github.com/SYSTRAN/faster-whisper/issues/844 | [] | James-Shared-Studios | 13 |
horovod/horovod | machine-learning | 3,156 | Spark/Keras: checkpoint is only relying on local val loss on GPU 0 | **Bug report:**
When multi-GPUs training is enabled:
- Only GPU 0 is doing checkpoint: https://github.com/horovod/horovod/blob/master/horovod/spark/keras/remote.py#L158
- GPU 0 can only access local validation data:https://github.com/horovod/horovod/blob/master/horovod/spark/keras/remote.py#L231
- Checkpoint is biased by GPU 0's local val data and will be used to overwrite model weights in the end of training: https://github.com/horovod/horovod/blob/master/horovod/spark/keras/remote.py#L263
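The shape of the fix, sketched with plain Python standing in for hvd.allreduce (the loss values are made up; real code would call hvd.allreduce on each worker's local validation loss):

```python
def allreduce_mean(values):
    """Stand-in for an average-allreduce across all workers."""
    return sum(values) / len(values)

# One local validation loss per GPU/worker (illustrative numbers).
local_val_losses = [0.42, 0.58, 0.35, 0.65]

# What rank 0 checkpoints on today: only its own shard's loss.
biased_val_loss = local_val_losses[0]

# What it should checkpoint on: the loss averaged over every worker's shard.
global_val_loss = allreduce_mean(local_val_losses)
print(biased_val_loss, global_val_loss)
```

With the averaged value, the checkpoint callback on rank 0 would track a metric computed over the full validation set instead of a single shard.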
An allreduce scheme should be added to fix local validation issue. | closed | 2021-09-07T22:23:34Z | 2021-09-07T23:53:02Z | https://github.com/horovod/horovod/issues/3156 | [
"bug"
] | chongxiaoc | 1 |
dmlc/gluon-nlp | numpy | 1,298 | Branch usage for v0.x and numpy-based GluonNLP | As we are developing the numpy-based GluonNLP based on mxnet 2.0, we will switch to the following branch usage:
- v0.x: master branch for GluonNLP 0.x version maintenance
- master: numpy-based GluonNLP development
cc @dmlc/gluon-nlp-team | open | 2020-08-13T02:23:33Z | 2020-08-13T02:24:32Z | https://github.com/dmlc/gluon-nlp/issues/1298 | [] | szha | 0 |
ClimbsRocks/auto_ml | scikit-learn | 420 | Can someone please explain the concept of 'categorical ensembling' intuitively? Is it the same as categorical embedding? | closed | 2019-01-02T07:17:18Z | 2019-01-03T06:28:49Z | https://github.com/ClimbsRocks/auto_ml/issues/420 | [] | AshwiniBaipadithayaMadhusudan | 0 | |
dgtlmoon/changedetection.io | web-scraping | 1,691 | Can't attach RSS feed to Netvibes | Hi,
First, thanks for the tool which in its website version, already helps me a lot.
However, as a beginner with no real coding expertise, I have not been able to add the RSS feed from my list of watched links (the feed itself works) to my Netvibes dashboard; Netvibes does not recognize it.
Is it simply impossible, or do I just not know how to do it?
Many thanks | closed | 2023-07-12T16:07:52Z | 2023-07-18T07:49:31Z | https://github.com/dgtlmoon/changedetection.io/issues/1691 | [] | Sihtam7292 | 0 |