| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
paperless-ngx/paperless-ngx | django | 7,615 | [BUG] High memory usage when indexing large installation | ### Description
In our current installation we store 70k+ documents and we experience a few problems with building the index. Often it cannot be completed within the given timeout, although that has been extended. Since the index is not committed to disk until it is complete, one has to restart the entire build if it fails. The most limiting factor, however, is the amount of memory required to rebuild the index. It currently uses over 32 GB and, depending on what other processes are running, might be OOM-killed.
In my limited understanding, it seems that the reindexing process starts by loading _all_ documents, then indexes them one at a time, and then commits the entire index to disk. One way to reduce memory usage might be to read the documents in smaller chunks, a few hundred at a time.
A `refresh` option would be a godsend. It would then only reindex documents which are not already in the index. That would allow for a restart if the refresh fails.
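The chunking idea can be sketched in plain Python (illustrative only; this is not paperless-ngx code, and the batch size of 500 and the document source are assumptions):

```python
from itertools import islice

def chunked(iterable, size):
    """Yield successive lists of at most `size` items, never materializing the whole input."""
    it = iter(iterable)
    while True:
        chunk = list(islice(it, size))
        if not chunk:
            return
        yield chunk

# Instead of loading all 70k documents up front, index a few hundred at a time
# (doc_ids stands in for the real document queryset):
doc_ids = range(70_000)
batches = list(chunked(doc_ids, 500))
assert len(batches) == 140          # 70,000 / 500
assert batches[0][:3] == [0, 1, 2]
```

If the index writer also committed after each batch, a failed run could resume instead of restarting from scratch.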
### Steps to reproduce
1. import 70000 documents
2. observe that you do not have an index because it once failed for some reason.
3. try to rebuild the index with `document_index reindex`
4. observe memory usage
### Webserver logs
```bash
From host
CPU% MEM% VIRT RES PID USER TIME+ THR NI S R/s W/s Command ('e' to pin | 'k' to kill)
>91.0 52.3 34.7G 32.9G 1665906 wpu 4:46 34 0 R 0 33K python3 manage.py document_index -v3 --traceback --force-color reindex
```
### Browser logs
_No response_
### Paperless-ngx version
2.11.6
### Host OS
Ubuntu 20.04 64bit AMD Ryzen 5 3600 6-Core
### Installation method
Docker - official image
### System status
```json
System Status
Environment
Paperless-ngx Version
2.11.6
Install Type
docker
Server OS
Linux-5.4.0-107-generic-x86_64-with-glibc2.36
Media Storage
3.02 TB available (5.41 TB total)
Database
Type
postgresql
Status
OK
Migration Status
Up to date
Tasks
Redis Status
OK
Celery Status
OK
Search Index
OK
Classifier
OK
```
### Browser
_No response_
### Configuration changes
_No response_
### Please confirm the following
- [X] I believe this issue is a bug that affects all users of Paperless-ngx, not something specific to my installation.
- [X] This issue is not about the OCR or archive creation of a specific file(s). Otherwise, please see above regarding OCR tools.
- [X] I have already searched for relevant existing issues and discussions before opening this report.
- [X] I have updated the title field above with a concise description. | closed | 2024-09-03T09:47:05Z | 2024-10-04T03:08:08Z | https://github.com/paperless-ngx/paperless-ngx/issues/7615 | [
"not a bug"
] | slundell | 2 |
fastapi/fastapi | asyncio | 13,022 | Traceback stack does not show exact place of error | ### Discussed in https://github.com/fastapi/fastapi/discussions/8428
<div type='discussions-op-text'>
<sup>Originally posted by **NewSouthMjos** December 5, 2022</sup>
### First Check
- [X] I added a very descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the FastAPI documentation, with the integrated search.
- [X] I already searched in Google "How to X in FastAPI" and didn't find any information.
- [X] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to FastAPI but to [Pydantic](https://github.com/samuelcolvin/pydantic).
- [X] I already checked if it is not related to FastAPI but to [Swagger UI](https://github.com/swagger-api/swagger-ui).
- [X] I already checked if it is not related to FastAPI but to [ReDoc](https://github.com/Redocly/redoc).
### Commit to Help
- [x] I commit to help with one of those options 👆
### Example Code
```python
import uvicorn
from fastapi import FastAPI, Depends

app = FastAPI()


def get_something_sync():
    yield True


async def get_something_async():
    yield True


@app.get('/1')
def router_func(dependency=Depends(get_something_sync)):
    raise ValueError
    return


@app.get('/2')
def router_func(dependency=Depends(get_something_async)):
    raise ValueError
    return


if __name__ == "__main__":
    uvicorn.run("main:app", host="0.0.0.0", port=5600, workers=1)
```
### Description
If the dependency function uses yield and the dependency is injected with a sync def function (get_something_sync), the error traceback is broken: it does not show the place where the exception was raised (it should point to "/app/main.py", line 24).
Calling endpoint /1:
```
fastapi_dependency | INFO: 10.77.78.83:60070 - "GET /1 HTTP/1.1" 500 Internal Server Error
fastapi_dependency | ERROR: Exception in ASGI application
fastapi_dependency | Traceback (most recent call last):
fastapi_dependency | File "/usr/local/lib/python3.11/site-packages/uvicorn/protocols/http/httptools_impl.py", line 419, in run_asgi
fastapi_dependency | result = await app( # type: ignore[func-returns-value]
fastapi_dependency | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
fastapi_dependency | File "/usr/local/lib/python3.11/site-packages/uvicorn/middleware/proxy_headers.py", line 78, in __call__
fastapi_dependency | return await self.app(scope, receive, send)
fastapi_dependency | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
fastapi_dependency | File "/usr/local/lib/python3.11/site-packages/fastapi/applications.py", line 270, in __call__
fastapi_dependency | await super().__call__(scope, receive, send)
fastapi_dependency | File "/usr/local/lib/python3.11/site-packages/starlette/applications.py", line 124, in __call__
fastapi_dependency | await self.middleware_stack(scope, receive, send)
fastapi_dependency | File "/usr/local/lib/python3.11/site-packages/starlette/middleware/errors.py", line 184, in __call__
fastapi_dependency | raise exc
fastapi_dependency | File "/usr/local/lib/python3.11/site-packages/starlette/middleware/errors.py", line 162, in __call__
fastapi_dependency | await self.app(scope, receive, _send)
fastapi_dependency | File "/usr/local/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 79, in __call__
fastapi_dependency | raise exc
fastapi_dependency | File "/usr/local/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 68, in __call__
fastapi_dependency | await self.app(scope, receive, sender)
fastapi_dependency | File "/usr/local/lib/python3.11/contextlib.py", line 222, in __aexit__
fastapi_dependency | await self.gen.athrow(typ, value, traceback)
fastapi_dependency | File "/usr/local/lib/python3.11/site-packages/fastapi/concurrency.py", line 36, in contextmanager_in_threadpool
fastapi_dependency | raise e
fastapi_dependency | ValueError
```
When injecting the dependency with an async def function (get_something_async), the error traceback seems to be right.
Calling endpoint /2:
```
fastapi_dependency | INFO: 10.77.78.83:60067 - "GET /2 HTTP/1.1" 500 Internal Server Error
fastapi_dependency | ERROR: Exception in ASGI application
fastapi_dependency | Traceback (most recent call last):
fastapi_dependency | File "/usr/local/lib/python3.11/site-packages/uvicorn/protocols/http/httptools_impl.py", line 419, in run_asgi
fastapi_dependency | result = await app( # type: ignore[func-returns-value]
fastapi_dependency | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
fastapi_dependency | File "/usr/local/lib/python3.11/site-packages/uvicorn/middleware/proxy_headers.py", line 78, in __call__
fastapi_dependency | return await self.app(scope, receive, send)
fastapi_dependency | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
fastapi_dependency | File "/usr/local/lib/python3.11/site-packages/fastapi/applications.py", line 270, in __call__
fastapi_dependency | await super().__call__(scope, receive, send)
fastapi_dependency | File "/usr/local/lib/python3.11/site-packages/starlette/applications.py", line 124, in __call__
fastapi_dependency | await self.middleware_stack(scope, receive, send)
fastapi_dependency | File "/usr/local/lib/python3.11/site-packages/starlette/middleware/errors.py", line 184, in __call__
fastapi_dependency | raise exc
fastapi_dependency | File "/usr/local/lib/python3.11/site-packages/starlette/middleware/errors.py", line 162, in __call__
fastapi_dependency | await self.app(scope, receive, _send)
fastapi_dependency | File "/usr/local/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 79, in __call__
fastapi_dependency | raise exc
fastapi_dependency | File "/usr/local/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 68, in __call__
fastapi_dependency | await self.app(scope, receive, sender)
fastapi_dependency | File "/usr/local/lib/python3.11/contextlib.py", line 222, in __aexit__
fastapi_dependency | await self.gen.athrow(typ, value, traceback)
fastapi_dependency | File "/app/main.py", line 13, in get_something_async
fastapi_dependency | yield True
fastapi_dependency | File "/usr/local/lib/python3.11/site-packages/fastapi/middleware/asyncexitstack.py", line 21, in __call__
fastapi_dependency | raise e
fastapi_dependency | File "/usr/local/lib/python3.11/site-packages/fastapi/middleware/asyncexitstack.py", line 18, in __call__
fastapi_dependency | await self.app(scope, receive, send)
fastapi_dependency | File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 706, in __call__
fastapi_dependency | await route.handle(scope, receive, send)
fastapi_dependency | File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 276, in handle
fastapi_dependency | await self.app(scope, receive, send)
fastapi_dependency | File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 66, in app
fastapi_dependency | response = await func(request)
fastapi_dependency | ^^^^^^^^^^^^^^^^^^^
fastapi_dependency | File "/usr/local/lib/python3.11/site-packages/fastapi/routing.py", line 235, in app
fastapi_dependency | raw_response = await run_endpoint_function(
fastapi_dependency | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
fastapi_dependency | File "/usr/local/lib/python3.11/site-packages/fastapi/routing.py", line 163, in run_endpoint_function
fastapi_dependency | return await run_in_threadpool(dependant.call, **values)
fastapi_dependency | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
fastapi_dependency | File "/usr/local/lib/python3.11/site-packages/starlette/concurrency.py", line 41, in run_in_threadpool
fastapi_dependency | return await anyio.to_thread.run_sync(func, *args)
fastapi_dependency | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
fastapi_dependency | File "/usr/local/lib/python3.11/site-packages/anyio/to_thread.py", line 31, in run_sync
fastapi_dependency | return await get_asynclib().run_sync_in_worker_thread(
fastapi_dependency | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
fastapi_dependency | File "/usr/local/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 937, in run_sync_in_worker_thread
fastapi_dependency | return await future
fastapi_dependency | ^^^^^^^^^^^^
fastapi_dependency | File "/usr/local/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 867, in run
fastapi_dependency | result = context.run(func, *args)
fastapi_dependency | ^^^^^^^^^^^^^^^^^^^^^^^^
fastapi_dependency | File "/app/main.py", line 24, in router_func
fastapi_dependency | raise ValueError
fastapi_dependency | ValueError
```
On Python 3.10 there was no such problem.
I created a repo to reproduce the problem: [https://github.com/NewSouthMjos/fastapi_dependency_test](https://github.com/NewSouthMjos/fastapi_dependency_test)
So, should the sync way of injecting a dependency show the exact place where the exception was raised?
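For contrast, the reason the async path still points near the user code can be illustrated with a plain generator: throwing an exception into it raises at the `yield`, so the generator frame stays in the traceback (a standalone sketch, independent of FastAPI's internals):

```python
import traceback

def dep():
    yield True  # mirrors the yield-based dependency above

g = dep()
next(g)                         # advance to the yield
try:
    g.throw(ValueError("boom"))  # like athrow() into the dependency
except ValueError:
    tb = traceback.format_exc()

# The generator frame (dep) appears in the traceback, so the report
# can point near the user's code.
assert "dep" in tb
assert "boom" in tb
```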
### Operating System
Linux
### Operating System Details
Running in a docker
### FastAPI Version
0.88.0
### Python Version
3.11
### Additional Context
uvicorn[standard]==0.20.0</div> | closed | 2024-12-02T07:48:49Z | 2024-12-03T22:37:13Z | https://github.com/fastapi/fastapi/issues/13022 | [
"question",
"question-migrate"
] | Kludex | 2 |
pytest-dev/pytest-django | pytest | 715 | Why I don't need @pytest.mark.django_db? | Hi, I just installed pytest-django into my project and I realized that I can run tests that try to access the database without using the decorator? Anyone could explain why? Is the [documentation](https://pytest-django.readthedocs.io/en/latest/database.html) outdated?
I run:
`pytest`
Example of one of my tests:
```
def test_my_user(django_user_model):
me = django_user_model.objects.all().count()
assert me == 0
``` | closed | 2019-04-12T16:32:43Z | 2019-04-12T19:02:43Z | https://github.com/pytest-dev/pytest-django/issues/715 | [] | jonbesga | 1 |
plotly/dash | data-science | 2,295 | Dropdown Options Extending Beyond Container | For a space-limited dashboard, it's common to have dropdown options with names that are much longer than the space allocated for the dropdown button. Additionally, for my application assume that:
- Each option needs to be a single line
- The full option text should be visible when the dropdown is open (i.e. no ellipses)
- The size of the dropdown and its container cannot be increased
Dash Bootstrap's dbc.Select component handles this well by treating the dropdown as a pop-up that can extend beyond its container when open. However, dbc.Select lacks the advanced features of dcc.Dropdown and is not an option for me. Thanks!
 | open | 2022-11-01T16:01:59Z | 2024-08-13T19:22:08Z | https://github.com/plotly/dash/issues/2295 | [
"feature",
"P3"
] | TGeary | 2 |
sczhou/CodeFormer | pytorch | 118 | What performance to expect on M1 2021? | I'd like to know how to best utilise my 2021 M1 MBP for video processing. The M1 GPU is an integrated one so I don't think it can be as powerful as stand alone GPU.
I tired 320*240 264Kbps avi file, (21m07s)
and it took me 30+ mins. is that kind of expected or something didn't go properly?
thanks | closed | 2023-01-22T08:51:19Z | 2023-02-04T16:01:17Z | https://github.com/sczhou/CodeFormer/issues/118 | [] | ada1016 | 4 |
localstack/localstack | python | 11,547 | bug: Hadoop install fails to establish network connection | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
I have three services enabled with localstack-pro (trial version): s3, glue, rds. When I bring up the localstack service, the packages API attempts to install Hadoop but fails to establish a network connection. It attempts to download hadoop-3.3.1 from archive.apache.org. All other package dependencies install fine.
### Expected Behavior
I expect all dependencies to install without error.
### How are you starting LocalStack?
With a docker-compose file
### Steps To Reproduce
#### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`)
```
services:
localstack:
env_file:
- .env
image: localstack/localstack-pro # required for Pro
ports:
- "127.0.0.1:4566:4566" # LocalStack Gateway
- "127.0.0.1:4510-4559:4510-4559" # external services port range
- "127.0.0.1:443:443" # LocalStack HTTPS Gateway (Pro)
volumes:
- "/var/run/docker.sock:/var/run/docker.sock"
- "./localstack_init.sh:/etc/localstack/init/ready.d/localstack_init.sh"
```
#### Client commands (e.g., AWS SDK code snippet, or sequence of "awslocal" commands)
In my localstack_init.sh
awslocal s3api create-bucket --bucket test
# Create Glue database
awslocal glue create-database --database-input "{\"Name\": \"test\"}"
# Create Glue Crawlers
awslocal glue create-crawler \
--name test \
--role arn:aws:iam::000000000000:role/glue-role \
--database-name test \
--targets "{\"S3Targets\": [{\"Path\": \"s3://test/psql/platform\"}]}" \
--table-prefix source_
### Environment
```markdown
- OS: Mac 14.6.1
- LocalStack:
LocalStack version: localstack-pro:3.7.2
LocalStack Docker image sha:
LocalStack build date:
LocalStack build git hash:
```
### Anything else?
Terminal Output
```
localstack-1 | 2024-09-19T16:20:11.567 INFO --- [et.reactor-1] localstack.request.aws : AWS glue.GetCrawler => 200
localstack-1 | 2024-09-19T16:20:12.411 DEBUG --- [ Thread-16] localstack.dns.server : Deleting skip pattern jfrog-prod-.*.s3.amazonaws.com
localstack-1 | 2024-09-19T16:20:12.412 DEBUG --- [ Thread-16] localstack.utils.run : Executing command: ['ln', '-s', '/usr/lib/jvm/temurin-8-jdk-arm64', '/usr/lib/jvm/java-8']
localstack-1 | 2024-09-19T16:20:12.412 DEBUG --- [ Thread-16] localstack.packages.api : Installation of java finished.
localstack-1 | 2024-09-19T16:20:12.412 DEBUG --- [ Thread-16] localstack.packages.api : Starting installation of spark...
localstack-1 | 2024-09-19T16:20:12.412 DEBUG --- [ Thread-16] localstack.packages.api : Installation of java skipped (already installed).
localstack-1 | 2024-09-19T16:20:12.412 DEBUG --- [ Thread-16] localstack.packages.api : Performing runtime setup for already installed package.
localstack-1 | 2024-09-19T16:20:12.412 DEBUG --- [ Thread-16] localstack.packages.api : Starting installation of hadoop...
localstack-1 | 2024-09-19T16:20:18.813 INFO --- [et.reactor-0] localstack.request.aws : AWS glue.GetCrawler => 200
localstack-1 | 2024-09-19T16:20:26.073 INFO --- [et.reactor-1] localstack.request.aws : AWS glue.GetCrawler => 200
localstack-1 | 2024-09-19T16:20:33.338 INFO --- [et.reactor-0] localstack.request.aws : AWS glue.GetCrawler => 200
localstack-1 | 2024-09-19T16:20:40.587 INFO --- [et.reactor-1] localstack.request.aws : AWS glue.GetCrawler => 200
localstack-1 | 2024-09-19T16:20:47.855 INFO --- [et.reactor-0] localstack.request.aws : AWS glue.GetCrawler => 200
localstack-1 | 2024-09-19T16:20:55.102 INFO --- [et.reactor-1] localstack.request.aws : AWS glue.GetCrawler => 200
localstack-1 | 2024-09-19T16:21:02.355 INFO --- [et.reactor-0] localstack.request.aws : AWS glue.GetCrawler => 200
localstack-1 | 2024-09-19T16:21:09.608 INFO --- [et.reactor-1] localstack.request.aws : AWS glue.GetCrawler => 200
localstack-1 | 2024-09-19T16:21:16.858 INFO --- [et.reactor-0] localstack.request.aws : AWS glue.GetCrawler => 200
localstack-1 | 2024-09-19T16:21:24.109 INFO --- [et.reactor-1] localstack.request.aws : AWS glue.GetCrawler => 200
localstack-1 | 2024-09-19T16:21:27.418 WARN --- [ Thread-16] localstack.utils.archives : Attempt 1. Failed to download archive from https://archive.apache.org/dist/hadoop/common/hadoop-3.3.1/hadoop-3.3.1.tar.gz: MyHTTPSConnectionPool(host='archive.apache.org', port=443): Max retries exceeded with url: /dist/hadoop/common/hadoop-3.3.1/hadoop-3.3.1.tar.gz (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0xffff47f80310>: Failed to establish a new connection: [Errno 101] Network is unreachable'))
localstack-1 | 2024-09-19T16:21:31.348 INFO --- [et.reactor-0] localstack.request.aws : AWS glue.GetCrawler => 200
localstack-1 | 2024-09-19T16:21:38.604 INFO --- [et.reactor-1] localstack.request.aws : AWS glue.GetCrawler => 200
...
localstack-1 | 2024-09-19T16:25:21.472 WARN --- [ Thread-16] localstack.utils.archives : Attempt 4. Failed to download archive from https://archive.apache.org/dist/hadoop/common/hadoop-3.3.1/hadoop-3.3.1.tar.gz: MyHTTPSConnectionPool(host='archive.apache.org', port=443): Max retries exceeded with url: /dist/hadoop/common/hadoop-3.3.1/hadoop-3.3.1.tar.gz (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0xffff475404d0>: Failed to establish a new connection: [Errno 101] Network is unreachable'))
localstack-1 | 2024-09-19T16:25:21.473 ERROR --- [ Thread-16] l.p.c.s.athena.query_utils : Unable to run query: CREATE DATABASE IF NOT EXISTS test - Installation of hadoop failed.
localstack-1 | Traceback (most recent call last):
localstack-1 | File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/packages/api.py", line 91, in install
localstack-1 | self._install(target)
localstack-1 | File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/packages/core.py", line 113, in _install
localstack-1 | download_and_extract(
localstack-1 | File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/utils/archives.py", line 206, in download_and_extract
localstack-1 | raise Exception("Failed to download archive from %s: . Retries exhausted", archive_url)
localstack-1 | Exception: ('Failed to download archive from %s: . Retries exhausted', 'https://archive.apache.org/dist/hadoop/common/hadoop-3.3.1/hadoop-3.3.1.tar.gz')
localstack-1 |
localstack-1 | The above exception was the direct cause of the following exception:
localstack-1 |
localstack-1 | Traceback (most recent call last):
localstack-1 | File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/pro/core/services/athena/query_utils.py.enc", line 19, in execute_query_safe
localstack-1 | try:return execute_query(A,*D,**E)
localstack-1 | ^^^^^^^^^^^^^^^^^^^^^^^
localstack-1 | File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/pro/core/services/athena/query_utils.py.enc", line 42, in execute_query
localstack-1 | C=context;B=database;A=query;start_trino_server();start_hive_server(wait=_A);D=is_hive_ddl_query(A);A=prepare_query(A);LOG.debug('Running query as type "%s": %s','Hive'if D else'Trino',A)
localstack-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^
localstack-1 | File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/utils/sync.py", line 99, in _wrapper
localstack-1 | return wrapped(*args, **kwargs)
localstack-1 | ^^^^^^^^^^^^^^^^^^^^^^^^
localstack-1 | File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/pro/core/utils/bigdata/server_utils.py.enc", line 10, in start_hive_server
localstack-1 | from localstack.pro.core.packages.hive import hive_package as A;from localstack.pro.core.utils.bigdata.hive_server import HiveServer as B;A.install();global hive_server
localstack-1 | ^^^^^^^^^^^
localstack-1 | File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/packages/api.py", line 210, in install
localstack-1 | self.get_installer(version).install(target)
localstack-1 | File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/packages/api.py", line 100, in install
localstack-1 | raise e
localstack-1 | File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/packages/api.py", line 90, in install
localstack-1 | self._prepare_installation(target)
localstack-1 | File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/pro/core/packages/hive.py", line 29, in _prepare_installation
localstack-1 | def _prepare_installation(E,target):A=target;from localstack.pro.core.packages.hadoop import hadoop_package as B;from localstack.pro.core.packages.java import java_package as C;from localstack.pro.core.packages.spark import spark_package as D;C.install(version='8',target=A);D.install(target=A);B.install(target=A)
localstack-1 | ^^^^^^^^^^^^^^^^^^^
localstack-1 | File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/packages/api.py", line 210, in install
localstack-1 | self.get_installer(version).install(target)
localstack-1 | File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/packages/api.py", line 100, in install
localstack-1 | raise e
localstack-1 | File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/packages/api.py", line 90, in install
localstack-1 | self._prepare_installation(target)
localstack-1 | File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/pro/core/packages/spark.py", line 42, in _prepare_installation
localstack-1 | def _prepare_installation(A,target):B=target;A._install_java(B);A._install_hadoop(B);A._install_spark_drivers(B)
localstack-1 | ^^^^^^^^^^^^^^^^^^^^
localstack-1 | File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/pro/core/packages/spark.py", line 51, in _install_hadoop
localstack-1 | def _install_hadoop(A,target):from localstack.pro.core.packages.hadoop import hadoop_package as B;C=A.get_hadoop_version_for_spark(A.version);B.install(version=C,target=target)
localstack-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
localstack-1 | File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/packages/api.py", line 210, in install
localstack-1 | self.get_installer(version).install(target)
localstack-1 | File "/opt/code/localstack/.venv/lib/python3.11/site-packages/localstack/packages/api.py", line 102, in install
localstack-1 | raise PackageException(f"Installation of {self.name} failed.") from e
localstack-1 | localstack.packages.api.PackageException: Installation of hadoop failed.
``` | closed | 2024-09-19T16:23:56Z | 2024-10-04T15:14:49Z | https://github.com/localstack/localstack/issues/11547 | [
"type: bug",
"aws:s3",
"aws:glue",
"status: backlog"
] | nickatnight | 4 |
dynaconf/dynaconf | flask | 1,239 | [bug] JSONDecodeError when using Jinja | **Describe the bug**
When using Jinja to format a `{{ this.DICT.keys() }}` expression into JSON, you get the following error.
E.g. `"@json @jinja {{ this.DICT.keys() }}"`
```
File "/usr/lib/python3.11/json/decoder.py", line 355, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
```
The file producing this problem is `django42_dynaconf_jinja/jinja_test.json`
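A plausible root cause, sketched without dynaconf (assuming the Jinja expression is rendered via `str()`): `dict.keys()` renders as `dict_keys([...])`, which is not valid JSON, so `json.loads` fails with exactly this "Expecting value" error, while a plain list serializes fine:

```python
import json

rendered = str({"a": 1, "b": 2}.keys())
assert rendered == "dict_keys(['a', 'b'])"   # not JSON

try:
    json.loads(rendered)
    raised = False
except json.JSONDecodeError as exc:
    raised = True
    # matches the reported message: "Expecting value: line 1 column 1 (char 0)"
    assert "Expecting value" in str(exc)
assert raised

# a real list round-trips through JSON without trouble
assert json.dumps(list({"a": 1}.keys())) == '["a"]'
```

So something like `@json @jinja {{ this.DICT.keys() | list | tojson }}` might behave better, though whether dynaconf's Jinja environment exposes Jinja's built-in `tojson` filter is an assumption to verify.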
**To Reproduce**
I've created a minimal reproduction of the problem in this repo: https://github.com/Hammit/dynaconf_jinja_bug
1. Clone it
2. cd into repo
3. Install venv (python3 -m venv venv)
4. Activate venv (source venv/bin/activate)
5. Install dependencies (pip3 install -r requirements.txt)
NOTE: There are additional libraries in requirements.txt unrelated to the minimal repo that I know aren't causing problems.
Steps to reproduce the behavior:
```bash
./manage.py runserver --settings django42_dynaconf_jinja.development
```
**Expected behavior**
I expect runserver to load and not error due to the jinja_test.json file
**Environment (please complete the following information):**
- OS: [WSL2 running Ubuntu 22.04]
- Dynaconf Version [3.2.6]
- Django 4.2.2
**Additional context**
The error happens at startup due to the print statement in `django42_dynaconf_jinja/development.py`
If this print is omitted, it still happens when the setting is used, as is the case when newapp/view.py accesses it at http://localhost:8000/newapp/
"Not a Bug",
"Docs"
] | Hammit | 4 |
albumentations-team/albumentations | deep-learning | 2,294 | [Speed up] ChannelShuffle | Benchmarks show that `torchvision` has a faster `ChannelShuffle` implementation => we need to learn from it and fix ours. | open | 2025-01-24T15:57:38Z | 2025-01-24T16:04:23Z | https://github.com/albumentations-team/albumentations/issues/2294 | [
"enhancement",
"Speed Improvements"
] | ternaus | 0 |
PokeAPI/pokeapi | graphql | 440 | New Flutter wrapper | I've published a wrapper for Flutter
https://pub.dev/packages/pokeapi/versions/1.0.0 | closed | 2019-08-03T18:08:16Z | 2020-05-01T11:04:43Z | https://github.com/PokeAPI/pokeapi/issues/440 | [] | prathanbomb | 5 |
thomaxxl/safrs | rest-api | 31 | Search now requires ID | Currently testing the latest release, https://github.com/thomaxxl/safrs/commit/8afc35d7c532eb480d70cec5d20b86e83e0832c5 but the search and re_search now requires an ID

Before extending the `SAFRSBase` class, I do the following:
```python
from safrs import SAFRSBase
from safrs.api_methods import search, re_search
SAFRSBase.search = search
SAFRSBase.re_search = re_search
``` | closed | 2019-03-15T09:37:39Z | 2019-04-05T14:12:39Z | https://github.com/thomaxxl/safrs/issues/31 | [] | patvdleer | 3 |
piskvorky/gensim | data-science | 2,919 | Uninitialized dictionary.id2token used in CoherenceModel | #### Problem description
I have created multiple LdaModels and a CoherenceModel.
Calling ```coherence_model.compare_models([lda_model_1, lda_model_2])``` throws a KeyError.
This is caused by the following line:
https://github.com/RaRe-Technologies/gensim/blob/817cac99422a255001034203dc0720f7d0df0ce6/gensim/models/coherencemodel.py#L447
Initializing the dictionary (dictionary.id2token) beforehand fixes the problem (e.g. call ```dictionary[0]```).
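The workaround works because gensim's `Dictionary` builds `id2token` lazily on first item access. A minimal stdlib mimic of that pattern (illustrative only, not gensim's actual class):

```python
class LazyDict:
    """Mimics a lazily built reverse mapping like gensim Dictionary.id2token."""

    def __init__(self, token2id):
        self.token2id = token2id
        self.id2token = {}  # stays empty until the first __getitem__ call

    def __getitem__(self, tokenid):
        if len(self.id2token) != len(self.token2id):
            # build the reverse mapping on demand
            self.id2token = {v: k for k, v in self.token2id.items()}
        return self.id2token[tokenid]

d = LazyDict({"human": 0, "computer": 1})
assert d.id2token == {}        # reading .id2token directly here would KeyError
assert d[0] == "human"         # first access builds the mapping
assert d.id2token[1] == "computer"
```

Code that reads `.id2token` directly, without going through `__getitem__`, hits the empty dict; that is why calling `dictionary[0]` beforehand, or indexing via `self.dictionary[_id]`, avoids the KeyError.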
The problem could be fixed by simply replacing the line with ``` topic = (self.dictionary[_id] for _id in topic)```. | open | 2020-08-18T15:47:59Z | 2020-10-17T10:14:44Z | https://github.com/piskvorky/gensim/issues/2919 | [
"bug"
] | UnfinishedArchitect | 3 |
facebookresearch/fairseq | pytorch | 4,712 | How is the BLEU of the WMT14 test set calculated? | ## ❓ Questions and Help
### Before asking:
1. search the issues.
2. search the docs.
<!-- If you still can't find what you need: -->
#### What is your question?
How is the BLEU of the WMT14 test set calculated?(transformer wmt14 bleu =27.3)
sum of all test sentences BLEU ?or avarage test sentences BLEU?
one sentence BLEU between 0,1 ?
In,
https://github.com/facebookresearch/fairseq/blob/5ec3a27ea8dcd9fd11a6990dacd180e9ddd51f6c/fairseq/scoring/bleu.py#L136
**return self.brevity() * math.exp(psum / order) * 100**
in list line *100 is the reason BLEU 27.3(bigger than 1)?
thank you!
#### Code
<!-- Please paste a code snippet if your question requires it! -->
#### What have you tried?
#### What's your environment?
- fairseq Version (e.g., 1.0 or main):
- PyTorch Version (e.g., 1.0)
- OS (e.g., Linux):
- How you installed fairseq (`pip`, source):
- Build command you used (if compiling from source):
- Python version:
- CUDA/cuDNN version:
- GPU models and configuration:
- Any other relevant information:
| open | 2022-09-10T16:48:46Z | 2022-09-10T17:04:44Z | https://github.com/facebookresearch/fairseq/issues/4712 | [
"question",
"needs triage"
] | tjshu | 0 |
akfamily/akshare | data-science | 4,984 | Tonghuashun (THS) concept board constituents API - suggestion: a clumsy fix that restores normal access | ak.stock_board_cons_ths(symbol='881158')
My Python is pretty weak and I can't understand the cookies statement in the source code, but the problem seems to lie in the cookies;
by manually adding a cookie copied from my browser into the source code, I can access it normally again - anyone who needs this can use it **as a stopgap**;
the original code is quite advanced and beyond my ability, so **this clumsy method of mine is not a long-term solution**
只是好像现在同花顺对这块是不是有一些更严格的管理,访问频率被压的很低,稍微频繁一点就又会返回空值
原本的代码获得cookies的方式有点高级(代码小白实不敢妄测),瞎说之处请多原谅(抱拳)
| closed | 2024-06-21T20:03:35Z | 2024-06-22T09:21:45Z | https://github.com/akfamily/akshare/issues/4984 | [
"bug"
] | tiaolaidage | 1 |
mage-ai/mage-ai | data-science | 5,484 | To be able to use the same interaction among multiple pipelines | We are using mage version 0.9.74 to create pipelines .
Most pipelines we have need the same parameters (ex. source_hostname, source_username, destiantion_hostname, destination_username etc. ).
We were thinking of using interactions for this so that the users will just fill in the UI interactions to create new pipelines without having to modify any of the code.
I created a template for the pipeline in the hopes that the interactions are saved as well and we can reuse them.
However, the issue is that i cannot reuse the interactions and every time i create interactions i need to give them a unique name. This way we will end up with hundreds of interactions and it wont be saving us any effort.

Can this be resolved, please?
thanks a lot. | open | 2024-10-09T12:17:59Z | 2024-10-09T17:13:54Z | https://github.com/mage-ai/mage-ai/issues/5484 | [
"enhancement"
] | B88BB | 0 |
hankcs/HanLP | nlp | 1,092 | Two problems with DoubleArrayTrie | <!--
The notes and version number are required; otherwise there will be no reply. If you would like a quick reply, please fill in the template carefully. Thank you.
-->
## Notes
Please confirm the following:
* I have carefully read the following documents and could not find an answer in any of them:
- [Home documentation](https://github.com/hankcs/HanLP)
- [wiki](https://github.com/hankcs/HanLP/wiki)
- [FAQ](https://github.com/hankcs/HanLP/wiki/FAQ)
* I have searched my question via [Google](https://www.google.com/#newwindow=1&q=HanLP) and the [issue search](https://github.com/hankcs/HanLP/issues) and could not find an answer either.
* I understand that the open-source community is a free community of people gathered out of interest, and it assumes no responsibility or obligation. I will speak politely and thank everyone who helps me.
*[x] I type x inside these brackets to confirm that the above items are done.
## Version
<!-- For release versions, please give the jar file name without the extension; for GitHub repository versions, please state master or the portable branch -->
The current latest version is: portable-1.7.1
The version I am using is: hanlpVersion='portable-1.7.1'
<!-- The items above are required; below you may write freely -->
## My question
1. 每次刷新自定义词典时出现下面的错误,经过分析,是因为扩展数组时出错,
DoubleArrayTrie.java的private int resize(int newSize)方法的如下2行代码: 数据下标越界异常
每次都是allocSize比base2的实际size大一个。
System.arraycopy(base, 0, base2, 0, allocSize);
System.arraycopy(check, 0, check2, 0, allocSize);
---错误堆栈信息如下
[2019-02-12 17:43:31,193] WARN [http-nio-0.0.0.0-8083-exec-3] com.hankcs.hanlp.dictionary.CustomDictionary.loadMainDictionary(CustomDictionary.java:150) 自定义词典D:/myDev/hanlpData/data/dictionary/custom/CustomDictionary.txt缓存失败!
java.lang.ArrayIndexOutOfBoundsException
at java.lang.System.arraycopy(Native Method)
at com.hankcs.hanlp.collection.trie.DoubleArrayTrie.resize(DoubleArrayTrie.java:94)
at com.hankcs.hanlp.collection.trie.DoubleArrayTrie.build(DoubleArrayTrie.java:403)
at com.hankcs.hanlp.collection.trie.DoubleArrayTrie.build(DoubleArrayTrie.java:338)
at com.hankcs.hanlp.collection.trie.DoubleArrayTrie.build(DoubleArrayTrie.java:365)
at com.hankcs.hanlp.collection.trie.DoubleArrayTrie.build(DoubleArrayTrie.java:378)
at com.hankcs.hanlp.dictionary.CustomDictionary.loadMainDictionary(CustomDictionary.java:107)
at com.hankcs.hanlp.dictionary.CustomDictionary.loadMainDictionary(CustomDictionary.java:157)
at com.hankcs.hanlp.dictionary.CustomDictionary.reload(CustomDictionary.java:658)
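The off-by-one in problem 1 can be sketched in a few lines of Python (an illustration of the failure mode, not the actual Java fix): copying allocSize elements into a destination that is one element shorter is exactly the out-of-bounds copy the stack trace shows, and clamping the copy length avoids it.

```python
def resize_copy(src, new_size, alloc_size):
    """Sketch of DoubleArrayTrie.resize(): copy alloc_size entries of
    src into a fresh array of length new_size."""
    dst = [0] * new_size
    # Unguarded, like System.arraycopy(base, 0, base2, 0, allocSize),
    # the copy fails whenever alloc_size > new_size (the reported case,
    # where allocSize is one larger than base2). Clamping avoids that:
    n = min(alloc_size, len(src), len(dst))
    dst[:n] = src[:n]
    return dst
```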
2. The private BitSet used; field of DoubleArrayTrie.java has thrown NullPointerExceptions several times.
Debugging at the time showed that although the constructor does new it up, many methods such as clear and build set it to null;
once it has been set to null, the next run is bound to throw a NullPointerException.
The debugging session was interrupted by an unexpected restart, after which everything was fine again, so I did not analyze it further; it has not reproduced since.
| closed | 2019-02-12T10:02:07Z | 2020-01-01T10:55:26Z | https://github.com/hankcs/HanLP/issues/1092 | [
"ignored"
] | xiyuan27 | 2 |
aimhubio/aim | data-visualization | 2,936 | Document how to "up" on dual stack IPv4 and IPv6 | ## 🚀 Feature
Extend the documentation to include that specifying `aim up --host '*'` makes the service listen on both IPv4 (i.e. `--host 0.0.0.0`) and IPv6 (i.e. `--host '::'`).
### Motivation
I had to dig through the source code, e.g. to find out which server is being used (uvicorn). Had the various options been documented, e.g. as part of the CLI help text, I would have figured this out much more easily.
### Pitch
Dual-stack IPv4 and IPv6 is required in our k8s setup, and I can imagine this being a fairly common scenario. This kind of information would be valuable to include in the documentation.
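For reference, a minimal Python sketch of what dual-stack listening amounts to at the socket level (an illustration, not Aim's or uvicorn's actual code):

```python
import socket

def make_dual_stack_server(port=0):
    """Open one listening socket that accepts both IPv4 and IPv6 where
    the platform supports it, falling back to IPv4-only otherwise."""
    if socket.has_dualstack_ipv6():
        # A single AF_INET6 socket with dualstack_ipv6=True also accepts
        # IPv4 connections via IPv4-mapped addresses.
        return socket.create_server(("", port), family=socket.AF_INET6,
                                    dualstack_ipv6=True)
    return socket.create_server(("", port))
```

This is effectively what `--host '*'` asks the server to do, versus `--host 0.0.0.0` (IPv4 only) or `--host '::'` (IPv6 socket).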
| open | 2023-07-20T12:16:24Z | 2023-07-31T09:16:39Z | https://github.com/aimhubio/aim/issues/2936 | [
"type / enhancement"
] | tachylatus | 1 |
Avaiga/taipy | automation | 2,000 | <Optimizing the blank space> | ### Description
there is blank space at the bottom on the left side.
### Solution Proposed
I'm thinking of optimizing it by increasing the size of the social media icons and adding the username next to them.
### Impact of Solution
The icons will have better visibility, and someone can simply remember the username to go and follow later.
### Acceptance Criteria
- [ yes ] Ensure new code is unit tested, and check code coverage is at least 90%.
- [ yes ] Create related issue in taipy-doc for documentation and Release Notes.
- [ yes ] Check if a new demo could be provided based on this, or if legacy demos could be benefit from it.
- [ yes ] Ensure any change is well documented.
### Code of Conduct
- [X] I have checked the [existing issues](https://github.com/Avaiga/taipy/issues?q=is%3Aissue+).
- [X] I am willing to work on this issue (optional) | closed | 2024-10-09T21:29:52Z | 2024-10-10T09:13:45Z | https://github.com/Avaiga/taipy/issues/2000 | [
"✨New feature"
] | Aazib-at-hub | 3 |
encode/apistar | api | 685 | Can you publish a wheel? | If one is in an environment where sdist is prohibited and only wheels are allowed (as a security mitigation), it would be handy for a wheel to have been published.
If you add bdist_wheel here:
https://github.com/encode/apistar/blob/8015bc1b3c9f43bcf9baa8407330338224232689/scripts/publish#L18
making it `python setup.py sdist bdist_wheel`, then it will build a wheel that you can upload to PyPI.
Thanks! | open | 2021-12-11T04:35:13Z | 2021-12-11T04:35:13Z | https://github.com/encode/apistar/issues/685 | [] | matthewdeanmartin | 0 |
JaidedAI/EasyOCR | pytorch | 1,060 | Training a custom OCR | While training I get training and validation accuracy of about 90%, but when I test the custom model the accuracy is less than 5%.
Can you provide the steps to use custom training? I am using the steps provided in "https://www.youtube.com/watch?v=-j3TbyceShY&t=207s".
Also, how can I use the weights trained after 20,000 iterations to retrain for another 20,000? That is, how should I use these or other pretrained weights for transfer learning?
The repository does not give a clear idea of this.
OWASP/Nettacker | automation | 106 | Adding Automatic Code Review for new pull requests | Codacy is an automated code review tool that helps developers save time in code reviews; it can be added to this project alongside Travis CI to make the project better.
https://app.codacy.com
Best Regards | closed | 2018-04-17T10:00:07Z | 2018-04-19T21:29:46Z | https://github.com/OWASP/Nettacker/issues/106 | [
"enhancement",
"done"
] | pradeepjairamani | 3 |
miguelgrinberg/python-socketio | asyncio | 447 | unclose session after disconnect | server.py
```python
sio = socketio.AsyncServer(
async_mode='asgi', client_manager=mgr, cors_allowed_origins="*")
@sio.event
async def connect(sid, environ):
claims = get_jwt_claims_from_environ(environ)
ga = get_ga_from_environ(environ)
async with sio.session(sid) as session:
await Lock().wait()
if claims:
user_ID = claims.get('user_ID')
session['user_ID'] = user_ID
ret = await RedisClient.add_to_set(user_ID)
if not ret:
raise ConnectionRefusedError('Duplicate connect')
@sio.event
async def disconnect(sid):
async with sio.session(sid) as session:
user_ID = session.get('user_ID')
if user_ID is not None:
await RedisClient.remove_from_set(session.get('user_ID', ''))
```
version:
```
# python 3.6.9
aioredis 1.3.1
async-timeout 3.0.1
click 7.1.1
h11 0.9.0
hiredis 1.0.1
httptools 0.1.1
pip 19.3.1
python-engineio 3.12.1
python-socketio 4.5.0
setuptools 41.6.0
six 1.14.0
uvicorn 0.11.3
uvloop 0.14.0
websockets 8.1
wheel 0.33.6
```
start:
```
uvicorn server:app --host 0.0.0.0 --workers 1 --log-level error --use-colors
```
nginx conf:
```
location /socketio/ {
proxy_pass http://localhost:8888/;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
}
```
On connect, I store the `user_ID` in the session and in a Redis set. If the `user_ID` already exists in the Redis set, I refuse the connection.
On disconnect, I remove the `user_ID` from the Redis set.
I find that some `user_ID`s exist in the session but not in the Redis set. I inspect the sessions using the following code:
```python
def get_online_user(sio):
    online_user = []  # was missing in the original snippet; needed for it to run
    for s in sio.eio.sockets.values():
        session = s.session.get('/', {})
        if session.get('user_ID') is not None:
            online_user.append(session['user_ID'])
    return online_user
```
Sometimes there are duplicate `user_ID`s across the sessions.

| closed | 2020-03-24T03:33:40Z | 2020-05-22T18:40:42Z | https://github.com/miguelgrinberg/python-socketio/issues/447 | [
"bug"
] | wangjiancn | 9 |
flasgger/flasgger | api | 292 | POST try it out does not work with multiform file data | When trying to POST to an endpoint that has the following parameter, the browse button does not work (no file-selection window appears).
```
- in: formData
name: image
type: file
description: The testkey image to predict upon
required: true
```
While it does work for GET. | open | 2019-04-10T21:19:58Z | 2019-04-10T21:19:58Z | https://github.com/flasgger/flasgger/issues/292 | [] | tjhgit | 0 |
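One thing worth checking (an assumption on my part, since the spec above omits it): Swagger 2.0 requires operations with `type: file` parameters to declare a multipart `consumes` block, and without it Swagger UI may not render the file picker:

```yaml
consumes:
  - multipart/form-data
parameters:
  - in: formData
    name: image
    type: file
    description: The testkey image to predict upon
    required: true
```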
tortoise/tortoise-orm | asyncio | 871 | KeyError xxx :: The program was running normally when this error suddenly appeared | Traceback (most recent call last):
File "/usr/local/lib/python3.8/dist-packages/tortoise/models.py", line 708, in _init_from_db
setattr(self, model_field, kwargs[key])
KeyError: 'account'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/data/hongliaoback/frontend/api.py", line 2181, in getUserInfo
apply_obj = await AuthorApply.get(user_id=user_obj.id)
File "/usr/local/lib/python3.8/dist-packages/tortoise/queryset.py", line 871, in _execute
instance_list = await self._db.executor_class(
File "/usr/local/lib/python3.8/dist-packages/tortoise/backends/base/executor.py", line 132, in execute_select
instance: "Model" = self.model._init_from_db(
File "/usr/local/lib/python3.8/dist-packages/tortoise/models.py", line 721, in _init_from_db
setattr(self, key, meta.fields_map[key].to_python_value(value))
KeyError: 'username'
| open | 2021-08-21T04:46:04Z | 2022-01-08T12:07:04Z | https://github.com/tortoise/tortoise-orm/issues/871 | [] | ChangeMoreNate | 3 |
Johnserf-Seed/TikTokDownload | api | 93 | Cannot download all videos from a Douyin profile page | The user's profile page has quite a lot of videos, more than 40,000, but after running, the program only downloaded about 400 and then closed by itself; I don't know what went wrong. On retrying it just reports that the videos already exist, runs for a few minutes, and exits by itself. Hoping the author can fix this, thank you | closed | 2022-02-15T11:08:48Z | 2022-03-30T09:34:51Z | https://github.com/Johnserf-Seed/TikTokDownload/issues/93 | [
"故障(bug)",
"额外求助(help wanted)",
"无效(invalid)"
] | zhou8898 | 2 |
HIT-SCIR/ltp | nlp | 538 | Found a small bug | On this line of [transformer_rel_linear.py](https://github.com/HIT-SCIR/ltp/blob/f3d4a25ee2fbb71613f76c99a47e70a5445b8c03/ltp/transformer_rel_linear.py#L49), shouldn't it be changed
from:
```python
if not self.use_cls:
```
改成:
```python
if not self.use_sep:
```
| closed | 2021-09-28T07:05:29Z | 2021-10-19T08:40:21Z | https://github.com/HIT-SCIR/ltp/issues/538 | [] | geasyheart | 1 |
tensorpack/tensorpack | tensorflow | 1,127 | SaverRestore(SessionInit) doesn't work | When I try to use SaverRestore(filename, ignore=["loc:@linear/W"]) to make the linear layer
invalid (i.e. excluded from restoring),
it doesn't work, however:
```
# -*- coding: utf-8 -*-
# File: sessinit.py
import os
import numpy as np
import six
import tensorflow as tf
from ..utils import logger
from .common import get_op_tensor_name
from .varmanip import SessionUpdate, get_checkpoint_path, get_savename_from_varname, is_training_name
__all__ = ['SessionInit', 'ChainInit',
'SaverRestore', 'SaverRestoreRelaxed', 'DictRestore',
'JustCurrentSession', 'get_model_loader']
class SessionInit(object):
""" Base class for utilities to load variables to a (existing) session. """
def init(self, sess):
"""
Initialize a session
Args:
sess (tf.Session): the session
"""
self._setup_graph()
self._run_init(sess)
def _setup_graph(self):
pass
def _run_init(self, sess):
pass
class JustCurrentSession(SessionInit):
""" This is a no-op placeholder"""
pass
class CheckpointReaderAdapter(object):
"""
An adapter to work around old checkpoint format, where the keys are op
names instead of tensor names (with :0).
"""
def __init__(self, reader):
self._reader = reader
m = self._reader.get_variable_to_shape_map()
self._map = {k if k.endswith(':0') else k + ':0': v
for k, v in six.iteritems(m)}
def get_variable_to_shape_map(self):
return self._map
def get_tensor(self, name):
if self._reader.has_tensor(name):
return self._reader.get_tensor(name)
if name in self._map:
assert name.endswith(':0'), name
name = name[:-2]
return self._reader.get_tensor(name)
def has_tensor(self, name):
return name in self._map
# some checkpoint might not have ':0'
def get_real_name(self, name):
if self._reader.has_tensor(name):
return name
assert self.has_tensor(name)
return name[:-2]
class MismatchLogger(object):
def __init__(self, exists, nonexists):
self._exists = exists
self._nonexists = nonexists
self._names = []
def add(self, name):
self._names.append(get_op_tensor_name(name)[0])
def log(self):
if len(self._names):
logger.warn("The following variables are in the {}, but not found in the {}: {}".format(
self._exists, self._nonexists, ', '.join(self._names)))
class SaverRestore(SessionInit):
"""
Restore a tensorflow checkpoint saved by :class:`tf.train.Saver` or :class:`ModelSaver`.
"""
def __init__(self, model_path, prefix=None, ignore=[]):
"""
Args:
model_path (str): a model name (model-xxxx) or a ``checkpoint`` file.
prefix (str): during restore, add a ``prefix/`` for every variable in this checkpoint.
ignore (list[str]): list of tensor names that should be ignored during loading, e.g. learning-rate
"""
if model_path.endswith('.npy') or model_path.endswith('.npz'):
logger.warn("SaverRestore expect a TF checkpoint, but got a model path '{}'.".format(model_path) +
" To load from a dict, use 'DictRestore'.")
model_path = get_checkpoint_path(model_path)
self.path = model_path # attribute used by AutoResumeTrainConfig!
self.prefix = prefix
self.ignore = [i if i.endswith(':0') else i + ':0' for i in ignore]
def _setup_graph(self):
dic = self._get_restore_dict()
# for dicts in dic:
# print(dicts)
# exit()
self.saver = tf.train.Saver(var_list=dic, name=str(id(dic)))
def _run_init(self, sess):
logger.info("Restoring checkpoint from {} ...".format(self.path))
self.saver.restore(sess, self.path)
@staticmethod
def _read_checkpoint_vars(model_path):
""" return a set of strings """
reader = tf.train.NewCheckpointReader(model_path)
reader = CheckpointReaderAdapter(reader) # use an adapter to standardize the name
ckpt_vars = reader.get_variable_to_shape_map().keys()
return reader, set(ckpt_vars)
def _match_vars(self, func):
reader, chkpt_vars = SaverRestore._read_checkpoint_vars(self.path)
graph_vars = tf.global_variables()
chkpt_vars_used = set()
mismatch = MismatchLogger('graph', 'checkpoint')
for v in graph_vars:
name = get_savename_from_varname(v.name, varname_prefix=self.prefix)
if name in self.ignore and reader.has_tensor(name):
logger.info("Variable {} in the graph will not be loaded from the checkpoint!".format(name))
else:
if reader.has_tensor(name):
func(reader, name, v)
chkpt_vars_used.add(name)
else:
# use tensor name (instead of op name) for logging, to be consistent with the reverse case
if not is_training_name(v.name):
mismatch.add(v.name)
mismatch.log()
mismatch = MismatchLogger('checkpoint', 'graph')
if len(chkpt_vars_used) < len(chkpt_vars):
unused = chkpt_vars - chkpt_vars_used
for name in sorted(unused):
if not is_training_name(name):
mismatch.add(name)
mismatch.log()
def _get_restore_dict(self):
var_dict = {}
def f(reader, name, v):
name = reader.get_real_name(name)
assert name not in var_dict, "Restore conflict: {} and {}".format(v.name, var_dict[name].name)
var_dict[name] = v
self._match_vars(f)
return var_dict
class SaverRestoreRelaxed(SaverRestore):
""" Same as :class:`SaverRestore`, but has more relaxed constraints.
It allows upcasting certain variables, or reshape certain
variables when there is a mismatch that can be fixed.
Another advantage is that it doesn't add any new ops to the graph.
But it is also slower than :class:`SaverRestore`.
"""
def _run_init(self, sess):
logger.info(
"Restoring checkpoint from {} ...".format(self.path))
def f(reader, name, v):
val = reader.get_tensor(name)
v.load(SessionUpdate.relaxed_value_for_var(val, v))
with sess.as_default():
self._match_vars(f)
class DictRestore(SessionInit):
"""
Restore variables from a dictionary.
"""
def __init__(self, variable_dict):
"""
Args:
variable_dict (dict): a dict of {name: value}
"""
assert isinstance(variable_dict, dict), type(variable_dict)
# use varname (with :0) for consistency
self._prms = {get_op_tensor_name(n)[1]: v for n, v in six.iteritems(variable_dict)}
def _run_init(self, sess):
variables = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES)
variable_names = set([k.name for k in variables])
param_names = set(six.iterkeys(self._prms))
intersect = variable_names & param_names
logger.info("Variables to restore from dict: {}".format(', '.join(map(str, intersect))))
mismatch = MismatchLogger('graph', 'dict')
for k in sorted(variable_names - param_names):
if not is_training_name(k):
mismatch.add(k)
mismatch.log()
mismatch = MismatchLogger('dict', 'graph')
for k in sorted(param_names - variable_names):
mismatch.add(k)
mismatch.log()
upd = SessionUpdate(sess, [v for v in variables if v.name in intersect])
logger.info("Restoring {} variables from dict ...".format(len(intersect)))
upd.update({name: value for name, value in six.iteritems(self._prms) if name in intersect})
class ChainInit(SessionInit):
"""
Initialize a session by a list of :class:`SessionInit` instance, executed one by one.
This can be useful for, e.g., loading several models from different files
to form a composition of models.
"""
def __init__(self, sess_inits):
"""
Args:
sess_inits (list[SessionInit]): list of :class:`SessionInit` instances.
"""
self.inits = sess_inits
def _setup_graph(self):
for i in self.inits:
i._setup_graph()
def _run_init(self, sess):
for i in self.inits:
i._run_init(sess)
def get_model_loader(filename):
"""
Get a corresponding model loader by looking at the file name.
Returns:
SessInit: either a :class:`DictRestore` (if name ends with 'npy/npz') or
:class:`SaverRestore` (otherwise).
"""
assert isinstance(filename, six.string_types), filename
filename = os.path.expanduser(filename)
if filename.endswith('.npy'):
assert tf.gfile.Exists(filename), filename
return DictRestore(np.load(filename, encoding='latin1').item())
elif filename.endswith('.npz'):
assert tf.gfile.Exists(filename), filename
obj = np.load(filename)
# print(obj)
# exit()
return DictRestore(dict(obj))
else:
return SaverRestore(filename,ignore=["loc:@linear/W"])
``` | closed | 2019-04-02T09:51:16Z | 2019-04-10T07:29:47Z | https://github.com/tensorpack/tensorpack/issues/1127 | [] | qianwen96 | 1 |
Kludex/mangum | asyncio | 119 | Store the 'requestContext' in WebSocket message events | Currently we just store the initial connection event data; we should add a key to the scope for updating the message request context.
"improvement",
"websockets"
] | jordaneremieff | 0 |
PaddlePaddle/models | nlp | 4,767 | Cannot find library pointnet_lib.so from PyCharm | OS: Ubuntu 16.04
g++: 4.8
paddle: 1.8.1.post107
CUDA: 10.1
Hello,
After successfully compiling the pointnet_lib.so dynamic library for PointNet++, with the tests all passing when run from the terminal, running the tests from PyCharm fails with the error below:
```
W0724 15:14:30.073087 1176505 dynamic_loader.cc:120] Can not find library: /home/jake/Documents/paddle/models/PaddleCV/3d_vision/PointNet++/ext_op/src/pointnet_lib.so. The process maybe hang. Please try to add the lib path to LD_LIBRARY_PATH.
Traceback (most recent call last):
File "/home/jake/Documents/paddle/models/PaddleCV/3d_vision/VoteNet/ext_op/tests/test_farthest_point_sampling_op.py", line 23, in <module>
import pointnet_lib
File "../pointnet_lib.py", line 19, in <module>
fluid.load_op_library(os.path.join(file_dir, 'src/pointnet_lib.so'))
File "/home/jake/anaconda3/envs/paddle-dev/lib/python3.7/site-packages/paddle/fluid/framework.py", line 5162, in load_op_library
core.load_op_library(lib_filename)
paddle.fluid.core_avx.EnforceNotMet:
--------------------------------------------
C++ Call Stacks (More useful to developers):
--------------------------------------------
0 std::string paddle::platform::GetTraceBackString<char const*>(char const*&&, char const*, int)
1 paddle::platform::EnforceNotMet::EnforceNotMet(std::__exception_ptr::exception_ptr, char const*, int)
2 paddle::platform::dynload::GetOpDsoHandle(std::string const&)
3 paddle::framework::LoadOpLib(std::string const&)
----------------------
Error Message Summary:
----------------------
Error: Failed to find dynamic library: /home/jake/Documents/paddle/models/PaddleCV/3d_vision/PointNet++/ext_op/src/pointnet_lib.so ( libpaddle_framework.so: cannot open shared object file: No such file or directory )
Please specify its path correctly using following ways:
Method. set environment variable LD_LIBRARY_PATH on Linux or DYLD_LIBRARY_PATH on Mac OS.
For instance, issue command: export LD_LIBRARY_PATH=...
Note: After Mac OS 10.11, using the DYLD_LIBRARY_PATH is impossible unless System Integrity Protection (SIP) is disabled. at (/paddle/paddle/fluid/platform/dynload/dynamic_loader.cc:177)
```
能帮忙看看是怎么回事吗?谢谢! | closed | 2020-07-24T07:34:46Z | 2020-08-06T02:36:53Z | https://github.com/PaddlePaddle/models/issues/4767 | [] | jakeju92 | 1 |
dynaconf/dynaconf | django | 851 | [bug] dynaconf is not installable in venvs without setuptools | ### Bug description
`dynaconf` doesn't define any dependencies (due to vendoring), but that's not really true, because there is one runtime dependency - `pkg_resources` distributed with `setuptools`:
https://github.com/dynaconf/dynaconf/blob/0439bf836f1a22e96e4c71d388c2e68fd9b70425/dynaconf/contrib/flask_dynaconf.py#L17
How is it possible that it actually works? Only thanks to a "de facto" standard of pre-installing `setuptools` a) together with Python interpreter b) when [creating virtual environments](https://packaging.python.org/en/latest/tutorials/installing-packages/#creating-virtual-environments):
>[venv](https://docs.python.org/3/library/venv.html) is available by default in Python 3.3 and later, and installs [pip](https://packaging.python.org/en/latest/key_projects/#pip) and [setuptools](https://packaging.python.org/en/latest/key_projects/#setuptools) into created virtual environments in Python 3.4 and later.
>
>[virtualenv](https://packaging.python.org/en/latest/key_projects/#virtualenv) needs to be installed separately, but supports Python 2.7+ and Python 3.3+, and [pip](https://packaging.python.org/en/latest/key_projects/#pip), [setuptools](https://packaging.python.org/en/latest/key_projects/#setuptools) and [wheel](https://packaging.python.org/en/latest/key_projects/#wheel) are always installed into created virtual environments by default (regardless of Python version).
It means `setuptools` was not explicitly declared, but it was assumed it would be there anyway. But as I mentioned - it never was a real standard and it caused serious issues in the Python build system as precisely described in [PEP 518](https://peps.python.org/pep-0518/#rationale):
>But when a project chooses to use setuptools, the use of an executable file like setup.py becomes an issue. You can’t execute a setup.py file without knowing its dependencies, but currently there is no standard way to know what those dependencies are in an automated fashion without executing the setup.py file where that information is stored. It’s a catch-22 of a file not being runnable without knowing its own contents which can’t be known programmatically unless you run the file.
And that's why `PEP 518` introduced the [build-system table](https://peps.python.org/pep-0518/#build-system-table) defined in `pyproject.toml`:
```
[build-system]
# Minimum requirements for the build system to execute.
requires = ["setuptools", "wheel"] # PEP 508 specifications.
```
Thanks to that and [PEP 517](https://peps.python.org/pep-0517/) - package managers know what are the actual build dependencies and how they should be handled. What does it imply? No need to blindly install `setuptools` anymore.
Based on that `poetry` introduced a flag [`virtualenvs.options.no-setuptools`](https://python-poetry.org/docs/configuration/#virtualenvsoptionsno-setuptools), which is currently disabled by default, but generally recommended:
<img width="945" alt="image" src="https://user-images.githubusercontent.com/24907857/210487739-354f7b1c-7a70-4853-8c1d-69aab3444644.png">
What's the implication of the above? Well, I guess you already know:
```shell
❯ poetry config --local virtualenvs.options.no-setuptools true
❯ poetry add dynaconf
❯ poetry run python -c "from dynaconf.contrib.flask_dynaconf import DynaconfConfig"
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/Users/lanskij/Repositories/dyna/.venv/lib/python3.10/site-packages/dynaconf/__init__.py", line 5, in <module>
from dynaconf.contrib import DjangoDynaconf # noqa
File "/Users/lanskij/Repositories/dyna/.venv/lib/python3.10/site-packages/dynaconf/contrib/__init__.py", line 4, in <module>
from dynaconf.contrib.flask_dynaconf import DynaconfConfig # noqa
File "/Users/lanskij/Repositories/dyna/.venv/lib/python3.10/site-packages/dynaconf/contrib/flask_dynaconf.py", line 17, in <module>
import pkg_resources
ModuleNotFoundError: No module named 'pkg_resources'
```
### Required fixes
- **package building**
As showed above - we can't assume anymore `setuptools` would be **for sure** pre-installed in given environment. That's why a proper definition of build dependency should be added to `pyproject.toml`:
```toml
[build-system]
requires = ["setuptools"]
build-backend = "setuptools.build_meta"
```
PS The current usage of `setup_requires` in `setup.py` is redundant, it's even mentioned directly in `PEP 518`:
> Setuptools tried to solve this with a setup_requires argument to its setup() function [[3]](https://peps.python.org/pep-0518/#setup-args). This solution has a number of issues, such as:
> - This cannot include setuptools itself nor can it include a replacement to setuptools, which means that projects such as numpy.distutils are largely incapable of utilizing it and projects cannot take advantage of newer setuptools features until their users naturally upgrade the version of setuptools to a newer one.
- **package runtime**
Here we have 3 solutions:
- specify `setuptools` as a package dependency
- vendor `setuptools`
- get rid of `setuptools`- and `pkg_resources`-related runtime logic at all
Personally - I would vote for the last one. Looking quickly - there's exactly one such line, which is:
https://github.com/dynaconf/dynaconf/blob/0439bf836f1a22e96e4c71d388c2e68fd9b70425/dynaconf/contrib/flask_dynaconf.py#L226
That was probably copy-pasted from the `flask` codebase. But `flask` maintainers already fixed that on their side in https://github.com/pallets/flask/issues/4419 and switched to the built-in `importlib.metadata`, so when aligned - there would be also no need for `dynaconf` to depend on `setuptools` in runtime anymore. | closed | 2023-01-04T05:50:32Z | 2023-07-13T19:11:05Z | https://github.com/dynaconf/dynaconf/issues/851 | [
"bug",
"HIGH"
] | jaklan | 2 |
brightmart/text_classification | tensorflow | 126 | intermediate data files | First of all thanks for your effort to make this repo interesting. I ran the preprocessing notebook and was able to get some of the files, however the other scripts use lot of data files which is not easily accessible. I tried lot of time getting the Baidu storage account but couldn't because of oversees phone number. I was just wondering if you can share the script that generates those data files you used in your scripts. | open | 2019-07-12T13:52:36Z | 2019-07-24T11:23:18Z | https://github.com/brightmart/text_classification/issues/126 | [] | rbaral | 1 |
deepspeedai/DeepSpeed | pytorch | 6,920 | Cannot install the async_io op even though its compatibility flag displays OK in ds_report! | When I use `DS_BUILD_AIO=1 CFLAGS="-I$CONDA_PREFIX/include/ -I/usr/include/" LDFLAGS="-L$CONDA_PREFIX/lib/ -L/usr/lib/x86_64-linux-gnu/" pip install -e .` to install the async_io op, I get a falsely successful message.
It does display `Successfully installed deepspeed`, but when I run `ds_report` the async_io op still does not show as installed (even though its compatibility check shows OK).
I then printed the stderr message and found a link error.
To figure out how this comes about, I read the source code, e.g. setup.py,
and found the problem at setup.py line 182:
`for op_name, builder in ALL_OPS.items():
op_compatible = builder.is_compatible()`
When op_name is "async_io", builder.is_compatible() returns False. I opened DeepSpeed/deepspeed/ops/op_builder/async_io.py and found `def is_compatible(self, verbose=False)` at line 93. Its result depends on line 99: `aio_compatible = self.has_function('io_submit', ('aio', ))`.
Going on to `def has_function()` in DeepSpeed/deepspeed/ops/op_builder/builder.py line 308, I confirmed that it raises a LinkError at line 362:
` compiler.link_executable(objs,
os.path.join(tempdir, 'a.out'),
extra_preargs=self.strip_empty_entries(ldflags),
libraries=libraries,
library_dirs=library_dirs)` by "distutils.unixccompiler.UnixCCompiler"
I don't know why this happened; to work around it I had to change the class AsyncIOBuilder in DeepSpeed/deepspeed/ops/op_builder/async_io.py as shown in the (not preserved) screenshot.
Then I installed it again and got the correct result.
I hope you can figure out why the link error occurs. I also don't know whether my change causes AIO to be disabled when I use offload.
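As a side note, the failing symbol check can also be probed without invoking the compiler; the sketch below is an assumed alternative diagnostic, not DeepSpeed's actual code:

```python
import ctypes
import ctypes.util

def has_symbol(libname, symbol):
    """Return True if `symbol` resolves in shared library `libname`,
    mirroring has_function('io_submit', ('aio',)) via the dynamic
    loader instead of a compile-and-link test."""
    path = ctypes.util.find_library(libname) or libname
    try:
        return hasattr(ctypes.CDLL(path), symbol)
    except OSError:
        return False  # the library itself cannot be loaded
```

For example, `has_symbol('aio', 'io_submit')` distinguishes a genuinely missing libaio from a broken linker invocation.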
| closed | 2024-12-31T21:34:54Z | 2025-01-13T17:08:17Z | https://github.com/deepspeedai/DeepSpeed/issues/6920 | [
"bug",
"build"
] | LZhengguo | 3 |
autokey/autokey | automation | 513 | [Request] Is there any possability of adding in addition to Python, the ability to trigger PowerShell scripts from hotkeys? | Hey there,
I don't have an issue per se, but was wondering if it might be possible to add the ability to trigger/call a [PowerShell](https://docs.microsoft.com/en-us/powershell/scripting/install/installing-powershell-core-on-linux?view=powershell-7.1) script as a secondary option to Python? If not, I can certainly understand but figured it was worth an ask.
Thanks,
-MH | closed | 2021-02-21T00:18:43Z | 2021-02-24T00:13:08Z | https://github.com/autokey/autokey/issues/513 | [
"user support"
] | MostHated | 2 |
python-security/pyt | flask | 54 | Write tests for __main__.py | As we can see on CodeClimate https://codeclimate.com/github/python-security/pyt/coverage/5935971dbf92ed000102998b there is pretty low test coverage of main, I understand why this is but adding some tests for it would increase our test coverage percentage and 75% isn't satisfying.
If you have any trouble with this I can help, I am going to label this issue as Easy so new comers see it. | closed | 2017-06-30T21:57:33Z | 2018-04-28T18:41:05Z | https://github.com/python-security/pyt/issues/54 | [
"good first issue"
] | KevinHock | 9 |
ray-project/ray | data-science | 51,455 | RLLIB from_checkpoint keeps the jobs stuck in "Waiting for Scheduling" status | ### What happened + What you expected to happen
I saved the model successfully, but when I restarted and tried loading, the call to from_checkpoint never completes and the dashboard shows that it is waiting to be scheduled. However, if I load the model from separately saved state using the lower-level API, it works fine. This is an APPO config.
Error loading checkpoint: Placement group creation timed out. Make sure your cluster either has enough resources or use an autoscaling cluster. If you are running on a cluster, make sure you specify an address in ray.init(), for example, ray.init("auto"). You can also increase the timeout by setting the TRAIN_PLACEMENT_GROUP_TIMEOUT_S environment variable.
Note that there is no lack of resources.
### Versions / Dependencies
Version 2.43.0, linux
### Reproduction script
Just normal APPO config
### Issue Severity
Blocking | open | 2025-03-18T17:57:15Z | 2025-03-20T14:06:32Z | https://github.com/ray-project/ray/issues/51455 | [
"bug",
"triage",
"rllib"
] | Vetti420 | 0 |
dbfixtures/pytest-postgresql | pytest | 727 | License confusion: GPLv3 and LGPLv3 in repository | ### What action do you want to perform
Move COPYING.lesser to COPYING to make it clear it's LGPL 3, and remove the GPLv3 license entirely from this repository.
### What are the results
Github shows that the project is licensed under LGPLv3 in the side-bar, and not both LGPL and GPL. This would match what is being published to PyPI
### What are the expected results
Clarifying what license the project is licensed under | closed | 2023-04-17T21:31:25Z | 2023-12-15T12:37:16Z | https://github.com/dbfixtures/pytest-postgresql/issues/727 | [] | archoversight | 5 |
numba/numba | numpy | 9,209 | Numpy dependency not updated in numba wheels | <!--
Thanks for opening an issue! To help the Numba team handle your information
efficiently, please first ensure that there is no other issue present that
already describes the issue you have
(search at https://github.com/numba/numba/issues?&q=is%3Aissue).
-->
## Reporting a bug
<!--
Before submitting a bug report please ensure that you can check off these boxes:
-->
- [x] I have tried using the latest released version of Numba (most recent is
visible in the change log (https://github.com/numba/numba/blob/main/CHANGE_LOG).
- [x] I have included a self contained code sample to reproduce the problem.
i.e. it's possible to run as 'python bug.py'.
Numba recently shifted its oldest supported numpy version from 1.21 to 1.22, but the minimum required numpy version in `setup.py` was not updated. As a result, numba 0.58 will happily install in an environment with numpy 1.21 but will then complain that it doesn't support numpy 1.21 when it is run.
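The mismatch boils down to two inconsistent version floors, taken from the logs below (a quick sanity check, not numba code):

```python
# Floor enforced at import time by _ensure_critical_deps (from the traceback)
runtime_floor = (1, 22)
# Floor advertised by the wheel metadata: "numpy<1.26,>=1.21" (from the pip output)
wheel_floor = (1, 21)

# Any numpy version in [1.21, 1.22) satisfies pip's resolver
# but then fails numba's import-time check.
assert wheel_floor < runtime_floor
```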
## Steps to reproduce
1. Install numpy 1.21
2. Install latest numba release
```shell
$ pip install numba Collecting numba
Obtaining dependency information for numba from https://files.pythonhosted.org/packages/a1/d2/e3d9752c53244a5cc7abb0c156e0a13bae3dfd99946f9793872963d946af/numba-0.58.0-cp39-cp39-manylinux2014_x86_64.manylinux_2_17_x86_64.whl.metadata
Using cached numba-0.58.0-cp39-cp39-manylinux2014_x86_64.manylinux_2_17_x86_64.whl.metadata (2.7 kB)
Requirement already satisfied: llvmlite<0.42,>=0.41.0dev0 in /home6/achris10/.pyenv/versions/mphys/lib/python3.9/site-packages (from numba) (0.41.0)
Requirement already satisfied: numpy<1.26,>=1.21 in /nasa/pkgsrc/toss4/2022Q1-rome/lib/python3.9/site-packages (from numba) (1.21.5)
Using cached numba-0.58.0-cp39-cp39-manylinux2014_x86_64.manylinux_2_17_x86_64.whl (3.6 MB)
Installing collected packages: numba
Successfully installed numba-0.58.0
```
3. Try running numba
```shell
$ python -c "from numba import njit"
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home6/achris10/.pyenv/versions/mphys/lib/python3.9/site-packages/numba/__init__.py", line 55, in <module>
_ensure_critical_deps()
File "/home6/achris10/.pyenv/versions/mphys/lib/python3.9/site-packages/numba/__init__.py", line 40, in _ensure_critical_deps
raise ImportError(msg)
ImportError: Numba needs NumPy 1.22 or greater. Got NumPy 1.21.
```
<!--
Please include details of the bug here, including, if applicable, what you
expected to happen!
-->
| closed | 2023-09-22T17:26:12Z | 2023-10-18T15:24:41Z | https://github.com/numba/numba/issues/9209 | [
"bug - build/packaging"
] | A-CGray | 12 |
statsmodels/statsmodels | data-science | 9,330 | ZeroInflatedNegativeBinomialP with random slopes | #### Is your feature request related to a problem? Please describe
I'd like to run this R code in python statsmodels:
```R
model <- glmmTMB(
y ~ x + (1 + x | z),
ziformula = ~ x + a + b + c,
data = df, family = nbinom2(link = "log")
)
```
As far as I understand this is not currently possible because of the mixed model random slopes part doesn't fit in `ZeroInflatedNegativeBinomialP`.
#### Describe the solution you'd like
An implementation of `ZeroInflatedNegativeBinomialP` with random intercepts and slopes.
#### Describe alternatives you have considered
`pymer4` doesn't have it as far as I could see. Also, `ZeroInflatedNegativeBinomialP` in statsmodels.
| open | 2024-08-13T13:20:42Z | 2024-08-13T15:07:01Z | https://github.com/statsmodels/statsmodels/issues/9330 | [
"type-enh",
"comp-discrete"
] | david26694 | 1 |
iperov/DeepFaceLab | machine-learning | 5,270 | 3090 Failed training SAEHD | Hey everyone,
Could someone help me address this issue, please?
I got this error message when starting training.
Running trainer.
[new] No saved models found. Enter a name of a new model :
new
Model first run.
Choose one or several GPU idxs (separated by comma).
[CPU] : CPU
[0] : GeForce RTX 3090
[0] Which GPU indexes to choose? :
0
Caching GPU kernels...
[0] Autobackup every N hour ( 0..24 ?:help ) :
0
[n] Write preview history ( y/n ?:help ) :
n
[0] Target iteration :
0
[y] Flip faces randomly ( y/n ?:help ) :
y
[16] Batch_size ( ?:help ) :
16
[256] Resolution ( 64-640 ?:help ) :
256
[wf] Face type ( h/mf/f/wf/head ?:help ) :
wf
[liae-ud] AE architecture ( ?:help ) :
liae-ud
[256] AutoEncoder dimensions ( 32-1024 ?:help ) :
256
[64] Encoder dimensions ( 16-256 ?:help ) :
64
[64] Decoder dimensions ( 16-256 ?:help ) :
64
[22] Decoder mask dimensions ( 16-256 ?:help ) :
22
[y] Masked training ( y/n ?:help ) :
y
[n] Eyes and mouth priority ( y/n ?:help ) :
n
[n] Uniform yaw distribution of samples ( y/n ?:help ) :
n
[y] Place models and optimizer on GPU ( y/n ?:help ) :
y
[y] Use AdaBelief optimizer? ( y/n ?:help ) :
y
[n] Use learning rate dropout ( n/y/cpu ?:help ) :
n
[y] Enable random warp of samples ( y/n ?:help ) :
y
[0.0] GAN power ( 0.0 .. 10.0 ?:help ) :
0.0
[0.0] Face style power ( 0.0..100.0 ?:help ) :
0.0
[0.0] Background style power ( 0.0..100.0 ?:help ) :
0.0
[none] Color transfer for src faceset ( none/rct/lct/mkl/idt/sot ?:help ) :
none
[n] Enable gradient clipping ( y/n ?:help ) :
n
[n] Enable pretraining mode ( y/n ?:help ) :
n
Initializing models: 60%|#####################################8 | 3/5 [00:10<00:06, 3.16s/it]Traceback (most recent call last):
Traceback (most recent call last):
File "D:\software\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 64, in <module>
Traceback (most recent call last):
File "D:\software\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 64, in <module>
Traceback (most recent call last):
File "D:\software\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 64, in <module>
File "D:\software\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 64, in <module>
from tensorflow.python._pywrap_tensorflow_internal import * from tensorflow.python._pywrap_tensorflow_internal import *
from tensorflow.python._pywrap_tensorflow_internal import *
from tensorflow.python._pywrap_tensorflow_internal import *ImportError
ImportError
: ImportError: ImportErrorDLL load failed: The paging file is too small for this operation to complete.: DLL load failed: The paging file is too small for this operation to complete.:
DLL load failed: The paging file is too small for this operation to complete.
DLL load failed: The paging file is too small for this operation to complete.
During handling of the above exception, another exception occurred:
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
During handling of the above exception, another exception occurred:
File "<string>", line 1, in <module>
Traceback (most recent call last):
File "<string>", line 1, in <module>
Traceback (most recent call last):
File "multiprocessing\spawn.py", line 105, in spawn_main
File "<string>", line 1, in <module>
File "multiprocessing\spawn.py", line 105, in spawn_main
File "<string>", line 1, in <module>
File "multiprocessing\spawn.py", line 115, in _main
File "multiprocessing\spawn.py", line 105, in spawn_main
File "multiprocessing\spawn.py", line 115, in _main
File "multiprocessing\spawn.py", line 105, in spawn_main
File "D:\software\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\core\leras\initializers\__init__.py", line 2, in <module>
File "multiprocessing\spawn.py", line 115, in _main
File "D:\software\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\core\leras\initializers\__init__.py", line 2, in <module>
File "multiprocessing\spawn.py", line 115, in _main
File "D:\software\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\core\leras\initializers\__init__.py", line 2, in <module>
File "D:\software\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\core\leras\initializers\__init__.py", line 2, in <module>
from tensorflow.python.ops import init_ops from tensorflow.python.ops import init_ops
from tensorflow.python.ops import init_ops
from tensorflow.python.ops import init_ops File "D:\software\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\__init__.py", line 41, in <module>
File "D:\software\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\__init__.py", line 41, in <module>
File "D:\software\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\__init__.py", line 41, in <module>
File "D:\software\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\__init__.py", line 41, in <module>
from tensorflow.python.tools import module_util as _module_util from tensorflow.python.tools import module_util as _module_util
from tensorflow.python.tools import module_util as _module_util
from tensorflow.python.tools import module_util as _module_util File "D:\software\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\__init__.py", line 39, in <module>
File "D:\software\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\__init__.py", line 39, in <module>
File "D:\software\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\__init__.py", line 39, in <module>
File "D:\software\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\__init__.py", line 39, in <module>
from tensorflow.python import pywrap_tensorflow as _pywrap_tensorflow from tensorflow.python import pywrap_tensorflow as _pywrap_tensorflow
from tensorflow.python import pywrap_tensorflow as _pywrap_tensorflow
from tensorflow.python import pywrap_tensorflow as _pywrap_tensorflow File "D:\software\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 83, in <module>
File "D:\software\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 83, in <module>
File "D:\software\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 83, in <module>
File "D:\software\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 83, in <module>
raise ImportError(msg) raise ImportError(msg)
raise ImportError(msg)
raise ImportError(msg)ImportError
ImportError
: ImportError: Traceback (most recent call last):
File "D:\software\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 64, in <module>
from tensorflow.python._pywrap_tensorflow_internal import *
ImportError: DLL load failed: The paging file is too small for this operation to complete.
Failed to load the native TensorFlow runtime.
See https://www.tensorflow.org/install/errors
for some common reasons and solutions. Include the entire stack trace
above this error message when asking for help.: ImportErrorTraceback (most recent call last):
File "D:\software\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 64, in <module>
from tensorflow.python._pywrap_tensorflow_internal import *
ImportError: DLL load failed: The paging file is too small for this operation to complete.
Failed to load the native TensorFlow runtime.
See https://www.tensorflow.org/install/errors
for some common reasons and solutions. Include the entire stack trace
above this error message when asking for help.
: Traceback (most recent call last):
File "D:\software\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 64, in <module>
from tensorflow.python._pywrap_tensorflow_internal import *
ImportError: DLL load failed: The paging file is too small for this operation to complete.
Failed to load the native TensorFlow runtime.
See https://www.tensorflow.org/install/errors
for some common reasons and solutions. Include the entire stack trace
above this error message when asking for help.
Traceback (most recent call last):
File "D:\software\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 64, in <module>
from tensorflow.python._pywrap_tensorflow_internal import *
ImportError: DLL load failed: The paging file is too small for this operation to complete.
Failed to load the native TensorFlow runtime.
See https://www.tensorflow.org/install/errors
for some common reasons and solutions. Include the entire stack trace
above this error message when asking for help. | open | 2021-01-30T03:01:53Z | 2023-06-08T22:21:00Z | https://github.com/iperov/DeepFaceLab/issues/5270 | [] | Mavourn3en | 1 |
JaidedAI/EasyOCR | machine-learning | 593 | very bad results when training on customized English data | Thank you so much for the project. It helps a lot!
But when we train on the customized data, we only see a significant performance decline compared to the default model (english_g2.pth). Could you help us figure out what's going on?
We followed the [Instruction](https://github.com/JaidedAI/EasyOCR/blob/master/custom_model.md). We first generated ~2M images using the customized content (all English, with the same characters and ordering as in the default config file). Then we ran training with the following configs.
1. Retraining from scratch (`freeze_FeatureExtraction` is False, no saved model). We tried different batch sizes (32, 64, 256). In each case we trained for about 600K iterations. Training accuracy reaches about 80% and training loss drops to 0.2, but the best_accuracy model performs very badly: test accuracy dropped from 36% (english_g2) to around 1%.
2. Fine-tuning from english_g2 (`freeze_FeatureExtraction` is True). We tried batch size 32 for 150K iterations. Training accuracy is around 60%, and test accuracy gets worse the longer we fine-tune (36% original -> 22% -> 2%).
We wonder what could be going wrong. Can you please help?
| closed | 2021-11-11T14:30:42Z | 2022-08-07T05:01:21Z | https://github.com/JaidedAI/EasyOCR/issues/593 | [] | deeptek012 | 1 |
strawberry-graphql/strawberry | django | 3,444 | Broken documentation examples in page https://strawberry.rocks/docs/guides/dataloaders | Example within https://strawberry.rocks/docs/guides/dataloaders#usage-with-context is broken and can't be run due to invalid imports. | closed | 2024-04-10T12:15:52Z | 2025-03-20T15:56:41Z | https://github.com/strawberry-graphql/strawberry/issues/3444 | [] | tejusp | 6 |
stanfordnlp/stanza | nlp | 1,094 | [QUESTION] how to use slightly older(4.4) corenlp version in stanza | As my company already uses corenlp 4.4, I need use corenlp 4.4 for compatability.
I instanlled stanza and install corenlp from stanza 1.4, I extrally and definitely download 4.4 English-extra model.
stanza.install_corenlp()
stanza.download_corenlp_models(model='english-extra', version='4.4.0', dir="/data/stanza_corenlp-4.4.0")
from stanza.server import CoreNLPClient
with CoreNLPClient(
server_id='second-server-name',
annotators=['tokenize', 'pos'],
) as client:
How can I explicitly pin the CoreNLP 4.4 English model in stanza?
In short, CoreNLP 4.4 is really excellent at POS tagging and we definitely hope to use this version from stanza.
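One hedged possibility, assuming the CoreNLP 4.4.0 jars and models live in the download directory used above: Stanza's `CoreNLPClient` resolves the server classpath from the `CORENLP_HOME` environment variable by default, so pointing it there before creating the client should pin that version.

```python
import os

# CoreNLPClient builds its classpath from CORENLP_HOME by default, so this
# pins the client to the 4.4.0 install. The path reuses the one from this
# report; it must contain the CoreNLP jars and model files.
os.environ["CORENLP_HOME"] = "/data/stanza_corenlp-4.4.0"

# from stanza.server import CoreNLPClient
# with CoreNLPClient(annotators=["tokenize", "pos"]) as client:
#     ...
```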
thanks in advance!!
| closed | 2022-08-08T09:19:44Z | 2022-08-08T16:07:40Z | https://github.com/stanfordnlp/stanza/issues/1094 | [
"question"
] | rocke2020 | 2 |
yt-dlp/yt-dlp | python | 12,329 | Nhaccuatui new site request | ### Checklist
- [x] I'm reporting a new site support request
- [x] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [x] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [x] I've checked that none of provided URLs [violate any copyrights](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#is-the-website-primarily-used-for-piracy) or contain any [DRM](https://en.wikipedia.org/wiki/Digital_rights_management) to the best of my knowledge
- [x] I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=is%3Aissue%20-label%3Aspam%20%20) for similar requests **including closed ones**. DO NOT post duplicates
- [x] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and am willing to share it if required
### Region
Vietnam
### Example URLs
https://www.nhaccuatui.com/bai-hat/this-love-davichi.R07lnYhmtOXV.html
### Provide a description that is worded well enough to be understood
Hi. I would like to make a new site request for nhaccuatui.com.
This site is quite famous in Vietnam.
### Provide verbose output that clearly demonstrates the problem
- [x] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [x] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [x] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['https://www.nhaccuatui.com/bai-hat/this-love-davichi.R07lnYhmtOXV.html', '-vU']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version stable@2025.01.26 from yt-dlp/yt-dlp [3b4531934] (pip)
[debug] Python 3.12.6 (CPython AMD64 64bit) - Windows-11-10.0.22631-SP0 (OpenSSL 3.0.15 3 Sep 2024)
[debug] exe versions: ffmpeg N-118448-g43be8d0728-20250209 (setts), ffprobe N-118448-g43be8d0728-20250209
[debug] Optional libraries: certifi-2024.12.14, requests-2.32.3, sqlite3-3.45.3, urllib3-2.3.0
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests
[debug] Loaded 1839 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: stable@2025.01.26 from yt-dlp/yt-dlp
yt-dlp is up to date (stable@2025.01.26 from yt-dlp/yt-dlp)
[generic] Extracting URL: https://www.nhaccuatui.com/bai-hat/this-love-davichi.R07lnYhmtOXV.html
[generic] this-love-davichi.R07lnYhmtOXV: Downloading webpage
WARNING: [generic] Falling back on generic information extractor
[generic] this-love-davichi.R07lnYhmtOXV: Extracting information
[debug] Looking for embeds
ERROR: Unsupported URL: https://www.nhaccuatui.com/bai-hat/this-love-davichi.R07lnYhmtOXV.html
Traceback (most recent call last):
File "C:\Users\Phan Linh\AppData\Local\Programs\Python\Python312\Lib\site-packages\yt_dlp\YoutubeDL.py", line 1637, in wrapper
return func(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Phan Linh\AppData\Local\Programs\Python\Python312\Lib\site-packages\yt_dlp\YoutubeDL.py", line 1772, in __extract_info
ie_result = ie.extract(url)
^^^^^^^^^^^^^^^
File "C:\Users\Phan Linh\AppData\Local\Programs\Python\Python312\Lib\site-packages\yt_dlp\extractor\common.py", line 742, in extract
ie_result = self._real_extract(url)
^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Phan Linh\AppData\Local\Programs\Python\Python312\Lib\site-packages\yt_dlp\extractor\generic.py", line 2553, in _real_extract
raise UnsupportedError(url)
yt_dlp.utils.UnsupportedError: Unsupported URL: https://www.nhaccuatui.com/bai-hat/this-love-davichi.R07lnYhmtOXV.html
``` | open | 2025-02-10T15:05:58Z | 2025-02-11T02:30:46Z | https://github.com/yt-dlp/yt-dlp/issues/12329 | [
"site-request",
"geo-blocked",
"triage"
] | phanlinh86 | 4 |
apify/crawlee-python | web-scraping | 712 | How to define the executable_path? | For certain reasons, I would like the project to use a specific Chrome for Testing version: https://googlechromelabs.github.io/chrome-for-testing/
| closed | 2024-11-20T02:58:02Z | 2024-11-20T13:02:47Z | https://github.com/apify/crawlee-python/issues/712 | [
"t-tooling"
] | QThans | 1 |
ijl/orjson | numpy | 82 | Integer overflows | There are two unchecked integer operation that can lead to an overflow. For one of them there is an input that triggers the overflow.
Overflow 1
```
thread '<unnamed>' panicked at 'attempt to add with overflow', serde_orjson-1.0.51/src/de.rs:787:38
```
This can be triggered with the code:
`orjson.loads("[65.356933999999967]")`
In `f64_from_parts()`:
`product_middle1=17707409016718886656`
`product_middle2=1020491046953003386`
and their sum exceeds 2^64.
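The overflow can be verified arithmetically from the reported intermediates (a quick check using Python's arbitrary-precision integers, not the Rust code):

```python
# The two u64 intermediates reported above:
product_middle1 = 17707409016718886656
product_middle2 = 1020491046953003386
U64_MAX = 2**64 - 1

total = product_middle1 + product_middle2  # exact; Python ints don't overflow
assert total > U64_MAX  # so the unchecked u64 addition in f64_from_parts wraps
```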
Overflow 2
```
thread '<unnamed>' panicked at 'attempt to subtract with overflow', serde_orjson-1.0.51/src/de.rs:815:33
``` | closed | 2020-04-30T03:08:30Z | 2020-04-30T12:27:20Z | https://github.com/ijl/orjson/issues/82 | [] | opsengine | 1 |
napari/napari | numpy | 7,595 | Use clear and consistent symbols for displaying keybindings | Currently in the interface we are not consistent in how we display keybindings to the user, and this can be confusing, as pointed out recently by @willingc in community meeting. For example, in the layer controls buttons, we use round brackets with the keybinding inside:

or sometimes we have a keybinding in brackets but then an alternative keybinding below it:

However in the status bar we use angled brackets to display the keybindings:

I think I've also commonly seen square brackets in other software.
We should use a consistent style for displaying keybindings across all user interface elements. I would vote for anything but round brackets; I don't think round brackets are very common for this use case.
| open | 2025-02-11T02:20:42Z | 2025-02-15T16:20:33Z | https://github.com/napari/napari/issues/7595 | [
"design",
"enhancement",
"UI/UX"
] | DragaDoncila | 6 |
huggingface/transformers | nlp | 36,865 | Multiple processor classes have input side-effects | Multiple processor classes mutate their `text` input when it's a list.
Example:
https://github.com/huggingface/transformers/blob/42c489f2ae738a3b690bb90aab274f02ff024795/src/transformers/models/qwen2_5_vl/processing_qwen2_5_vl.py#L156C21-L156C25
This results in unwanted downstream behaviour. For example, see [this comment](https://github.com/huggingface/trl/pull/3072#issuecomment-2741246702).
This behaviour shouldn't be handled by either TRL or vLLM, in my opinion.
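In the abstract, the pitfall and the side-effect-free alternative look like this (a toy illustration with a made-up placeholder token, not the actual Qwen2.5-VL processor code):

```python
def process(text):
    """In-place style: rewrites list elements, mutating the caller's list."""
    if isinstance(text, list):
        for i, t in enumerate(text):
            text[i] = t.replace("<image>", "<|placeholder|>")
    return text

def process_safe(text):
    """Side-effect-free style: build a new list instead of rewriting in place."""
    if isinstance(text, list):
        text = [t.replace("<image>", "<|placeholder|>") for t in text]
    return text

prompts = ["describe <image> please"]
process_safe(prompts)
assert prompts == ["describe <image> please"]          # caller untouched
process(prompts)
assert prompts == ["describe <|placeholder|> please"]  # caller mutated
```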
### Who can help?
@ArthurZucker @qubvel
### Reproduction
The case of Qwen2.5-VL can be tested [here](https://gist.github.com/nph4rd/f003323ac4c8940f779f44a24b815ff7).
### Expected behavior
Ideally the function should have no input side-effects. | open | 2025-03-20T17:54:12Z | 2025-03-20T17:58:53Z | https://github.com/huggingface/transformers/issues/36865 | [
"bug"
] | nph4rd | 1 |
keras-rl/keras-rl | tensorflow | 374 | C:\Python\Python37\lib\site-packages\keras_rl-0.4.2-py3.7.egg\rl\agents\dqn.py in __init__(self, model, policy, test_policy, enable_double_dqn, enable_dueling_network, dueling_type, *args, **kwargs) | C:\Python\Python37\lib\site-packages\keras_rl-0.4.2-py3.7.egg\rl\agents\dqn.py in __init__(self, model, policy, test_policy, enable_double_dqn, enable_dueling_network, dueling_type, *args, **kwargs)
106
107 # Validate (important) input.
--> 108 if hasattr(model.output, '__len__') and len(model.output) > 1:
109 raise ValueError('Model "{}" has more than one output. DQN expects a model that has a single output.'.format(model))
110 if model.output._keras_shape != (None, self.nb_actions):
C:\Python\Python37\lib\site-packages\tensorflow\python\keras\engine\keras_tensor.py in __len__(self)
238
239 def __len__(self):
--> 240 raise TypeError('Keras symbolic inputs/outputs do not '
241 'implement `__len__`. You may be '
242 'trying to pass Keras symbolic inputs/outputs '
TypeError: Keras symbolic inputs/outputs do not implement `__len__`. You may be trying to pass Keras symbolic inputs/outputs to a TF API that does not register dispatching, preventing Keras from automatically converting the API call to a lambda layer in the Functional Model. This error will also get raised if you try asserting a symbolic input/output directly.
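The root cause is that a TF2 `KerasTensor` defines `__len__` (so the `hasattr` check in dqn.py passes) but raises `TypeError` when it is actually called. A minimal sketch of a guard that sidesteps this (a hypothetical patch idea, not an official keras-rl fix):

```python
class FakeKerasTensor:
    """Stand-in for a TF2 KerasTensor: defines __len__ but raises, as above."""
    def __len__(self):
        raise TypeError("Keras symbolic inputs/outputs do not implement `__len__`.")

def count_outputs(output):
    # Only a real Python list of outputs is safe to len(); a single symbolic
    # tensor passes hasattr(output, '__len__') yet raises when len() is called.
    return len(output) if isinstance(output, list) else 1

assert count_outputs(FakeKerasTensor()) == 1
assert count_outputs([object(), object()]) == 2
```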
please help me .....
_Originally posted by @Shravansuthar211 in https://github.com/keras-rl/keras-rl/issues/348#issuecomment-777403033_ | closed | 2021-02-11T12:04:56Z | 2021-09-24T18:28:51Z | https://github.com/keras-rl/keras-rl/issues/374 | [
"wontfix"
] | shravansuthar210 | 4 |
ShishirPatil/gorilla | api | 100 | [bug] Hosted Gorilla misinterpreted the requirement | **Describe the bug**
Gorilla misinterpreted the requirement as a video classifier when I clearly gave instructions that it is not a video classification app but a 1-1 video calling app.
**To Reproduce**
Use the following prompt:
prompt = "I want to build a one-to-one video calling application for web and pwa. This is NOT a classification app. Please provide code snippets"
print(get_gorilla_response(prompt, model="gorilla-7b-hf-v0"))
**Screenshots**

| closed | 2023-08-14T19:32:48Z | 2023-08-27T08:19:43Z | https://github.com/ShishirPatil/gorilla/issues/100 | [
"hosted-gorilla"
] | ravindrakr | 2 |
pyg-team/pytorch_geometric | pytorch | 8,872 | torch_geometric.transforms.RandomLinkSplit is not interoperable with torch_geometric.loader.DataLoader | ### 🐛 Describe the bug
I am trying to build a GAT model with PyG and PyTorch Lightning. The problem I am trying to solve is a link prediction task and to that end, I need to split my edges into a train and val set. Since we have an independent held-out test graph, we don't need to get that with RandomLinkSplit. After my preprocessing, my graph object looks as follows:
`Data(x=[507, 4], edge_index=[2, 39607], edge_label=[39607, 1])`
Now, when I perform the train and val splitting, the two resulting graphs look as follows:
train: `Data(x=[507, 4], edge_index=[2, 33668], edge_label=[16834, 1], edge_label_index=[2, 16834])`
val: `Data(x=[507, 4], edge_index=[2, 33668], edge_label=[5940, 1], edge_label_index=[2, 5940])`
My model is as follows:
```python
import torch
from torch_geometric.nn import GATv2Conv, Linear, HGTConv, to_hetero
from torch_geometric.loader import DataLoader
import lightning as pl
import torch.nn.functional as F
import torch_geometric.transforms as T
class HGSLNetDataModule(pl.LightningDataModule):
def __init__(self, graph, batch_size=1):
super().__init__()
self.graph = graph
self.batch_size = batch_size
def setup(self, stage=None):
self.train_graph, self.val_graph, _ = self.link_split_transform()
def train_dataloader(self):
return DataLoader(self.train_graph, batch_size=self.batch_size, shuffle=True)
def val_dataloader(self):
return DataLoader(self.val_graph, batch_size=self.batch_size, shuffle=False)
def link_split_transform(self):
transform = T.RandomLinkSplit(
num_val=0.15,
num_test=0,
is_undirected=True,
add_negative_train_samples=False
)
return transform(self.graph)
class GATEncoder(torch.nn.Module):
def __init__(self, num_layers, hidden_channels, out_channels, dropout=0.5):
super().__init__()
self.convs = torch.nn.ModuleList()
for _ in range(num_layers):
self.convs.append(GATv2Conv(-1, hidden_channels, heads=1, dropout=dropout))
print('break')
self.conv_out = GATv2Conv(hidden_channels, out_channels)
def forward(self, x, edge_index):
for conv in self.convs:
x = conv(x, edge_index).relu()
z = self.conv_out(x, edge_index)
return z
class GATDecoder(torch.nn.Module):
def __init__(self, num_layers, hidden_channels):
super().__init__()
self.lin_in = Linear(-1, hidden_channels)
self.lins_hidden = torch.nn.ModuleList()
for _ in range(num_layers):
self.lins_hidden.append(Linear(hidden_channels, hidden_channels))
self.lin_out = Linear(hidden_channels, 1)
def forward(self, z, edge_index):
z = torch.cat([z[edge_index[0]], z[edge_index[1]]], dim=-1)
z = self.lin_in(z).relu()
for lin in self.lins_hidden:
z = lin(z).relu()
z = self.lin_out(z)
return z.view(-1)
class HGSLNet(pl.LightningModule):
def __init__(self, num_layers, hidden_channels):
super().__init__()
self.encoder = GATEncoder(num_layers, hidden_channels, hidden_channels)
self.decoder = GATDecoder(num_layers, hidden_channels)
def forward(self, x, edge_index):
z = self.encoder(x, edge_index)
logit = self.decoder(z, edge_index).reshape(-1, 1)
proba = torch.sigmoid(logit)
y = torch.where(proba > 0.5, torch.tensor(1), torch.tensor(0)).long()
return logit, proba, y
def training_step(self, batch, batch_idx):
x, edge_index, y = batch.x, batch.edge_index, batch.edge_label
y = y.float()
logit, _, _ = self(x, edge_index)
loss = F.binary_cross_entropy_with_logits(logit, y)
self.log('train_loss', loss)
print(f"Training loss: {loss}")
return loss
def validation_step(self, batch, batch_idx):
x, edge_index, y = batch.x, batch.edge_index, batch.edge_label
# get size of edge_label
edge_label_size = batch.edge_label.size(0)
logit, _, _ = self(x, edge_index)
y = y.float()
loss = F.binary_cross_entropy_with_logits(logit, y)
self.log('val_loss', loss)
print(f"Validation loss: {loss}")
return loss
# def validation_epoch_end(self, outputs):
# avg_loss = torch.stack([x['val_loss'] for x in outputs]).mean()
# self.log('avg_val_loss', avg_loss)
def test_step(self, batch, batch_idx):
x, edge_index, y = batch.x, batch.edge_index, batch.y
y_hat, _, _ = self(x, edge_index)
loss = F.binary_cross_entropy_with_logits(y_hat, y)
self.log('test_loss', loss)
return loss
def configure_optimizers(self):
optimizer = torch.optim.Adam(self.parameters(), lr=0.001)
return optimizer
```
Then, I proceed to construct my datamodule as follows:
```python
from lightning import Trainer
from torch_geometric.data.lightning import LightningLinkData
from src.preprocessing import CellLineGraphData, MultiOmicsLoader
from src.model import HGSLNet, HGSLNetDataModule
def homotrain(graph):
datamodule = HGSLNetDataModule(graph)
# Initialize your model
model = HGSLNet(num_layers=4, hidden_channels=128)
# Initialize the trainer
trainer = Trainer(max_epochs=100, accelerator='cpu', enable_progress_bar=False)
# Train the model
trainer.fit(model, datamodule)
```
Running this leads to the following error:
```
GPU available: True (mps), used: False
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
/Users/aaronw/miniconda3/envs/sl-prediction/lib/python3.8/site-packages/lightning/pytorch/trainer/setup.py:187: GPU available but not used. You can set it by doing `Trainer(accelerator='gpu')`.
/Users/aaronw/miniconda3/envs/sl-prediction/lib/python3.8/site-packages/lightning/pytorch/trainer/connectors/logger_connector/logger_connector.py:67: Starting from v1.9.0, `tensorboardX` has been removed as a dependency of the `lightning.pytorch` package, due to potential conflicts with other packages in the ML ecosystem. For this reason, `logger=True` will use `CSVLogger` as the default logger, unless the `tensorboard` or `tensorboardX` packages are found. Please `pip install lightning[extra]` or one of them to enable TensorBoard support by default
/Users/aaronw/miniconda3/envs/sl-prediction/lib/python3.8/site-packages/lightning/pytorch/utilities/model_summary/model_summary.py:452: A layer with UninitializedParameter was found. Thus, the total number of parameters detected may be inaccurate.
| Name | Type | Params
---------------------------------------
0 | encoder | GATEncoder | 35.3 K
1 | decoder | GATDecoder | 66.3 K
---------------------------------------
101 K Trainable params
0 Non-trainable params
101 K Total params
0.407 Total estimated model params size (MB)
/Users/aaronw/miniconda3/envs/sl-prediction/lib/python3.8/site-packages/lightning/pytorch/trainer/connectors/data_connector.py:441: The 'val_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` to `num_workers=7` in the `DataLoader` to improve performance.
Traceback (most recent call last):
File "/Users/aaronw/Desktop/PhD/Research/QMUL/Research/synthetic-lethality-prediction/synthetic-lethality-prediction/src/main.py", line 42, in <module>
main()
File "/Users/aaronw/Desktop/PhD/Research/QMUL/Research/synthetic-lethality-prediction/synthetic-lethality-prediction/src/main.py", line 37, in main
homotrain(graph)
File "/Users/aaronw/Desktop/PhD/Research/QMUL/Research/synthetic-lethality-prediction/synthetic-lethality-prediction/src/main.py", line 20, in homotrain
trainer.fit(model, datamodule)
File "/Users/aaronw/miniconda3/envs/sl-prediction/lib/python3.8/site-packages/lightning/pytorch/trainer/trainer.py", line 544, in fit
call._call_and_handle_interrupt(
File "/Users/aaronw/miniconda3/envs/sl-prediction/lib/python3.8/site-packages/lightning/pytorch/trainer/call.py", line 44, in _call_and_handle_interrupt
return trainer_fn(*args, **kwargs)
File "/Users/aaronw/miniconda3/envs/sl-prediction/lib/python3.8/site-packages/lightning/pytorch/trainer/trainer.py", line 580, in _fit_impl
self._run(model, ckpt_path=ckpt_path)
File "/Users/aaronw/miniconda3/envs/sl-prediction/lib/python3.8/site-packages/lightning/pytorch/trainer/trainer.py", line 989, in _run
results = self._run_stage()
File "/Users/aaronw/miniconda3/envs/sl-prediction/lib/python3.8/site-packages/lightning/pytorch/trainer/trainer.py", line 1033, in _run_stage
self._run_sanity_check()
File "/Users/aaronw/miniconda3/envs/sl-prediction/lib/python3.8/site-packages/lightning/pytorch/trainer/trainer.py", line 1062, in _run_sanity_check
val_loop.run()
File "/Users/aaronw/miniconda3/envs/sl-prediction/lib/python3.8/site-packages/lightning/pytorch/loops/utilities.py", line 182, in _decorator
return loop_run(self, *args, **kwargs)
File "/Users/aaronw/miniconda3/envs/sl-prediction/lib/python3.8/site-packages/lightning/pytorch/loops/evaluation_loop.py", line 127, in run
batch, batch_idx, dataloader_idx = next(data_fetcher)
File "/Users/aaronw/miniconda3/envs/sl-prediction/lib/python3.8/site-packages/lightning/pytorch/loops/fetchers.py", line 127, in __next__
batch = super().__next__()
File "/Users/aaronw/miniconda3/envs/sl-prediction/lib/python3.8/site-packages/lightning/pytorch/loops/fetchers.py", line 56, in __next__
batch = next(self.iterator)
File "/Users/aaronw/miniconda3/envs/sl-prediction/lib/python3.8/site-packages/lightning/pytorch/utilities/combined_loader.py", line 326, in __next__
out = next(self._iterator)
File "/Users/aaronw/miniconda3/envs/sl-prediction/lib/python3.8/site-packages/lightning/pytorch/utilities/combined_loader.py", line 132, in __next__
out = next(self.iterators[0])
File "/Users/aaronw/miniconda3/envs/sl-prediction/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 633, in __next__
data = self._next_data()
File "/Users/aaronw/miniconda3/envs/sl-prediction/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 677, in _next_data
data = self._dataset_fetcher.fetch(index) # may raise StopIteration
File "/Users/aaronw/miniconda3/envs/sl-prediction/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 53, in fetch
data = self.dataset[possibly_batched_index]
File "/Users/aaronw/miniconda3/envs/sl-prediction/lib/python3.8/site-packages/torch_geometric/data/data.py", line 457, in __getitem__
return self._store[key]
File "/Users/aaronw/miniconda3/envs/sl-prediction/lib/python3.8/site-packages/torch_geometric/data/storage.py", line 104, in __getitem__
return self._mapping[key]
KeyError: 0
```
It suggests that somehow PyG cannot index over my graph, i.e. the Data object from torch_geometric. I reached out to the folks over at PyTorch Lightning, and Lightning developer Justin Goheen was kind enough to step through my code with me. We traced the problem back to torch_geometric.loader.DataLoader: if we feed it any of our {train, val}_graphs after RandomLinkSplit, it returns the exact error seen above. Note that we get the same error if we use the standard torch DataLoader from torch.utils.data.DataLoader.
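For intuition only (a toy stand-in, not the actual PyG classes): a DataLoader indexes its *dataset* with integers (`dataset[0]`, `dataset[1]`, ...), so handing it a single mapping-style graph object instead of a collection of graphs raises exactly this kind of `KeyError`:

```python
class ToyData:
    """Hypothetical stand-in for a single graph object whose fields
    live in an internal string-keyed mapping."""
    def __init__(self, **fields):
        self._mapping = dict(fields)

    def __getitem__(self, key):
        return self._mapping[key]  # raises KeyError for integer keys

graph = ToyData(x=[1.0, 2.0], edge_index=[(0, 1)])
assert graph["x"] == [1.0, 2.0]   # string lookup works

try:
    graph[0]                      # what a loader does to its dataset
except KeyError as e:
    print("KeyError:", e)         # KeyError: 0 -- same shape as the traceback

# Wrapping the single graph in a list gives the loader something
# integer-indexable (illustrative only; not necessarily the right fix here):
dataset = [graph]
assert dataset[0] is graph
```

Whether wrapping the split outputs in a list (or a proper dataset) is the right fix is an assumption on my part, but it matches the integer-indexing failure shown in the traceback.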
### Versions
Collecting environment information...
PyTorch version: 2.0.1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 14.1.1 (arm64)
GCC version: Could not collect
Clang version: 15.0.0 (clang-1500.0.40.1)
CMake version: Could not collect
Libc version: N/A
Python version: 3.8.17 | packaged by conda-forge | (default, Jun 16 2023, 07:11:32) [Clang 14.0.6 ] (64-bit runtime)
Python platform: macOS-14.1.1-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M2
Versions of relevant libraries:
[pip3] numpy==1.24.4
[pip3] torch==2.0.1
[pip3] torch-geometric==2.3.1
[pip3] torchmetrics==1.3.0.post0
[conda] numpy 1.24.4 pypi_0 pypi
[conda] torch 2.0.1 pypi_0 pypi
[conda] torch-geometric 2.3.1 pypi_0 pypi
[conda] torchmetrics 1.3.0.post0 pypi_0 pypi | closed | 2024-02-06T15:22:47Z | 2024-02-10T18:53:27Z | https://github.com/pyg-team/pytorch_geometric/issues/8872 | [
"bug"
] | aaronwtr | 10 |
lexiforest/curl_cffi | web-scraping | 439 | Safari fingerprint for MacOS v18_0 not matching the fingerprint for v17_0 impersonate | **The question**
The Safari v17_0 impersonation fingerprint observed on https://tls.browserleaks.com/json is:
```
{
"user_agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/18.1 Safari/605.1.15",
"ja3_hash": "773906b0efdefa24a7f2b8eb6985bf37",
"ja3_text": "771,4865-4866-4867-49196-49195-52393-49200-49199-52392-49162-49161-49172-49171-157-156-53-47-49160-49170-10,0-23-65281-10-11-16-5-13-18-51-45-43-27-21,29-23-24-25,0",
"ja3n_hash": "44f7ed5185d22c92b96da72dbe68d307",
"ja3n_text": "771,4865-4866-4867-49196-49195-52393-49200-49199-52392-49162-49161-49172-49171-157-156-53-47-49160-49170-10,0-5-10-11-13-16-18-21-23-27-43-45-51-65281,29-23-24-25,0",
"ja4": "t13d2014h2_a09f3c656075_14788d8d241b",
"ja4_r": "t13d2014h2_000a,002f,0035,009c,009d,1301,1302,1303,c008,c009,c00a,c012,c013,c014,c02b,c02c,c02f,c030,cca8,cca9_0005,000a,000b,000d,0012,0015,0017,001b,002b,002d,0033,ff01_0403,0804,0401,0503,0203,0805,0805,0501,0806,0601,0201",
"ja4_o": "t13d2014h2_de3eb69493ac_65135c5c1a6b",
"ja4_ro": "t13d2014h2_1301,1302,1303,c02c,c02b,cca9,c030,c02f,cca8,c00a,c009,c014,c013,009d,009c,0035,002f,c008,c012,000a_0000,0017,ff01,000a,000b,0010,0005,000d,0012,0033,002d,002b,001b,0015_0403,0804,0401,0503,0203,0805,0805,0501,0806,0601,0201",
"akamai_hash": "959a7e813b79b909a1a0b00a38e8bba3",
"akamai_text": "2:0;4:4194304;3:100|10485760|0|m,s,p,a"
}
```
Whereas the Safari v18_0 fingerprint on a Mac laptop, observed on https://tls.browserleaks.com/json, is:
```
{
"user_agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/18.1 Safari/605.1.15",
"ja3_hash": "78d450322efcbf3f5d2d92f9d7769566",
"ja3_text": "771,49199-49171-49191-49200-49172-49192-156-47-60-157-53-61-65-132-49195-49324-49326-49161-49187-49196-49325-49327-49162-49188-158-51-103-159-57-107-69-136-4865-4866-255,0-10-11-13-23-43-51,23-29-24-256-257-258,0",
"ja3n_hash": "78d450322efcbf3f5d2d92f9d7769566",
"ja3n_text": "771,49199-49171-49191-49200-49172-49192-156-47-60-157-53-61-65-132-49195-49324-49326-49161-49187-49196-49325-49327-49162-49188-158-51-103-159-57-107-69-136-4865-4866-255,0-10-11-13-23-43-51,23-29-24-256-257-258,0",
"ja4": "t13d350700_98f6b5ae8975_e74d7d97755c",
"ja4_r": "t13d350700_002f,0033,0035,0039,003c,003d,0041,0045,0067,006b,0084,0088,009c,009d,009e,009f,00ff,1301,1302,c009,c00a,c013,c014,c023,c024,c027,c028,c02b,c02c,c02f,c030,c0ac,c0ad,c0ae,c0af_000a,000b,000d,0017,002b,0033_0401,0804,0403,0501,0805,0503,0601,0806,0603",
"ja4_o": "t13d350700_f6fb28e5a594_070f1dc89d25",
"ja4_ro": "t13d350700_c02f,c013,c027,c030,c014,c028,009c,002f,003c,009d,0035,003d,0041,0084,c02b,c0ac,c0ae,c009,c023,c02c,c0ad,c0af,c00a,c024,009e,0033,0067,009f,0039,006b,0045,0088,1301,1302,00ff_0000,000a,000b,000d,0017,002b,0033_0401,0804,0403,0501,0805,0503,0601,0806,0603",
"akamai_hash": "",
"akamai_text": ""
}
```
Neither the `ja3n` nor the `ja3` fingerprints match. So it would not be possible to use the `safari17_0` impersonation with Safari 18_0 and above on a Mac.
As for iOS, I see that Safari is on v17_6_1 and it still matches the fingerprint of `safari17_2_ios`. The fingerprint might change for v18_0 on iOS. Will keep you posted.
Is a new impersonation target in the works for Safari? How should we handle this?
| closed | 2024-11-20T20:43:33Z | 2024-12-02T08:20:47Z | https://github.com/lexiforest/curl_cffi/issues/439 | [
"question"
] | charliedelta02 | 6 |
arogozhnikov/einops | numpy | 21 | Preparing an initial detailed guide for pytorch+einops | subj. For now concentrating on a single framework. | closed | 2018-11-30T03:09:57Z | 2018-12-01T01:17:03Z | https://github.com/arogozhnikov/einops/issues/21 | [] | arogozhnikov | 1 |
profusion/sgqlc | graphql | 53 | When mutation takes a list as an argument - passing in a list of the type does not work | I have an input type that takes a list as an argument. I haven't seen an example of this use case so I just guessed that I could pass in a python list of the right type. But I get the following error:
`AttributeError: 'MyInput' object has no attribute 'items'`
And, in fact, it is expecting a dict. Here's an edited version of the code I use to run the mutation:
```python
list_arg = [
    gql.MyInput({"key1": "val1"}),
    gql.MyInput({"key1": "val2"}),
]
op.my_mutation(my_list=list_arg)
```
I'm assuming that passing a simple list into the argument is not the right way to go about it, but I'm not sure how to construct the list otherwise.
Thoughts? | closed | 2019-07-18T21:13:22Z | 2019-09-20T12:09:07Z | https://github.com/profusion/sgqlc/issues/53 | [
"enhancement",
"good first issue"
] | kshehadeh | 6 |
fbdesignpro/sweetviz | data-visualization | 167 | The data cannot be output in the original order | The data cannot be output in its original order; it is forcibly sorted by count, from largest to smallest.
for example:

I want the data to be sorted in label order (00, 01, 02, 03). | open | 2024-01-18T04:51:45Z | 2024-02-17T03:10:56Z | https://github.com/fbdesignpro/sweetviz/issues/167 | [
"feature request"
] | Tangdanxu | 1 |
microsoft/nni | tensorflow | 5,514 | Using latency metric (nn-meter) with NAS | Hi, it seems like the [ProxylessNAS example](https://github.com/microsoft/nni/tree/master/examples/nas/oneshot/proxylessnas) is not supported by NNI anymore because of a completely different backend (instead of Retiarii).
Please, do you have an updated example that works with the newer version?
Or is there a way to use a latency metric for a specific device with ProxylessNAS, like you did with nn-meter before?
Thanks. | open | 2023-04-10T08:38:41Z | 2023-04-17T14:25:54Z | https://github.com/microsoft/nni/issues/5514 | [] | singagan | 5 |
lexiforest/curl_cffi | web-scraping | 116 | CurlOpt.IGNORE_CONTENT_LENGTH | I cannot get the IGNORE_CONTENT_LENGTH option to work for a server that delivers a wrong chunked content header. Do you have an example?
I have tried something like this:
```python
session = Session(impersonate="chrome110")
session.curl.setopt(curl_cffi.CurlOpt.IGNORE_CONTENT_LENGTH, 1)
res = session.get(url)
```
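As background on the int-vs-long question (purely illustrative): C `int` and `long` are distinct types whose widths are platform-dependent, which is why libcurl documents long-typed options; the platform sizes can be checked from Python with the standard library:

```python
import ctypes

# Widths in bytes on the current platform; on typical 64-bit Unix
# toolchains c_int is 4 bytes while c_long is 8.
print("int: ", ctypes.sizeof(ctypes.c_int))
print("long:", ctypes.sizeof(ctypes.c_long))
```

Whether curl_cffi's `setopt` actually truncates the value to a C int is exactly the question below; this only shows that the two C types can differ in width.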
Could it be that the IGNORE_CONTENT_LENGTH option [requires a long type](https://curl.se/libcurl/c/CURLOPT_IGNORE_CONTENT_LENGTH.html)? When I look at the `setopt` function, it doesn't consider long types as an input option. The value of 1 seems to be converted to an int. Could this be why it is not working? | closed | 2023-08-30T11:49:15Z | 2023-11-25T10:40:30Z | https://github.com/lexiforest/curl_cffi/issues/116 | [] | iiLaurens | 2 |
mwouts/itables | jupyter | 55 | Should we offer support for Ag-Grid? | [Ag-Grid](https://www.ag-grid.com/javascript-data-grid/getting-started/) is a renowned JS table library.
It has a mixed community/enterprise licensing - see [here](https://www.ag-grid.com/angular-data-grid/licensing/) for a description of the features available in either version.
Is there any user interested in seeing support for ag-grid in this project?
(An experimental version is available in [this branch](https://github.com/mwouts/itables/tree/support_ag_grid))
If so, I'll need help to setup the default table size and style (cc @btribonde) | closed | 2022-01-19T23:25:32Z | 2022-11-13T22:21:27Z | https://github.com/mwouts/itables/issues/55 | [] | mwouts | 4 |
home-assistant/core | asyncio | 141,124 | ZHA doesn't connect to Sonoff Zigbee dongle. Baud rate problem? | ### The problem
Can't get ZHA to connect to usb-Itead_Sonoff_Zigbee_3.0_USB_Dongle_Plus_V2_9c414fdf5bd9ee118970b24c37b89984-if00-port0. I noticed in the logs it is trying to connect at 460800 baud. I don't know if that has anything to do with it.
### What version of Home Assistant Core has the issue?
core-2025.3.4
### What was the last working version of Home Assistant Core?
_No response_
### What type of installation are you running?
Home Assistant OS
### Integration causing the issue
ZHA
### Link to integration documentation on our website
https://www.home-assistant.io/integrations/zha/
### Diagnostics information
[zha.log](https://github.com/user-attachments/files/19404059/zha.log)
### Example YAML snippet
```yaml
I don't see any YAML for ZHA. I can tell you the port:
/dev/serial/by-id/usb-Itead_Sonoff_Zigbee_3.0_USB_Dongle_Plus_V2_9c414fdf5bd9ee118970b24c37b89984-if00-port0 -> ../../ttyUSB0
radio type: EZSP
baud rate: 115200
Flow Control: Software
```
### Anything in the logs that might be useful for us?
```txt
2025-03-22 11:25:04.105 DEBUG (MainThread) [homeassistant.components.zha] Failed to set up ZHA
Traceback (most recent call last):
File "/usr/local/lib/python3.13/site-packages/bellows/uart.py", line 109, in reset
return await self._reset_future
^^^^^^^^^^^^^^^^^^^^^^^^
asyncio.exceptions.CancelledError
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/src/homeassistant/homeassistant/components/zha/__init__.py", line 156, in async_setup_entry
await zha_gateway.async_initialize()
File "/usr/local/lib/python3.13/site-packages/zha/application/gateway.py", line 271, in async_initialize
await self._async_initialize()
File "/usr/local/lib/python3.13/site-packages/zha/application/gateway.py", line 254, in _async_initialize
await self.application_controller.startup(auto_form=True)
File "/usr/local/lib/python3.13/site-packages/zigpy/application.py", line 220, in startup
await self.connect()
File "/usr/local/lib/python3.13/site-packages/bellows/zigbee/application.py", line 153, in connect
await self._ezsp.connect(use_thread=self.config[CONF_USE_THREAD])
File "/usr/local/lib/python3.13/site-packages/bellows/ezsp/__init__.py", line 138, in connect
await self.startup_reset()
File "/usr/local/lib/python3.13/site-packages/bellows/ezsp/__init__.py", line 127, in startup_reset
await self.reset()
File "/usr/local/lib/python3.13/site-packages/bellows/ezsp/__init__.py", line 146, in reset
await self._gw.reset()
File "/usr/local/lib/python3.13/site-packages/bellows/uart.py", line 108, in reset
async with asyncio_timeout(RESET_TIMEOUT):
~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.13/asyncio/timeouts.py", line 116, in __aexit__
raise TimeoutError from exc_val
TimeoutError
Later in the logs:
2025-03-22 11:25:04.106 DEBUG (MainThread) [zigpy.serial] Opening a serial connection to '/dev/serial/by-id/usb-Itead_Sonoff_Zigbee_3.0_USB_Dongle_Plus_V2_9c414fdf5bd9ee118970b24c37b89984-if00-port0' (baudrate=115200, xonxoff=False, rtscts=False)
2025-03-22 11:25:06.115 DEBUG (MainThread) [zigpy.serial] Opening a serial connection to '/dev/serial/by-id/usb-Itead_Sonoff_Zigbee_3.0_USB_Dongle_Plus_V2_9c414fdf5bd9ee118970b24c37b89984-if00-port0' (baudrate=460800, xonxoff=False, rtscts=False)
```
### Additional information
_No response_ | open | 2025-03-22T16:25:57Z | 2025-03-23T15:39:35Z | https://github.com/home-assistant/core/issues/141124 | [
"integration: zha"
] | ncp1113 | 4 |
tensorly/tensorly | numpy | 416 | Implementation of tensor PLS | #### Is your feature request related to a problem? Please describe.
This is just a note that @cyrillustan and @JacksonLChin will be working on an implementation of tensor PLS over the summer.
#### Describe the solution you'd like
We'll open a PR for this with some testing. One question is whether this should go into the regression module, or somewhere else. For example, scikit-learn has a module called cross_decomposition...
#### Describe alternatives you've considered
There don't seem to be alternative implementations in Python.
#### Additional context
We plan to use this, and then extend it to using PLS with coupled tensors as X. The latter method won't be as generally useful, so we will leave that out of the PR.
| closed | 2022-06-23T20:55:20Z | 2022-09-21T18:21:56Z | https://github.com/tensorly/tensorly/issues/416 | [
"new feature"
] | aarmey | 2 |
biosustain/potion | sqlalchemy | 46 | Add $contains, $icontains filters for fields.String() | closed | 2015-10-28T10:56:53Z | 2015-11-03T14:08:50Z | https://github.com/biosustain/potion/issues/46 | [] | lyschoening | 0 | |
Lightning-AI/pytorch-lightning | data-science | 20,496 | PaliGemma fine-tuning - error with distributed training | ### Bug description
I'm having an issue while adapting the fine-tuning logic from this HF tutorial:
https://github.com/NielsRogge/Transformers-Tutorials/blob/master/PaliGemma/Fine_tune_PaliGemma_for_image_%3EJSON.ipynb
I don't seem to be able to run distributed training on multiple GPUs: when I run the training script with a config that includes GPUs 0 and 1, I get a Segmentation fault (core dumped) error. I am also using QLoRA.
Please advise.
### What version are you seeing the problem on?
master
### How to reproduce the bug
```python
# Create trainer
trainer = L.Trainer(
accelerator="gpu",
devices=[0,1], # Use devices from config
strategy="ddp",
...
)
```
### Error messages and logs
```
`low_cpu_mem_usage` was None, now default to True since model is quantized.
Downloading shards: 100%|█████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 9709.04it/s]
Loading checkpoint shards: 100%|████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:02<00:00, 1.17s/it]
Initializing distributed: GLOBAL_RANK: 1, MEMBER: 2/2
----------------------------------------------------------------------------------------------------
distributed_backend=nccl
All distributed processes registered. Starting with 2 processes
----------------------------------------------------------------------------------------------------
Segmentation fault (core dumped)
```
### Environment
pyproject.toml:
transformers = "^4.44.2"
torch = "^2.4.1"
lightning = "^2.4.0"
peft = "^0.13.2"
accelerate = "^1.1.1"
bitsandbytes = "^0.45.0"
### More info
_No response_ | open | 2024-12-13T13:18:09Z | 2025-02-22T11:15:18Z | https://github.com/Lightning-AI/pytorch-lightning/issues/20496 | [
"bug",
"needs triage",
"ver: 2.4.x"
] | vdrvar | 1 |
jupyter-incubator/sparkmagic | jupyter | 620 | Release 0.15 | Looks like it's time for a new release.
@devstein want to try doing one? In theory it's documented in `RELEASING.md`. | closed | 2020-01-22T15:51:44Z | 2020-01-22T16:58:37Z | https://github.com/jupyter-incubator/sparkmagic/issues/620 | [] | itamarst | 1 |
plotly/plotly.py | plotly | 4,130 | lightposition for surface plots doesn't work | The parameter lightposition for surface plots expects an x, y, z vector. This vector is not explained in the documentation, does not make sense in any way, and playing with it only produces aggravation. It appears to be completely broken. This parameter should be defined in spherical coordinates and explained clearly in the documentation.
Not only does trying different values for x, y, and z not make sense; I also had a surface plot defined over x and y domains (my data coordinates) of 0 to 6. I then realized that the domain should be -3 to 3; when I made this change, the lighting changed radically, and I can't figure out how to get back the original lighting without reverting to the incorrect domains.
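A workaround sketch while the parameter remains Cartesian (this helper is hypothetical, not part of Plotly; the x/y/z keys are the documented `lightposition` fields): define the light in spherical coordinates and convert, so the light direction stays fixed regardless of the data domain.

```python
import math

def light_from_spherical(r, azimuth_deg, elevation_deg):
    """Convert (radius, azimuth, elevation) in degrees to the Cartesian
    x/y/z dict that the surface trace's lightposition expects."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    return {
        "x": r * math.cos(el) * math.cos(az),
        "y": r * math.cos(el) * math.sin(az),
        "z": r * math.sin(el),
    }

# A light directly overhead, independent of the surface's x/y domain:
print(light_from_spherical(100, 0, 90))  # z == 100, x and y ~ 0
```

For example, something like `fig.update_traces(lightposition=light_from_spherical(100, 45, 60))` on a surface trace (assumed usage) keeps the light where you put it.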
(1) lightposition needs to be fixed.
(2) lightposition needs to be in spherical coordinates.
(3) lightposition needs to be properly documented.
(4) lightposition needs to have at least an option which has a definition in "head" space so that the lighting doesn't change when merely the domain of the data is changed. | open | 2023-03-28T22:48:59Z | 2024-08-12T20:51:23Z | https://github.com/plotly/plotly.py/issues/4130 | [
"bug",
"P3"
] | oscarrutt | 0 |
MagicStack/asyncpg | asyncio | 537 | Trying to debug `asyncpg.exceptions._base.InterfaceError: cannot perform operation: another operation is in progress` | * **asyncpg version**: 0.20.1
* **PostgreSQL version**: 12
* **Do you use a PostgreSQL SaaS? If so, which? Can you reproduce
the issue with a local PostgreSQL install?**: no
* **Python version**: 3.7.5
* **Platform**: OS X
* **Do you use pgbouncer?**: no
* **Did you install asyncpg with pip?**: yes
* **If you built asyncpg locally, which version of Cython did you use?**: -
* **Can the issue be reproduced under both asyncio and
[uvloop](https://github.com/magicstack/uvloop)?**: yes
<!-- Enter your issue details below this comment. -->
Hi!
I'm trying to debug an issue that I have with errors like `asyncpg.exceptions._base.InterfaceError: cannot perform operation: another operation is in progress` from this line: https://github.com/MagicStack/asyncpg/blob/dcef29804095990f25178b642b920f3fbcb4ca22/asyncpg/protocol/protocol.pyx#L664
I am trying to understand where I could have gone wrong. So, I would like to ask the following preliminary question:
Suppose my code interacts with `asyncpg` only through a pool, and only using context managers:
```python
async with pool.acquire() as connection:
...
```
I also do not share the `connection` object around. On it, I use the methods `fetchrow`, `fetchval` and `fetch`. They are used sequentially from the same asyncio "thread" (for lack of a better word). That is to say, they are never called in parallel.
In this setup, I would think that the error `cannot perform operation: another operation is in progress` never shows up, but it does. Am I missing something?
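For intuition, the error can be modeled with a toy stand-in (purely illustrative, not asyncpg's implementation): a connection that refuses to start a second operation while one is still awaiting the network. Sequentially awaited calls never trip it; two in-flight calls on the same connection do (e.g. via `asyncio.gather`, or a connection object that escapes its `async with` block):

```python
import asyncio

class ToyConnection:
    """Toy stand-in for a connection that allows one operation at a time."""
    def __init__(self):
        self._busy = False

    async def fetchval(self, query):
        if self._busy:
            raise RuntimeError(
                "cannot perform operation: another operation is in progress")
        self._busy = True
        try:
            await asyncio.sleep(0)  # simulate the network round-trip
            return 42
        finally:
            self._busy = False

async def sequential(conn):
    # Awaiting each call before issuing the next never overlaps operations.
    return (await conn.fetchval("SELECT 1"), await conn.fetchval("SELECT 2"))

async def concurrent(conn):
    # Two in-flight calls on the same connection reproduce the error.
    return await asyncio.gather(conn.fetchval("SELECT 1"),
                                conn.fetchval("SELECT 2"))

print(asyncio.run(sequential(ToyConnection())))  # (42, 42)
try:
    asyncio.run(concurrent(ToyConnection()))
except RuntimeError as e:
    print(e)  # cannot perform operation: another operation is in progress
```

So if the error appears despite the usage described above, something is presumably still interleaving calls on one connection — a leaked reference, a background task, or a callback holding the connection after release (an assumption worth auditing, not a diagnosis).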
| open | 2020-03-02T11:11:24Z | 2020-05-19T09:33:15Z | https://github.com/MagicStack/asyncpg/issues/537 | [] | AndreaCensi | 7 |
tflearn/tflearn | tensorflow | 1,042 | ImportError: cannot import name config | Hi,
When I try to import tflearn I get the following error:
```
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
<ipython-input-31-1dcc0aeffa10> in <module>()
----> 1 import tflearn
/Applications/anaconda2/lib/python2.7/site-packages/tflearn/__init__.py in <module>()
2
3 # Config
----> 4 from . import config
5 from .config import is_training, get_training_mode, init_graph
6
ImportError: cannot import name config
```
Please help.
Thank you
PS: tensorflow is working fine | closed | 2018-04-25T18:40:55Z | 2018-04-26T14:02:11Z | https://github.com/tflearn/tflearn/issues/1042 | [] | apoorvapantoola | 1 |
open-mmlab/mmdetection | pytorch | 11,706 | How to use a pretrained weight file |
I am currently reproducing Mask R-CNN with the backbone changed to Swin-T, but I want to add some attention mechanisms to the FPN. In that case, how should I use the mask-rcnn_swin-t-p4-w7_fpn_amp-ms-crop-3x_coco.pth weight file? Could you tell me the specific usage? Thanks! | open | 2024-05-12T03:21:52Z | 2024-05-13T11:39:05Z | https://github.com/open-mmlab/mmdetection/issues/11706 | [
"reimplementation"
] | Leomin12138 | 1 |
hack4impact/flask-base | flask | 163 | Database CRUD/ dashboard? | Hi guys,
Thanks for putting together this project. I've extended the models, and I now need to be able to perform CRUD operations on the DB tables, kind of like in the Django admin section. Before I try to roll my own solution, what approach would be most efficient for this? I have read through http://hack4impact.github.io/flask-base/manage_commands/ and I see this is partially discussed in make_shell_context. Is there a plugin that you would suggest? I'd be interested in your thoughts.
Thanks,
KC | closed | 2018-04-25T17:12:32Z | 2018-04-26T21:52:45Z | https://github.com/hack4impact/flask-base/issues/163 | [] | kc1 | 3 |
feature-engine/feature_engine | scikit-learn | 371 | add inverse_transform functionality to sklearnTransformerWrapper | Please add inverse_transform functionality to the SklearnTransformerWrapper. | closed | 2022-02-07T21:54:49Z | 2022-03-26T08:02:34Z | https://github.com/feature-engine/feature_engine/issues/371 | [] | rajshree8217 | 2 |
pydantic/logfire | fastapi | 152 | Live tail does not stop during a network error | ### Description
The timer of the Live tail does not stop as it failed to export spans during a network error (maybe?).
Here is the traceback:

```shell
[WARNING 2024-05-07 09:38:15,509 _showwarnmsg:109] /usr/local/lib/python3.8/dist-packages/logfire/_internal/exporters/file.py:58: WritingFallbackWarning: Failed to export spans, writing to fallback file: /root/cnocr/.logfire/logfire_spans.bin
warnings.warn(
[ERROR 2024-05-07 09:38:15,524 _export_batch:369] Exception while exporting Span batch.
Traceback (most recent call last):
File "/usr/local/lib/python3.8/dist-packages/urllib3-1.26.14-py3.8.egg/urllib3/connectionpool.py", line 449, in _make_request
six.raise_from(e, None)
File "<string>", line 3, in raise_from
File "/usr/local/lib/python3.8/dist-packages/urllib3-1.26.14-py3.8.egg/urllib3/connectionpool.py", line 444, in _make_request
httplib_response = conn.getresponse()
File "/usr/lib/python3.8/http/client.py", line 1348, in getresponse
response.begin()
File "/usr/lib/python3.8/http/client.py", line 316, in begin
version, status, reason = self._read_status()
File "/usr/lib/python3.8/http/client.py", line 277, in _read_status
line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
File "/usr/lib/python3.8/socket.py", line 669, in readinto
return self._sock.recv_into(b)
File "/usr/lib/python3.8/ssl.py", line 1241, in recv_into
return self.read(nbytes, buffer)
File "/usr/lib/python3.8/ssl.py", line 1099, in read
return self._sslobj.read(len, buffer)
socket.timeout: The read operation timed out
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.8/dist-packages/requests-2.28.2-py3.8.egg/requests/adapters.py", line 489, in send
resp = conn.urlopen(
File "/usr/local/lib/python3.8/dist-packages/urllib3-1.26.14-py3.8.egg/urllib3/connectionpool.py", line 787, in urlopen
retries = retries.increment(
File "/usr/local/lib/python3.8/dist-packages/urllib3-1.26.14-py3.8.egg/urllib3/util/retry.py", line 550, in increment
raise six.reraise(type(error), error, _stacktrace)
File "/usr/local/lib/python3.8/dist-packages/urllib3-1.26.14-py3.8.egg/urllib3/packages/six.py", line 770, in reraise
raise value
File "/usr/local/lib/python3.8/dist-packages/urllib3-1.26.14-py3.8.egg/urllib3/connectionpool.py", line 703, in urlopen
httplib_response = self._make_request(
File "/usr/local/lib/python3.8/dist-packages/urllib3-1.26.14-py3.8.egg/urllib3/connectionpool.py", line 451, in _make_request
self._raise_timeout(err=e, url=url, timeout_value=read_timeout)
File "/usr/local/lib/python3.8/dist-packages/urllib3-1.26.14-py3.8.egg/urllib3/connectionpool.py", line 340, in _raise_timeout
raise ReadTimeoutError(
urllib3.exceptions.ReadTimeoutError: HTTPSConnectionPool(host='logfire-api.pydantic.dev', port=443): Read timed out. (read timeout=10)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.8/dist-packages/opentelemetry/sdk/trace/export/__init__.py", line 367, in _export_batch
self.span_exporter.export(self.spans_list[:idx]) # type: ignore
File "/usr/local/lib/python3.8/dist-packages/logfire/_internal/exporters/remove_pending.py", line 45, in export
return super().export(result)
File "/usr/local/lib/python3.8/dist-packages/logfire/_internal/exporters/wrapper.py", line 14, in export
return self.wrapped_exporter.export(spans)
File "/usr/local/lib/python3.8/dist-packages/logfire/_internal/exporters/fallback.py", line 20, in export
res = self.exporter.export(spans)
File "/usr/local/lib/python3.8/dist-packages/logfire/_internal/exporters/otlp.py", line 56, in export
return super().export(spans)
File "/usr/local/lib/python3.8/dist-packages/logfire/_internal/exporters/wrapper.py", line 14, in export
return self.wrapped_exporter.export(spans)
File "/usr/local/lib/python3.8/dist-packages/opentelemetry/exporter/otlp/proto/http/trace_exporter/__init__.py", line 145, in export
resp = self._export(serialized_data)
File "/usr/local/lib/python3.8/dist-packages/opentelemetry/exporter/otlp/proto/http/trace_exporter/__init__.py", line 114, in _export
return self._session.post(
File "/usr/local/lib/python3.8/dist-packages/requests-2.28.2-py3.8.egg/requests/sessions.py", line 635, in post
return self.request("POST", url, data=data, json=json, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/requests-2.28.2-py3.8.egg/requests/sessions.py", line 587, in request
resp = self.send(prep, **send_kwargs)
File "/usr/local/lib/python3.8/dist-packages/logfire/_internal/exporters/otlp.py", line 41, in send
return super().send(request, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/requests-2.28.2-py3.8.egg/requests/sessions.py", line 701, in send
r = adapter.send(request, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/requests-2.28.2-py3.8.egg/requests/adapters.py", line 578, in send
raise ReadTimeout(e, request=request)
requests.exceptions.ReadTimeout: HTTPSConnectionPool(host='logfire-api.pydantic.dev', port=443): Read timed out. (read timeout=10)
```
### Python, Logfire & OS Versions, related packages (not required)
```TOML
logfire="0.30.0"
platform="Linux-5.10.104-tegra-aarch64-with-glibc2.29"
python="3.8.10 (default, Nov 14 2022, 12:59:47)
[GCC 9.4.0]"
[related_packages]
requests="2.22.0"
requests="2.28.2"
pydantic="2.7.1"
fastapi="0.110.2"
protobuf="4.25.3"
rich="13.7.1"
tomli="2.0.1"
opentelemetry-api="1.24.0"
opentelemetry-exporter-otlp-proto-common="1.24.0"
opentelemetry-exporter-otlp-proto-http="1.24.0"
opentelemetry-instrumentation="0.45b0"
opentelemetry-instrumentation-asgi="0.45b0"
opentelemetry-instrumentation-fastapi="0.45b0"
opentelemetry-proto="1.24.0"
opentelemetry-sdk="1.24.0"
opentelemetry-semantic-conventions="0.45b0"
opentelemetry-util-http="0.45b0"
```
| closed | 2024-05-08T01:28:44Z | 2024-06-03T19:48:18Z | https://github.com/pydantic/logfire/issues/152 | [
"Platform Bug"
] | ShuminFu | 2 |
deepset-ai/haystack | nlp | 8,731 | HuggingFaceLocal ChatGenerator - support for Tool | closed | 2025-01-16T14:10:03Z | 2025-02-10T08:46:51Z | https://github.com/deepset-ai/haystack/issues/8731 | [
"P2"
] | anakin87 | 1 | |
PaddlePaddle/models | computer-vision | 4,719 | PaddleCV-video-ctcn hangs at Epoch 21, iter 1365 during training | Hi, while training CTCN I found that the program hangs at Epoch 21, iter 1365, while still using GPU and CPU resources.
Details are shown in the screenshots below.


I hope to get help solving this problem, thank you.
| open | 2020-06-24T05:21:56Z | 2024-02-26T05:11:12Z | https://github.com/PaddlePaddle/models/issues/4719 | [] | Fordacre | 7 |
keras-team/autokeras | tensorflow | 1,127 | IO API, multi-modal classification, predict method problem | ### Bug Description
IO API, multi-modal classification, predict method problem
### Bug Reproduction
https://github.com/datamllab/automl-in-action-notebooks/blob/master/3.4.2-Functional-API-Multi-Input.ipynb
### Setup Details
Include the details about the versions of:
- OS type and version:
- Python:
- autokeras: 1.0.2
- keras-tuner:
- scikit-learn:
- numpy:
- pandas:
- tensorflow: 2.1.0
### Additional context
| closed | 2020-05-09T04:01:36Z | 2020-05-25T00:47:13Z | https://github.com/keras-team/autokeras/issues/1127 | [
"bug report",
"pinned"
] | qingquansong | 0 |
deezer/spleeter | deep-learning | 130 | [Bug] Tuple formatting incorrectly included in output directory name | ## Description
I am using the [separator.py](https://github.com/deezer/spleeter/blob/master/spleeter/separator.py) file to include `spleeter` in my own Python development. The [separate_to_file](https://github.com/deezer/spleeter/blob/85ff00797f6c615c62885793923eca952e9e791f/spleeter/separator.py#L93) function is erroneously including parentheses in the name of the output directory. The user does not have a way to avoid this formatting.
Example:
> Input filename: `GilScottHeron_WeAlmostLostDetroit.mp3`
> Output directory name : `('GilScottHeron_WeAlmostLostDetroit', '.mp3')/`
## Step to reproduce
1. Installed using `pip`
2. Run as a Python script
3. Got no error. The output directory name is formatted as a tuple.
```python
from spleeter.spleeter.separator import Separator
separator = Separator('spleeter:2stems')
filein = 'GilScottHeron_WeAlmostLostDetroit.mp3'
fileout = './stems'
separator.separate_to_file(filein, fileout, codec='mp3')
```
## Output
Output directory name : `./stems/('GilScottHeron_WeAlmostLostDetroit', '.mp3')/`
Expected output: `./stems/GilScottHeron_WeAlmostLostDetroit/`
## Environment
| | |
| ----------------- | ------------------------------- |
| OS | MacOS |
| Installation type | `pip` |
| RAM available | 8 GB |
| Hardware spec | CPU: 3.2 GHz Intel Core i5, GPU: NVIDIA GeForce GT 755M 1 GB |
## Additional context
The reason for this bug is [line 124 in `separator.py`](https://github.com/deezer/spleeter/blob/85ff00797f6c615c62885793923eca952e9e791f/spleeter/separator.py#L124). There needs to be a `[0]` added after the output of `splitext` so that the directory name is created from a `string`, not a `tuple`. | closed | 2019-11-23T23:27:28Z | 2019-11-25T14:34:13Z | https://github.com/deezer/spleeter/issues/130 | [
"bug",
"next release"
] | johnwmillr | 2 |
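The `splitext` behavior behind the spleeter bug above can be sketched in isolation. `output_dir_name` is a hypothetical helper, not spleeter's actual function, but `os.path.splitext` really does return a `(root, ext)` tuple, which is why adding `[0]` fixes the folder name:

```python
import os.path

def output_dir_name(input_path: str) -> str:
    # os.path.splitext returns a (root, extension) tuple, e.g.
    # ('GilScottHeron_WeAlmostLostDetroit', '.mp3'). Formatting the tuple
    # straight into the directory name produces the buggy
    # "('name', '.ext')" folder; indexing [0] keeps only the root.
    filename = os.path.basename(input_path)
    root = os.path.splitext(filename)[0]
    return root

print(output_dir_name("GilScottHeron_WeAlmostLostDetroit.mp3"))
# -> GilScottHeron_WeAlmostLostDetroit
```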
tensorpack/tensorpack | tensorflow | 831 | Can we simplify the roialign to remove the avg pool part | The Faster R-CNN roi_align part crops at `2 * resolution`, then uses an `avg pool`:
https://github.com/tensorpack/tensorpack/blob/17c25692fcbcb6d3235cacc3b67c3d26bf716084/examples/FasterRCNN/model_box.py#L167-L171
but in my case the avg-pool step has to handle a feature map of size `N * proposal * C * (2*resolution) * (2*resolution)`, which causes OOM for my input size. Can we simplify this part so memory can be saved, e.g. by directly pooling to a `resolution * resolution` sized feature map? | closed | 2018-07-17T12:15:29Z | 2019-04-18T11:57:12Z | https://github.com/tensorpack/tensorpack/issues/831 | [
"examples"
] | twangnh | 6 |
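As a back-of-the-envelope check on the memory argument in the tensorpack question above: the buffer materialized before the 2x2 average pool holds 4x the elements of a direct `resolution x resolution` crop. The proposal and channel counts below are made up; only the ratio matters:

```python
def crop_elements(n_proposals, channels, resolution, pre_pool_factor=2):
    # Element count of the N * proposal * C * side * side buffer that
    # roi_align materializes before the average pool.
    side = pre_pool_factor * resolution
    return n_proposals * channels * side * side

# Hypothetical sizes: 1000 proposals, 256 channels, 14x14 target resolution.
with_avg_pool = crop_elements(1000, 256, 14)                   # 2x crop + avg pool
direct_crop = crop_elements(1000, 256, 14, pre_pool_factor=1)  # direct crop
print(with_avg_pool // direct_crop)  # -> 4
```

So cropping directly at the target resolution (skipping the 2x crop and the pool) would cut this intermediate buffer to a quarter of its size.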
oegedijk/explainerdashboard | plotly | 87 | How to create a "warning component"? | Hi Oege,
I want to add a "warning component" to my What-If-Tab which is e.g.
a) a red card displaying "Critical"
b) a normal card displaying "OK"
if the prediction is above / below a certain threshold (say 100).
How to do this?
I saw your answer to #85 but I'm not sure
- how to get the prediction itself (what is X_row?)
- where to put the threshold (probably the callback would return the value only and layout() would perform the comparison between predicted value & threshold and return the corresponding card?)
- where to put the class itself (is there a special place for custom components s.t. they are not overwritten when updating explainerdashboard? Maybe I'm also confused because of the aspect custom tab vs custom component)
- how to make sure I can connect it using IndexConnector
Anyway, here is what I would try (how to set the color?):
```
class WarningBox(ExplainerComponent):
    def __init__(self, explainer, title="Warning Box", name=None, hide_index=False, index=None):
        super().__init__(explainer)

    def layout(self):
        card = dbc.Card([
            dbc.CardHeader([html.Div([html.H3("Status"),
                                      html.H6("", style={'margin-top': '-6px'})])]),
            dbc.CardBody([html.Div(id='prediction-text' + self.name)])
        ], color= **???**, style={'margin-bottom': '-60px', 'display': 'flex'})
        return dbc.Container([
            dbc.Row([
                dbc.Col([
                    card
                ]),
            ]),
        ])

    def component_callbacks(self, app):
        @app.callback(
            [Output('prediction-text' + self.name, 'value'),
             Output('prediction-color' + self.name, 'value')],
            Input('prediction-index-' + self.name, 'value'))
        def update_predictions_table(index):
            if index is not None:
                pred = self.explainer.preds[explainer.get_idx(index)]
                if pred > 100:
                    return "Critical", "warning"
                else:
                    return "Ok", None
            raise PreventUpdate
``` | closed | 2021-02-17T18:25:26Z | 2021-02-23T17:20:28Z | https://github.com/oegedijk/explainerdashboard/issues/87 | [] | hkoppen | 2 |
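For the question above, the threshold comparison itself can be kept framework-free and reused inside the Dash callback. A minimal sketch, where the cutoff of 100 and the Bootstrap color names `'danger'`/`'light'` are assumptions, not part of explainerdashboard:

```python
def warning_status(pred: float, threshold: float = 100.0):
    """Map a prediction to a (label, bootstrap_color) pair for the card."""
    if pred > threshold:
        return "Critical", "danger"  # red card
    return "OK", "light"             # neutral card

print(warning_status(142.0))  # -> ('Critical', 'danger')
print(warning_status(37.5))   # -> ('OK', 'light')
```

The callback would then return these two values into the card's text and `color` properties, so the layout itself never needs to know about the threshold.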
jupyter/nbviewer | jupyter | 535 | Can't `invoke bower` | I'm trying to follow the [local installation](https://github.com/jupyter/nbviewer#local-installation) but when I run `invoke bower` I get
```
/bin/sh: /Users/sam/gitrepos/nbviewer/node_modules/.bin/bower: No such file or directory
```
Similar issue with `invoke less`.
I'm not familiar with `invoke` so I'm not sure how to debug.
| closed | 2015-11-11T06:40:06Z | 2015-11-12T06:33:26Z | https://github.com/jupyter/nbviewer/issues/535 | [] | lendle | 3 |
Textualize/rich | python | 3,249 | [BUG] significantly changes the text to be printed. [v13.7.0] | - [x] I've checked [docs](https://rich.readthedocs.io/en/latest/introduction.html) and [closed issues](https://github.com/Textualize/rich/issues?q=is%3Aissue+is%3Aclosed) for possible solutions.
- [x] I can't find my issue in the [FAQ](https://github.com/Textualize/rich/blob/master/FAQ.md).
**Describe the bug**
`python3 -c 'import rich; rich.get_console().print(open("man.1", "rt").read())'` And this is what I get:
```
MAN(1) General Commands Manual MAN(1)
NNAAMMEE
mmaann, aapprrooppooss, wwhhaattiiss – display online manual documentation pages
SSYYNNOOPPSSIISS
mmaann [--aaddhhoo] [--tt | --ww] [--MM _m_a_n_p_a_t_h] [--PP _p_a_g_e_r] [--SS _m_a_n_s_e_c_t]
[--mm _a_r_c_h[:_m_a_c_h_i_n_e]] [--pp [_e_p_r_t_v]] [_m_a_n_s_e_c_t] _p_a_g_e _._._.
mmaann --ff [--dd] [--MM _m_a_n_p_a_t_h] [--PP _p_a_g_e_r] [--SS _m_a_n_s_e_c_t] _k_e_y_w_o_r_d _._._.
wwhhaattiiss [--dd] [--ss _m_a_n_s_e_c_t] _k_e_y_w_o_r_d _._._.
mmaann --kk [--dd] [--MM _m_a_n_p_a_t_h] [--PP _p_a_g_e_r] [--SS _m_a_n_s_e_c_t] _k_e_y_w_o_r_d _._._.
aapprrooppooss [--dd] [--ss _m_a_n_s_e_c_t] _k_e_y_w_o_r_d _._._.
DDEESSCCRRIIPPTTIIOONN
The mmaann utility finds and displays online manual documentation pages. If
_m_a_n_s_e_c_t is provided, mmaann restricts the search to the specific section of
the manual.
```
Specifying `markup=False` to `console.print` does not make any difference.
The original document content looks like this:
```
MAN(1) General Commands Manual MAN(1)
NAME
man, apropos, whatis – display online manual documentation pages
SYNOPSIS
man [-adho] [-t | -w] [-M manpath] [-P pager] [-S mansect]
[-m arch[:machine]] [-p [eprtv]] [mansect] page ...
man -f [-d] [-M manpath] [-P pager] [-S mansect] keyword ...
whatis [-d] [-s mansect] keyword ...
man -k [-d] [-M manpath] [-P pager] [-S mansect] keyword ...
apropos [-d] [-s mansect] keyword ...
DESCRIPTION
The man utility finds and displays online manual documentation pages. If
mansect is provided, man restricts the search to the specific section of
the manual.
```
I got this original text by `man man > man.1`.
Mac OS 14.0. Found this issue in Terminal.

If you're using Rich in a terminal:
```
python3 -m rich.diagnose
╭───────────────────────── <class 'rich.console.Console'> ─────────────────────────╮
│ A high level console interface. │
│ │
│ ╭──────────────────────────────────────────────────────────────────────────────╮ │
│ │ <console width=202 ColorSystem.EIGHT_BIT> │ │
│ ╰──────────────────────────────────────────────────────────────────────────────╯ │
│ │
│ color_system = '256' │
│ encoding = 'utf-8' │
│ file = <_io.TextIOWrapper name='<stdout>' mode='w' encoding='utf-8'> │
│ height = 53 │
│ is_alt_screen = False │
│ is_dumb_terminal = False │
│ is_interactive = True │
│ is_jupyter = False │
│ is_terminal = True │
│ legacy_windows = False │
│ no_color = False │
│ options = ConsoleOptions( │
│ size=ConsoleDimensions(width=202, height=53), │
│ legacy_windows=False, │
│ min_width=1, │
│ max_width=202, │
│ is_terminal=True, │
│ encoding='utf-8', │
│ max_height=53, │
│ justify=None, │
│ overflow=None, │
│ no_wrap=False, │
│ highlight=None, │
│ markup=None, │
│ height=None │
│ ) │
│ quiet = False │
│ record = False │
│ safe_box = True │
│ size = ConsoleDimensions(width=202, height=53) │
│ soft_wrap = False │
│ stderr = False │
│ style = None │
│ tab_size = 8 │
│ width = 202 │
╰──────────────────────────────────────────────────────────────────────────────────╯
╭─── <class 'rich._windows.WindowsConsoleFeatures'> ────╮
│ Windows features available. │
│ │
│ ╭───────────────────────────────────────────────────╮ │
│ │ WindowsConsoleFeatures(vt=False, truecolor=False) │ │
│ ╰───────────────────────────────────────────────────╯ │
│ │
│ truecolor = False │
│ vt = False │
╰───────────────────────────────────────────────────────╯
╭──────── Environment Variables ────────╮
│ { │
│ 'TERM': 'xterm-256color', │
│ 'COLORTERM': None, │
│ 'CLICOLOR': None, │
│ 'NO_COLOR': None, │
│ 'TERM_PROGRAM': 'Apple_Terminal', │
│ 'COLUMNS': None, │
│ 'LINES': None, │
│ 'JUPYTER_COLUMNS': None, │
│ 'JUPYTER_LINES': None, │
│ 'JPY_PARENT_PID': None, │
│ 'VSCODE_VERBOSE_LOGGING': None │
│ } │
╰───────────────────────────────────────╯
platform="Darwin"
```
| open | 2024-01-07T04:44:24Z | 2024-01-08T18:13:33Z | https://github.com/Textualize/rich/issues/3249 | [
"Needs triage"
] | cdluminate | 4 |
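A plausible reading of the rich report above: the doubled letters (`NNAAMMEE`) and `_x` pairs are nroff overstrike sequences (`X\bX` for bold, `_\bX` for underline) that a terminal collapses via backspaces but that survive literally once the backspaces are discarded. A sketch of stripping the overstrikes before printing, as a regex stand-in for piping through `col -b`:

```python
import re

def strip_overstrikes(text: str) -> str:
    # "X\bX" renders bold and "_\bX" renders underline on a teletype;
    # deleting each character-plus-backspace pair leaves the plain text.
    return re.sub(".\x08", "", text, flags=re.DOTALL)

bolded = "N\bNA\bAM\bME\bE"   # how `man` writes "NAME" in bold
underlined = "_\bm_\ba_\bn"   # how `man` underlines "man"
print(strip_overstrikes(bolded))      # -> NAME
print(strip_overstrikes(underlined))  # -> man
```

Running the `man.1` file through this (or through `col -b` before saving it) should make Rich print the expected plain text.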
tflearn/tflearn | data-science | 358 | 'tflearn_logs' dir permission issue. | I get '/tmp/tflearn_logs/ permission denied'.
My traceback is attached.
I think we can handle this by setting permissions for all users when tflearn creates the tflearn_logs dir.
> Run id: Q17392
> Log directory: /tmp/tflearn_logs/
> Traceback (most recent call last):
> File "learner.py", line 82, in <module>
> model.fit(trainX, trainY, validation_set=0.1, n_epoch = N_EPOCH, show_metric=True, batch_size=256, snapshot_epoch=True, snapshot_step=500)
> File "/usr/local/lib/python2.7/dist-packages/tflearn/models/dnn.py", line 213, in fit
> callbacks=callbacks)
> File "/usr/local/lib/python2.7/dist-packages/tflearn/helpers/trainer.py", line 231, in fit
> self.tensorboard_dir + run_id, self.session.graph_def)
> File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/summary_io.py", line 102, in **init**
> gfile.MakeDirs(self._logdir)
> File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/platform/gfile.py", line 295, in MakeDirs
> os.makedirs(path, mode)
> File "/usr/lib/python2.7/os.py", line 157, in makedirs
> mkdir(name, mode)
> OSError: [Errno 13] Permission denied: '/tmp/tflearn_logs/Q17392'
| open | 2016-09-27T18:23:33Z | 2016-12-15T03:54:12Z | https://github.com/tflearn/tflearn/issues/358 | [] | changukshin | 2 |
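A defensive sketch of the fix suggested in the tflearn issue above: create the shared log root with permissive bits, and fall back to a per-user directory when the shared one is not writable. The paths and fallback policy here are assumptions, not tflearn's actual behavior:

```python
import os
import tempfile

def ensure_log_dir(preferred="/tmp/tflearn_logs"):
    """Return a writable log directory, preferring the shared location."""
    try:
        # mode=0o777 asks for world-writable (subject to the umask), so
        # other users can later create their own run subdirectories.
        os.makedirs(preferred, mode=0o777, exist_ok=True)
        if os.access(preferred, os.W_OK):
            return preferred
    except OSError:
        pass  # owned by another user, read-only fs, etc.
    # Fall back to a directory the current user definitely owns.
    fallback = os.path.join(tempfile.gettempdir(),
                            "tflearn_logs_%d" % os.getuid())
    os.makedirs(fallback, exist_ok=True)
    return fallback

print(os.access(ensure_log_dir(), os.W_OK))  # -> True
```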
indico/indico | flask | 6,093 | Update install docs with libs (e.g. libpango) required by weasyprint | The system libraries required by weasyprint are not installed by default on a typical server.
We need to mention in the docs (ideally after updating them to get rid of ancient stuff like centos7) what people have to install.
Alternatively (or additionally?) let's catch the failing import and maybe disable template creation if it cannot be imported due to missing libraries?
```
>>> import weasyprint
-----
WeasyPrint could not import some external libraries. Please carefully follow the installation steps before reporting an issue:
https://doc.courtbouillon.org/weasyprint/stable/first_steps.html#installation
https://doc.courtbouillon.org/weasyprint/stable/first_steps.html#troubleshooting
-----
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/opt/indico/.venv/lib/python3.12/site-packages/weasyprint/__init__.py", line 387, in <module>
from .css import preprocess_stylesheet # noqa isort:skip
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/indico/.venv/lib/python3.12/site-packages/weasyprint/css/__init__.py", line 25, in <module>
from . import computed_values, counters, media_queries
File "/opt/indico/.venv/lib/python3.12/site-packages/weasyprint/css/computed_values.py", line 11, in <module>
from ..text.ffi import ffi, pango, units_to_double
File "/opt/indico/.venv/lib/python3.12/site-packages/weasyprint/text/ffi.py", line 431, in <module>
pango = _dlopen(
^^^^^^^^
File "/opt/indico/.venv/lib/python3.12/site-packages/weasyprint/text/ffi.py", line 417, in _dlopen
return ffi.dlopen(names[0]) # pragma: no cover
^^^^^^^^^^^^^^^^^^^^
File "/opt/indico/.venv/lib/python3.12/site-packages/cffi/api.py", line 150, in dlopen
lib, function_cache = _make_ffi_library(self, name, flags)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/indico/.venv/lib/python3.12/site-packages/cffi/api.py", line 832, in _make_ffi_library
backendlib = _load_backend_lib(backend, libname, flags)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/indico/.venv/lib/python3.12/site-packages/cffi/api.py", line 827, in _load_backend_lib
raise OSError(msg)
OSError: cannot load library 'pango-1.0-0': pango-1.0-0: cannot open shared object file: No such file or directory. Additionally, ctypes.util.find_library() did not manage to locate a library called 'pango-1.0-0'
>>>
``` | closed | 2023-12-12T20:11:26Z | 2024-03-28T12:15:44Z | https://github.com/indico/indico/issues/6093 | [] | ThiefMaster | 1 |
HIT-SCIR/ltp | nlp | 714 | Why does Chinese sentence splitting also split on "."? | My input is: "1.联通华盛电商分公司办公室内的灯火彻夜不熄,这已经成为常态。" This should normally be one sentence, but the result returned to me is ["1.", "联通华盛电商分公司办公室内的灯火彻夜不熄,这已经成为常态。"], i.e. two sentences. How can I solve this problem? | closed | 2024-09-27T09:41:03Z | 2024-10-15T09:53:32Z | https://github.com/HIT-SCIR/ltp/issues/714 | [] | Alex-DeepL | 8 |
mwaskom/seaborn | data-visualization | 3,191 | Only the upper limit is plotted when `so.Bar` handles log-transformed axes | I find that when setting the y-axis to log-transformed coordinates, it seems that only the upper limit is plotted.
```
(
so.Plot(x=[1,2,3], y=[10, 100, 1000])
.add(so.Bar())
) # original
```

```
(
so.Plot(x=[1,2,3], y=[10, 100, 1000])
.add(so.Bar())
.scale(y='log')
) # log-transformed
```

```
fig = (
so.Plot(x=[1,2,3], y=[10, 100, 1000])
.add(so.Bar())
.plot()._figure
)
ax = fig.axes[0]
ax.set_yscale('log')
fig # expected
```

| closed | 2022-12-19T05:44:02Z | 2022-12-19T05:48:31Z | https://github.com/mwaskom/seaborn/issues/3191 | [] | liuzj039 | 1 |
jina-ai/clip-as-service | pytorch | 93 | REDUCE_MEAN is calculated without "input_mask"? | Maybe computing the mean over only the non-padding tokens would be more reasonable?
i.e., only compute the mean over the first 6 tokens in the following example:
tokens: [CLS] 我 还 可 以 [SEP]
input_ids: 101 2769 6820 1377 809 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
input_mask: 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
| closed | 2018-12-05T13:12:17Z | 2018-12-06T09:59:47Z | https://github.com/jina-ai/clip-as-service/issues/93 | [
"bug"
] | nrty | 3 |
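The masked average proposed in the issue above can be sketched in plain Python: sum only the positions where `input_mask` is 1 and divide by their count, so padding tokens no longer dilute the sentence embedding. The 2-dimensional vectors below are made-up toy values:

```python
def masked_reduce_mean(seq_embeddings, input_mask):
    """Average token vectors, counting only non-padding positions."""
    hidden = len(seq_embeddings[0])
    total = [0.0] * hidden
    n = 0
    for vec, keep in zip(seq_embeddings, input_mask):
        if keep:
            n += 1
            for i, v in enumerate(vec):
                total[i] += v
    return [t / n for t in total]

emb = [[1.0, 2.0],   # [CLS]
       [3.0, 4.0],   # token
       [0.0, 0.0],   # padding
       [0.0, 0.0]]   # padding
print(masked_reduce_mean(emb, [1, 1, 0, 0]))  # -> [2.0, 3.0]
```

An unmasked mean over the same example would divide by 4 and yield [1.0, 1.5], which shows how padding drags the embedding toward zero.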
chaoss/augur | data-visualization | 2,524 | Issue data cntrb_id null for closed issues | Cntrb_id is documented as "The ID of the person that closed the issue", yet even for closed issues I am seeing nulls. | open | 2023-09-08T20:11:34Z | 2023-10-05T17:37:53Z | https://github.com/chaoss/augur/issues/2524 | [
"bug"
] | cdolfi | 1 |
Nekmo/amazon-dash | dash | 144 | Integration with home assistant: 1st press doesn't send event to home assistant. 2nd press ok. | ### What is the purpose of your *issue*?
- [ ] Bug report (encountered problems with amazon-dash)
- [ ] Feature request (request for a new functionality)
- [x] Question
- [ ] Other
* amazon-dash version: Amazon-dash v1.3.3
* Python version: Python 3.6.4
* Pip & Setuptools version: pip 19.2.3, setuptools 33.1.1
* Operating System:
- [x] The `pip install` or `setup install` command has been completed without errors
- [x] The `python -m amazon_dash.install` command has been completed without errors
- [x] The `amazon-dash discovery` command works without errors
- [x] I have created/edited the configuration file
- [x] *Amazon-dash service* or `amazon-dash --debug run` works
#### Description
The tool works great, but sometimes the first button press doesn't deliver the event to Home Assistant. If I press the button again, it works.
If I wait 1 hour and press the button, it works again on the first press. But after 24 hours, a single press again fails to deliver the trigger event to Home Assistant.
I use authentication with "Long-Lived Access Tokens"
#### What I Did
I added a Telegram notification inside amazon-dash.yml, so I know that when I press the button (even the first time) the amazon-dash tool itself works and I receive the Telegram notification. The problem is therefore only that the event is not received by Home Assistant (or not sent by amazon-dash).
Maybe there is some timeout during the authentication process that blocks the first event.
Maybe I can try a workaround by sending 2 events?
Is it possible to send more than one event to Home Assistant? I was thinking:
- send 1st event
- wait
- send 2nd event (same as the 1st one)
Am I the only one to have this problem?
I use the dash button as a doorbell, so I want to be sure the trigger event reaches Home Assistant (camera recording, etc.).
amazon-dash.yml

```yaml
B4:7C:XX:XX:XX:XX:
  name: Dash button
  homeassistant: http://192.168.1.XXX:8123
  access_token: XXXXXXXXXXXXXXX
  event: dash_pressed
```
automation.yaml

```yaml
- alias: per evitare più richieste registrazione campanello
  trigger:
    - platform: event
      event_type: dash_pressed
  action:
    - service: timer.start
      entity_id: timer.campanello

- alias: Campanello premendo dash button marca DASH
  trigger:
    - platform: state
      entity_id: timer.campanello
      to: active
  action:
    - service: script.turn_on
      entity_id: script.buzzer
    - service: notify.campanello
      data:
        message: "Hanno suonato il campanello \n Data e ora: {{ states.sensor.date_time.state }}"
    - service: input_boolean.turn_on
      data:
        entity_id: input_boolean.registrocampanello
    - service: shell_command.attesaregistrazionecampanello
    - service: camera.record
      data:
        entity_id: camera.ingresso
        duration: 5
        lookback: 5
        filename: /home/homeassistant/.homeassistant/videoTelecamere/campanello.mp4
    - wait_template: '{{ is_state(''input_boolean.registrocampanello'', ''off'') }}'
    - service: notify.campanello
      data:
        title: Send a video
        message: Video Telecamera Muro
        data:
          video:
            - file: /home/homeassistant/.homeassistant/videoTelecamere/campanello.mp4
```
| closed | 2019-09-07T15:16:18Z | 2021-04-21T12:11:12Z | https://github.com/Nekmo/amazon-dash/issues/144 | [] | Metus88 | 5 |
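A sketch of the "send 2 events" workaround discussed in the amazon-dash issue above, using Home Assistant's REST endpoint `POST /api/events/<event_type>` with a long-lived access token. The host, token, and 1-second pause are placeholders, and `fire_twice` is a hypothetical helper, not part of amazon-dash:

```python
import json
import time
import urllib.request

def build_event_request(host, token, event):
    """Build (but do not send) a POST to Home Assistant's event API."""
    return urllib.request.Request(
        "%s/api/events/%s" % (host, event),
        data=json.dumps({}).encode(),
        headers={"Authorization": "Bearer %s" % token,
                 "Content-Type": "application/json"},
        method="POST",
    )

def fire_twice(host, token, event, pause=1.0, opener=urllib.request.urlopen):
    # Send the same event twice with a short pause, so that even if the
    # first delivery is lost, the second one triggers the automation.
    for attempt in range(2):
        opener(build_event_request(host, token, event))
        if attempt == 0:
            time.sleep(pause)

req = build_event_request("http://192.168.1.10:8123", "TOKEN", "dash_pressed")
print(req.full_url)  # -> http://192.168.1.10:8123/api/events/dash_pressed
```

Note that the doorbell automation above would then need debouncing (which the `timer.campanello` guard already provides) so the duplicate event does not ring twice.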
mwaskom/seaborn | data-visualization | 3,474 | I am trying to align labels in a histogram using seaborn, but it is not happening; the same graph in Excel is properly aligned with labels. How can I do the same using seaborn? |  | closed | 2023-09-15T17:02:20Z | 2023-09-15T21:41:06Z | https://github.com/mwaskom/seaborn/issues/3474 | [] | Utsav-2301 | 1 |
jupyter/nbviewer | jupyter | 490 | Broken link: notebook tutorials / "JavaScript Notebook Extensions" | In: http://nbviewer.ipython.org/github/ipython/ipython/blob/3.x/examples/Notebook/Index.ipynb
the link: [JavaScript Notebook Extensions](http://nbviewer.ipython.org/github/ipython/ipython/blob/3.x/examples/Notebook/JavaScript%20Notebook%20Extensions.ipynb)
Is broken.
| closed | 2015-08-20T14:28:56Z | 2015-08-20T15:05:38Z | https://github.com/jupyter/nbviewer/issues/490 | [
"tag:Upstream"
] | coej | 3 |
pytorch/pytorch | python | 149,635 | avoid guarding on max() unnecessarily | Here's a repro. Theoretically, the code below should not require a recompile: we are conditionally padding, producing an output tensor of shape max(input_size, 16). Instead, though, we specialize on the pad value and produce separate graphs for the `size_16` and `size_greater_than_16` cases
```
import torch
@torch.compile(backend="eager")
def f(x):
    padded_size = max(x.shape[0], 16)
    padded_tensor = torch.ones(padded_size, *x.shape[1:])
    return padded_tensor + x.sum()
x = torch.arange(15)
torch._dynamo.mark_dynamic(x, 0)
out = f(x)
x = torch.arange(17)
torch._dynamo.mark_dynamic(x, 0)
out = f(x)
```
cc @chauhang @penguinwu @ezyang @bobrenjc93 @zou3519 | open | 2025-03-20T17:08:26Z | 2025-03-24T09:52:13Z | https://github.com/pytorch/pytorch/issues/149635 | [
"triaged",
"oncall: pt2",
"module: dynamic shapes",
"vllm-compile"
] | bdhirsh | 5 |
supabase/supabase-py | flask | 472 | Invalid signature trying to access buckets using python api (self-hosted supabase) | Hi everyone,
the issue applies to supabase self-hosted.
When trying to access the storage, e.g. by listing my buckets, I'm running into the following error:
> StorageException: {'statusCode': 400, 'error': 'invalid signature', 'message': 'invalid signature'}
Code to reproduce:
```
import os
from supabase import create_client, Client
from dotenv import load_dotenv
load_dotenv()
url: str = os.environ.get("SUPABASE_PUBLIC_URL")
key: str = os.environ.get("ANON_KEY")
supabase: Client = create_client(url, key)
session, token = supabase.auth.sign_in_with_password({"email": "myemail@googlemail.com", "password": "mypassword"})
data = supabase.auth.get_user()
postgrest_client = supabase.postgrest
postgrest_client.auth(supabase.auth.get_session().access_token)
res = supabase.storage.get_bucket("mybucket")
```
Host OS: Windows 11
supabase-py: 1.0.3
Further Information:
- Interacting with the client works fine, apart from accessing the storage.
- Accessing the storage from my react frontend is also working.
Many thanks in advance for your help!
| closed | 2023-06-19T21:08:44Z | 2023-09-08T17:56:05Z | https://github.com/supabase/supabase-py/issues/472 | [
"question",
"storage"
] | mdanner93 | 3 |
jonra1993/fastapi-alembic-sqlmodel-async | sqlalchemy | 2 | field required (type=value_error.missing) | After executing `docker compose up --build`, this error keeps repeating forever.
How can I fix it, please?
```
nginx | 2022/07/25 09:21:32 [notice] 1#1: start worker process 32
traefik-proxy | time="2022-07-25T09:21:33Z" level=info msg="Configuration loaded from file: /traefik.yml"
fastapi_server | Traceback (most recent call last):
fastapi_server | File "/usr/local/bin/alembic", line 8, in <module>
fastapi_server | sys.exit(main())
fastapi_server | File "/usr/local/lib/python3.9/site-packages/alembic/config.py", line 588, in main
fastapi_server | CommandLine(prog=prog).main(argv=argv)
fastapi_server | File "/usr/local/lib/python3.9/site-packages/alembic/config.py", line 582, in main
fastapi_server | self.run_cmd(cfg, options)
fastapi_server | File "/usr/local/lib/python3.9/site-packages/alembic/config.py", line 559, in run_cmd
fastapi_server | fn(
fastapi_server | File "/usr/local/lib/python3.9/site-packages/alembic/command.py", line 320, in upgrade
fastapi_server | script.run_env()
fastapi_server | File "/usr/local/lib/python3.9/site-packages/alembic/script/base.py", line 563, in run_env
fastapi_server | util.load_python_file(self.dir, "env.py")
fastapi_server | File "/usr/local/lib/python3.9/site-packages/alembic/util/pyfiles.py", line 92, in load_python_file
fastapi_server | module = load_module_py(module_id, path)
fastapi_server | File "/usr/local/lib/python3.9/site-packages/alembic/util/pyfiles.py", line 108, in load_module_py
fastapi_server | spec.loader.exec_module(module) # type: ignore
fastapi_server | File "<frozen importlib._bootstrap_external>", line 850, in exec_module
fastapi_server | File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
fastapi_server | File "alembic/env.py", line 7, in <module>
fastapi_server | from app.core.config import Settings
fastapi_server | File "/code/app/core/config.py", line 51, in <module>
fastapi_server | settings = Settings()
fastapi_server | File "pydantic/env_settings.py", line 38, in pydantic.env_settings.BaseSettings.__init__
fastapi_server | File "pydantic/main.py", line 331, in pydantic.main.BaseModel.__init__
fastapi_server | pydantic.error_wrappers.ValidationError: 1 validation error for Settings
fastapi_server | REDIS_PORT
fastapi_server | field required (type=value_error.missing)
fastapi_server exited with code 1
```
| closed | 2022-07-25T09:27:47Z | 2022-07-26T14:00:52Z | https://github.com/jonra1993/fastapi-alembic-sqlmodel-async/issues/2 | [] | mbnoimi | 3 |
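The traceback above just means `REDIS_PORT` is missing from the environment/`.env` file that the pydantic `Settings` model reads. Declaring a default on the model (e.g. `REDIS_PORT: int = 6379`) avoids the crash; a stdlib-only sketch of the same fallback idea, where 6379 is an assumed default:

```python
import os

def redis_port(default: int = 6379) -> int:
    # Mirrors what a `REDIS_PORT: int = 6379` default on the pydantic
    # Settings model would do: use the env var when present, otherwise
    # fall back instead of raising "field required (type=value_error.missing)".
    raw = os.environ.get("REDIS_PORT")
    return int(raw) if raw is not None else default

os.environ.pop("REDIS_PORT", None)
print(redis_port())        # -> 6379
os.environ["REDIS_PORT"] = "6380"
print(redis_port())        # -> 6380
```

Equivalently, adding `REDIS_PORT=6379` to the `.env` file referenced by docker compose resolves the validation error without code changes.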
mouredev/Hello-Python | fastapi | 307 | [What to do when an online gambling site blacklists you and refuses withdrawal citing insufficient turnover] 卫【kks06666】 |
Being cheated on an online gambling site that refuses withdrawal on the grounds of insufficient turnover very likely involves fraud; you should take the following measures:
First, report to the police immediately. Give the local public security authorities a detailed account of what happened and provide evidence such as transaction records, chat logs, and screenshots of the fraudulent website or app so the police can investigate quickly. Online gambling is illegal, and serious cases constitute criminal offenses, so reporting to the police is an important way to protect your legitimate rights.
Second, seek legal advice and assistance. While reporting to the police, you can consult a professional lawyer for more detailed and targeted legal advice. A lawyer can help analyze whether the contract terms are legal, guide you in collecting favorable evidence, and advise on how to recover losses through legal channels.
In addition, try communicating with the platform. While keeping your evidence safe, you can calmly talk to the platform's customer service to learn the specific reason the withdrawal is restricted and see whether there is a misunderstanding or a solvable problem. If the platform actively handles and resolves the issue, that is the ideal outcome.
At the same time, preserve all evidence related to the withdrawal, including but not limited to transaction records, bank statements, platform screenshots, and chat logs. This evidence will play a key role in subsequent legal proceedings and help protect your legitimate rights.
Finally, stay alert to the risks and strengthen your defenses. Online gambling carries very high risks and participation should be avoided. When choosing an online platform, pick legal, legitimate, and reputable ones to avoid unnecessary losses, and keep an eye on your account security so abnormal activity can be detected and handled in time.
Note that online gambling is illegal; it can lead not only to financial losses but also to legal liability. Therefore, firmly resist online gambling and comply with laws and regulations. | closed | 2025-01-04T04:42:42Z | 2025-01-10T21:06:29Z | https://github.com/mouredev/Hello-Python/issues/307 | [] | kks02222 | 0 |
microsoft/qlib | deep-learning | 1,494 | Timeout downloading 1min interval data | I followed the data preparation step from the README:
```
python -m qlib.run.get_data qlib_data --target_dir ~/.qlib/qlib_data/cn_data_1min --region cn --interval 1min
```
But I cannot download the data. How can I add config parameters (such as a timeout or a larger buffer) to fix this issue?
```requests.exceptions.ConnectionError: HTTPSConnectionPool(host='qlibpublic.blob.core.windows.net', port=443): Read timed out.``` | closed | 2023-04-17T12:55:09Z | 2023-10-24T03:03:20Z | https://github.com/microsoft/qlib/issues/1494 | [
"bug"
] | kyhoolee | 1 |
KaiyangZhou/deep-person-reid | computer-vision | 552 | Inconsistent inference speed in different runs using osnet_ain_x1_0 | Thanks for your great work, it helped me a lot!
I'm facing a problem when passing a batch of ~32 person crops to a FeatureExtractor object with the osnet_ain_x1_0 model. It takes too long to finish (about 0.1 seconds in the worst case and 0.05 in the best case), in addition to the inference speed being inconsistent across runs.
python version = 3.11.4
pytorch version = 2.0.1
cuda version = release 11.2, V11.2.67
GPU = GTX 1060
What could be the problem?
Thanks in advance. | open | 2023-08-10T21:14:36Z | 2023-08-10T21:14:36Z | https://github.com/KaiyangZhou/deep-person-reid/issues/552 | [] | Ahmad-Hammoudeh | 0 |
iMerica/dj-rest-auth | rest-api | 34 | Logging out without passing a token results in a "Successfully logged out." message | When you hit the log out endpoint, but don't send a token, you get a "Successfully logged out." message back. This is confusing to me, since no user is actually logged out. Is this a bug, or was this done on purpose? | open | 2020-04-08T00:50:40Z | 2020-04-12T21:04:19Z | https://github.com/iMerica/dj-rest-auth/issues/34 | [
"known-issue-from-original-project"
] | mrshwah | 2 |
cvat-ai/cvat | computer-vision | 9,130 | Add ability to add custom options for Data Export. | ### Actions before raising this issue
- [x] I searched the existing issues and did not find anything similar.
- [x] I read/searched [the docs](https://docs.cvat.ai/docs/)
### Is your feature request related to a problem? Please describe.
_No response_
### Describe the solution you'd like
I was following [this](https://docs.cvat.ai/docs/contributing/new-annotation-format/) guide and added some custom export methods to my self-hosted CVAT instance.
Is it somehow possible to add more input fields to this Exporter selection screen?

I would like to export my annotations and dataset to a separate MinIO (S3) instance that hosts all our labeled datasets at a central location, where they can be accessed batch by batch without copying them to the PC a neural net is trained on.
### Describe alternatives you've considered
My current workaround is to have a script running on a different PC that gets all the annotations and images, processes them (or takes the predefined export zip file) to build the dataset, and uploads it to the central MinIO server.
### Additional context
_No response_ | closed | 2025-02-20T17:49:38Z | 2025-02-24T12:41:08Z | https://github.com/cvat-ai/cvat/issues/9130 | [
"enhancement"
] | TheKorbi | 4 |
benbusby/whoogle-search | flask | 692 | [FEATURE] Multiple Social Media Alternatives | Hi,
Is it possible to implement an array of multiple social media alternative sites? With that feature, we don't rely on one instance, and every time a user searches, it randomly selects an alternative social media site. | closed | 2022-03-22T06:19:26Z | 2022-03-25T00:17:18Z | https://github.com/benbusby/whoogle-search/issues/692 | [
"enhancement"
] | 0xspade | 5 |
waditu/tushare | pandas | 1,121 | pro_bar raises an error when fetching data | When running the following code:
data = tushare.pro_bar(xxxxx)
an error occurs, showing:
UnicodeEncodeError: 'ascii' codec can't encode characters in position 0-23: ordinal not in range(128) | closed | 2019-08-16T02:54:53Z | 2019-08-27T06:03:14Z | https://github.com/waditu/tushare/issues/1121 | [] | GoRockets | 3 |
jupyterhub/jupyterhub-deploy-docker | jupyter | 60 | 500 : Internal Server Error --help | *Edited*
Maybe there is a problem with the proxy: the "PORTS" entry of the user's container is "8888/tcp"?
### 1
The logs of the jupyterhub container are as follows:
```
[E 2018-01-12 21:46:49.248 JupyterHub log:124] 500 GET /hub/user/jane/ (jane@::ffff:159.226.12.83) 4444.00ms
```
### 2
The user's container is:
```
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d8d4514fb49d jupyter/scipy-notebook:8f56e3c47fec "tini -- start-not..." 12 hours ago Up 3 seconds 8888/tcp jupyter-jane
```
### 3
The logs of the user's container are as follows:
```
[I 09:14:04.433 NotebookApp] Running the core application with no additional extensions or settings
[I 09:14:04.436 NotebookApp] Serving notebooks from local directory: /home/jovyan
[I 09:14:04.436 NotebookApp] 0 active kernels
[I 09:14:04.436 NotebookApp] The Jupyter Notebook is running at: http://0.0.0.0:8888/?token=91a9eb2ddaef73e1164c08bcb7bc8fcbce749b70a1ffbd5d
[I 09:14:04.436 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
[C 09:14:04.437 NotebookApp]
Copy/paste this URL into your browser when you connect for the first time,
to login with a token:
http://0.0.0.0:8888/?token=91a9eb2ddaef73e1164c08bcb7bc8fcbce749b70a1ffbd5d
[C 09:19:55.084 NotebookApp] received signal 15, stopping
[I 09:19:55.087 NotebookApp] Shutting down kernel
``` | closed | 2018-01-12T22:23:18Z | 2022-12-05T00:54:45Z | https://github.com/jupyterhub/jupyterhub-deploy-docker/issues/60 | [
"question"
] | zhenm99 | 1 |
RobertCraigie/prisma-client-py | asyncio | 166 | Support setting the connect timeout from the Client constructor | ## Problem
For #103, we should support configuring the connection timeout from the class constructor.
## Suggested solution
We should still support changing the timeout on a per-connect basis.
```py
client = Client(
    connect_timeout=5,
)

await client.connect()  # timeout 5
await client.connect(timeout=10)  # timeout 10
```
| closed | 2021-12-05T15:23:33Z | 2021-12-13T12:33:58Z | https://github.com/RobertCraigie/prisma-client-py/issues/166 | [
"kind/feature"
] | RobertCraigie | 0 |
PokeAPI/pokeapi | api | 1,054 | Feature Request: Show if a pokemon in Nintendo Switch games can be transferred from Pokemon Home | There is quite a long list of Pokémon that you cannot catch in the Switch games but which are transferable via Pokémon Home. I would love to get this information into my spreadsheet.
This is a list from Serebii listing all transfer-only Pokémon in Scarlet and Violet: https://www.serebii.net/scarletviolet/transferonly.shtml
| open | 2024-03-07T06:35:58Z | 2024-03-07T16:54:26Z | https://github.com/PokeAPI/pokeapi/issues/1054 | [] | L4R5 | 1 |
mckinsey/vizro | plotly | 670 | Consolidate CSS styling between `dash_data_table` and `dash_ag_grid` | Currently, the `dash_ag_grid` and `dash_data_table` have slightly different CSS styling. While it's acceptable for the column header styling to vary, it would be ideal if they shared the same padding, row heights, font sizes, etc. This way, when switching between the two tables, they will be consistently aligned with other components. | closed | 2024-09-02T13:38:02Z | 2025-01-09T08:51:53Z | https://github.com/mckinsey/vizro/issues/670 | [
"Nice to have :cherries:"
] | huong-li-nguyen | 1 |
feder-cr/Jobs_Applier_AI_Agent_AIHawk | automation | 103 | Н | closed | 2024-08-28T12:15:33Z | 2024-08-28T16:08:20Z | https://github.com/feder-cr/Jobs_Applier_AI_Agent_AIHawk/issues/103 | [] | Igorka221085 | 0 |