| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
junyanz/pytorch-CycleGAN-and-pix2pix | pytorch | 1,054 | For Training KITTI dataset | Hello, this is really brilliant code! Thank you for releasing it.
I want to train on the KITTI dataset to generate "night scenes" (based on the nuScenes dataset's night scenes).
The KITTI dataset contains only "day scenes".
So where do I have to change your code? Can you briefly explain this? Thank you very much. | open | 2020-06-03T10:18:10Z | 2025-01-07T07:25:08Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1054 | [] | skyphix | 5 |
ymcui/Chinese-LLaMA-Alpaca | nlp | 301 | stage1 fine-tuning fails to run on a single machine with multiple GPUs | closed | 2023-05-11T01:17:21Z | 2023-07-04T06:44:53Z | https://github.com/ymcui/Chinese-LLaMA-Alpaca/issues/301 | [] | yuemengrui | 4 | |
dgtlmoon/changedetection.io | web-scraping | 2,703 | [feature] (UI) Add autocomplete OR dropdown selector for watch tag. | **Version and OS**
0.46.04, termux
**Is your feature request related to a problem? Please describe.**
Currently one has to manually type the watch tag each time (for each new watch).
**Describe the solution you'd like**
Autocomplete fills it in for you after the first few letters (with Tab).
**Describe the use-case and give concrete real-world examples**
I guess most users have 3-5 groups that most of their watches belong to, so it would be helpful to autocomplete tags.
(Sometimes a typo is not a big deal, but a notification API might have logic that depends on the watch tag, in which case a typo would be a big deal.) | closed | 2024-10-12T01:16:32Z | 2024-10-25T21:52:19Z | https://github.com/dgtlmoon/changedetection.io/issues/2703 | [
"enhancement"
] | gety9 | 2 |
nolar/kopf | asyncio | 842 | Does kopf depend on k8s version? | ### Keywords
k8s version
### Problem
Does kopf depend on the k8s version? My k8s version is 1.13. | open | 2021-09-30T07:45:15Z | 2021-09-30T09:10:40Z | https://github.com/nolar/kopf/issues/842 | [
"question"
] | wyw64962771 | 1 |
sinaptik-ai/pandas-ai | data-science | 1,395 | docker compose up error 16/10/2024 | ### System Info
latest
ubuntu 22
3.11
### 🐛 Describe the bug
felipe@grupovanti:~/pandas-ai$ docker compose up
WARN[0000] The "cxwR1S88WmYVOw0BFpo4vuJ0Od5zrNXevkZcFt65wf5eTdMbGFMr6" variable is not set. Defaulting to a blank string.
[+] Running 9/9
✔ postgresql Pulled 9.6s
✔ df9b9388f04a Pull complete 1.6s
✔ 7902437d3a12 Pull complete 1.6s
✔ 709e2267bc98 Pull complete 1.6s
✔ 10c5a0a9c34e Pull complete 6.4s
✔ b46af7f38693 Pull complete 6.4s
✔ 65aa0c237f80 Pull complete 6.4s
✔ f6493ce74812 Pull complete 6.4s
✔ eaac3b44f9d0 Pull complete 6.4s
[+] Running 4/3
✔ Network pandas-ai_pandabi-network Created 0.1s
✔ Container pandas-ai-postgresql-1 Created 0.3s
✔ Container pandabi-frontend Crea... 0.3s
✔ Container pandabi-backend Creat... 0.0s
Attaching to pandabi-backend, pandabi-frontend, postgresql-1
postgresql-1 | The files belonging to this database system will be owned by user "postgres".
postgresql-1 | This user must also own the server process.
postgresql-1 |
postgresql-1 | The database cluster will be initialized with locale "en_US.utf8".
postgresql-1 | The default database encoding has accordingly been set to "UTF8".
postgresql-1 | The default text search configuration will be set to "english".
postgresql-1 |
postgresql-1 | Data page checksums are disabled.
postgresql-1 |
postgresql-1 | fixing permissions on existing directory /var/lib/postgresql/data ... ok
postgresql-1 | creating subdirectories ... ok
postgresql-1 | selecting dynamic shared memory implementation ... posix
postgresql-1 | selecting default max_connections ... 100
postgresql-1 | selecting default shared_buffers ... 128MB
postgresql-1 | selecting default time zone ... UTC
postgresql-1 | creating configuration files ... ok
postgresql-1 | running bootstrap script ... ok
postgresql-1 | sh: locale: not found
postgresql-1 | 2024-10-16 12:01:14.361 UTC [31] WARNING: no usable system locales were found
pandabi-backend | startup.sh: line 6: log: command not found
pandabi-frontend |
pandabi-frontend | > client@0.1.0 start
pandabi-frontend | > next start
pandabi-frontend |
pandabi-frontend | ⚠ You are using a non-standard "NODE_ENV" value in your environment. This creates inconsistencies in the project and is strongly advised against. Read more: https://nextjs.org/docs/messages/non-standard-node-env
postgresql-1 | performing post-bootstrap initialization ... ok
postgresql-1 | initdb: warning: enabling "trust" authentication for local connections
postgresql-1 | You can change this by editing pg_hba.conf or using the option -A, or
postgresql-1 | --auth-local and --auth-host, the next time you run initdb.
postgresql-1 | syncing data to disk ... ok
postgresql-1 |
postgresql-1 |
postgresql-1 | Success. You can now start the database server using:
postgresql-1 |
postgresql-1 | pg_ctl -D /var/lib/postgresql/data -l logfile start
postgresql-1 |
postgresql-1 | waiting for server to start....2024-10-16 12:01:15.285 UTC [37] LOG: starting PostgreSQL 14.2 on x86_64-pc-linux-musl, compiled by gcc (Alpine 10.3.1_git20211027) 10.3.1 20211027, 64-bit
postgresql-1 | 2024-10-16 12:01:15.287 UTC [37] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
postgresql-1 | 2024-10-16 12:01:15.289 UTC [38] LOG: database system was shut down at 2024-10-16 12:01:15 UTC
postgresql-1 | 2024-10-16 12:01:15.293 UTC [37] LOG: database system is ready to accept connections
postgresql-1 | done
postgresql-1 | server started
pandabi-frontend | ▲ Next.js 14.2.3
pandabi-frontend | - Local: http://localhost:3000
pandabi-frontend |
pandabi-frontend | ✓ Starting...
postgresql-1 | CREATE DATABASE
postgresql-1 |
postgresql-1 |
postgresql-1 | /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/*
postgresql-1 |
postgresql-1 | waiting for server to shut down....2024-10-16 12:01:15.459 UTC [37] LOG: received fast shutdown request
postgresql-1 | 2024-10-16 12:01:15.460 UTC [37] LOG: aborting any active transactions
postgresql-1 | 2024-10-16 12:01:15.462 UTC [37] LOG: background worker "logical replication launcher" (PID 44) exited with exit code 1
postgresql-1 | 2024-10-16 12:01:15.463 UTC [39] LOG: shutting down
postgresql-1 | 2024-10-16 12:01:15.470 UTC [37] LOG: database system is shut down
postgresql-1 | done
postgresql-1 | server stopped
postgresql-1 |
postgresql-1 | PostgreSQL init process complete; ready for start up.
postgresql-1 |
postgresql-1 | 2024-10-16 12:01:15.581 UTC [1] LOG: starting PostgreSQL 14.2 on x86_64-pc-linux-musl, compiled by gcc (Alpine 10.3.1_git20211027) 10.3.1 20211027, 64-bit
postgresql-1 | 2024-10-16 12:01:15.581 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
postgresql-1 | 2024-10-16 12:01:15.581 UTC [1] LOG: listening on IPv6 address "::", port 5432
postgresql-1 | 2024-10-16 12:01:15.583 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
postgresql-1 | 2024-10-16 12:01:15.586 UTC [51] LOG: database system was shut down at 2024-10-16 12:01:15 UTC
postgresql-1 | 2024-10-16 12:01:15.589 UTC [1] LOG: database system is ready to accept connections
pandabi-frontend | ✓ Ready in 735ms
pandabi-backend | Resolving dependencies...
pandabi-backend | Warning: The locked version 3.9.1 for matplotlib is a yanked version. Reason for being yanked: The Windows wheels, under some conditions, caused segfaults in unrelated user code. Due to this we deleted the Windows wheels to prevent these segfaults, however this caused greater disruption as pip then began to try (and fail) to build 3.9.1 from the sdist on Windows which impacted far more users. Yanking the whole release is the only tool available to eliminate these failures without changes to on the user side. The sdist, OSX wheel, and manylinux wheels are all functional and there are no critical bugs in the release. Downstream packagers should not yank their builds of Matplotlib 3.9.1. See https://github.com/matplotlib/matplotlib/issues/28551 for details.
pandabi-backend | poetry install
pandabi-backend | Installing dependencies from lock file
pandabi-backend |
pandabi-backend | No dependencies to install or update
pandabi-backend |
pandabi-backend | Installing the current project: pandasai-server (0.1.0)
pandabi-backend |
pandabi-backend | Warning: The current project could not be installed: No file/folder found for package pandasai-server
pandabi-backend | If you do not want to install the current project use --no-root.
pandabi-backend | If you want to use Poetry only for dependency management but not for packaging, you can disable package mode by setting package-mode = false in your pyproject.toml file.
pandabi-backend | In a future version of Poetry this warning will become an error!
pandabi-backend | wait-for-it.sh: 4: shift: can't shift that many
pandabi-backend | export DEBUG='1'
pandabi-backend | export ENVIRONMENT='development'
pandabi-backend | export GPG_KEY='A035C8C19219BA821ECEA86B64E628F8D684696D'
pandabi-backend | export HOME='/root'
pandabi-backend | export HOSTNAME='3f7bba63a62a'
pandabi-backend | export LANG='C.UTF-8'
pandabi-backend | export MAKEFLAGS=''
pandabi-backend | export MAKELEVEL='1'
pandabi-backend | export MFLAGS=''
pandabi-backend | export PANDASAI_API_KEY='$2a$10$cxwR1S88WmYVOw0BFpo4vuJ0Od5zrNXevkZcFt65wf5eTdMbGFMr6'
pandabi-backend | export PATH='/root/.cache/pypoetry/virtualenvs/pandasai-server-9TtSrW0h-py3.11/bin:/root/.local/bin:/usr/local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'
pandabi-backend | export POSTGRES_URL='postgresql+asyncpg://pandasai:password123@postgresql:5432/pandasai-db'
pandabi-backend | export PS1='(pandasai-server-py3.11) '
pandabi-backend | export PWD='/app'
pandabi-backend | export PYTHON_VERSION='3.11.10'
pandabi-backend | export SHLVL='1'
pandabi-backend | export SHOW_SQL_ALCHEMY_QUERIES='0'
pandabi-backend | export TEST_POSTGRES_URL='postgresql+asyncpg://pandasai:password123@postgresql:5432/pandasai-db'
pandabi-backend | export VIRTUAL_ENV='/root/.cache/pypoetry/virtualenvs/pandasai-server-9TtSrW0h-py3.11'
pandabi-backend | export VIRTUAL_ENV_PROMPT='pandasai-server-py3.11'
pandabi-backend | export _='/usr/bin/make'
pandabi-backend | poetry run alembic upgrade head
pandabi-backend | Traceback (most recent call last):
pandabi-backend | File "/root/.cache/pypoetry/virtualenvs/pandasai-server-9TtSrW0h-py3.11/bin/alembic", line 8, in <module>
pandabi-backend | sys.exit(main())
pandabi-backend | ^^^^^^
pandabi-backend | File "/root/.cache/pypoetry/virtualenvs/pandasai-server-9TtSrW0h-py3.11/lib/python3.11/site-packages/alembic/config.py", line 636, in main
pandabi-backend | CommandLine(prog=prog).main(argv=argv)
pandabi-backend | File "/root/.cache/pypoetry/virtualenvs/pandasai-server-9TtSrW0h-py3.11/lib/python3.11/site-packages/alembic/config.py", line 626, in main
pandabi-backend | self.run_cmd(cfg, options)
pandabi-backend | File "/root/.cache/pypoetry/virtualenvs/pandasai-server-9TtSrW0h-py3.11/lib/python3.11/site-packages/alembic/config.py", line 603, in run_cmd
pandabi-backend | fn(
pandabi-backend | File "/root/.cache/pypoetry/virtualenvs/pandasai-server-9TtSrW0h-py3.11/lib/python3.11/site-packages/alembic/command.py", line 406, in upgrade
pandabi-backend | script.run_env()
pandabi-backend | File "/root/.cache/pypoetry/virtualenvs/pandasai-server-9TtSrW0h-py3.11/lib/python3.11/site-packages/alembic/script/base.py", line 582, in run_env
pandabi-backend | util.load_python_file(self.dir, "env.py")
pandabi-backend | File "/root/.cache/pypoetry/virtualenvs/pandasai-server-9TtSrW0h-py3.11/lib/python3.11/site-packages/alembic/util/pyfiles.py", line 95, in load_python_file
pandabi-backend | module = load_module_py(module_id, path)
pandabi-backend | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
pandabi-backend | File "/root/.cache/pypoetry/virtualenvs/pandasai-server-9TtSrW0h-py3.11/lib/python3.11/site-packages/alembic/util/pyfiles.py", line 113, in load_module_py
pandabi-backend | spec.loader.exec_module(module) # type: ignore
pandabi-backend | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
pandabi-backend | File "<frozen importlib._bootstrap_external>", line 940, in exec_module
pandabi-backend | File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
pandabi-backend | File "/app/migrations/env.py", line 10, in <module>
pandabi-backend | from app.models import Base
pandabi-backend | ModuleNotFoundError: No module named 'app'
pandabi-backend | make: *** [Makefile:52: migrate] Error 1
pandabi-backend | export DEBUG='1'
pandabi-backend | export ENVIRONMENT='development'
pandabi-backend | export GPG_KEY='A035C8C19219BA821ECEA86B64E628F8D684696D'
pandabi-backend | export HOME='/root'
pandabi-backend | export HOSTNAME='3f7bba63a62a'
pandabi-backend | export LANG='C.UTF-8'
pandabi-backend | export MAKEFLAGS=''
pandabi-backend | export MAKELEVEL='1'
pandabi-backend | export MFLAGS=''
pandabi-backend | export PANDASAI_API_KEY='$2a$10$cxwR1S88WmYVOw0BFpo4vuJ0Od5zrNXevkZcFt65wf5eTdMbGFMr6'
pandabi-backend | export PATH='/root/.cache/pypoetry/virtualenvs/pandasai-server-9TtSrW0h-py3.11/bin:/root/.local/bin:/usr/local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'
pandabi-backend | export POSTGRES_URL='postgresql+asyncpg://pandasai:password123@postgresql:5432/pandasai-db'
pandabi-backend | export PS1='(pandasai-server-py3.11) '
pandabi-backend | export PWD='/app'
pandabi-backend | export PYTHON_VERSION='3.11.10'
pandabi-backend | export SHLVL='1'
pandabi-backend | export SHOW_SQL_ALCHEMY_QUERIES='0'
pandabi-backend | export TEST_POSTGRES_URL='postgresql+asyncpg://pandasai:password123@postgresql:5432/pandasai-db'
pandabi-backend | export VIRTUAL_ENV='/root/.cache/pypoetry/virtualenvs/pandasai-server-9TtSrW0h-py3.11'
pandabi-backend | export VIRTUAL_ENV_PROMPT='pandasai-server-py3.11'
pandabi-backend | export _='/usr/bin/make'
pandabi-backend | poetry run python main.py
pandabi-backend | INFO: Will watch for changes in these directories: ['/app']
pandabi-backend | INFO: Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
pandabi-backend | INFO: Started reloader process [57] using StatReload
pandabi-backend | INFO: Started server process [61]
pandabi-backend | INFO: Waiting for application startup.
pandabi-backend | 2024-10-16 12:01:26,475 INFO sqlalchemy.engine.Engine select pg_catalog.version()
pandabi-backend | 2024-10-16 12:01:26,475 INFO sqlalchemy.engine.Engine [raw sql] ()
pandabi-backend | 2024-10-16 12:01:26,478 INFO sqlalchemy.engine.Engine select current_schema()
pandabi-backend | 2024-10-16 12:01:26,478 INFO sqlalchemy.engine.Engine [raw sql] ()
pandabi-backend | 2024-10-16 12:01:26,479 INFO sqlalchemy.engine.Engine show standard_conforming_strings
pandabi-backend | 2024-10-16 12:01:26,479 INFO sqlalchemy.engine.Engine [raw sql] ()
pandabi-backend | 2024-10-16 12:01:26,481 INFO sqlalchemy.engine.Engine BEGIN (implicit)
pandabi-backend | 2024-10-16 12:01:26,488 INFO sqlalchemy.engine.Engine SELECT anon_1.id, anon_1.email, anon_1.first_name, anon_1.created_at, anon_1.password, anon_1.verified, anon_1.last_name, anon_1.features, organization_1.id AS id_1, organization_1.name, organization_1.url, organization_1.is_default, organization_1.settings, organization_membership_1.id AS id_2, organization_membership_1.user_id, organization_membership_1.organization_id, organization_membership_1.role, organization_membership_1.verified AS verified_1
pandabi-backend | FROM (SELECT "user".id AS id, "user".email AS email, "user".first_name AS first_name, "user".created_at AS created_at, "user".password AS password, "user".verified AS verified, "user".last_name AS last_name, "user".features AS features
pandabi-backend | FROM "user"
pandabi-backend | LIMIT $1::INTEGER OFFSET $2::INTEGER) AS anon_1 LEFT OUTER JOIN organization_membership AS organization_membership_1 ON anon_1.id = organization_membership_1.user_id LEFT OUTER JOIN organization AS organization_1 ON organization_1.id = organization_membership_1.organization_id
pandabi-backend | 2024-10-16 12:01:26,488 INFO sqlalchemy.engine.Engine [generated in 0.00023s] (1, 0)
postgresql-1 | 2024-10-16 12:01:26.489 UTC [58] ERROR: relation "user" does not exist at character 700
postgresql-1 | 2024-10-16 12:01:26.489 UTC [58] STATEMENT: SELECT anon_1.id, anon_1.email, anon_1.first_name, anon_1.created_at, anon_1.password, anon_1.verified, anon_1.last_name, anon_1.features, organization_1.id AS id_1, organization_1.name, organization_1.url, organization_1.is_default, organization_1.settings, organization_membership_1.id AS id_2, organization_membership_1.user_id, organization_membership_1.organization_id, organization_membership_1.role, organization_membership_1.verified AS verified_1
postgresql-1 | FROM (SELECT "user".id AS id, "user".email AS email, "user".first_name AS first_name, "user".created_at AS created_at, "user".password AS password, "user".verified AS verified, "user".last_name AS last_name, "user".features AS features
postgresql-1 | FROM "user"
postgresql-1 | LIMIT $1::INTEGER OFFSET $2::INTEGER) AS anon_1 LEFT OUTER JOIN organization_membership AS organization_membership_1 ON anon_1.id = organization_membership_1.user_id LEFT OUTER JOIN organization AS organization_1 ON organization_1.id = organization_membership_1.organization_id
pandabi-backend | 2024-10-16 12:01:26,490 INFO sqlalchemy.engine.Engine ROLLBACK
pandabi-backend | ERROR: Traceback (most recent call last):
pandabi-backend | File "/root/.cache/pypoetry/virtualenvs/pandasai-server-9TtSrW0h-py3.11/lib/python3.11/site-packages/sqlalchemy/dialects/postgresql/asyncpg.py", line 514, in _prepare_and_execute
pandabi-backend | prepared_stmt, attributes = await adapt_connection._prepare(
pandabi-backend | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
pandabi-backend | File "/root/.cache/pypoetry/virtualenvs/pandasai-server-9TtSrW0h-py3.11/lib/python3.11/site-packages/sqlalchemy/dialects/postgresql/asyncpg.py", line 760, in _prepare
pandabi-backend | prepared_stmt = await self._connection.prepare(
pandabi-backend | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
pandabi-backend | File "/root/.cache/pypoetry/virtualenvs/pandasai-server-9TtSrW0h-py3.11/lib/python3.11/site-packages/asyncpg/connection.py", line 636, in prepare
pandabi-backend | return await self._prepare(
pandabi-backend | ^^^^^^^^^^^^^^^^^^^^
pandabi-backend | File "/root/.cache/pypoetry/virtualenvs/pandasai-server-9TtSrW0h-py3.11/lib/python3.11/site-packages/asyncpg/connection.py", line 654, in _prepare
pandabi-backend | stmt = await self._get_statement(
pandabi-backend | ^^^^^^^^^^^^^^^^^^^^^^^^^^
pandabi-backend | File "/root/.cache/pypoetry/virtualenvs/pandasai-server-9TtSrW0h-py3.11/lib/python3.11/site-packages/asyncpg/connection.py", line 433, in _get_statement
pandabi-backend | statement = await self._protocol.prepare(
pandabi-backend | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
pandabi-backend | File "asyncpg/protocol/protocol.pyx", line 166, in prepare
pandabi-backend | asyncpg.exceptions.UndefinedTableError: relation "user" does not exist
pandabi-backend |
pandabi-backend | The above exception was the direct cause of the following exception:
pandabi-backend |
pandabi-backend | Traceback (most recent call last):
pandabi-backend | File "/root/.cache/pypoetry/virtualenvs/pandasai-server-9TtSrW0h-py3.11/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 1967, in _exec_single_context
pandabi-backend | self.dialect.do_execute(
pandabi-backend | File "/root/.cache/pypoetry/virtualenvs/pandasai-server-9TtSrW0h-py3.11/lib/python3.11/site-packages/sqlalchemy/engine/default.py", line 924, in do_execute
pandabi-backend | cursor.execute(statement, parameters)
pandabi-backend | File "/root/.cache/pypoetry/virtualenvs/pandasai-server-9TtSrW0h-py3.11/lib/python3.11/site-packages/sqlalchemy/dialects/postgresql/asyncpg.py", line 572, in execute
pandabi-backend | self._adapt_connection.await_(
pandabi-backend | File "/root/.cache/pypoetry/virtualenvs/pandasai-server-9TtSrW0h-py3.11/lib/python3.11/site-packages/sqlalchemy/util/_concurrency_py3k.py", line 132, in await_only
pandabi-backend | return current.parent.switch(awaitable) # type: ignore[no-any-return,attr-defined] # noqa: E501
pandabi-backend | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
pandabi-backend | File "/root/.cache/pypoetry/virtualenvs/pandasai-server-9TtSrW0h-py3.11/lib/python3.11/site-packages/sqlalchemy/util/_concurrency_py3k.py", line 196, in greenlet_spawn
pandabi-backend | value = await result
pandabi-backend | ^^^^^^^^^^^^
pandabi-backend | File "/root/.cache/pypoetry/virtualenvs/pandasai-server-9TtSrW0h-py3.11/lib/python3.11/site-packages/sqlalchemy/dialects/postgresql/asyncpg.py", line 550, in _prepare_and_execute
pandabi-backend | self._handle_exception(error)
pandabi-backend | File "/root/.cache/pypoetry/virtualenvs/pandasai-server-9TtSrW0h-py3.11/lib/python3.11/site-packages/sqlalchemy/dialects/postgresql/asyncpg.py", line 501, in _handle_exception
pandabi-backend | self._adapt_connection._handle_exception(error)
pandabi-backend | File "/root/.cache/pypoetry/virtualenvs/pandasai-server-9TtSrW0h-py3.11/lib/python3.11/site-packages/sqlalchemy/dialects/postgresql/asyncpg.py", line 784, in _handle_exception
pandabi-backend | raise translated_error from error
pandabi-backend | sqlalchemy.dialects.postgresql.asyncpg.AsyncAdapt_asyncpg_dbapi.ProgrammingError: <class 'asyncpg.exceptions.UndefinedTableError'>: relation "user" does not exist
pandabi-backend |
pandabi-backend | The above exception was the direct cause of the following exception:
pandabi-backend |
pandabi-backend | Traceback (most recent call last):
pandabi-backend | File "/root/.cache/pypoetry/virtualenvs/pandasai-server-9TtSrW0h-py3.11/lib/python3.11/site-packages/starlette/routing.py", line 671, in lifespan
pandabi-backend | async with self.lifespan_context(app):
pandabi-backend | File "/root/.cache/pypoetry/virtualenvs/pandasai-server-9TtSrW0h-py3.11/lib/python3.11/site-packages/starlette/routing.py", line 566, in __aenter__
pandabi-backend | await self._router.startup()
pandabi-backend | File "/root/.cache/pypoetry/virtualenvs/pandasai-server-9TtSrW0h-py3.11/lib/python3.11/site-packages/starlette/routing.py", line 648, in startup
pandabi-backend | await handler()
pandabi-backend | File "/app/core/server.py", line 145, in on_startup
pandabi-backend | await init_database()
pandabi-backend | File "/app/core/server.py", line 113, in init_database
pandabi-backend | user = await init_user()
pandabi-backend | ^^^^^^^^^^^^^^^^^
pandabi-backend | File "/app/core/server.py", line 81, in init_user
pandabi-backend | await controller.create_default_user()
pandabi-backend | File "/app/core/database/transactional.py", line 40, in decorator
pandabi-backend | raise exception
pandabi-backend | File "/app/core/database/transactional.py", line 27, in decorator
pandabi-backend | result = await self._run_required_new(
pandabi-backend | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
pandabi-backend | File "/app/core/database/transactional.py", line 53, in _run_required_new
pandabi-backend | result = await function(*args, **kwargs)
pandabi-backend | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
pandabi-backend | File "/app/app/controllers/user.py", line 21, in create_default_user
pandabi-backend | users = await self.get_all(limit=1, join_={"memberships"})
pandabi-backend | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
pandabi-backend | File "/app/core/controller/base.py", line 69, in get_all
pandabi-backend | response = await self.repository.get_all(skip, limit, join_)
pandabi-backend | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
pandabi-backend | File "/app/core/repository/base.py", line 48, in get_all
pandabi-backend | return await self._all_unique(query)
pandabi-backend | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
pandabi-backend | File "/app/core/repository/base.py", line 124, in _all_unique
pandabi-backend | result = await self.session.execute(query)
pandabi-backend | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
pandabi-backend | File "/root/.cache/pypoetry/virtualenvs/pandasai-server-9TtSrW0h-py3.11/lib/python3.11/site-packages/sqlalchemy/ext/asyncio/scoping.py", line 589, in execute
pandabi-backend | return await self._proxied.execute(
pandabi-backend | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
pandabi-backend | File "/root/.cache/pypoetry/virtualenvs/pandasai-server-9TtSrW0h-py3.11/lib/python3.11/site-packages/sqlalchemy/ext/asyncio/session.py", line 461, in execute
pandabi-backend | result = await greenlet_spawn(
pandabi-backend | ^^^^^^^^^^^^^^^^^^^^^
pandabi-backend | File "/root/.cache/pypoetry/virtualenvs/pandasai-server-9TtSrW0h-py3.11/lib/python3.11/site-packages/sqlalchemy/util/_concurrency_py3k.py", line 201, in greenlet_spawn
pandabi-backend | result = context.throw(*sys.exc_info())
pandabi-backend | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
pandabi-backend | File "/root/.cache/pypoetry/virtualenvs/pandasai-server-9TtSrW0h-py3.11/lib/python3.11/site-packages/sqlalchemy/orm/session.py", line 2351, in execute
pandabi-backend | return self._execute_internal(
pandabi-backend | ^^^^^^^^^^^^^^^^^^^^^^^
pandabi-backend | File "/root/.cache/pypoetry/virtualenvs/pandasai-server-9TtSrW0h-py3.11/lib/python3.11/site-packages/sqlalchemy/orm/session.py", line 2236, in _execute_internal
pandabi-backend | result: Result[Any] = compile_state_cls.orm_execute_statement(
pandabi-backend | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
pandabi-backend | File "/root/.cache/pypoetry/virtualenvs/pandasai-server-9TtSrW0h-py3.11/lib/python3.11/site-packages/sqlalchemy/orm/context.py", line 293, in orm_execute_statement
pandabi-backend | result = conn.execute(
pandabi-backend | ^^^^^^^^^^^^^
pandabi-backend | File "/root/.cache/pypoetry/virtualenvs/pandasai-server-9TtSrW0h-py3.11/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 1418, in execute
pandabi-backend | return meth(
pandabi-backend | ^^^^^
pandabi-backend | File "/root/.cache/pypoetry/virtualenvs/pandasai-server-9TtSrW0h-py3.11/lib/python3.11/site-packages/sqlalchemy/sql/elements.py", line 515, in _execute_on_connection
pandabi-backend | return connection._execute_clauseelement(
pandabi-backend | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
pandabi-backend | File "/root/.cache/pypoetry/virtualenvs/pandasai-server-9TtSrW0h-py3.11/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 1640, in _execute_clauseelement
pandabi-backend | ret = self._execute_context(
pandabi-backend | ^^^^^^^^^^^^^^^^^^^^^^
pandabi-backend | File "/root/.cache/pypoetry/virtualenvs/pandasai-server-9TtSrW0h-py3.11/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 1846, in _execute_context
pandabi-backend | return self._exec_single_context(
pandabi-backend | ^^^^^^^^^^^^^^^^^^^^^^^^^^
pandabi-backend | File "/root/.cache/pypoetry/virtualenvs/pandasai-server-9TtSrW0h-py3.11/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 1986, in _exec_single_context
pandabi-backend | self._handle_dbapi_exception(
pandabi-backend | File "/root/.cache/pypoetry/virtualenvs/pandasai-server-9TtSrW0h-py3.11/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 2353, in _handle_dbapi_exception
pandabi-backend | raise sqlalchemy_exception.with_traceback(exc_info[2]) from e
pandabi-backend | File "/root/.cache/pypoetry/virtualenvs/pandasai-server-9TtSrW0h-py3.11/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 1967, in _exec_single_context
pandabi-backend | self.dialect.do_execute(
pandabi-backend | File "/root/.cache/pypoetry/virtualenvs/pandasai-server-9TtSrW0h-py3.11/lib/python3.11/site-packages/sqlalchemy/engine/default.py", line 924, in do_execute
pandabi-backend | cursor.execute(statement, parameters)
pandabi-backend | File "/root/.cache/pypoetry/virtualenvs/pandasai-server-9TtSrW0h-py3.11/lib/python3.11/site-packages/sqlalchemy/dialects/postgresql/asyncpg.py", line 572, in execute
pandabi-backend | self._adapt_connection.await_(
pandabi-backend | File "/root/.cache/pypoetry/virtualenvs/pandasai-server-9TtSrW0h-py3.11/lib/python3.11/site-packages/sqlalchemy/util/_concurrency_py3k.py", line 132, in await_only
pandabi-backend | return current.parent.switch(awaitable) # type: ignore[no-any-return,attr-defined] # noqa: E501
pandabi-backend | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
pandabi-backend | File "/root/.cache/pypoetry/virtualenvs/pandasai-server-9TtSrW0h-py3.11/lib/python3.11/site-packages/sqlalchemy/util/_concurrency_py3k.py", line 196, in greenlet_spawn
pandabi-backend | value = await result
pandabi-backend | ^^^^^^^^^^^^
pandabi-backend | File "/root/.cache/pypoetry/virtualenvs/pandasai-server-9TtSrW0h-py3.11/lib/python3.11/site-packages/sqlalchemy/dialects/postgresql/asyncpg.py", line 550, in _prepare_and_execute
pandabi-backend | self._handle_exception(error)
pandabi-backend | File "/root/.cache/pypoetry/virtualenvs/pandasai-server-9TtSrW0h-py3.11/lib/python3.11/site-packages/sqlalchemy/dialects/postgresql/asyncpg.py", line 501, in _handle_exception
pandabi-backend | self._adapt_connection._handle_exception(error)
pandabi-backend | File "/root/.cache/pypoetry/virtualenvs/pandasai-server-9TtSrW0h-py3.11/lib/python3.11/site-packages/sqlalchemy/dialects/postgresql/asyncpg.py", line 784, in _handle_exception
pandabi-backend | raise translated_error from error
pandabi-backend | sqlalchemy.exc.ProgrammingError: (sqlalchemy.dialects.postgresql.asyncpg.ProgrammingError) <class 'asyncpg.exceptions.UndefinedTableError'>: relation "user" does not exist
pandabi-backend | [SQL: SELECT anon_1.id, anon_1.email, anon_1.first_name, anon_1.created_at, anon_1.password, anon_1.verified, anon_1.last_name, anon_1.features, organization_1.id AS id_1, organization_1.name, organization_1.url, organization_1.is_default, organization_1.settings, organization_membership_1.id AS id_2, organization_membership_1.user_id, organization_membership_1.organization_id, organization_membership_1.role, organization_membership_1.verified AS verified_1
pandabi-backend | FROM (SELECT "user".id AS id, "user".email AS email, "user".first_name AS first_name, "user".created_at AS created_at, "user".password AS password, "user".verified AS verified, "user".last_name AS last_name, "user".features AS features
pandabi-backend | FROM "user"
pandabi-backend | LIMIT $1::INTEGER OFFSET $2::INTEGER) AS anon_1 LEFT OUTER JOIN organization_membership AS organization_membership_1 ON anon_1.id = organization_membership_1.user_id LEFT OUTER JOIN organization AS organization_1 ON organization_1.id = organization_membership_1.organization_id]
pandabi-backend | [parameters: (1, 0)]
pandabi-backend | (Background on this error at: https://sqlalche.me/e/20/f405)
pandabi-backend |
pandabi-backend | ERROR: Application startup failed. Exiting.
| closed | 2024-10-16T12:03:05Z | 2024-10-29T17:36:33Z | https://github.com/sinaptik-ai/pandas-ai/issues/1395 | [
"bug"
] | johnfelipe | 5 |
OpenInterpreter/open-interpreter | python | 708 | Need to pre-load a model somehow to avoid long start up times on Cloud Run | ### Describe the bug
When using a Cloud Run instance, the model is installed fresh each time, so the startup files have to be downloaded on every cold start:
<img width="1047" alt="image" src="https://github.com/KillianLucas/open-interpreter/assets/3155884/94a46636-5fef-4049-bf44-5ce135eb3509">
Is it possible to load this at build time, so startup time can be reduced by ~20 seconds?
### Reproduce
On first start up on Cloud Run
```
for chunk in interpreter.chat(user_input, stream=True, display=False):
# do stuff
# /home/.cache/chroma/onnx_models/all-MiniLM-L6-v2/onnx.tar.gz: 0%| | 0.00/79.3M [00:00<?, ?iB/s]
#..
# /home/.cache/chroma/onnx_models/all-MiniLM-L6-v2/onnx.tar.gz: 1%| | 0.00/79.3M [00:00<?, ?iB/s]
```
etc. about 20 seconds
### Expected behavior
Download of onnx_models/all-MiniLM-L6-v2/onnx.tar.gz to be possible during the Dockerfile build somehow
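One possible approach (a sketch only — the warm-up call below is an assumption about chromadb's API, not something verified against its current internals): trigger the embedding-model download in a `RUN` step so the files bake into an image layer instead of downloading at container startup:

```dockerfile
# Hypothetical sketch: warm the chroma ONNX model cache at build time so the
# ~79 MB all-MiniLM-L6-v2 download happens once, in an image layer, instead of
# on every cold start. Assumes chromadb is already installed in the image and
# that invoking the default embedding function triggers the one-time download.
RUN python -c "from chromadb.utils.embedding_functions import ONNXMiniLM_L6_V2; ONNXMiniLM_L6_V2()(['warm-up'])"
```

One caveat: the cache appears to land under `$HOME/.cache/chroma/` (here `/home/.cache/chroma/`), so the user/`$HOME` in the build step would need to match the user the Cloud Run container runs as, or the runtime process won't find the pre-downloaded files.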
### Screenshots
<img width="1047" alt="image" src="https://github.com/KillianLucas/open-interpreter/assets/3155884/94a46636-5fef-4049-bf44-5ce135eb3509">
### Open Interpreter version
0.1.10
### Python version
3.10
### Operating System name and version
Linux
### Additional context
To stop the HTTP timeout, I run a test query each time first, but it adds ~20+ seconds latency to first token back. | closed | 2023-10-27T16:30:44Z | 2023-11-05T16:18:26Z | https://github.com/OpenInterpreter/open-interpreter/issues/708 | [
"Bug",
"External"
] | MarkEdmondson1234 | 5 |
ultralytics/yolov5 | deep-learning | 12,574 | About how the test results obtained by detect.py are evaluated | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
I used detect.py to get the test results and then wrote my own script to calculate the precision, but by chance I realized that my calculations were inconsistent with the evaluation results using val.py when I evaluated the test data using val.py. I carefully tested the script I wrote myself and I think it is consistent with the precision definition. I am confused and which is more convincing proof of the reliability of the model in the printout of val.py, precision or map?
### Additional
_No response_ | closed | 2024-01-03T08:04:31Z | 2024-10-20T19:36:00Z | https://github.com/ultralytics/yolov5/issues/12574 | [
"question",
"Stale"
] | Jiase | 7 |
huggingface/datasets | pandas | 6,571 | Make DatasetDict.column_names return a list instead of dict | Currently, `DatasetDict.column_names` returns a dict, with each split name as keys and the corresponding list of column names as values.
However, by construction, all splits have the same column names.
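Since the per-split lists are identical by construction, collapsing the current dict into one list is straightforward. A sketch of the proposed behavior; `unified_column_names` is a hypothetical helper, not the actual `datasets` API:

```python
def unified_column_names(column_names_by_split: dict[str, list[str]]) -> list[str]:
    """Collapse DatasetDict.column_names' per-split dict into a single list."""
    lists = list(column_names_by_split.values())
    assert all(cols == lists[0] for cols in lists), "splits disagree on columns"
    return lists[0]

current = {"train": ["text", "label"], "test": ["text", "label"]}
print(unified_column_names(current))  # → ['text', 'label']
```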
I think it makes more sense to return a single list with the column names, which is the same for all the split keys. | open | 2024-01-09T10:45:17Z | 2024-01-09T10:45:17Z | https://github.com/huggingface/datasets/issues/6571 | [
"enhancement"
] | albertvillanova | 0 |
horovod/horovod | tensorflow | 4,043 | NVIDIA CUDA TOOLKIT version to run Horovod in Conda Environment | Hi Developers
I wish to install horovod inside Conda environment for which I require nccl from NVIDIA CUDA toolkit installed in system so I just wanted to know which is version of NVIDIA CUDA Toolkit is required to build horovod inside conda env to run Pytorch library.
Many Thanks
Pushkar | open | 2024-05-10T06:56:06Z | 2025-01-31T23:14:47Z | https://github.com/horovod/horovod/issues/4043 | [
"wontfix"
] | ppandit95 | 2 |
JaidedAI/EasyOCR | deep-learning | 1,313 | bangla text issue | Bangla text recognition works poorly. | open | 2024-10-06T19:13:11Z | 2024-10-06T19:13:11Z | https://github.com/JaidedAI/EasyOCR/issues/1313 | [] | Automatorbd | 0 |
Lightning-AI/pytorch-lightning | data-science | 20,356 | Type annotation for `BasePredictionWriter` subclass | ### Bug description
Subclassing the `BasePredictionWriter` for custom functionality results in Pylance complaining about incorrect type.
### What version are you seeing the problem on?
v2.4
### How to reproduce the bug
```python
from __future__ import annotations
from pathlib import Path
from typing import TYPE_CHECKING, Any, Literal
import lightning as L
from lightning.pytorch.callbacks import BasePredictionWriter
if TYPE_CHECKING:
import polars as pl
from lightning.pytorch import LightningModule, Trainer
from torch import Tensor
class ParquetWriter(BasePredictionWriter):
"""
Callback for writing predictions to Parquet files.
Parameters
----------
output_dir
The directory where the parquet files will be written.
write_interval
The interval at which the predictions will be written.
"""
def __init__(self, output_dir: str, write_interval: Literal["batch"]) -> None:
super().__init__(write_interval)
self.output_dir = Path(output_dir)
def write_on_batch_end(
self,
trainer: Trainer,
pl_module: LightningModule, # noqa: ARG002
prediction: pl.DataFrame,
batch_indices: Tensor, # noqa: ARG002
batch: dict[str, Any], # noqa: ARG002
batch_idx: int,
dataloader_idx: int, # noqa: ARG002
) -> None:
"""Write the prediction to a parquet file."""
prediction.write_parquet(
self.output_dir / f"{trainer.global_rank}{batch_idx}.parquet",
)
callbacks = [
ParquetWriter(
output_dir="/tmp",
write_interval="batch",
),
]
trainer = L.Trainer(
callbacks=callbacks, <----- Pylance(reportArgumentType)
)
```
### Error messages and logs
```
Argument of type "list[ParquetWriter]" cannot be assigned to parameter "callbacks" of type "List[Callback] | Callback | None" in function "__init__"
Type "list[ParquetWriter]" is not assignable to type "List[Callback] | Callback | None"
"list[ParquetWriter]" is not assignable to "List[Callback]"
Type parameter "_T@list" is invariant, but "ParquetWriter" is not the same as "Callback"
Consider switching from "list" to "Sequence" which is covariant
"list[ParquetWriter]" is not assignable to "Callback"
"list[ParquetWriter]" is not assignable to "None"
```
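The message is about the static variance of `list`'s type parameter, not a runtime failure: at runtime a `list[ParquetWriter]` works fine. A minimal sketch with stand-in types (hypothetical, not Lightning's actual signatures) showing why annotating the parameter with the covariant `Sequence` satisfies the checker:

```python
from collections.abc import Sequence
from typing import Union

class Callback: ...
class ParquetWriter(Callback): ...

def init_trainer(callbacks: Union[Sequence[Callback], Callback, None]) -> list[Callback]:
    # Sequence[Callback] is covariant in its element type, so a
    # list[ParquetWriter] is accepted by the checker; list[Callback] is not,
    # because list's type parameter is invariant.
    if callbacks is None:
        return []
    if isinstance(callbacks, Callback):
        return [callbacks]
    return list(callbacks)

accepted = init_trainer([ParquetWriter()])
print(len(accepted))  # → 1
```

Until the annotation changes upstream, annotating your own variable as `list[Callback]` (or `Sequence[Callback]`) also silences the complaint.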
### Environment
<details>
<summary>Current environment</summary>
* CUDA:
- GPU: None
- available: False
- version: None
* Lightning:
- lightning: 2.4.0
- lightning-utilities: 0.11.7
- pytorch-lightning: 2.4.0
- torch: 2.4.1
- torchaudio: 2.4.1
- torchmetrics: 1.4.2
- torchvision: 0.19.1
* Packages:
- aenum: 3.1.12
- aiohappyeyeballs: 2.4.3
- aiohttp: 3.10.8
- aiosignal: 1.3.1
- altair: 5.4.1
- annotated-types: 0.7.0
- antlr4-python3-runtime: 4.9.3
- anyio: 4.6.0
- appnope: 0.1.4
- argon2-cffi: 23.1.0
- argon2-cffi-bindings: 21.2.0
- arrow: 1.3.0
- asttokens: 2.4.1
- async-lru: 2.0.4
- attrs: 24.2.0
- autocommand: 2.2.2
- babel: 2.16.0
- backports.tarfile: 1.2.0
- beautifulsoup4: 4.12.3
- bitsandbytes: 0.42.0
- bleach: 6.1.0
- certifi: 2024.8.30
- cffi: 1.17.1
- charset-normalizer: 3.3.2
- click: 8.1.7
- comm: 0.2.2
- contourpy: 1.3.0
- crispron: 3.0
- cycler: 0.12.1
- datasets: 3.0.1
- debugpy: 1.8.6
- decorator: 5.1.1
- defusedxml: 0.7.1
- dill: 0.3.8
- docker-pycreds: 0.4.0
- docstring-parser: 0.16
- euporie: 2.8.3
- executing: 2.1.0
- fastjsonschema: 2.20.0
- filelock: 3.16.1
- flatlatex: 0.15
- fonttools: 4.54.1
- fqdn: 1.5.1
- frozenlist: 1.4.1
- fsspec: 2024.6.1
- gitdb: 4.0.11
- gitpython: 3.1.43
- h11: 0.14.0
- httpcore: 1.0.6
- httpx: 0.27.2
- huggingface-hub: 0.25.1
- hydra-core: 1.3.2
- idna: 3.10
- imagesize: 1.4.1
- importlib-metadata: 8.0.0
- importlib-resources: 6.4.5
- inflect: 7.3.1
- ipykernel: 6.29.5
- ipython: 8.28.0
- ipywidgets: 8.1.5
- isoduration: 20.11.0
- itables: 2.2.2
- jaraco.collections: 5.1.0
- jaraco.context: 5.3.0
- jaraco.functools: 4.0.1
- jaraco.text: 3.12.1
- jedi: 0.19.1
- jinja2: 3.1.4
- joblib: 1.4.2
- json5: 0.9.25
- jsonargparse: 4.33.1
- jsonpointer: 3.0.0
- jsonschema: 4.23.0
- jsonschema-specifications: 2023.12.1
- jupyter-client: 8.6.3
- jupyter-core: 5.7.2
- jupyter-events: 0.10.0
- jupyter-lsp: 2.2.5
- jupyter-server: 2.14.2
- jupyter-server-terminals: 0.5.3
- jupyterlab: 4.2.5
- jupyterlab-pygments: 0.3.0
- jupyterlab-server: 2.27.3
- jupyterlab-widgets: 3.0.13
- jupytext: 1.16.4
- kiwisolver: 1.4.7
- lightning: 2.4.0
- lightning-utilities: 0.11.7
- linkify-it-py: 1.0.3
- markdown-it-py: 2.2.0
- markupsafe: 2.1.5
- matplotlib: 3.9.2
- matplotlib-inline: 0.1.7
- mdit-py-plugins: 0.3.5
- mdurl: 0.1.2
- mistune: 3.0.2
- more-itertools: 10.3.0
- mpmath: 1.3.0
- multidict: 6.1.0
- multiprocess: 0.70.16
- narwhals: 1.9.0
- nbclient: 0.10.0
- nbconvert: 7.16.4
- nbformat: 5.10.4
- nest-asyncio: 1.6.0
- networkx: 3.3
- notebook-shim: 0.2.4
- numpy: 2.1.1
- omegaconf: 2.3.0
- overrides: 7.7.0
- packaging: 24.1
- pandas: 2.2.3
- pandas-stubs: 2.2.2.240909
- pandocfilters: 1.5.1
- parso: 0.8.4
- pexpect: 4.9.0
- pillow: 10.4.0
- pip: 24.2
- platformdirs: 3.11.0
- plotly: 5.24.1
- polars: 1.9.0
- prometheus-client: 0.21.0
- prompt-toolkit: 3.0.48
- protobuf: 5.28.2
- psutil: 6.0.0
- ptyprocess: 0.7.0
- pure-eval: 0.2.3
- pyarrow: 17.0.0
- pycparser: 2.22
- pydantic: 2.9.2
- pydantic-core: 2.23.4
- pygments: 2.18.0
- pyparsing: 3.1.4
- pyperclip: 1.9.0
- python-dateutil: 2.9.0.post0
- python-json-logger: 2.0.7
- pytorch-lightning: 2.4.0
- pytz: 2024.2
- pyyaml: 6.0.2
- pyzmq: 26.2.0
- referencing: 0.35.1
- regex: 2024.9.11
- requests: 2.32.3
- rfc3339-validator: 0.1.4
- rfc3986-validator: 0.1.1
- rich: 13.9.2
- rpds-py: 0.20.0
- safetensors: 0.4.5
- scikit-learn: 1.5.2
- scipy: 1.14.1
- seaborn: 0.13.2
- send2trash: 1.8.3
- sentry-sdk: 2.15.0
- setproctitle: 1.3.3
- setuptools: 75.1.0
- six: 1.16.0
- sixelcrop: 0.1.8
- smmap: 5.0.1
- sniffio: 1.3.1
- soupsieve: 2.6
- stack-data: 0.6.3
- sympy: 1.13.3
- tenacity: 9.0.0
- tensorboardx: 2.6.2.2
- terminado: 0.18.1
- threadpoolctl: 3.5.0
- timg: 1.1.6
- tinycss2: 1.3.0
- tokenizers: 0.20.1
- tomli: 2.0.1
- torch: 2.4.1
- torchaudio: 2.4.1
- torchmetrics: 1.4.2
- torchvision: 0.19.1
- tornado: 6.4.1
- tqdm: 4.66.5
- traitlets: 5.14.3
- transformers: 4.45.2
- typeguard: 4.3.0
- types-python-dateutil: 2.9.0.20241003
- types-pytz: 2024.2.0.20241003
- typeshed-client: 2.7.0
- typing-extensions: 4.12.2
- tzdata: 2024.2
- uc-micro-py: 1.0.3
- universal-pathlib: 0.2.5
- uri-template: 1.3.0
- urllib3: 2.2.3
- wandb: 0.18.3
- wcwidth: 0.2.13
- webcolors: 24.8.0
- webencodings: 0.5.1
- websocket-client: 1.8.0
- wheel: 0.44.0
- widgetsnbextension: 4.0.13
- xxhash: 3.5.0
- yarl: 1.13.1
- zipp: 3.19.2
* System:
- OS: Darwin
- architecture:
- 64bit
-
- processor: arm
- python: 3.12.6
- release: 24.0.0
- version: Darwin Kernel Version 24.0.0: Tue Sep 24 23:39:07 PDT 2024; root:xnu-11215.1.12~1/RELEASE_ARM64_T6000
</details>
### More info
_No response_ | open | 2024-10-22T13:42:32Z | 2024-10-22T14:14:02Z | https://github.com/Lightning-AI/pytorch-lightning/issues/20356 | [
"bug",
"needs triage",
"ver: 2.4.x"
] | saiden89 | 0 |
httpie/cli | python | 1,480 | I would like the option to disable the DNS Cache and do name resolution on every request | ## Checklist
- [ ] I've searched for similar feature requests.
---
## Enhancement request
…
---
## Problem it solves
E.g. “I'm always frustrated when […]”, “I’m trying to do […] so that […]”.
---
## Additional information, screenshots, or code examples
…
| open | 2023-02-16T00:34:13Z | 2023-02-16T00:34:13Z | https://github.com/httpie/cli/issues/1480 | [
"enhancement",
"new"
] | kahirokunn | 0 |
hankcs/HanLP | nlp | 750 | "来张北京的车票" is segmented as "张北" "京" | <!--
The notes and the version number are required; otherwise the issue will not be answered. If you want a quick reply, please fill in the template carefully. Thanks for your cooperation.
-->
## Notes
Please confirm the following:
* I have carefully read the documents below and did not find an answer in any of them:
  - [Home page documentation](https://github.com/hankcs/HanLP)
  - [wiki](https://github.com/hankcs/HanLP/wiki)
  - [FAQ](https://github.com/hankcs/HanLP/wiki/FAQ)
* I have searched for my question with [Google](https://www.google.com/#newwindow=1&q=HanLP) and the [issue search](https://github.com/hankcs/HanLP/issues) and did not find an answer either.
* I understand that the open-source community is a voluntary community of enthusiasts and assumes no responsibility or obligation. I will speak politely and thank everyone who helps me.
* [X] I put an x in these brackets to confirm the items above.
## Version
<!-- For release builds, state the jar file name without the extension; for the GitHub repository version, state whether it is the master or portable branch -->
portable build
The latest version is: v1.5.3
The version I am using is: v1.5.3
<!-- The items above are required; feel free below -->
## My question
Hello, I have been using your tokenizer in experiments for the past six months and it works very well. But today I found that for "来张北京的车票" ("give me a ticket to Beijing"), none of the segmenters in the package can produce "北京" (Beijing); they basically all segment it as "张北" and "京". Could you advise how to solve this problem? I have already added "北京" and "来张" to the custom dictionary, but it has no effect. Thank you!
## Reproducing the problem
I did not modify any code; I called these segmenters directly.
### Steps
### Expected output
<!-- What correct result do you expect? -->
```
[来张,北京,的,车票]
```
(part-of-speech tags omitted)
### Actual output
```
[来张北京/nr, 的/ude1, 车票/n]
[来张北京/nr, 的/ude1, 车票/n]
[来张北京/nr, 的/ude1, 车票/n]
[来/null, 张北/null, 京/null, 的/null, 车票/null]
```
"improvement"
] | whynogo | 1 |
microsoft/unilm | nlp | 807 | Meet a StopIteration when continue training infoxlm from xlmr | I am trying to continue training InfoXLM from XLM-R on my own dataset.
After initializing the conda environment and preparing the training data, I use the following bash command to train, but it throws a StopIteration error.
The bash command I used is:
```bash
python src-infoxlm/train.py ${MLM_DATA_DIR} \
  --task infoxlm --criterion xlco \
  --tlm_data ${TLM_DATA_DIR} \
  --xlco_data ${XLCO_DATA_DIR} \
  --arch infoxlm_base --sample-break-mode complete --tokens-per-sample 512 \
  --optimizer adam --adam-betas '(0.9,0.98)' --adam-eps 1e-6 --clip-norm 1.0 \
  --lr-scheduler polynomial_decay --lr 0.0002 --warmup-updates 10000 \
  --total-num-update 200000 --max-update 200000 \
  --dropout 0.0 --attention-dropout 0.0 --weight-decay 0.01 \
  --max-sentences 8 --update-freq 8 \
  --log-format simple --log-interval 1 --disable-validation \
  --save-interval-updates 10000 --no-epoch-checkpoints \
  --seed 1 \
  --save-dir ${SAVE_DIR}/ \
  --tensorboard-logdir ${SAVE_DIR}/tb-log \
  --roberta-model-path $HOMEPATH/xlmr.base/model.pt \
  --num-workers 4 --ddp-backend=c10d --distributed-no-spawn \
  --xlco_layer 8 --xlco_queue_size 256 --xlco_lambda 1.0 \
  --xlco_momentum constant,0.9999 --use_proj
```
akfamily/akshare | data-science | 5,798 | AKShare interface problem report | import akshare as ak
# Note: in the data returned by this interface, only the most recent trading day has an opening price; the opening price for other days is 0
stock_zh_a_hist_min_em_df = ak.stock_zh_a_hist_min_em(symbol="000001", start_date="2025-03-07 09:30:00", end_date="2024-03-07 15:00:00", period="1", adjust="")
print(stock_zh_a_hist_min_em_df)
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
Cell In[7], line 4
1 import akshare as ak
3 # 注意:该接口返回的数据只有最近一个交易日的有开盘价,其他日期开盘价为 0
----> 4 stock_zh_a_hist_min_em_df = ak.stock_zh_a_hist_min_em(symbol="000001", start_date="2025-03-07 09:30:00", end_date="2024-03-07 15:00:00", period="1", adjust="")
5 print(stock_zh_a_hist_min_em_df)
File ~\AppData\Local\Programs\Python\Python39\lib\site-packages\akshare\stock_feature\stock_hist_em.py:1141, in stock_zh_a_hist_min_em(symbol, start_date, end_date, period, adjust)
1133 if period == "1":
1134 url = "https://push2his.eastmoney.com/api/qt/stock/trends2/get"
1135 params = {
1136 "fields1": "f1,f2,f3,f4,f5,f6,f7,f8,f9,f10,f11,f12,f13",
1137 "fields2": "f51,f52,f53,f54,f55,f56,f57,f58",
1138 "ut": "7eea3edcaed734bea9cbfc24409ed989",
1139 "ndays": "5",
1140 "iscr": "0",
-> 1141 "secid": f"{code_id_dict[symbol]}.{symbol}",
1142 "_": "1623766962675",
1143 }
1144 r = requests.get(url, timeout=15, params=params)
1145 data_json = r.json()
KeyError: '000001' | closed | 2025-03-08T10:41:26Z | 2025-03-09T10:56:40Z | https://github.com/akfamily/akshare/issues/5798 | [
"bug"
] | hifigecko | 1 |
Farama-Foundation/PettingZoo | api | 541 | $400 bounty for fixing and learning near optimal policy with Stable Baselines 3 in Waterworld environment | Hey,
If anyone is able to provide me Stable Baselines 3 based learning code that can learn near optimal policies (e.g. solve) the [Waterworld](https://www.pettingzoo.ml/sisl/waterworld) environment once it's fixed enough to be a reasonable environment that learning works in, I will pay you a bounty of $400. Example SB3 code for similar pettingzoo environments is available [here](https://towardsdatascience.com/multi-agent-deep-reinforcement-learning-in-15-lines-of-code-using-pettingzoo-e0b963c0820b) and [here](https://github.com/jkterry1/Butterfly-Baselines). If you iteratively fix the environment and learning code enough that hyperparameter tuning appears to be needed, I can run automated hyperparameter tuning code for you in between debugging stages if you need it and talk to me.
A few notes:
-The learning code has to generally work across multiple runs, not just one seed
-I'm the final ruler if there are any disputes about the terms of this bounty (e.g. I reserve the right to split the bounty between two people if this situation warrants this or god knows what else may come up)
-Doing this may also require minor changes/fixes to to SuperSuit, or to SB3 itself
-The list of currently known bugs to explore and design failures for waterworld can be found here (https://github.com/Farama-Foundation/PettingZoo/issues/520), along with thoughts on what general changes should be made
-If you currently work for me you are not eligible for the prize
-The origin of this bounty is that Waterworld as an environment clearly needs a lot of iterations of fixes and learning to become a fully working and useful environment per the issue above, and the motivation for creating the bounty for this is that I don't have the time to do this right now. If this works out I plan to create similar bounties for other PettingZoo environments (KAZ, Prospector and the MAgent environments), which would have different rules and very different technical challenges in completing them (e.g. they're hard to learn, not buggy). Waterworld has seemingly remained in this buggy state despite how much PettingZoo has been used because it's not a profoundly interesting environment on its own, it's only useful if you're trying to benchmark something across a huge set of cooperative environments (which is why I started using it and ran into all these issues). However, I do think that it still has enough value to be worth fixing
-If you want to try this you don't need to contact me or anything, you can just start writing code
-Feel free to leave a comment here if you have any questions | closed | 2021-11-14T03:41:02Z | 2021-12-30T22:34:03Z | https://github.com/Farama-Foundation/PettingZoo/issues/541 | [] | jkterry1 | 6 |
LAION-AI/Open-Assistant | python | 3,429 | Download links in sft training folder | Please add links to, or citations for, the open-source data used in SFT training. | closed | 2023-06-14T06:04:57Z | 2023-06-14T08:14:46Z | https://github.com/LAION-AI/Open-Assistant/issues/3429 | [] | lucasjinreal | 1 |
wandb/wandb | data-science | 9,549 | [Feature]: Support prefix glob for `Run.define_metric` | ### Description
I name all of my metrics as `metric_name/train/batch` or `metric_name/valid/epoch`, and I want to configure a default x-axis like `num_examples/train/batch` using [`Run.define_metric`](https://docs.wandb.ai/ref/python/run/#define_metric) (i.e. how many examples has the model seen up to that point, so that I can directly compare runs with different datasets, world sizes, batch sizes, etc.).
But currently, only suffix glob matching is supported:
https://github.com/wandb/wandb/blob/55da1b542b2f501f216f82e6730e33fc50d721d0/wandb/sdk/wandb_run.py#L2752-L2756
### Suggested Solution
I think prefix glob matching wouldn't be any different from suffix glob matching. So just update the conditions for defining a valid glob?
https://github.com/wandb/wandb/blob/55da1b542b2f501f216f82e6730e33fc50d721d0/core/internal/runmetric/runmetric.go#L189-L203
https://github.com/wandb/wandb/blob/55da1b542b2f501f216f82e6730e33fc50d721d0/wandb/sdk/internal/handler.py#L442-L456
https://github.com/wandb/wandb/blob/55da1b542b2f501f216f82e6730e33fc50d721d0/wandb/sdk/wandb_run.py#L2752-L2756
https://github.com/wandb/wandb/blob/55da1b542b2f501f216f82e6730e33fc50d721d0/wandb/sdk/wandb_run.py#L1981-L2058
https://github.com/wandb/wandb/blob/55da1b542b2f501f216f82e6730e33fc50d721d0/wandb/sdk/wandb_metric.py#L77-L80
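The matching itself generalizes trivially; a sketch of a matcher that handles both a leading and a trailing `*` wildcard, illustrative only and not wandb's implementation:

```python
def glob_matches(glob: str, name: str) -> bool:
    """Support a single leading or trailing '*' wildcard."""
    if glob.startswith("*"):
        return name.endswith(glob[1:])
    if glob.endswith("*"):
        return name.startswith(glob[:-1])
    return glob == name

print(glob_matches("*/train/batch", "loss/train/batch"))           # → True
print(glob_matches("num_examples/*", "num_examples/train/batch"))  # → True
print(glob_matches("num_examples/*", "loss/train/batch"))          # → False
```

With this, `define_metric("*/train/batch", step_metric="num_examples/train/batch")` would become expressible (that call shape is the desired API here, not a currently supported one).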
| open | 2025-03-03T05:34:57Z | 2025-03-03T17:45:49Z | https://github.com/wandb/wandb/issues/9549 | [
"ty:feature"
] | ringohoffman | 1 |
erdewit/ib_insync | asyncio | 405 | News_api | Hey, I have subscribed to the Dow Jones API but am unable to fetch tick data from it. If anyone has code for the Interactive Brokers news API, please let me know. | closed | 2021-10-16T18:14:31Z | 2021-11-04T19:54:24Z | https://github.com/erdewit/ib_insync/issues/405 | [] | sudhanshu8833 | 3 |
pyqtgraph/pyqtgraph | numpy | 2,578 | SpinBox _updateHeight has a type error | PyQtGraph v0.11.0 and v0.12.3, in SpinBox.py line 577 should be
```
self.setMaximumHeight(int(1e6))
```
not
```
self.setMaximumHeight(1e6)
```
The existing line causes a type error when we set `opts['compactHeight'] = False`, because `1e6` is a float, not an int.
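The root cause is just the literal's type: `1e6` is a `float`, while newer Qt bindings require a real `int` where the C++ signature takes one:

```python
height = 1e6
print(type(height).__name__)       # → float
print(type(int(height)).__name__)  # → int
assert int(1e6) == 1_000_000       # wrapping in int() is lossless here
```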
Currently my workaround is to keep `opts['compactHeight'] = True` but then I have to manually set the height of the number box to something more reasonable or else it looks janky and squished (a bug that was submitted some years ago).
Thanks so much! | closed | 2023-01-08T01:25:33Z | 2023-01-08T02:23:29Z | https://github.com/pyqtgraph/pyqtgraph/issues/2578 | [
"good first issue"
] | jaxankey | 1 |
Miserlou/Zappa | flask | 1,377 | support AWS Lex Bot events | ## Context
Currently Zappa supports executing Django functions in response to [AWS events](https://github.com/Miserlou/Zappa#executing-in-response-to-aws-events). It doesn't recognise Lex bot events, since it explicitly checks for `'Records'` inside the request. I want to organise all Lex bot functions inside the Django app itself, so that I will have access to the database and other resources. [Here](https://docs.aws.amazon.com/lambda/latest/dg/eventsources.html#eventsources-lex) is a sample Lex event source format
## Possible Fix
Add a new section to the `events` settings where we can configure the intent's hook.
```js
"events": [
{
// The function to execute
"function": "mailer.tasks.send_emails",
// When to execute it (in cron or rate format)
"expression": "rate(5 minutes)"
},
{
"function": "lexbot.handlers.book_appointment.handler",
"event_source": {
// intent's ARN : arn:aws:lex:region:accountId:
"arn": "arn:aws:lex:<region>:<account-id>:intent:<intent-name>:$LATEST",
// a list of configured invocations. possible values are from [FulfillmentCodeHook , DialogCodeHook]
"events": [
"DialogCodeHook"
]
}
}
],
```
I will create an MR with support for this. It would not require many changes since there is similar invocation code inside the handler, but we should arrive at a definitive way to get the function configuration from zappa_settings.json
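For reference, distinguishing a Lex invocation from the Records-style events the handler currently checks for could look like this sketch; the field names follow the Lex event format linked above, and this is illustrative, not Zappa's actual code:

```python
def classify_event(event: dict) -> str:
    """Route an incoming Lambda event to the right handler type."""
    if "Records" in event:
        return "aws_event"  # S3 / SNS / DynamoDB style events
    if "currentIntent" in event and "invocationSource" in event:
        return "lex_bot"    # Lex sets invocationSource to DialogCodeHook / FulfillmentCodeHook
    return "http"

lex_event = {
    "currentIntent": {"name": "BookAppointment", "slots": {}},
    "invocationSource": "DialogCodeHook",
    "bot": {"name": "AppointmentBot"},
}
print(classify_event(lex_event))        # → lex_bot
print(classify_event({"Records": []}))  # → aws_event
```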
## Your Environment
* Zappa version used: 0.45.1
* Operating System and Python version: Python 3.6
| closed | 2018-02-06T06:22:35Z | 2018-02-08T18:27:34Z | https://github.com/Miserlou/Zappa/issues/1377 | [] | jnoortheen | 0 |
mwaskom/seaborn | pandas | 3,524 | how can i use despine in seaborn 0.13 | I want to use the despine function in seaborn 0.13 to remove the top and right axis spines. How should I change my code? Thank you!
```python
import seaborn.objects as so
import seaborn as sns
from seaborn import axes_style, plotting_context
from seaborn import despine

so.Plot.config.theme.update(axes_style("ticks") | plotting_context('paper'))
data = sns.load_dataset('penguins')
sns.despine()
((so.Plot(data, x="bill_length_mm", y="bill_depth_mm").layout(size=(3, 3))
  .add(so.Dot(), color="species").label(x="a", y="b", title="c")
  .add(so.Line(color="black"), so.PolyFit(), y="bill_depth_mm", label="depth"))
 .save(r'E:\下载\1.svg'))
```

| closed | 2023-10-18T09:14:18Z | 2023-10-18T11:12:22Z | https://github.com/mwaskom/seaborn/issues/3524 | [] | z626093820 | 1 |
mherrmann/helium | web-scraping | 31 | Set chromedriver path in start_chrome | It would be nice if you could set the path to chromedriver in start_chrome, similar to the way you can pass options to it, for use in a container. (I spent most of the day yesterday trying to figure out why selenium can't find chromedriver even if it's in the path, and all I found was that it happens and people work around it, apparently.)
As far as I can tell, there's no way to pass the path in options, or am I missing something? One line is much better than seven lines to start chrome.... | closed | 2020-06-24T12:30:45Z | 2020-09-14T12:16:50Z | https://github.com/mherrmann/helium/issues/31 | [] | shalonwoodchl | 3 |
plotly/dash-table | dash | 659 | Built-in heatmap-style cell background colours | > May 1, 2020 Update by @chriddyp - This is now possible with conditional formatting. See https://dash.plotly.com/datatable/conditional-formatting & https://community.plotly.com/t/datatable-conditional-formatting-documentation/38763.
> We're keeping this issue open for built-in heatmap formatting that doesn't require code-intensive conditional formatting constructs.
I use a combination of pandas and dash-table, as I guess many do.
pandas can output tables to HTML. In addition, they've made it possible that you can pass in colourmaps, which, in combination with a cell's numerical content, can be used to make a simple heatmap. Example from [the docs](https://pandas.pydata.org/pandas-docs/stable/user_guide/style.html#Builtin-styles)

I believe that something like this would already be possible using datatable, by comparing cell contents to a colourmap and using these to derive a background colour. (I wonder if anyone has a recipe for this?)
Even so, I think this would make a nice feature for the datatable, since you could in many ways copy the implementation inside pandas, `background_gradient` or `cmap` or whatever during instantiation of the datatable. If not, a recipe using conditional styling would still be great.
There was support for heatmaps [here](https://github.com/plotly/dash-table-experiments/issues/7), but the issue ended after conditional formatting was added.
| open | 2019-12-06T15:22:30Z | 2023-02-02T07:06:45Z | https://github.com/plotly/dash-table/issues/659 | [
"dash-type-enhancement"
] | interrogator | 5 |
bmoscon/cryptofeed | asyncio | 519 | add support huobi usdt perpetual contract. | Add support for Huobi USDT perpetual contract | open | 2021-06-13T13:30:51Z | 2021-06-14T22:20:26Z | https://github.com/bmoscon/cryptofeed/issues/519 | [
"Feature Request"
] | yfjelley | 1 |
sinaptik-ai/pandas-ai | pandas | 806 | Error with Custom prompt | ### System Info
Python 3.11.3
Pandasai 1.5.5
### 🐛 Describe the bug
Hi @gventuri I am trying to use custom prompt for python code generation. I am using agents and while looking at the log file, i can see that the prompt that was uses is the default prompt. Here is the code to replicate the issue and attached is the log file
```
import pandas as pd
import random
from pandasai import SmartDataframe
from pandasai.llm import AzureOpenAI
import os
from dotenv import load_dotenv
load_dotenv()
from pandasai.prompts import AbstractPrompt
from pandasai.helpers.logger import Logger
from pandasai import Agent
logger_obj = Logger(save_logs=True)
model = AzureOpenAI(
api_token=os.getenv('OPENAI_API_KEY'),
azure_endpoint= os.getenv('OPENAI_API_BASE'),
api_version=os.getenv('OPENAI_API_VERSION'),
deployment_name="chatgpt4"
)
months = ["January", "February", "March", "April", "May", "June", "July", "August", "September", "October", "November", "December"]
countries = ["USA", "Canada", "Mexico", "Brazil", "Germany", "France", "China", "India", "Japan", "Australia"]
carriers = ["FedEx", "UPS", "DHL", "USPS"]
modes_of_transport = ["Air", "Sea", "Road", "Rail"]
data = []
for _ in range(100):
month = random.choice(months)
country = random.choice(countries)
carrier = random.choice(carriers)
mode_of_transport = random.choice(modes_of_transport)
units = random.randint(1, 100)
amount = random.randint(1000, 10000)
data.append([month, country, carrier, mode_of_transport, units, amount])
orig_df = pd.DataFrame(data, columns=["Month", "Country", "Carrier", "mot", "Units", "Amount"])
class MyCustomPrompt(AbstractPrompt):
def template(self):
return """
You are given a dataframe with distinct value in each of the dimension columns of the dataframe
Country {Country}
Carrier {Carrier}
mot {mot}
{conversation}
"""
def setup(self, **kwargs):
self.set_vars(kwargs)
df = SmartDataframe(df = orig_df,
config = {
"custom_prompts": {
"generate_python_code": MyCustomPrompt(
Country = orig_df['Country'].unique(),
Carrier = orig_df['Carrier'].unique(),
mot = orig_df['mot'].unique()
)
},
"enable_cache" : False
})
agent = Agent([df], config={"llm": model}, memory_size=20, logger = logger_obj)
# Chat with the agent
response = agent.chat("Please provide insights on which carrier should be preferred to ship to Germany")
print(response)
```
**Below is the log from the log file generated**
2023-12-08 11:27:23 [INFO] Question: Please provide insights on which carrier should be preferred to ship to Germany
2023-12-08 11:27:24 [INFO] Running PandasAI with azure-openai LLM...
2023-12-08 11:27:24 [INFO] Prompt ID: 84a0e3fa-7099-4342-b37a-bc7bf495aad4
2023-12-08 11:27:24 [INFO] Executing Step 0: CacheLookup
2023-12-08 11:27:24 [INFO] Executing Step 1: PromptGeneration
2023-12-08 11:27:24 [INFO] Using prompt: <dataframe>
dfs[0]:100x6
Month,Country,Carrier,mot,Units,Amount
April,Brazil,USPS,Road,19,4461
February,Mexico,DHL,Rail,9,5098
April,India,DHL,Rail,59,3040
</dataframe>
Update this initial code:
```python
# TODO: import the required dependencies
import pandas as pd
# Write code here
# Declare result var: type (possible values "string", "number", "dataframe", "plot"). Examples: { "type": "string", "value": f"The highest salary is {highest_salary}." } or { "type": "number", "value": 125 } or { "type": "dataframe", "value": pd.DataFrame({...}) } or { "type": "plot", "value": "temp_chart.png" }
```
Q: Please provide insights on which carrier should be preferred to ship to Germany
Variable `dfs: list[pd.DataFrame]` is already declared.
At the end, declare "result" var dict: type (possible values "string", "number", "dataframe", "plot"). Examples: { "type": "string", "value": f"The highest salary is {highest_salary}." } or { "type": "number", "value": 125 } or { "type": "dataframe", "value": pd.DataFrame({...}) } or { "type": "plot", "value": "temp_chart.png" }
Generate python code and return full updated code:
2023-12-08 11:27:24 [INFO] Executing Step 2: CodeGenerator
2023-12-08 11:27:34 [INFO] HTTP Request: POST https://openaiservice-dev.openai.azure.com//openai/deployments/chatgpt4/chat/completions?api-version=2023-07-01-preview "HTTP/1.1 200 OK"
2023-12-08 11:27:34 [INFO] Code generated:
```
# TODO: import the required dependencies
import pandas as pd
# Write code here
df = dfs[0]
germany_df = df[df['Country'] == 'Germany']
carrier_counts = germany_df['Carrier'].value_counts()
preferred_carrier = carrier_counts.idxmax()
# Declare result var: type (possible values "string", "number", "dataframe", "plot"). Examples: { "type": "string", "value": f"The highest salary is {highest_salary}." } or { "type": "number", "value": 125 } or { "type": "dataframe", "value": pd.DataFrame({...}) } or { "type": "plot", "value": "temp_chart.png" }
result = { "type": "string", "value": f"The preferred carrier to ship to Germany is {preferred_carrier}." }
```
2023-12-08 11:27:34 [INFO] Executing Step 3: CachePopulation
2023-12-08 11:27:34 [INFO] Executing Step 4: CodeExecution
2023-12-08 11:27:34 [INFO] Saving charts to C:\Users\navneetkumar\OneDrive - Microsoft\MDOCopilot\AutoGen Test\exports\charts\temp_chart.png
2023-12-08 11:27:34 [INFO]
Code running:
```
df = dfs[0]
germany_df = df[df['Country'] == 'Germany']
carrier_counts = germany_df['Carrier'].value_counts()
preferred_carrier = carrier_counts.idxmax()
result = {'type': 'string', 'value': f'The preferred carrier to ship to Germany is {preferred_carrier}.'}
```
2023-12-08 11:27:34 [INFO] Executing Step 5: ResultValidation
2023-12-08 11:27:34 [INFO] Answer: {'type': 'string', 'value': 'The preferred carrier to ship to Germany is USPS.'}
2023-12-08 11:27:34 [INFO] Executed in: 11.175710678100586s
2023-12-08 11:27:34 [INFO] Executing Step 6: ResultParsing
| closed | 2023-12-08T06:10:06Z | 2024-06-01T00:20:53Z | https://github.com/sinaptik-ai/pandas-ai/issues/806 | [] | kumarnavn | 0 |
Kludex/mangum | asyncio | 202 | Allow Mangum to remove certain aws reponse headers from api gateway response | Problem:
Currently `Mangum` injects the following headers in the api gateway response for an aws lambda integration
```
x-amz-apigw-id
x-amzn-requestid
x-amzn-trace-id
```
This exposes additional information that the client doesn't need to know.
Proposal:
Allow Mangum to take optional parameter say `exclude_header_keys=[]` at the application mounting step.
An Example of that would look like.
```python
from fastapi import FastAPI
app = FastAPI()
handler = Mangum(app, exclude_header_keys=[])
```
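The filtering itself would be a small step applied just before the response dict is returned. A sketch, hypothetical and not Mangum's internals, matching case-insensitively since HTTP header names are case-insensitive:

```python
def exclude_headers(headers: dict, exclude_keys: list) -> dict:
    """Drop the given header names (case-insensitive) from a response."""
    drop = {k.lower() for k in exclude_keys}
    return {k: v for k, v in headers.items() if k.lower() not in drop}

resp = {
    "Content-Type": "application/json",
    "x-amzn-RequestId": "abc",
    "x-amzn-trace-id": "1-2-3",
}
print(exclude_headers(resp, ["x-amzn-requestid", "x-amzn-trace-id"]))
# → {'Content-Type': 'application/json'}
```

Note this only covers headers present in the Lambda's own response payload; headers injected by API Gateway after the function returns may need a gateway-side setting instead.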
| closed | 2021-09-30T17:53:01Z | 2022-11-24T09:44:13Z | https://github.com/Kludex/mangum/issues/202 | [
"improvement"
] | amieka | 2 |
hack4impact/flask-base | sqlalchemy | 15 | Find easier way to create first admin | See discussion at https://github.com/hack4impact/women-veterans-rock/pull/1
| closed | 2015-10-21T03:14:59Z | 2016-07-07T17:32:52Z | https://github.com/hack4impact/flask-base/issues/15 | [
"enhancement"
] | sandlerben | 3 |
jupyter-book/jupyter-book | jupyter | 1,764 | Issue on page /LSA.html | open | 2022-06-22T16:08:50Z | 2022-06-22T16:12:54Z | https://github.com/jupyter-book/jupyter-book/issues/1764 | [] | Romali-040 | 1 | |
jazzband/django-oauth-toolkit | django | 1,254 | Using JWT for access and refresh tokens | <!-- What is your question? -->
Many standard implementations of the OIDC specs use JWTs as the format for access and refresh tokens. JWTs also have the benefit of not relying on token persistence (in a database) for the token introspection process.
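To illustrate the statelessness point, here is a stdlib-only sketch of JWT-style HS256 signing and verification (not a full JWT implementation; `SECRET` and the function names are illustrative). Verification only recomputes a signature, so no database lookup is involved:

```python
import base64
import hashlib
import hmac
import json

SECRET = b"demo-secret"  # illustrative only; use a managed key in practice

def _b64(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def issue_token(claims: dict) -> str:
    # JWT shape: base64url(header).base64url(payload).base64url(signature)
    header = _b64(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = _b64(json.dumps(claims).encode())
    signing_input = f"{header}.{payload}".encode()
    signature = _b64(hmac.new(SECRET, signing_input, hashlib.sha256).digest())
    return f"{header}.{payload}.{signature}"

def verify_token(token: str) -> dict:
    # Stateless check: recompute the signature; no database lookup needed.
    header, payload, signature = token.split(".")
    signing_input = f"{header}.{payload}".encode()
    expected = _b64(hmac.new(SECRET, signing_input, hashlib.sha256).digest())
    if not hmac.compare_digest(signature, expected):
        raise ValueError("invalid signature")
    padding = "=" * (-len(payload) % 4)
    return json.loads(base64.urlsafe_b64decode(payload + padding))
```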
Is there a plan for this feature in any future milestone? | closed | 2023-03-08T06:46:11Z | 2023-10-17T16:59:48Z | https://github.com/jazzband/django-oauth-toolkit/issues/1254 | [
"question"
] | mainakchhari | 7 |
yihong0618/running_page | data-visualization | 659 | v | closed | 2024-04-16T12:17:36Z | 2024-05-06T14:03:59Z | https://github.com/yihong0618/running_page/issues/659 | [] | Jeffg121 | 1 | |
plotly/dash-table | dash | 948 | Save input text in Input field inside Dash Editable Datatable without pressing 'Enter' key? | Hi All,
On entering a value in an input field inside an editable datatable, the user must press the 'Enter' key to save it. If the user instead simply clicks outside of the cell, the value provided is not saved and the previous state is shown.
Is there a way to save the input provided when focus moves out, without pressing 'Enter'? | open | 2022-05-18T21:34:33Z | 2022-06-03T22:30:34Z | https://github.com/plotly/dash-table/issues/948 | [] | AnSohal | 3 |
modelscope/modelscope | nlp | 425 | ASR inference problem | OS: [e.g. linux]
Python/C++ Version:3.7.16
Package Version:pytorch=1.11.0、torchaudio=0.11.0、modelscope=1.7.1、funasr=0.7.1
Model:speech_paraformer-large-vad-punc_asr_nat-zh-cn-16k-common-vocab8404-pytorch
Command:
Details: ASR transcription
Error log:
Traceback (most recent call last):
File "/home/xiaoguo/.conda/envs/modelscope/lib/python3.7/site-packages/modelscope/utils/registry.py", line 212, in build_from_cfg
return obj_cls(**args)
File "/home/xiaoguo/.conda/envs/modelscope/lib/python3.7/site-packages/modelscope/pipelines/audio/asr_inference_pipeline.py", line 163, in __init__
**kwargs,
File "/home/xiaoguo/.conda/envs/modelscope/lib/python3.7/site-packages/funasr/bin/asr_inference_launch.py", line 1632, in inference_launch
return inference_paraformer_vad_punc(**kwargs)
File "/home/xiaoguo/.conda/envs/modelscope/lib/python3.7/site-packages/funasr/bin/asr_inference_launch.py", line 520, in inference_paraformer_vad_punc
speech2vadsegment = Speech2VadSegment(**speech2vadsegment_kwargs)
File "/home/xiaoguo/.conda/envs/modelscope/lib/python3.7/site-packages/funasr/bin/vad_infer.py", line 47, in __init__
vad_infer_config, vad_model_file, None, device, task_name="vad"
File "/home/xiaoguo/.conda/envs/modelscope/lib/python3.7/site-packages/funasr/build_utils/build_model_from_file.py", line 76, in build_model_from_file
model.encoder.load_state_dict(model_dict)
File "/home/xiaoguo/.conda/envs/modelscope/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1498, in load_state_dict
self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for FSMN:
While copying the parameter named "fsmn.0.fsmn_block.conv_left.weight", whose dimensions in the model are torch.Size([128, 1, 20, 1]) and whose dimensions in the checkpoint are torch.Size([128, 1, 20, 1]), an exception occurred : ('CUDA error: no kernel image is available for execution on the device\nCUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.\nFor debugging consider passing CUDA_LAUNCH_BLOCKING=1.',).
While copying the parameter named "fsmn.1.fsmn_block.conv_left.weight", whose dimensions in the model are torch.Size([128, 1, 20, 1]) and whose dimensions in the checkpoint are torch.Size([128, 1, 20, 1]), an exception occurred : ('CUDA error: no kernel image is available for execution on the device\nCUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.\nFor debugging consider passing CUDA_LAUNCH_BLOCKING=1.',).
While copying the parameter named "fsmn.2.fsmn_block.conv_left.weight", whose dimensions in the model are torch.Size([128, 1, 20, 1]) and whose dimensions in the checkpoint are torch.Size([128, 1, 20, 1]), an exception occurred : ('CUDA error: no kernel image is available for execution on the device\nCUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.\nFor debugging consider passing CUDA_LAUNCH_BLOCKING=1.',).
While copying the parameter named "fsmn.3.fsmn_block.conv_left.weight", whose dimensions in the model are torch.Size([128, 1, 20, 1]) and whose dimensions in the checkpoint are torch.Size([128, 1, 20, 1]), an exception occurred : ('CUDA error: no kernel image is available for execution on the device\nCUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.\nFor debugging consider passing CUDA_LAUNCH_BLOCKING=1.',).
| closed | 2023-07-28T03:29:27Z | 2023-08-09T01:47:39Z | https://github.com/modelscope/modelscope/issues/425 | [] | Stonexiao | 2 |
localstack/localstack | python | 11,786 | bug: S3 on Resource Browser doesn't show Object folder named '/' | I've noticed recently that the LocalStack Resource Browser web app does not show object folders named '/'.
**Actual**
- LocalStack Resource Browser

**Expected**
- awslocal-cli for LocalStack instance

- AWS Console (how it looks when actually using AWS)

| open | 2024-11-05T16:50:45Z | 2024-11-06T15:48:19Z | https://github.com/localstack/localstack/issues/11786 | [
"area: web",
"status: backlog"
] | Dylan-Bon | 1 |
PaddlePaddle/ERNIE | nlp | 585 | Problem re-running pretraining on Baidu AI Studio | In the Baidu environment

I ran the README file

and many bugs appeared.
First, the pretraining-data preparation step requires a paddlepaddle-gpu 1.7 environment plus paddle-propeller==0.3.1dev1 and paddle-ernie, none of which the documentation mentions.
By the time I ran the pretraining step below, the warm-start model parameters, vocabulary, and JSON files it needs had been prepared and placed as shown:

In the end it still errors out:

At first I thought this was a version problem, because the docs show this method attribute only in versions 1.8 and later.
But switching to version 1.8 didn't work either; it seems the

return value shown above is the problem.
How exactly can I pretrain the model myself in the Baidu environment? Sorry to bother you!
| closed | 2020-11-05T06:35:33Z | 2021-01-22T04:10:46Z | https://github.com/PaddlePaddle/ERNIE/issues/585 | [
"wontfix"
] | luming159 | 2 |
CorentinJ/Real-Time-Voice-Cloning | tensorflow | 319 | Short Phrase Workaround? location of specific words in seconds | So as other people have mentioned here, the spectrogram synthesis struggles with short phrases. One workaround is that if you lengthen your text to something that takes 6 seconds or so to say, you can do much better, then manually crop out the filler text. However, has anyone had luck doing that in an automated way? (Without having to introduce a speech to text model...)
**Is there a way to locate the time in seconds that each character of the input string maps to?** That would make scrubbing out the filler text much easier.
Example below.
input: **"this is cool"**

This short phrase gives us a spectrogram that you might be able to tell will sound bad after going through the vocoder. The "washed out gap section" is a giveaway, and even the green sections lack the "grooves" that normal speech would have. After passing it through, none of the words are intelligible at all.
Next, I'm just going to pad this with a bunch of the same word, dog. I timed this using a stopwatch and my own speaking to figure out how many times we needed to say "dog" to get to around 6 seconds.
input: **"this is cool dog dog dog dog dog dog dog dog dog dog dog dog dog dog dog dog dog dog dog dog"**

Much better looking spectrogram and much better results as well. Every word sounds fine. At this point, all I need to do is remove the filler and I'll end up with just the piece I need.

Is it possible to extract the character locations in time from the synthesizer or anything? I'm using the code rather than the GUI, so I can patch in wherever. I could run a speech-to-text model and get word times that way, but it feels like overkill... but I'm open to any approach, really. Just figured someone else had run into a similar issue before. | closed | 2020-04-12T14:37:56Z | 2020-06-01T18:24:53Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/319 | [] | sunnybala | 3 |
kubeflow/katib | scikit-learn | 1,837 | AWS EKS Kubeflow Katib Examples stuck in running (2/3 Ready) | /kind bug
**What steps did you take and what happened:**
I have been installing and uninstalling Kubeflow multiple times now, in different ways, trying to get Katib to run the basic examples. I tried several of the base examples, and all have the same issue. This most recent try was with the newer versions of Kubernetes and Kubeflow.
My error is related to how the worker pods are deployed: they all error out at 2/3 Ready. I consistently get a container terminated due to "Error." Looking into this error, I have the below logs:
This issue seems closest to mine: https://github.com/kubeflow/katib/issues/1258. However, I can't figure out how to implement this fix on AWS, if it really is the issue.
If you don't know the fix, then perhaps you can offer some insight into what this issue actually is? I don't know what the Katib controller is doing or what this certificate is.
My outputs from the commands:
**kubectl port-forward svc/katib-controller -n kubeflow 8080:443**
**wget https://localhost:8080/mutate-experiments --no-check-certificate**
<img width="630" alt="Screen Shot 2022-03-21 at 4 46 00 PM" src="https://user-images.githubusercontent.com/98784768/159380917-31bca120-a0ec-489c-ab08-e38dd9b514b6.png">
`kubectl -n kubeflow-user-example-com logs random-wbddpcq4-9lfgl
Using deprecated annotation `kubectl.kubernetes.io/default-logs-container` in pod/random-wbddpcq4-9lfgl. Please use `kubectl.kubernetes.io/default-container` instead
2022-03-22T18:26:00Z INFO start with arguments Namespace(add_stn=False, batch_size=64, disp_batches=100, dtype='float32', gc_threshold=0.5, gc_type='none', gpus=None, image_shape='1, 28, 28', initializer='default', kv_store='device', load_epoch=None, loss='', lr=0.025957377119816605, lr_factor=0.1, lr_step_epochs='10', macrobatch_size=0, model_prefix=None, mom=0.9, monitor=0, network='mlp', num_classes=10, num_epochs=10, num_examples=60000, num_layers=4, optimizer='ftrl', profile_server_suffix='', profile_worker_suffix='', save_period=1, test_io=0, top_k=0, use_imagenet_data_augmentation=0, warmup_epochs=5, warmup_strategy='linear', wd=0.0001)
2022-03-22T18:26:00Z DEBUG Starting new HTTP connection (1): data.mxnet.io:80
2022-03-22T18:26:00Z DEBUG Starting new HTTP connection (1): data.mxnet.io:80
2022-03-22T18:26:00Z DEBUG Starting new HTTP connection (1): data.mxnet.io:80
2022-03-22T18:26:00Z DEBUG Starting new HTTP connection (1): data.mxnet.io:80
2022-03-22T18:26:00Z DEBUG Starting new HTTP connection (1): data.mxnet.io:80
download failed, retrying, 4 attempts left
download failed, retrying, 3 attempts left
download failed, retrying, 2 attempts left
download failed, retrying, 1 attempt left
Traceback (most recent call last):
File "/usr/local/lib/python3.5/dist-packages/urllib3/connection.py", line 170, in _new_conn
(self._dns_host, self.port), self.timeout, **extra_kw
File "/usr/local/lib/python3.5/dist-packages/urllib3/util/connection.py", line 96, in create_connection
raise err
File "/usr/local/lib/python3.5/dist-packages/urllib3/util/connection.py", line 86, in create_connection
sock.connect(sa)
ConnectionRefusedError: [Errno 111] Connection refused
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.5/dist-packages/urllib3/connectionpool.py", line 706, in urlopen
chunked=chunked,
File "/usr/local/lib/python3.5/dist-packages/urllib3/connectionpool.py", line 394, in _make_request
conn.request(method, url, **httplib_request_kw)
File "/usr/local/lib/python3.5/dist-packages/urllib3/connection.py", line 234, in request
super(HTTPConnection, self).request(method, url, body=body, headers=headers)
File "/usr/lib/python3.5/http/client.py", line 1151, in request
self._send_request(method, url, body, headers)
File "/usr/lib/python3.5/http/client.py", line 1196, in _send_request
self.endheaders(body)
File "/usr/lib/python3.5/http/client.py", line 1147, in endheaders
self._send_output(message_body)
File "/usr/lib/python3.5/http/client.py", line 950, in _send_output
self.send(msg)
File "/usr/lib/python3.5/http/client.py", line 893, in send
self.connect()
File "/usr/local/lib/python3.5/dist-packages/urllib3/connection.py", line 200, in connect
conn = self._new_conn()
File "/usr/local/lib/python3.5/dist-packages/urllib3/connection.py", line 182, in _new_conn
self, "Failed to establish a new connection: %s" % e
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x7fcd15591978>: Failed to establish a new connection: [Errno 111] Connection refused
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.5/dist-packages/requests/adapters.py", line 449, in send
timeout=timeout
File "/usr/local/lib/python3.5/dist-packages/urllib3/connectionpool.py", line 756, in urlopen
method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]
File "/usr/local/lib/python3.5/dist-packages/urllib3/util/retry.py", line 573, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='data.mxnet.io', port=80): Max retries exceeded with url: /data/mnist/train-labels-idx1-ubyte.gz (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fcd15591978>: Failed to establish a new connection: [Errno 111] Connection refused',))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/mxnet-mnist/mnist.py", line 86, in <module>
fit.fit(args, sym, get_mnist_iter)
File "/opt/mxnet-mnist/common/fit.py", line 185, in fit
(train, val) = data_loader(args, kv)
File "/opt/mxnet-mnist/mnist.py", line 44, in get_mnist_iter
mnist = mx.test_utils.get_mnist()
File "/usr/local/lib/python3.5/dist-packages/mxnet/test_utils.py", line 1907, in get_mnist
path+'train-labels-idx1-ubyte.gz', path+'train-images-idx3-ubyte.gz')
File "/usr/local/lib/python3.5/dist-packages/mxnet/test_utils.py", line 1894, in read_data
with gzip.open(mx.test_utils.download(label_url)) as flbl:
File "/usr/local/lib/python3.5/dist-packages/mxnet/test_utils.py", line 1812, in download
raise e
File "/usr/local/lib/python3.5/dist-packages/mxnet/test_utils.py", line 1802, in download
r = requests.get(url, stream=True)
File "/usr/local/lib/python3.5/dist-packages/requests/api.py", line 76, in get
return request('get', url, params=params, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/requests/api.py", line 61, in request
return session.request(method=method, url=url, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/requests/sessions.py", line 542, in request
resp = self.send(prep, **send_kwargs)
File "/usr/local/lib/python3.5/dist-packages/requests/sessions.py", line 655, in send
r = adapter.send(request, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/requests/adapters.py", line 516, in send
raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPConnectionPool(host='data.mxnet.io', port=80): Max retries exceeded with url: /data/mnist/train-labels-idx1-ubyte.gz (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fcd15591978>: Failed to establish a new connection: [Errno 111] Connection refused',))`
**kubectl -n kubeflow-user-example-com get pods**
<img width="572" alt="Screen Shot 2022-03-21 at 4 20 43 PM" src="https://user-images.githubusercontent.com/98784768/159378677-7d170e4b-f283-4dc2-959b-551344b469af.png">
**kubectl -n kubeflow-user-example-com describe pods random-z2gxd48k-pdrwv | grep -B 3 -A 3 error**
<img width="298" alt="Screen Shot 2022-03-21 at 4 24 07 PM" src="https://user-images.githubusercontent.com/98784768/159378979-d8b8de97-a245-428c-8a0c-74eaeb04b227.png">
**kubectl -n kubeflow-user-example-com describe pods random-z2gxd48k-pdrwv | grep -B 3 -A 3 Error**
<img width="722" alt="Screen Shot 2022-03-21 at 4 25 00 PM" src="https://user-images.githubusercontent.com/98784768/159379051-e6dfbd91-022e-4568-8268-bf9c4a35e902.png">
**kubectl -n kubeflow-user-example-com get experiments**
<img width="229" alt="Screen Shot 2022-03-21 at 4 32 29 PM" src="https://user-images.githubusercontent.com/98784768/159379718-92082ef0-38af-4fe6-807a-075f1b02057b.png">
**kubectl -n kubeflow-user-example-com get trials**
<img width="272" alt="Screen Shot 2022-03-21 at 4 33 11 PM" src="https://user-images.githubusercontent.com/98784768/159379791-81310a67-0fb6-4f3f-905d-0d34fa54f44f.png">
**kubeops example**
<img width="388" alt="Screen Shot 2022-03-21 at 4 33 59 PM" src="https://user-images.githubusercontent.com/98784768/159379892-f0dea556-beff-4978-91a4-73408ed9cbf2.png">
**kubectl -n kubeflow-user-example-com logs random-z2gxd48k-pdrwv**
<img width="729" alt="Screen Shot 2022-03-21 at 4 38 51 PM" src="https://user-images.githubusercontent.com/98784768/159380295-f3d2b57d-0b81-40ae-9019-9e03a207f829.png">
**What did you expect to happen:**
Examples to work following the standard documentation or at least to work when I updated them.
**Anything else you would like to add:**
Here are some links to issue I think are related:
I tried upgrading k8 because of this issue: https://github.com/istio/istio/issues/14389
Everything checks out from this issue: https://github.com/kubeflow/katib/issues/1160
**Environment:**
Environment:
- AWS EKS
- k8s 1.21
- I have run 1-5 m5.xlarge instances with no difference depending on resources available
- kubeflow 1.4.1
- Katib: installed via `kustomize build apps/katib/upstream/installs/katib-with-kubeflow | kubectl apply -f -` (see https://github.com/awslabs/kubeflow-manifests/tree/main/docs/deployment/vanilla#central-dashboard)
- Kustomize: {Version:kustomize/v3.9.3 GitCommit:1ae8303bdc9372bc7c15942df6e9cf5d67fdba1a BuildDate:2021-02-07T17:02:13Z GoOs:linux GoArch:amd64}
Install Method:
- https://github.com/awslabs/kubeflow-manifests/tree/main/docs/deployment/vanilla
kubectl version:
Client Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.5", GitCommit:"c285e781331a3785a7f436042c65c5641ce8a9e9", GitTreeState:"clean", BuildDate:"2022-03-16T15:58:47Z", GoVersion:"go1.17.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"21+", GitVersion:"v1.21.5-eks-bc4871b", GitCommit:"5236faf39f1b7a7dabea8df12726f25608131aa9", GitTreeState:"clean", BuildDate:"2021-10-29T23:32:16Z", GoVersion:"go1.16.8", Compiler:"gc", Platform:"linux/amd64"}
WARNING: version difference between client (1.23) and server (1.21) exceeds the supported minor version skew of +/-1
---
<!-- Don't delete this message to encourage users to support your issue! -->
Impacted by this bug? Give it a 👍 We prioritize the issues with the most 👍
| closed | 2022-03-22T00:01:23Z | 2022-03-25T16:54:34Z | https://github.com/kubeflow/katib/issues/1837 | [
"kind/bug"
] | charlescurt | 10 |
roboflow/supervision | deep-learning | 1,562 | Connect Oriented Bounding Box to Metrics | # Connect Oriented Bounding Box to Metrics
> [!TIP]
> [Hacktoberfest](https://hacktoberfest.com/) is calling! Whether it's your first PR or your 50th, you’re helping shape the future of open source. Help us build the most reliable and user-friendly computer vision library out there! 🌱
---
Several new features were recently added to supervision:
* Mean Average Precision (mAP)
* F1 Score
* IoU calculation for Oriented Bounding Boxes
Intersection Over Union (IoU) is the starting point when computing these metrics. It determines which detections are considered true positives. However, [take a look](https://github.com/roboflow/supervision/blob/d6aa72c0f2b158b838145a81ed5995db6a1e9015/supervision/metrics/mean_average_precision.py#L176)! The Oriented Box IoU is not supported yet! Help us add support by using `oriented_box_iou_batch`.
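To see the role IoU plays, here is the matching step in miniature — a generic greedy sketch, not supervision's actual implementation: predictions sorted by confidence claim the best still-unmatched ground truth whose IoU clears the threshold.

```python
def match_detections(iou_matrix, threshold=0.5):
    # iou_matrix[p][g]: IoU between prediction p (sorted by confidence,
    # descending) and ground-truth box g. Returns a true-positive flag
    # per prediction; unmatched predictions count as false positives.
    matched_gt = set()
    true_positive = [False] * len(iou_matrix)
    for pred_idx, row in enumerate(iou_matrix):
        best_gt, best_iou = -1, threshold
        for gt_idx, value in enumerate(row):
            if gt_idx not in matched_gt and value >= best_iou:
                best_gt, best_iou = gt_idx, value
        if best_gt >= 0:
            matched_gt.add(best_gt)
            true_positive[pred_idx] = True
    return true_positive
```

For oriented boxes, the only change needed in the metrics is that the IoU matrix comes from `oriented_box_iou_batch` instead of the axis-aligned variant.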
Helpful links:
* [Contribution guide](https://supervision.roboflow.com/develop/contributing/#how-to-contribute-changes)
* Metrics:
* mAP metric: [docs](https://supervision.roboflow.com/develop/metrics/mean_average_precision/), [code](https://github.com/roboflow/supervision/blob/d6aa72c0f2b158b838145a81ed5995db6a1e9015/supervision/metrics/mean_average_precision.py#L25)
* F1 Score: [docs](https://supervision.roboflow.com/develop/metrics/f1_score/), [code](https://github.com/roboflow/supervision/blob/d6aa72c0f2b158b838145a81ed5995db6a1e9015/supervision/metrics/f1_score.py#L25)
* Oriented box IoU calculation function: [docs](https://supervision.roboflow.com/develop/detection/utils/#supervision.detection.utils.oriented_box_iou_batch), [code](https://github.com/roboflow/supervision/blob/d6aa72c0f2b158b838145a81ed5995db6a1e9015/supervision/detection/utils.py#L143)
* [Supervision Cheatsheet](https://roboflow.github.io/cheatsheet-supervision/)
* [Colab Starter Template](https://colab.research.google.com/drive/1rin7WrS-UvVIe-_Gfxmu-yVslGphOq89#scrollTo=pjmCrNre2g58)
* [Prior metrics test Colab](https://colab.research.google.com/drive/1qSMDDpImc9arTgQv-qvxlTA87KRRegYN) | closed | 2024-10-03T11:50:31Z | 2024-11-01T09:45:43Z | https://github.com/roboflow/supervision/issues/1562 | [
"good first issue",
"hacktoberfest"
] | LinasKo | 20 |
koxudaxi/datamodel-code-generator | pydantic | 2,010 | Remove linters from package dependency | Would it be possible to move the code formatting tools to a dedicated poetry group such as `[tool.poetry.group.dev.dependencies]` or are they required for the package to work?
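For illustration, the kind of move being asked about might look like this (which tools actually belong in the group depends on the linked pyproject lines; the pins are illustrative):

```toml
# Tools only needed for development, excluded from the published package:
[tool.poetry.group.dev.dependencies]
black = ">=22.3.0"
isort = ">=4.3.21"
```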
https://github.com/koxudaxi/datamodel-code-generator/blob/28be37d7c2a0b0bce21b0719ffb732df36ebce74/pyproject.toml#L54-L55 | closed | 2024-06-19T14:43:39Z | 2024-07-04T23:40:37Z | https://github.com/koxudaxi/datamodel-code-generator/issues/2010 | [
"answered"
] | PythonFZ | 1 |
tox-dev/tox | automation | 3,127 | TOX_OVERRIDES for testenv.pass_env are processed inconsistently | ## Issue
After adding TOX_OVERRIDE entries to my projects, releases have started failing because TWINE_PASSWORD goes missing from pass_env.
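For context, a minimal sketch of the kind of configuration involved (illustrative — not my actual project; the env name and commands are made up):

```ini
[testenv:release]
description = upload the built distribution
pass_env =
    TWINE_USERNAME
    TWINE_PASSWORD
commands =
    python -m twine upload dist/*
```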
## Environment
Provide at least:
- OS: macOS, Linux
<details open>
<summary>Output of <code>pip list</code> of the host Python, where <code>tox</code> is installed</summary>
```console
draft @ pipx runpip tox freeze
cachetools==5.3.1
chardet==5.2.0
colorama==0.4.6
distlib==0.3.7
filelock==3.12.3
packaging==23.1
platformdirs==3.10.0
pluggy==1.3.0
pyproject-api==1.6.1
tox==4.11.3
virtualenv==20.24.5
```
</details>
## Output of running tox
<details open>
<summary>Output of <code>tox -rvv</code></summary>
```console
draft @ tox -rvv
py: 96 I find interpreter for spec PythonSpec(path=/Users/jaraco/.local/pipx/venvs/tox/bin/python) [virtualenv/discovery/builtin.py:58]
py: 97 D got python info of %s from (PosixPath('/opt/homebrew/Cellar/python@3.11/3.11.5/Frameworks/Python.framework/Versions/3.11/bin/python3.11'), PosixPath('/Users/jaraco/Library/Application Support/virtualenv/py_info/1/0722d1d654d36a08896c2c727f3d426ef2212e71e059d909d7a685204d5b0d1d.json')) [virtualenv/app_data/via_disk_folder.py:131]
py: 98 D got python info of %s from (PosixPath('/opt/homebrew/opt/python@3.11/bin/python3.11'), PosixPath('/Users/jaraco/Library/Application Support/virtualenv/py_info/1/573546c1eada8c60b27f5300df4435af9ba2007194c80719d45c24c6ea4a493c.json')) [virtualenv/app_data/via_disk_folder.py:131]
py: 98 I proposed PythonInfo(spec=CPython3.11.5.final.0-64, system=/opt/homebrew/opt/python@3.11/bin/python3.11, exe=/Users/jaraco/.local/pipx/venvs/tox/bin/python, platform=darwin, version='3.11.5 (main, Aug 24 2023, 15:09:45) [Clang 14.0.3 (clang-1403.0.22.14.1)]', encoding_fs_io=utf-8-utf-8) [virtualenv/discovery/builtin.py:65]
py: 98 D accepted PythonInfo(spec=CPython3.11.5.final.0-64, system=/opt/homebrew/opt/python@3.11/bin/python3.11, exe=/Users/jaraco/.local/pipx/venvs/tox/bin/python, platform=darwin, version='3.11.5 (main, Aug 24 2023, 15:09:45) [Clang 14.0.3 (clang-1403.0.22.14.1)]', encoding_fs_io=utf-8-utf-8) [virtualenv/discovery/builtin.py:67]
py: 99 D filesystem is not case-sensitive [virtualenv/info.py:26]
py: 114 I create virtual environment via CPython3macOsBrew(dest=/Users/jaraco/draft/.tox/py, clear=False, no_vcs_ignore=False, global=False) [virtualenv/run/session.py:50]
py: 114 D create folder /Users/jaraco/draft/.tox/py/bin [virtualenv/util/path/_sync.py:12]
py: 114 D create folder /Users/jaraco/draft/.tox/py/lib/python3.11/site-packages [virtualenv/util/path/_sync.py:12]
py: 114 D write /Users/jaraco/draft/.tox/py/pyvenv.cfg [virtualenv/create/pyenv_cfg.py:32]
py: 114 D home = /opt/homebrew/opt/python@3.11/bin [virtualenv/create/pyenv_cfg.py:36]
py: 114 D implementation = CPython [virtualenv/create/pyenv_cfg.py:36]
py: 114 D version_info = 3.11.5.final.0 [virtualenv/create/pyenv_cfg.py:36]
py: 114 D virtualenv = 20.24.5 [virtualenv/create/pyenv_cfg.py:36]
py: 115 D include-system-site-packages = false [virtualenv/create/pyenv_cfg.py:36]
py: 115 D base-prefix = /opt/homebrew/opt/python@3.11/Frameworks/Python.framework/Versions/3.11 [virtualenv/create/pyenv_cfg.py:36]
py: 115 D base-exec-prefix = /opt/homebrew/opt/python@3.11/Frameworks/Python.framework/Versions/3.11 [virtualenv/create/pyenv_cfg.py:36]
py: 115 D base-executable = /opt/homebrew/opt/python@3.11/bin/python3.11 [virtualenv/create/pyenv_cfg.py:36]
py: 115 D symlink /opt/homebrew/opt/python@3.11/bin/python3.11 to /Users/jaraco/draft/.tox/py/bin/python [virtualenv/util/path/_sync.py:32]
py: 115 D create virtualenv import hook file /Users/jaraco/draft/.tox/py/lib/python3.11/site-packages/_virtualenv.pth [virtualenv/create/via_global_ref/api.py:91]
py: 115 D create /Users/jaraco/draft/.tox/py/lib/python3.11/site-packages/_virtualenv.py [virtualenv/create/via_global_ref/api.py:94]
py: 116 D ============================== target debug ============================== [virtualenv/run/session.py:52]
py: 116 D debug via /Users/jaraco/draft/.tox/py/bin/python /Users/jaraco/.local/pipx/venvs/tox/lib/python3.11/site-packages/virtualenv/create/debug.py [virtualenv/create/creator.py:200]
py: 116 D {
"sys": {
"executable": "/Users/jaraco/draft/.tox/py/bin/python",
"_base_executable": "/opt/homebrew/Cellar/python@3.11/3.11.5/Frameworks/Python.framework/Versions/3.11/bin/python3.11",
"prefix": "/Users/jaraco/draft/.tox/py",
"base_prefix": "/opt/homebrew/opt/python@3.11/Frameworks/Python.framework/Versions/3.11",
"real_prefix": null,
"exec_prefix": "/Users/jaraco/draft/.tox/py",
"base_exec_prefix": "/opt/homebrew/opt/python@3.11/Frameworks/Python.framework/Versions/3.11",
"path": [
"/opt/homebrew/Cellar/python@3.11/3.11.5/Frameworks/Python.framework/Versions/3.11/lib/python311.zip",
"/opt/homebrew/Cellar/python@3.11/3.11.5/Frameworks/Python.framework/Versions/3.11/lib/python3.11",
"/opt/homebrew/Cellar/python@3.11/3.11.5/Frameworks/Python.framework/Versions/3.11/lib/python3.11/lib-dynload",
"/Users/jaraco/draft/.tox/py/lib/python3.11/site-packages"
],
"meta_path": [
"<class '_virtualenv._Finder'>",
"<class '_frozen_importlib.BuiltinImporter'>",
"<class '_frozen_importlib.FrozenImporter'>",
"<class '_frozen_importlib_external.PathFinder'>"
],
"fs_encoding": "utf-8",
"io_encoding": "utf-8"
},
"version": "3.11.5 (main, Aug 24 2023, 15:09:45) [Clang 14.0.3 (clang-1403.0.22.14.1)]",
"makefile_filename": "/opt/homebrew/opt/python@3.11/Frameworks/Python.framework/Versions/3.11/lib/python3.11/config-3.11-darwin/Makefile",
"os": "<module 'os' (frozen)>",
"site": "<module 'site' (frozen)>",
"datetime": "<module 'datetime' from '/opt/homebrew/Cellar/python@3.11/3.11.5/Frameworks/Python.framework/Versions/3.11/lib/python3.11/datetime.py'>",
"math": "<module 'math' from '/opt/homebrew/Cellar/python@3.11/3.11.5/Frameworks/Python.framework/Versions/3.11/lib/python3.11/lib-dynload/math.cpython-311-darwin.so'>",
"json": "<module 'json' from '/opt/homebrew/Cellar/python@3.11/3.11.5/Frameworks/Python.framework/Versions/3.11/lib/python3.11/json/__init__.py'>"
} [virtualenv/run/session.py:53]
py: 140 I add seed packages via FromAppData(download=False, pip=bundle, setuptools=bundle, wheel=bundle, via=copy, app_data_dir=/Users/jaraco/Library/Application Support/virtualenv) [virtualenv/run/session.py:57]
py: 142 D got embed update of distribution %s from ('setuptools', PosixPath('/Users/jaraco/Library/Application Support/virtualenv/wheel/3.11/embed/3/setuptools.json')) [virtualenv/app_data/via_disk_folder.py:131]
py: 142 D got embed update of distribution %s from ('wheel', PosixPath('/Users/jaraco/Library/Application Support/virtualenv/wheel/3.11/embed/3/wheel.json')) [virtualenv/app_data/via_disk_folder.py:131]
py: 142 D got embed update of distribution %s from ('pip', PosixPath('/Users/jaraco/Library/Application Support/virtualenv/wheel/3.11/embed/3/pip.json')) [virtualenv/app_data/via_disk_folder.py:131]
py: 144 D using periodically updated wheel /Users/jaraco/Library/Application Support/virtualenv/wheel/house/wheel-0.41.0-py3-none-any.whl [virtualenv/seed/wheels/periodic_update.py:49]
py: 144 D using periodically updated wheel /Users/jaraco/Library/Application Support/virtualenv/wheel/house/setuptools-68.0.0-py3-none-any.whl [virtualenv/seed/wheels/periodic_update.py:49]
py: 144 D install pip from wheel /Users/jaraco/.local/pipx/venvs/tox/lib/python3.11/site-packages/virtualenv/seed/wheels/embed/pip-23.2.1-py3-none-any.whl via CopyPipInstall [virtualenv/seed/embed/via_app_data/via_app_data.py:49]
py: 144 D install wheel from wheel /Users/jaraco/Library/Application Support/virtualenv/wheel/house/wheel-0.41.0-py3-none-any.whl via CopyPipInstall [virtualenv/seed/embed/via_app_data/via_app_data.py:49]
py: 145 D install setuptools from wheel /Users/jaraco/Library/Application Support/virtualenv/wheel/house/setuptools-68.0.0-py3-none-any.whl via CopyPipInstall [virtualenv/seed/embed/via_app_data/via_app_data.py:49]
py: 146 D copy directory /Users/jaraco/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/setuptools-68.0.0-py3-none-any/setuptools-68.0.0.dist-info to /Users/jaraco/draft/.tox/py/lib/python3.11/site-packages/setuptools-68.0.0.dist-info [virtualenv/util/path/_sync.py:40]
py: 146 D copy directory /Users/jaraco/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/wheel-0.41.0-py3-none-any/wheel to /Users/jaraco/draft/.tox/py/lib/python3.11/site-packages/wheel [virtualenv/util/path/_sync.py:40]
py: 146 D copy /Users/jaraco/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/pip-23.2.1-py3-none-any/pip-23.2.1.virtualenv to /Users/jaraco/draft/.tox/py/lib/python3.11/site-packages/pip-23.2.1.virtualenv [virtualenv/util/path/_sync.py:40]
py: 147 D copy directory /Users/jaraco/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/pip-23.2.1-py3-none-any/pip to /Users/jaraco/draft/.tox/py/lib/python3.11/site-packages/pip [virtualenv/util/path/_sync.py:40]
py: 149 D copy /Users/jaraco/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/setuptools-68.0.0-py3-none-any/distutils-precedence.pth to /Users/jaraco/draft/.tox/py/lib/python3.11/site-packages/distutils-precedence.pth [virtualenv/util/path/_sync.py:40]
py: 150 D copy directory /Users/jaraco/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/setuptools-68.0.0-py3-none-any/setuptools to /Users/jaraco/draft/.tox/py/lib/python3.11/site-packages/setuptools [virtualenv/util/path/_sync.py:40]
py: 158 D copy directory /Users/jaraco/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/wheel-0.41.0-py3-none-any/wheel-0.41.0.dist-info to /Users/jaraco/draft/.tox/py/lib/python3.11/site-packages/wheel-0.41.0.dist-info [virtualenv/util/path/_sync.py:40]
py: 161 D copy /Users/jaraco/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/wheel-0.41.0-py3-none-any/wheel-0.41.0.virtualenv to /Users/jaraco/draft/.tox/py/lib/python3.11/site-packages/wheel-0.41.0.virtualenv [virtualenv/util/path/_sync.py:40]
py: 162 D generated console scripts wheel wheel3.11 wheel3 wheel-3.11 [virtualenv/seed/embed/via_app_data/pip_install/base.py:43]
py: 214 D copy directory /Users/jaraco/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/setuptools-68.0.0-py3-none-any/pkg_resources to /Users/jaraco/draft/.tox/py/lib/python3.11/site-packages/pkg_resources [virtualenv/util/path/_sync.py:40]
py: 228 D copy directory /Users/jaraco/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/setuptools-68.0.0-py3-none-any/_distutils_hack to /Users/jaraco/draft/.tox/py/lib/python3.11/site-packages/_distutils_hack [virtualenv/util/path/_sync.py:40]
py: 230 D copy /Users/jaraco/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/setuptools-68.0.0-py3-none-any/setuptools-68.0.0.virtualenv to /Users/jaraco/draft/.tox/py/lib/python3.11/site-packages/setuptools-68.0.0.virtualenv [virtualenv/util/path/_sync.py:40]
py: 230 D generated console scripts [virtualenv/seed/embed/via_app_data/pip_install/base.py:43]
py: 295 D copy directory /Users/jaraco/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/pip-23.2.1-py3-none-any/pip-23.2.1.dist-info to /Users/jaraco/draft/.tox/py/lib/python3.11/site-packages/pip-23.2.1.dist-info [virtualenv/util/path/_sync.py:40]
py: 297 D generated console scripts pip3 pip3.11 pip-3.11 pip [virtualenv/seed/embed/via_app_data/pip_install/base.py:43]
py: 298 I add activators for Bash, CShell, Fish, Nushell, PowerShell, Python [virtualenv/run/session.py:63]
py: 300 D write /Users/jaraco/draft/.tox/py/pyvenv.cfg [virtualenv/create/pyenv_cfg.py:32]
py: 300 D home = /opt/homebrew/opt/python@3.11/bin [virtualenv/create/pyenv_cfg.py:36]
py: 300 D implementation = CPython [virtualenv/create/pyenv_cfg.py:36]
py: 300 D version_info = 3.11.5.final.0 [virtualenv/create/pyenv_cfg.py:36]
py: 300 D virtualenv = 20.24.5 [virtualenv/create/pyenv_cfg.py:36]
py: 300 D include-system-site-packages = false [virtualenv/create/pyenv_cfg.py:36]
py: 300 D base-prefix = /opt/homebrew/opt/python@3.11/Frameworks/Python.framework/Versions/3.11 [virtualenv/create/pyenv_cfg.py:36]
py: 300 D base-exec-prefix = /opt/homebrew/opt/python@3.11/Frameworks/Python.framework/Versions/3.11 [virtualenv/create/pyenv_cfg.py:36]
py: 300 D base-executable = /opt/homebrew/opt/python@3.11/bin/python3.11 [virtualenv/create/pyenv_cfg.py:36]
py: OK (0.21 seconds)
congratulations :) (0.23 seconds)
```
</details>
## Minimal example
```console
draft @ cat tox.ini
[testenv]
[testenv:release]
passenv=
TWINE_PASSWORD
commands=
py -c "import os; print(os.environ.get('TWINE_PASSWORD'))"
```
When passing `pass_env` to the config using overrides, the overrides supersede the explicit value in the config:
```
draft @ env TOX_OVERRIDE=testenv.pass_env+=FOO,BAR tox config -k passenv -e release
[testenv:release]
pass_env =
BAR
CC
CCSHARED
CFLAGS
CPPFLAGS
CURL_CA_BUNDLE
CXX
FOO
HOME
LANG
LANGUAGE
LDFLAGS
LD_LIBRARY_PATH
PIP_*
PKG_CONFIG
PKG_CONFIG_PATH
PKG_CONFIG_SYSROOT_DIR
REQUESTS_CA_BUNDLE
SSL_CERT_FILE
TERM
TMPDIR
VIRTUALENV_*
http_proxy
https_proxy
no_proxy
```
Note that FOO and BAR are present, but TWINE_PASSWORD is lost.
If, however, one changes `passenv=` to `pass_env=` in the config,
```
draft @ cat tox.ini
[testenv]
[testenv:release]
pass_env=
TWINE_PASSWORD
commands=
py -c "import os; print(os.environ.get('TWINE_PASSWORD'))"
```
Now TWINE_PASSWORD appears, but FOO and BAR are missing:
```
draft @ env TOX_OVERRIDE=testenv.pass_env+=FOO,BAR tox config -k passenv -e release
[testenv:release]
pass_env =
CC
CCSHARED
CFLAGS
CPPFLAGS
CURL_CA_BUNDLE
CXX
HOME
LANG
LANGUAGE
LDFLAGS
LD_LIBRARY_PATH
PIP_*
PKG_CONFIG
PKG_CONFIG_PATH
PKG_CONFIG_SYSROOT_DIR
REQUESTS_CA_BUNDLE
SSL_CERT_FILE
TERM
TMPDIR
TWINE_PASSWORD
VIRTUALENV_*
http_proxy
https_proxy
no_proxy
```
What is the preferred configuration key for `passenv`/`pass_env`? I presume the latter.
I've tried other combinations of `passenv` in TOX_OVERRIDES and in the config, but I haven't yet found a combination that allows the pass_env to be applied at both the plain `[testenv]` and also extend the `[testenv:release].pass_env`. Is that possible? At the very least, I wouldn't expect a `pass_env+=` to ever mask an existing definition, but it does. | open | 2023-09-18T15:03:37Z | 2024-03-05T22:15:14Z | https://github.com/tox-dev/tox/issues/3127 | [
"help:wanted"
] | jaraco | 1 |
huggingface/datasets | numpy | 6,584 | np.fromfile not supported | How can np.fromfile be used the way np.load is?
```python
def xnumpy_fromfile(filepath_or_buffer, *args, download_config: Optional[DownloadConfig] = None, **kwargs):
    import numpy as np
    if hasattr(filepath_or_buffer, "read"):
        return np.fromfile(filepath_or_buffer, *args, **kwargs)
    else:
        filepath_or_buffer = str(filepath_or_buffer)
        return np.fromfile(xopen(filepath_or_buffer, "rb", download_config=download_config).read(), *args, **kwargs)
```
but this does not work.
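A hedged note on a possible cause (an assumption, not confirmed in the thread): `np.fromfile` accepts a filename or an open file object, but the `else` branch above passes it the `bytes` returned by `.read()`; raw bytes already in memory are decoded with `np.frombuffer` instead. A standalone sketch of the distinction, without the `datasets` `xopen` helper:

```python
import tempfile

import numpy as np

# Write a small binary file to compare the two decoding paths.
data = np.arange(6, dtype=np.float32)
with tempfile.NamedTemporaryFile(suffix=".bin", delete=False) as tmp:
    data.tofile(tmp)
    path = tmp.name

# np.fromfile works with a path (or a real file object)...
from_path = np.fromfile(path, dtype=np.float32)

# ...but bytes already read into memory must go through np.frombuffer.
with open(path, "rb") as f:
    from_bytes = np.frombuffer(f.read(), dtype=np.float32)

assert np.array_equal(from_path, from_bytes)
```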
| open | 2024-01-12T09:46:17Z | 2024-01-15T05:20:50Z | https://github.com/huggingface/datasets/issues/6584 | [] | d710055071 | 6 |
huggingface/transformers | nlp | 36,931 | Clarification on Commercial License Impact of LayoutLMv3ImageProcessor within UdopProcessor | Hi team,
I have a question regarding licensing and commercial usage.
Since UdopProcessor internally uses LayoutLMv3ImageProcessor (as part of resizing, rescaling, normalizing document images, and applying OCR), and given that LayoutLMv3 itself is not licensed for commercial use, I would like to clarify:
➡️ If I use UdopProcessor for fine-tuning UDOP and plan to deploy it in a commercial setting, will the dependency on LayoutLMv3ImageProcessor affect the commercial viability of using UDOP?
In other words, does the inclusion of LayoutLMv3ImageProcessor within UdopProcessor impose any commercial licensing restrictions on the UDOP model?
Thank you in advance! | open | 2025-03-24T15:34:24Z | 2025-03-24T15:34:24Z | https://github.com/huggingface/transformers/issues/36931 | [] | Arjunexperion | 0 |
google-research/bert | nlp | 886 | accent character | hello,
in BERT tokenization.py, why are accents stripped away? However, in the vocab file of the multi_cased_model that supports multilingual languages, there are many accented characters.
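For reference, the stripping in question happens by Unicode decomposition: the text is normalized to NFD and combining marks are dropped. A standalone sketch of that transformation (it mirrors the approach taken in `tokenization.py`, though the exact code there may differ):

```python
import unicodedata

def strip_accents(text: str) -> str:
    # NFD splits each accented character into its base character plus
    # combining mark(s); category "Mn" (nonspacing mark) is the accent part.
    decomposed = unicodedata.normalize("NFD", text)
    return "".join(ch for ch in decomposed if unicodedata.category(ch) != "Mn")

assert strip_accents("café naïve") == "cafe naive"
```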
Thanks, | open | 2019-10-25T09:09:17Z | 2020-03-30T19:31:16Z | https://github.com/google-research/bert/issues/886 | [] | lytum | 1 |
jupyterlab/jupyter-ai | jupyter | 838 | Server-side error on Python 3.8 | ## Description
```py
File "python3.8/site-packages/jupyter_ai/__init__.py", line 3, in <module>
from jupyter_ai_magics import load_ipython_extension, unload_ipython_extension
File "python3.8/site-packages/jupyter_ai_magics/__init__.py", line 4, in <module>
from .embedding_providers import (
File "python3.8/site-packages/jupyter_ai_magics/embedding_providers.py", line 3, in <module>
from jupyter_ai_magics.providers import (
File "python3.8/site-packages/jupyter_ai_magics/providers.py", line 212, in <module>
class BaseProvider(BaseModel, metaclass=ProviderMetaclass):
File "python3.8/site-packages/jupyter_ai_magics/providers.py", line 280, in BaseProvider
server_settings: ClassVar[Optional[MappingProxyType[str, Any]]] = None
TypeError: 'type' object is not subscriptable
```
## Reproduce
Install on python 3.8
## Expected behavior
Works with minimum dependencies installed.
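A hedged aside on the mechanics (one possible fix sketch, not necessarily the change that shipped): on Python 3.8, `types.MappingProxyType` is not subscriptable at runtime, which is exactly the `TypeError: 'type' object is not subscriptable` in the traceback; deferring annotation evaluation with PEP 563 sidesteps the subscript at class-creation time:

```python
from __future__ import annotations  # PEP 563: annotations stay unevaluated strings

from types import MappingProxyType
from typing import Any, ClassVar, Optional

class BaseProvider:
    # Without lazy annotations, Python 3.8 would evaluate
    # MappingProxyType[str, Any] here and fail, since that type only became
    # subscriptable in Python 3.9. With lazy annotations the subscript is
    # never executed, while the assignment below still runs normally.
    server_settings: ClassVar[Optional[MappingProxyType[str, Any]]] = None

assert BaseProvider.server_settings is None
```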
## Context
Mea culpa | closed | 2024-06-19T14:10:03Z | 2024-06-19T21:39:04Z | https://github.com/jupyterlab/jupyter-ai/issues/838 | [
"bug"
] | krassowski | 1 |
ultralytics/ultralytics | machine-learning | 18,904 | Benchmark gives NaN for exportable models | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar bug report.
### Ultralytics YOLO Component
Other
### Bug
I am getting this result from benchmark:
```
1300.2s 2103 Benchmarks complete for best.pt on face-detection-dataset.yaml at imgsz=192,320 (536.99s)
1300.2s 2104 Format Status❔ Size (MB) metrics/mAP50-95(B) Inference time (ms/im) FPS
1300.2s 2105 0 PyTorch ✅ 0.3 0.2027 16.46 60.73
1300.2s 2106 1 TorchScript ❎ 0.7 NaN NaN NaN
1300.2s 2107 2 ONNX ❎ 0.5 NaN NaN NaN
1300.2s 2108 3 OpenVINO ❎ 0.6 NaN NaN NaN
1300.2s 2109 4 TensorRT ❌ 0.0 NaN NaN NaN
1300.2s 2110 5 CoreML ❎ 0.3 NaN NaN NaN
1300.2s 2111 6 TensorFlow SavedModel ❎ 1.4 NaN NaN NaN
1300.2s 2112 7 TensorFlow GraphDef ❎ 0.5 NaN NaN NaN
1300.2s 2113 8 TensorFlow Lite ❎ 0.5 NaN NaN NaN
1300.2s 2114 9 TensorFlow Edge TPU ❎ 0.3 NaN NaN NaN
1300.2s 2115 10 TensorFlow.js ❎ 0.5 NaN NaN NaN
1300.2s 2116 11 PaddlePaddle ❎ 1.0 NaN NaN NaN
1300.2s 2117 12 MNN ❎ 0.5 NaN NaN NaN
1300.2s 2118 13 NCNN ✅ 0.5 0.0003 5.97 167.53
1300.2s 2119 14 IMX ❌ 0.0 NaN NaN NaN
1300.2s 2120 15 RKNN ❌ 0.0 NaN NaN NaN
```
Why do I get so many NaN values, especially for TensorFlow Lite, even though I can export and run the model all right?
Logs:
[logs.log](https://github.com/user-attachments/files/18550557/logs.log)
### Environment
```
Ultralytics 8.3.68 🚀 Python-3.10.14 torch-2.4.0 CUDA:0 (Tesla P100-PCIE-16GB, 16269MiB)
Setup complete ✅ (4 CPUs, 31.4 GB RAM, 6095.9/8062.4 GB disk)
OS Linux-6.6.56+-x86_64-with-glibc2.35
Environment Kaggle
Python 3.10.14
Install pip
RAM 31.35 GB
Disk 6095.9/8062.4 GB
CPU Intel Xeon 2.00GHz
CPU count 4
GPU Tesla P100-PCIE-16GB, 16269MiB
GPU count 1
CUDA 12.3
numpy ✅ 1.26.4>=1.23.0
numpy ✅ 1.26.4<2.0.0; sys_platform == "darwin"
matplotlib ✅ 3.7.5>=3.3.0
opencv-python ✅ 4.10.0.84>=4.6.0
pillow ✅ 11.0.0>=7.1.2
pyyaml ✅ 6.0.2>=5.3.1
requests ✅ 2.32.3>=2.23.0
scipy ✅ 1.14.1>=1.4.1
torch ✅ 2.4.0>=1.8.0
torch ✅ 2.4.0!=2.4.0,>=1.8.0; sys_platform == "win32"
torchvision ✅ 0.19.0>=0.9.0
tqdm ✅ 4.66.4>=4.64.0
psutil ✅ 5.9.3
py-cpuinfo ✅ 9.0.0
pandas ✅ 2.2.3>=1.1.4
seaborn ✅ 0.12.2>=0.11.0
ultralytics-thop ✅ 2.0.14>=2.0.0
```
### Minimal Reproducible Example
dataset:
```
%%writefile face-detection-dataset.yaml
# CC0: Public Domain license
# Face-Detection-Dataset dataset by Fares Elmenshawii
# Documentation: https://www.kaggle.com/datasets/fareselmenshawii/face-detection-dataset
# Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]
path: /kaggle/input/face-detection-dataset # dataset root dir
train: images/train # train images (relative to 'path')
val: images/val # val images (relative to 'path')
test: # test images (optional)
# Classes
names:
0: face
# Download script/URL (optional)
download: https://storage.googleapis.com/kaggle-data-sets/3345370/5891144/bundle/archive.zip
```
model:
```
%%writefile yolov6-face.yaml
# Ultralytics YOLO 🚀, AGPL-3.0 license
# YOLOv6 object detection model with P3-P5 outputs. For Usage examples see https://docs.ultralytics.com/models/yolov6
# Parameters
nc: 1 # number of classes
activation: nn.ReLU() # (optional) model default activation function
scales: # model compound scaling constants, i.e. 'model=yolov6n.yaml' will call yolov8.yaml with scale 'n'
# [depth, width, max_channels]
p: [0.33, 0.25, 8] # nano is [0.33, 0.25, 1024]
# YOLOv6-3.0s backbone
backbone:
# [from, repeats, module, args]
- [-1, 1, Conv, [64, 3, 2]] # 0-P1/2
- [-1, 1, Conv, [128, 3, 2]] # 1-P2/4
- [-1, 6, Conv, [128, 3, 1]]
- [-1, 1, Conv, [256, 3, 2]] # 3-P3/8
- [-1, 12, Conv, [256, 3, 1]]
- [-1, 1, Conv, [512, 3, 2]] # 5-P4/16
- [-1, 18, Conv, [512, 3, 1]]
- [-1, 1, Conv, [1024, 3, 2]] # 7-P5/32
- [-1, 6, Conv, [1024, 3, 1]]
- [-1, 1, SPPF, [1024, 5]] # 9
# YOLOv6-3.0s head
head:
- [-1, 1, Conv, [256, 1, 1]]
- [-1, 1, nn.ConvTranspose2d, [256, 2, 2, 0]]
- [[-1, 6], 1, Concat, [1]] # cat backbone P4
- [-1, 1, Conv, [256, 3, 1]]
- [-1, 9, Conv, [256, 3, 1]] # 14
- [-1, 1, Conv, [128, 1, 1]]
- [-1, 1, nn.ConvTranspose2d, [128, 2, 2, 0]]
- [[-1, 4], 1, Concat, [1]] # cat backbone P3
- [-1, 1, Conv, [128, 3, 1]]
- [-1, 9, Conv, [128, 3, 1]] # 19
- [[14, 19], 1, Detect, [nc]] # Detect(P3, P4, P5)
```
code:
```
model = YOLO("./yolov6-face.yaml")
r = model.train(data="face-detection-dataset.yaml", epochs=1, imgsz='192,320', single_cls=True, plots=True, batch=500)
from ultralytics.utils.benchmarks import benchmark
benchmark(model=model, data="face-detection-dataset.yaml", imgsz='192,320', device="cpu")
```
### Additional
_No response_
### Are you willing to submit a PR?
- [ ] Yes I'd like to help by submitting a PR! | closed | 2025-01-26T15:12:52Z | 2025-01-28T18:16:28Z | https://github.com/ultralytics/ultralytics/issues/18904 | [
"bug",
"fixed",
"exports"
] | EmmanuelMess | 8 |
psf/black | python | 4,300 | line-length is not working as intended | I am using black version `24.3.0` and have set `line-length = 88` in my config file; however, running the `black .` command doesn't modify lines of code that are over 88 characters.
Ex:
Before running black
```python
str = "this is the longest text in the history of mankind that I have seen in the world for all the good and bad things"
```
After running black, the result is still the same. I don't see any error; it just says `x file(s) left unchanged`.
Here is my config file
```
[flake8]
max-line-length = 88
extend-ignore = E203, W503, W291
exclude = .git,__pycache__,./.venv
[tool.black]
line-length = 88
```
python version: `3.11.6`
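A hedged observation about the config above (an assumption, not confirmed in the thread): Black reads its settings only from the `[tool.black]` table of a `pyproject.toml` file, so a `[tool.black]` section appended to a Flake8-style config file is silently ignored, which would produce exactly the `files left unchanged` behaviour. A minimal `pyproject.toml` sketch:

```toml
[tool.black]
line-length = 88
```

with the Flake8 options kept in their own file, such as `.flake8` or `setup.cfg`.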
| closed | 2024-04-05T21:24:13Z | 2024-04-06T11:39:38Z | https://github.com/psf/black/issues/4300 | [
"T: bug"
] | gjambaisivanandham | 1 |
Farama-Foundation/Gymnasium | api | 733 | [Bug Report] UserWarning occurring after every call of the env. | ### Describe the bug
```
UserWarning: WARN: env.shape to get variables from other wrappers is deprecated and will be removed in v1.0, to get this variable you can do `env.unwrapped.shape` for environment variables or `env.get_wrapper_attr('shape')` that will search the reminding wrappers.
logger.warn(
```
The above warning occurs as soon as the environment is initiated for CartPole-v1 and Hopper-v4. The Gymnasium version used is 0.29.1.
### Code example
```python
import gymnasium as gym
env = gym.make("CartPole-v1")
env.reset()
```
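For context, the warning is about where attribute lookup happens; the two suggested spellings can be modelled without gymnasium at all (a toy sketch, not Gymnasium's actual classes):

```python
class Env:
    """Stand-in for a base environment carrying an attribute like `shape`."""

    def __init__(self):
        self.shape = (4,)

class Wrapper:
    """Stand-in for a wrapper layered over an inner environment."""

    def __init__(self, env):
        self.env = env

    @property
    def unwrapped(self):
        # Walk through nested wrappers down to the base environment.
        return self.env.unwrapped if isinstance(self.env, Wrapper) else self.env

    def get_wrapper_attr(self, name):
        # Check this layer first, then search the remaining layers.
        if name in vars(self):
            return vars(self)[name]
        inner = self.env
        if isinstance(inner, Wrapper):
            return inner.get_wrapper_attr(name)
        return getattr(inner, name)

env = Wrapper(Wrapper(Env()))
assert env.unwrapped.shape == (4,)            # explicit: go to the base env
assert env.get_wrapper_attr("shape") == (4,)  # explicit: search the layers
```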
### System info
_No response_
### Additional context
_No response_
### Checklist
- [X] I have checked that there is no similar [issue](https://github.com/Farama-Foundation/Gymnasium/issues) in the repo
| closed | 2023-10-05T21:07:24Z | 2023-11-09T16:27:22Z | https://github.com/Farama-Foundation/Gymnasium/issues/733 | [
"bug"
] | davidireland3 | 3 |
dmlc/gluon-nlp | numpy | 849 | Fix all Pad() calls | Now, for all `Pad()` calls without `pad_val` set, users see a warning that `pad_val` is set to the default value 0. This may confuse people who use existing scripts and wonder whether there is a problem in their setup when the warning message is printed. We should fix all these usages.
"enhancement"
] | eric-haibin-lin | 1 |
jonaswinkler/paperless-ng | django | 1,347 | [BUG] Importing large file results in: RecursionError: maximum recursion depth exceeded | **Describe the bug**
I'm scanning a 90-page PDF that has been scanned using NAPS2 and is therefore already OCRed. When I import the file into paperless-ng, the following error is produced.
**To Reproduce**
1. I don't know if this happens only for me. Try uploading a PDF of 90 pages or so.
**Expected behavior**
I expect the PDF to be imported without any errors.
**Screenshots**
Not necessary.
**Webserver logs**
```python
[2021-09-26 22:44:26,140] [INFO] [paperless.consumer] Consuming 20210926_0001.pdf
[2021-09-26 22:44:26,142] [DEBUG] [paperless.consumer] Detected mime type: application/pdf
[2021-09-26 22:44:26,149] [DEBUG] [paperless.consumer] Parser: RasterisedDocumentParser
[2021-09-26 22:44:26,154] [DEBUG] [paperless.consumer] Parsing 20210926_0001.pdf...
[2021-09-26 22:45:04,423] [WARNING] [paperless.parsing.tesseract] Error while getting text from PDF document with pdfminer.six
Traceback (most recent call last):
File "/usr/src/paperless/src/paperless_tesseract/parsers.py", line 120, in extract_text
stripped = post_process_text(pdfminer_extract_text(pdf_file))
File "/usr/local/lib/python3.9/site-packages/pdfminer/high_level.py", line 121, in extract_text
interpreter.process_page(page)
File "/usr/local/lib/python3.9/site-packages/pdfminer/pdfinterp.py", line 896, in process_page
self.device.end_page(page)
File "/usr/local/lib/python3.9/site-packages/pdfminer/converter.py", line 50, in end_page
self.cur_item.analyze(self.laparams)
File "/usr/local/lib/python3.9/site-packages/pdfminer/layout.py", line 814, in analyze
group.analyze(laparams)
File "/usr/local/lib/python3.9/site-packages/pdfminer/layout.py", line 575, in analyze
LTTextGroup.analyze(self, laparams)
File "/usr/local/lib/python3.9/site-packages/pdfminer/layout.py", line 362, in analyze
obj.analyze(laparams)
#
# errors for line 575 and 362 go on for reeeeeeally long...
#
File "/usr/local/lib/python3.9/site-packages/pdfminer/layout.py", line 362, in analyze
obj.analyze(laparams)
File "/usr/local/lib/python3.9/site-packages/pdfminer/layout.py", line 575, in analyze
LTTextGroup.analyze(self, laparams)
RecursionError: maximum recursion depth exceeded
```
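A hedged aside (a generic workaround sketch, not advice from this thread): pdfminer's layout analysis in the traceback recurses once per nested text group, so a densely nested multi-page layout can exceed CPython's default limit of about 1000 frames. Raising the limit is the usual escape hatch:

```python
import sys

# CPython defaults to roughly 1000 frames; each nested pdfminer LTTextGroup
# costs a few of them during analyze().
current = sys.getrecursionlimit()

# Raise conservatively: a higher limit trades RecursionError for more C-stack
# usage, and a genuine stack overflow crashes the interpreter outright.
sys.setrecursionlimit(max(current, 5000))

assert sys.getrecursionlimit() >= 5000
```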
**Relevant information**
- Host OS: Ubuntu 20.04 LTS
- Browser: Chrome, newest version
- Version of paperless-ng: 1.5.0
- Installation method: docker => docker-compose, behind nginx proxy accessible through the interwebs
- Any configuration changes you made in `docker-compose.yml`, `docker-compose.env` or `paperless.conf`.
1. PAPERLESS_FILENAME_FORMAT={created_year}/{correspondent}/{title}
2. PAPERLESS_OCR_OUTPUT_TYPE=pdf
3. PAPERLESS_ALLOWED_HOSTS=myhosts.lol
4. PAPERLESS_OCR_ROTATE_PAGES=False
- settings.py # don't know if this is necessary; it works anyway
1. SESSION_COOKIE_SECURE = True
2. SECURE_HSTS_SECONDS = 31536000
3. CSRF_COOKIE_SECURE = True
4. SECURE_HSTS_INCLUDE_SUBDOMAINS = True
5. SECURE_HSTS_PRELOAD = True | open | 2021-09-26T20:48:08Z | 2021-09-26T20:58:53Z | https://github.com/jonaswinkler/paperless-ng/issues/1347 | [] | ghost | 0 |
laurentS/slowapi | fastapi | 88 | Redis Version Conflict: install failed when redis > 4.0 | The conflict is caused by:
```
The user requested redis==4.2.0rc3
slowapi 0.1.5 depends on redis<4.0.0 and >=3.4.1
``` | closed | 2022-03-25T03:23:01Z | 2023-04-12T08:20:32Z | https://github.com/laurentS/slowapi/issues/88 | [] | a-yangyi | 9 |
python-visualization/folium | data-visualization | 1,453 | How to integrate and plot both heatmap and quivers/arrows for different timestamps using folium? | I want to plot a wind velocity heatmap and wind direction quivers together for different timestamps on top of a map using folium. I could not find any plugin for this integrated operation. Can you please help me sort this out? | closed | 2021-02-14T02:25:55Z | 2022-11-18T14:30:15Z | https://github.com/python-visualization/folium/issues/1453 | [] | tasfia | 1 |
PaddlePaddle/ERNIE | nlp | 513 | ernie-tiny fine-tuning with GPU | When fine-tuning with a GPU, an ImportError is raised: libcublas.so cannot open shared object file...
However, libcublas.so.9.0 does exist in the CUDA lib directory. What should be done in this situation? | closed | 2020-07-06T08:13:39Z | 2020-09-12T03:29:31Z | https://github.com/PaddlePaddle/ERNIE/issues/513 | [
"wontfix",
"Paddle-Issue"
] | kennyLSN | 3 |
xinntao/Real-ESRGAN | pytorch | 211 | Seams after upscaling | Whenever I input a seamless image, I get an image with seams all around the edges. Is there a way to patch the seams? | closed | 2022-01-03T13:23:17Z | 2024-05-15T14:38:33Z | https://github.com/xinntao/Real-ESRGAN/issues/211 | [] | industdev | 3 |
python-visualization/folium | data-visualization | 1,963 | Add support to map ruler based on configured projection | It would be great if we had the possibility to show a ruler around the map. This ruler should change as the projection changes, and the frequency of the ticks should be configurable as well (e.g. show the latitude every 10km and longitude every 5km if we are dealing with a UTM projection, or latitude every 2º and longitude every 3º if we are dealing with a geographic projection). It should look something like this:

Another example:

We already have a scale bar, which is great, but it would be very good to be able to see the ruler around the map.
| open | 2024-06-04T13:47:17Z | 2024-06-14T15:15:37Z | https://github.com/python-visualization/folium/issues/1963 | [] | barcelosleo | 3 |
numpy/numpy | numpy | 28,157 | BUG: `StringDType`: `na_object` ignored in `full` | ### Describe the issue:
Creating a `StringDType` ndarray with `na_object` using `full` (and `full_like`) coerces the `nan` sentinel to a string.
I can work around this using `arr[:] = np.nan`, but think the behavior is unexpected.
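The workaround mentioned above, as a runnable sketch (guarded because `StringDType` only exists in NumPy 2.0 and later, and on fixed versions `np.full` itself may already behave):

```python
import numpy as np

# StringDType only exists in NumPy >= 2.0; make the sketch a no-op otherwise.
if int(np.__version__.split(".")[0]) >= 2:
    dt = np.dtypes.StringDType(na_object=np.nan)

    # On affected versions np.full coerces the nan sentinel to the string "nan"...
    arr = np.full((1,), fill_value=np.nan, dtype=dt)

    # ...but assigning into the existing array preserves the na_object.
    arr[:] = np.nan
    assert arr.item() is np.nan
```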
### Reproduce the code example:
```python
import numpy as np
arr1 = np.full((1,), fill_value=np.nan, dtype=np.dtypes.StringDType(na_object=np.nan))
arr2 = np.full_like(arr1, fill_value=np.nan)
assert arr1.item() is np.nan
assert arr2.item() is np.nan
```
### Error message:
```python traceback
Traceback (most recent call last):
File "/Users/goldbaum/Documents/numpy/../numpy-experiments/test.py", line 7, in <module>
assert arr1.item() is np.nan
^^^^^^^^^^^^^^^^^^^^^
AssertionError
```
### Python and NumPy Versions:
2.2.1
3.12.8 | packaged by conda-forge | (main, Dec 5 2024, 14:24:40) [GCC 13.3.0]
### Runtime Environment:
[{'numpy_version': '2.2.1',
'python': '3.12.8 | packaged by conda-forge | (main, Dec 5 2024, 14:24:40) '
'[GCC 13.3.0]',
'uname': uname_result(system='Linux', node='poisson', release='6.8.0-51-generic', version='#52~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Mon Dec 9 15:00:52 UTC 2', machine='x86_64')},
{'simd_extensions': {'baseline': ['SSE', 'SSE2', 'SSE3'],
'found': ['SSSE3',
'SSE41',
'POPCNT',
'SSE42',
'AVX',
'F16C',
'FMA3',
'AVX2'],
'not_found': ['AVX512F',
'AVX512CD',
'AVX512_KNL',
'AVX512_KNM',
'AVX512_SKX',
'AVX512_CLX',
'AVX512_CNL',
'AVX512_ICL',
'AVX512_SPR']}},
{'architecture': 'Haswell',
'filepath': '/home/mathause/.conda/envs/regionmask_dev/lib/libopenblasp-r0.3.28.so',
'internal_api': 'openblas',
'num_threads': 8,
'prefix': 'libopenblas',
'threading_layer': 'pthreads',
'user_api': 'blas',
'version': '0.3.28'}]
### Context for the issue:
_No response_ | closed | 2025-01-15T15:07:14Z | 2025-01-27T19:45:23Z | https://github.com/numpy/numpy/issues/28157 | [
"00 - Bug",
"component: numpy.strings"
] | mathause | 4 |
marimo-team/marimo | data-visualization | 3,781 | KaTeX Macro Support | ### Description
As outlined in https://github.com/marimo-team/marimo/discussions/1941, I would love to be able to have some Macro Support for KaTeX.
This would come in handy for repeated use of convoluted symbols and would really help us for teaching and presenting our stuff in notebooks.
Our current workflow for jupyter notebooks is a workaround, where we do
```python
import IPython.display
IPython.display.display_latex(IPython.display.Latex(filename="macros.tex"))
```
where `macros.tex` looks like
```
\newcommand{\rot}[1]{{\rm curl }\left( #1 \right)}
\newcommand{\Grad}[1]{{\rm Grad}\left( #1 \right)}
\newcommand{\Div}[1]{{\rm Div }\left( #1 \right)}
```
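As an illustration of what such macro support has to do, here is a sketch (a hypothetical helper, not marimo's API) that parses simple single-line `\newcommand` definitions like the ones above into the `{name: body}` mapping that KaTeX-style macro tables use:

```python
import re

def parse_newcommands(tex: str) -> dict:
    """Turn simple \\newcommand{\\name}[n]{body} lines into a macro mapping.

    Only handles single-line definitions like the example file; a full TeX
    parser would need balanced-brace scanning.
    """
    pattern = re.compile(r"\\newcommand\{(\\[A-Za-z]+)\}(?:\[\d+\])?\{(.*)\}\s*$")
    macros = {}
    for line in tex.splitlines():
        match = pattern.match(line.strip())
        if match:
            macros[match.group(1)] = match.group(2)
    return macros

macros = parse_newcommands(r"\newcommand{\Div}[1]{{\rm Div }\left( #1 \right)}")
assert macros == {r"\Div": r"{\rm Div }\left( #1 \right)"}
```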
A feature like that would be great!
### Suggested solution
In an optimal case, we would love to be able to point to a file populated by KaTeX macro commands, which would then be available to use in all markdown cells without any additional import. | closed | 2025-02-13T12:51:39Z | 2025-02-14T07:18:32Z | https://github.com/marimo-team/marimo/issues/3781 | [
"enhancement"
] | claudiushaag | 3 |
itamarst/eliot | numpy | 456 | Testing infrastructure (@capture_logging) can't use custom JSON encoders | If you have a custom JSON encoder, and you try to test your log messages, your tests will fail because the `MemoryLogger` code path always encodes with the plain-vanilla JSON encoder.
Given `FileDestination` supports a custom JSON encoder, this is a problem. | closed | 2020-11-03T15:00:20Z | 2020-12-15T19:09:24Z | https://github.com/itamarst/eliot/issues/456 | [
"bug"
] | itamarst | 0 |
alpacahq/alpaca-trade-api-python | rest-api | 367 | Using paper trading, bars request returns 403 Forbidden | I've been running paper trading for the last few days without problem. Today for some reason, when I go to look up prices using the bars API, I am getting a 403 forbidden error.
I have version 0.51.0 installed, which, from what I can tell on pip, is the latest version.
Here's a simple example.
```python
import alpaca_trade_api as tradeapi
api = tradeapi.REST(base_url="https://paper-api.alpaca.markets", key_id="<my paper key here>", secret_key="<my paper secret here>")
def get_current_price(ticker):
symbol_bars = api.get_barset(ticker, 'minute', 1).df.iloc[0]
current_price = symbol_bars[ticker]['close']
return float(current_price)
print(get_current_price("AAPL"))
```
This code example is almost straight from the API documentation.
The error I get is:
```
Traceback (most recent call last):
File "trader.py", line 110, in <module>
print(get_current_price("AAPL"))
File "trader.py", line 94, in get_current_price
symbol_bars = api.get_barset(ticker, 'minute', 1).df.iloc[0]
File "/usr/local/lib/python3.8/site-packages/alpaca_trade_api/rest.py", line 456, in get_barset
resp = self.data_get('/bars/{}'.format(timeframe), params)
File "/usr/local/lib/python3.8/site-packages/alpaca_trade_api/rest.py", line 172, in data_get
return self._request(
File "/usr/local/lib/python3.8/site-packages/alpaca_trade_api/rest.py", line 119, in _request
return self._one_request(method, url, opts, retry)
File "/usr/local/lib/python3.8/site-packages/alpaca_trade_api/rest.py", line 140, in _one_request
resp.raise_for_status()
File "/usr/local/lib/python3.8/site-packages/requests/models.py", line 941, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 403 Client Error: Forbidden for url: https://data.alpaca.markets/v1/bars/minute?symbols=AAPL&limit=1
```
This code has been working without problem until this morning, for some reason.
Edit: I've also tried forcing the version in the constructor of the trade API to be `v2`, however, looking at the documentation here: https://alpaca.markets/docs/api-documentation/api-v2/market-data/bars/ it appears that the v2 API still uses a v1 URL for this action? | closed | 2021-01-14T19:33:11Z | 2021-03-23T15:22:19Z | https://github.com/alpacahq/alpaca-trade-api-python/issues/367 | [] | joshterrill | 2 |
gee-community/geemap | streamlit | 445 | Publish Maps Notebook 24 - Datapane - Invalid version 'unknown' | Thanks for these lessons and videos. A great resource.
### Environment Information
- geemap version: 0.8.14
- Python version: 3.9.2
- Operating System: macOS 10.15.7
### Description
I am trying to run the Example notebook 24 Publish Maps https://github.com/giswqs/geemap/blob/master/examples/notebooks/24_publish_maps.ipynb
### What I Did
I signed up and created a Datapane account and followed all the cells from your notebook.
To confirm datapane is set up I ran:
```
import datapane as dp
dp.ping()
```
Response:
```
Connected successfully to https://datapane.com as {username}
```
Where it breaks is here:
```
Map.publish(name='gee_folium_map', headline='Terrain Visualization', visibility='PUBLIC', overwrite=True)
```
Response:
```
Invalid version: 'unknown'
```
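A hedged guess at the mechanism (not confirmed in the thread): the message has the exact shape of the error `packaging.version.Version` raises when handed a placeholder version string such as `'unknown'`, e.g. while a client inspects an installed package's version:

```python
from packaging.version import InvalidVersion, Version

# Parsing a placeholder version string fails PEP 440 validation.
try:
    Version("unknown")
    message = None
except InvalidVersion as exc:
    message = str(exc)

assert message is not None and "Invalid version" in message
```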

| closed | 2021-04-26T21:43:53Z | 2021-11-13T06:28:25Z | https://github.com/gee-community/geemap/issues/445 | [
"bug"
] | nygeog | 1 |
davidsandberg/facenet | tensorflow | 306 | How to train my model? | closed | 2017-06-02T08:18:41Z | 2017-06-05T00:55:03Z | https://github.com/davidsandberg/facenet/issues/306 | [] | ouyangbei | 6 | |
pywinauto/pywinauto | automation | 694 | move_window not available on dialog | According to the documentation, move_window should be available on all controls, and it seems like I have a valid dialog object. But when I try to use this method on the main window of my application, I get an error. I'm assuming that I am actually doing it wrong rather than this being a bug, but any help would be appreciated. I was using v 0.6.5 but updated to 0.6.6 to see if that changed anything; both have the same results. Thanks!
Here's the code snippet that is running, followed by the error and the stdout for that section:
```
print(self.main_window)
print(self.main_window.handle)
print(self.main_window.wrapper_object())
self.main_window.restore()
self.main_window.move_window(x=0,y=0,width=700,height=800)
```
main_window is set with app.window(title_re='Alteryx Designer.*'); I just cached that reference since I was looking it up all the time. I don't think that affects this; I did try getting it again from the application object before calling move_window and got the same result.
```
Traceback (most recent call last):
File "c:\git\py-auto\TestFramework\STF\test\explorer_unit_test.py", line 62, in test_drag_and_drop
designer.position_window()
File "c:\git\py-auto\testframework\STF\modules\designer.py", line 1090, in position_window
self.main_window.move_window(x=0,y=0,width=700,height=800)
File "C:\Users\sezell\AppData\Local\Continuum\anaconda3\lib\site-packages\pywinauto\application.py", line 180, in __call__
format(self.criteria[-1]['best_match']))
AttributeError: Neither GUI element (wrapper) nor wrapper method 'move_window' were found (typo?)
-------------------- >> begin captured stdout << ---------------------
<pywinauto.application.WindowSpecification object at 0x0000019B593A4748>
985552
uiawrapper.UIAWrapper - 'Alteryx Designer x64 - New Workflow1', Dialog
``` | open | 2019-03-25T16:04:54Z | 2019-03-25T22:55:18Z | https://github.com/pywinauto/pywinauto/issues/694 | [
"duplicate"
] | alteryx-sezell | 1 |
xonsh/xonsh | data-science | 5,120 | Is the tutorial_ptk correct (maybe need to be fixed)? | 1) Is the tutorial_ptk correct?
I am a beginner in Python, and I wrote my .xonshrc as shown here: https://xon.sh/tutorial_ptk.html
```
@events.on_ptk_create
def custom_keybindings(bindings, **kw):
@handler(Keys.ControlP)
def run_ls(event):
ls -l
event.cli.renderer.erase()
```
But error:
```
$ xonsh
xonsh: For full traceback set: $XONSH_SHOW_TRACEBACK = True
NameError: name 'handler' is not defined
Exception raised in event handler; ignored.
```
I searched the bug tracker and found the missing line: `handler = bindings.add`
The correct example is:
```
@events.on_ptk_create
def custom_keybindings(bindings, **kw):
handler = bindings.add
@handler(Keys.ControlP)
def run_ls(event):
ls -l
event.cli.renderer.erase()
```
The article may need to be fixed.
2) When to use @bindings.add and when @handler()?
These are two examples with identical behaviour:
```
@bindings.add(Keys.ControlW)
def say_hi(event):
ls
event.cli.renderer.erase()
```
```
@handler(Keys.ControlP)
def run_ls(event):
ls
event.cli.renderer.erase()
```
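Since the corrected snippet assigns `handler = bindings.add`, the two decorators above are the same callable and the behaviour is identical. The aliasing pattern can be sketched in plain Python, without prompt_toolkit (toy classes, not the real API):

```python
registry = []

class Bindings:
    def add(self, key):
        # Decorator factory: records (key, func) and returns func unchanged.
        def decorator(func):
            registry.append((key, func))
            return func
        return decorator

bindings = Bindings()
handler = bindings.add  # the tutorial's alias: same bound method, new name

@bindings.add("c-w")
def say_hi(event):
    return "hi"

@handler("c-p")
def run_ls(event):
    return "ls"

# Both decorators registered through the same underlying method.
assert [key for key, _ in registry] == ["c-w", "c-p"]
```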
3) What are the xonsh analogues of bash's editable-line variables?
Bash:
$READLINE_LINE — editable string
$READLINE_POINT — cursor position
$READLINE_MARK — position of selection
## For community
⬇️ **Please click the 👍 reaction instead of leaving a `+1` or 👍 comment**
| open | 2023-04-19T16:25:08Z | 2023-04-21T14:39:40Z | https://github.com/xonsh/xonsh/issues/5120 | [
"docs",
"prompt-toolkit"
] | pigasus55 | 1 |
PokeAPI/pokeapi | graphql | 1,045 | Pokemons rename in species request | The names of the Pokémon differ between the URLs `https://pokeapi.co/api/v2/pokemon/{name}` and `https://pokeapi.co/api/v2/pokemon-species/{name}`, generating errors.
For example, in the `https://pokeapi.co/api/v2/pokemon/892` request the name of the Pokémon is **urshifu-single-strike**, while in the `https://pokeapi.co/api/v2/pokemon-species/892` request the name is **urshifu**, generating an error if the request is made by name. | closed | 2024-02-15T16:02:46Z | 2024-02-21T18:32:11Z | https://github.com/PokeAPI/pokeapi/issues/1045 | [] | aristofany-herderson | 2 |
flairNLP/fundus | web-scraping | 159 | Generate/Link xpath/css documentation | This is important, since most of the work in adding a parser consists of XPath/CSS selectors. We should ease this part of the contribution. | closed | 2023-04-06T12:22:09Z | 2023-08-22T17:58:05Z | https://github.com/flairNLP/fundus/issues/159 | [
"documentation"
] | Weyaaron | 2 |
ploomber/ploomber | jupyter | 276 | Document how to send custom parameters to nbconvert | We use the official nbconvert package to export notebooks to different formats. Each output format has some extra options that users can set via the `nbconvert_export_kwargs` parameter in `NotebookRunner` (such as hiding input cells). However, these details are hidden in the Python API docs, and even there, they aren't explained clearly:
- [x] Document in NotebookRunner some use cases for custom args
- [x] Also document this in the user guide for people who use the spec API (YAML)
Thanks @grst for reporting! | closed | 2020-11-04T19:41:33Z | 2020-11-16T02:36:47Z | https://github.com/ploomber/ploomber/issues/276 | [] | edublancas | 2 |
strawberry-graphql/strawberry | django | 2,943 | Allow other GraphiQL interfaces | I think we should allow users to choose between GraphiQL and other interfaces (like the Apollo Explorer).
This might not be too difficult to implement now that we have a base view, but maybe we need some tweaks for Django, or we should at least consider keeping support for overriding the playground using templates (I think this works now, right @bellini666?)
The only thing I'm not sure about this, is if we set this option on the schema, or on the views. The views currently also have the ability to enable/disable GraphiQL, so maybe it should live there. | closed | 2023-07-12T14:52:52Z | 2025-03-20T15:56:18Z | https://github.com/strawberry-graphql/strawberry/issues/2943 | [
"feature-request"
] | patrick91 | 5 |
QingdaoU/OnlineJudge | django | 402 | JavaScript support issue in Docker deployment |
I installed following OnlineJudgeDeploy, but because my overseas machine has trouble reaching Alibaba Cloud and could not download the resources, I installed from the Docker Hub image `qduoj/judge-server` instead.
After installing, I added a problem but found there is no JavaScript option. Could the image version be outdated? If so, could it be updated to the latest?
Many thanks~ | open | 2022-01-25T06:49:26Z | 2022-01-26T03:30:06Z | https://github.com/QingdaoU/OnlineJudge/issues/402 | [] | akira-cn | 1 |
CorentinJ/Real-Time-Voice-Cloning | python | 934 | File structure for training (encoder, synthesizer (vocoder)) | I want to train my own model on the Mozilla Common Voice dataset.
All .mp3s are delivered in one folder with accompanying .tsv lists. As I understand it, the corresponding .txt transcript has to reside next to each utterance.
But what about the folder structure? Can I leave all .mp3s in that one folder, or do I have to split them into one subdirectory per speaker (I'd hate to do that)?
I would be very thankful if somebody could help me with the code adjustments since I am quite new to all of this :)
| open | 2021-12-02T06:38:49Z | 2022-09-01T14:43:28Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/934 | [] | Dannypeja | 24 |
Kanaries/pygwalker | plotly | 555 | Issue with Pygwalker | This is the error I am getting while trying to execute this code:
walker=pyg.walk(df)
Error:
[Open Browser Console for more detailed log - Double click to close this message]
Failed to load model class 'BoxModel' from module '@jupyter-widgets/controls'
Error: Module @jupyter-widgets/controls, version ^1.5.0 is not registered, however, 2.0.0 is
at f.loadClass (http://localhost:8889/lab/extensions/@jupyter-widgets/jupyterlab-manager/static/134.a63a8d293fb35a52dc25.js?v=a63a8d293fb35a52dc25:1:75057)
at f.loadModelClass (http://localhost:8889/lab/extensions/@jupyter-widgets/jupyterlab-manager/static/336.ebc7a55ea1768712771f.js?v=ebc7a55ea1768712771f:1:10729)
at f._make_model (http://localhost:8889/lab/extensions/@jupyter-widgets/jupyterlab-manager/static/336.ebc7a55ea1768712771f.js?v=ebc7a55ea1768712771f:1:7517)
at f.new_model (http://localhost:8889/lab/extensions/@jupyter-widgets/jupyterlab-manager/static/336.ebc7a55ea1768712771f.js?v=ebc7a55ea1768712771f:1:5137)
at f.handle_comm_open (http://localhost:8889/lab/extensions/@jupyter-widgets/jupyterlab-manager/static/336.ebc7a55ea1768712771f.js?v=ebc7a55ea1768712771f:1:3894)
at _handleCommOpen (http://localhost:8889/lab/extensions/@jupyter-widgets/jupyterlab-manager/static/134.a63a8d293fb35a52dc25.js?v=a63a8d293fb35a52dc25:1:73473)
at v._handleCommOpen (http://localhost:8889/static/notebook/3676.bundle.js:1:30808)
at async v._handleMessage (http://localhost:8889/static/notebook/3676.bundle.js:1:32702)
| closed | 2024-05-18T11:24:03Z | 2024-05-19T06:36:26Z | https://github.com/Kanaries/pygwalker/issues/555 | [] | Aditi-Gupta2001 | 4 |
quantmind/pulsar | asyncio | 225 | JsonProxy is not using utf-8 | The following line does not work as intended as it does not send requests encoded as utf-8:
`json.dumps(data).encode('utf-8')`
According to the docs I think you should set `ensure_ascii=False`:
`json.dumps(data, ensure_ascii=False).encode('utf-8')`
> If ensure_ascii is True (the default), all non-ASCII characters in the output are escaped with \uXXXX sequences
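A quick stand-alone check of the difference, using only the standard `json` module:

```python
import json

data = {"msg": "héllo"}

escaped = json.dumps(data).encode("utf-8")                       # default: non-ASCII escaped as \uXXXX
raw_utf8 = json.dumps(data, ensure_ascii=False).encode("utf-8")  # actual UTF-8 bytes on the wire

print(escaped)   # the é arrives as the six ASCII characters \u00e9
print(raw_utf8)  # the é arrives as the two UTF-8 bytes 0xC3 0xA9
```

Both payloads decode to the same object, but only the second sends real UTF-8, which is what the proposed `ensure_ascii=False` fix produces.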
| closed | 2016-06-15T14:03:14Z | 2016-06-23T09:38:46Z | https://github.com/quantmind/pulsar/issues/225 | [] | wilddom | 3 |
onnx/onnx | scikit-learn | 5,799 | Verify implementation of BatchNormalization-9 | In the reference implementation of BatchNormalization-9, `_batchnorm_test_mode` is used twice. All the other implementations use both `_batchnorm_test_mode` and `_batchnorm_train_mode`. I'm trying to implement this operator in [GONNX](https://github.com/AdvancedClimateSystems/gonnx), and I'm wondering if this is correct. Can someone confirm this? Thanks in advance!
https://github.com/onnx/onnx/blob/6ff456c1179c34827ad910e5601cb1486822d800/onnx/reference/ops/op_batch_normalization.py#L64
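For anyone comparing against the spec: the two modes differ only in where the statistics come from. Test mode normalizes with the stored running mean/variance, while train mode computes them from the batch itself. A minimal 1-D sketch in plain Python (illustrative only, not the ONNX reference code, and ignoring the per-channel axes):

```python
import math

def batchnorm_test_mode(x, scale, bias, mean, var, eps=1e-5):
    # normalize with *stored* statistics (inference)
    return [scale * (xi - mean) / math.sqrt(var + eps) + bias for xi in x]

def batchnorm_train_mode(x, scale, bias, eps=1e-5):
    # compute statistics from the batch itself, then normalize
    mean = sum(x) / len(x)
    var = sum((xi - mean) ** 2 for xi in x) / len(x)
    return batchnorm_test_mode(x, scale, bias, mean, var, eps)

out = batchnorm_train_mode([1.0, 2.0, 3.0, 4.0], scale=1.0, bias=0.0)
print(out)  # roughly zero-mean, unit-variance
```

So whether using `_batchnorm_test_mode` twice is correct depends on whether that opset-9 reference is meant to cover training at all.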
| closed | 2023-12-10T18:45:26Z | 2025-01-03T06:44:36Z | https://github.com/onnx/onnx/issues/5799 | [
"stale"
] | Swopper050 | 3 |
geex-arts/django-jet | django | 90 | Duplicate app label name with django-oscar dashboard | OK, I found this error when installing `django-oscar` and `django-jet` in the same project.
``` bash
./manage.py runserver ✓ 1567 11:22:45
Unhandled exception in thread started by <function check_errors.<locals>.wrapper at 0x7febb8eafd90>
Traceback (most recent call last):
File "/home/salahaddin/Proyectos/demo-oscar/lib/python3.5/site-packages/django/utils/autoreload.py", line 226, in wrapper
fn(*args, **kwargs)
File "/home/salahaddin/Proyectos/demo-oscar/lib/python3.5/site-packages/django/core/management/commands/runserver.py", line 109, in inner_run
autoreload.raise_last_exception()
File "/home/salahaddin/Proyectos/demo-oscar/lib/python3.5/site-packages/django/utils/autoreload.py", line 249, in raise_last_exception
six.reraise(*_exception)
File "/home/salahaddin/Proyectos/demo-oscar/lib/python3.5/site-packages/django/utils/six.py", line 685, in reraise
raise value.with_traceback(tb)
File "/home/salahaddin/Proyectos/demo-oscar/lib/python3.5/site-packages/django/utils/autoreload.py", line 226, in wrapper
fn(*args, **kwargs)
File "/home/salahaddin/Proyectos/demo-oscar/lib/python3.5/site-packages/django/__init__.py", line 18, in setup
apps.populate(settings.INSTALLED_APPS)
File "/home/salahaddin/Proyectos/demo-oscar/lib/python3.5/site-packages/django/apps/registry.py", line 89, in populate
"duplicates: %s" % app_config.label)
django.core.exceptions.ImproperlyConfigured: Application labels aren't unique, duplicates: dashboard
```
I solved this and I'll make a PR for it.
| open | 2016-07-25T16:27:43Z | 2018-05-28T09:17:46Z | https://github.com/geex-arts/django-jet/issues/90 | [] | SalahAdDin | 10 |
cupy/cupy | numpy | 8,213 | Typecasting issue | ### Description
It seems that CuPy is incorrectly typecasting data. The data type should remain unchanged if the user explicitly specifies it during tensor creation. However, the data type is being changed from float16 to float32, even though it was manually set to float16. Please refer to the output and code for details.
### To Reproduce
```py
import cupy as cp
from prettytable import PrettyTable
import argparse
def bench_time_matmul_cupy(input: cp.ndarray, weights: cp.ndarray, output: cp.ndarray, warmup: int, iters: int):
stream = cp.cuda.Stream(non_blocking=True)
start = cp.cuda.Event(disable_timing=False)
end = cp.cuda.Event(disable_timing=False)
with stream:
for ii in range(warmup + iters):
if ii == warmup:
start.record(stream)
cp.matmul(input, weights, out=output)
end.record(stream)
end.synchronize()
return cp.cuda.get_elapsed_time(start, end) / iters
def run_benchmark(
BL: int, H2_by_N: int, H: int, warmup: int, iters: int, ngpus: int
):
input_cupy = cp.ones((BL, H2_by_N), dtype=cp.float16) * (1.1) / H
weights_cupy = cp.ones((H2_by_N, H), dtype=cp.float16) * (3.1) / H
output_cupy = cp.zeros((BL, H), dtype=cp.float16)
print(f"BL: {BL}, H2_by_N: {H2_by_N}, H: {H}", flush=True)
print(f"input_data_type: {input_cupy.dtype}, weights_data_type: {weights_cupy.dtype}, output_data_type: {output_cupy.dtype}", flush=True)
cp.cuda.runtime.deviceSynchronize()
full_cupy1 = bench_time_matmul_cupy(input_cupy, weights_cupy, output_cupy, warmup, iters)
cp.cuda.runtime.deviceSynchronize()
print(f"cast to float16", flush=True)
input_cupy = input_cupy.astype(cp.float16)
weights_cupy = weights_cupy.astype(cp.float16)
print(f"input_data_type: {input_cupy.dtype}, weights_data_type: {weights_cupy.dtype}, output_data_type: {output_cupy.dtype}", flush=True)
cp.cuda.runtime.deviceSynchronize()
full_cupy2 = bench_time_matmul_cupy(input_cupy, weights_cupy, output_cupy, warmup, iters)
print(".", end="", flush=True)
return full_cupy1, full_cupy2
if __name__ == "__main__":
parser = argparse.ArgumentParser(description="Matmul_reducescatter overlap")
parser.add_argument('-g', '--ngpus', default=2, required=False, type=int)
parser.add_argument('-b', '--batch_size', default=1, required=False, type=int)
parser.add_argument('-l', '--seq_len', default=1, required=False, type=int)
parser.add_argument('-hs', '--hidden_size', default=8192, required=False, type=int)
parser.add_argument('-w', '--num_warmup', default=20, required=False, type=int)
parser.add_argument('-i', '--active_iters', default=100, required=False, type=int)
parser.add_argument('--debug', action='store_true', default=False)
args = parser.parse_args()
cp.cuda.Device(0).use()
H = args.hidden_size
if H == 8192:
H2_by_N = 28672 // args.ngpus
elif H == 4096:
H2_by_N = 11008 // args.ngpus
else:
H2_by_N = (H * 4) // args.ngpus
debug = args.debug
table = None
# Set table headers
table = PrettyTable()
table.field_names = [
"BL",
"H2_by_N",
"H",
"Full Cupy - 1 (ms)",
"Full Cupy - 2 (ms)",
]
batch_sizes = [4]
seq_lens = [1]
BLs = list(set(b * l for b in batch_sizes for l in seq_lens))
BLs.sort()
full_cupys1 = [0] * len(BLs)
full_cupys2 = [0] * len(BLs)
for BL in BLs:
full_cupy1, full_cupy2 = run_benchmark(
BL, H2_by_N, H, args.num_warmup, args.active_iters, args.ngpus
)
full_cupys1[BLs.index(BL)] = full_cupy1
full_cupys2[BLs.index(BL)] = full_cupy2
import datetime
time_stamp = datetime.datetime.now().strftime("%Y-%m-%d-%H-%M-%S")
dir = "cupy_results"
import os
if not os.path.exists(dir):
os.makedirs(dir)
file = f"{dir}/matmul_{time_stamp}.txt"
print()
print(f"=====================================Table=====================================", flush=True)
for i in range(len(BLs)):
table.add_row(
[
BLs[i],
H2_by_N,
H,
"{:.2f}".format(full_cupys1[i]),
"{:.2f}".format(full_cupys2[i]),
]
)
with open(file, "w") as f:
f.write(f"ngpus: {args.ngpus}, H: {H}, H2_by_N: {H2_by_N}, num_warmup: {args.num_warmup}, active_iters: {args.active_iters}\n")
f.write(str(table))
print(table, flush=True)
```
### Installation
Wheel (`pip install cupy-***`)
### Environment
```
BL: 4, H2_by_N: 14336, H: 8192
input_data_type: float32, weights_data_type: float32, output_data_type: float16
cast to float16
input_data_type: float16, weights_data_type: float16, output_data_type: float16
.
=====================================Table=====================================
+----+---------+------+--------------------+--------------------+
| BL | H2_by_N | H | Full Cupy - 1 (ms) | Full Cupy - 2 (ms) |
+----+---------+------+--------------------+--------------------+
| 4 | 14336 | 8192 | 0.36 | 0.16 |
+----+---------+------+--------------------+--------------------+
```
### Additional Information
_No response_ | closed | 2024-02-27T03:47:22Z | 2024-02-27T06:33:03Z | https://github.com/cupy/cupy/issues/8213 | [
"issue-checked"
] | rajagond | 2 |
BayesWitnesses/m2cgen | scikit-learn | 331 | LGBMClassifier (lightgbm==2.3.0) may be unsupported when tree['tree_structure']['decision_type'] is '==' | AssertionError: Unexpected comparison op
I find that my model.booster_.dump_model()['tree_info'][0]['tree_structure']['decision_type'] is '=='
but the code asserts `assert op == ast.CompOpType.LTE, "Unexpected comparison op"`
and ast.CompOpType.LTE is '<='
I changed my code to `assert op == ast.CompOpType.LTE or op == ast.CompOpType.EQ, "Unexpected comparison op"`
and my code now runs.
My lightgbm version is 2.3.0.
I want to know whether this change is correct, thanks. | closed | 2020-12-22T11:00:36Z | 2020-12-23T08:04:28Z | https://github.com/BayesWitnesses/m2cgen/issues/331 | [] | Sherlockgg | 1 |
andfanilo/streamlit-echarts | streamlit | 6 | echarts 5 support | Still waiting for https://github.com/hustcc/echarts-for-react/issues/388
A user wanted a demo of [gauge ring](https://echarts.apache.org/next/examples/en/editor.html?c=gauge-ring) in Streamlit | closed | 2020-12-11T07:48:52Z | 2021-04-16T19:46:51Z | https://github.com/andfanilo/streamlit-echarts/issues/6 | [] | andfanilo | 2 |
plotly/dash | flask | 2,911 | Inconsistent behavior with dcc.Store initial values and prevent_initial_call |
```
dash 2.16.1
dash-bootstrap-components 1.6.0
dash-core-components 2.0.0
dash-html-components 2.0.0
dash-iconify 0.1.2
dash-mantine-components 0.12.1
dash-table 5.0.0
```
**Description**
There appears to be an inconsistency in how dcc.Store components behave with different initial values when using prevent_initial_call=True.
**Expected behavior**
When using `prevent_initial_call=True`, callbacks should not be triggered on initial load for any dcc.Store, regardless of initial value.
**Actual behavior**
A callback with `prevent_initial_call=True` is triggered for a dcc.Store initialized with None, but not for one initialized with an empty dictionary {}.
Minimal reproducible example:
```python
import dash
from dash import dcc, html, Input, Output, State
app = dash.Dash(__name__)
app.layout = html.Div([
dcc.Store(id='store_none', data=None),
dcc.Store(id='store', data={}),
html.H3("none store"),
html.Div(id='output none store', children="Not called yet"),
html.H3("store"),
html.Div(id='output store', children="Not called yet"),
])
@app.callback(
Output('output none store', 'children'),
Input('store_none', 'data'),
prevent_initial_call=True
)
def on_store_none(store_none):
return "called"
@app.callback(
Output('output store', 'children'),
Input("store", "data"),
prevent_initial_call=True
)
def on_store(store):
return "called"
if __name__ == '__main__':
app.run_server(debug=True)
```
**Screenshots**
 | open | 2024-07-03T16:42:36Z | 2024-08-30T15:13:07Z | https://github.com/plotly/dash/issues/2911 | [
"bug",
"P3"
] | shimon-l | 1 |
tensorpack/tensorpack | tensorflow | 1,150 | Running multi-pod with multi-GPU (each pod contains four or more GPU cards); all pods run per your guideline but a problem occurs, so I list each component's version | ### 1. Running Multi-Pod With Multi-GPUs:
(1) **Multiple pods, each running four GPUs**
(2) **Running the Python script (imagenet-xxx.py) from your project**
(3) **Parameters are the same as in your example**
### 2. I observed:
(1) **communication between GPUs does not work**
(2) **gradients are not updated across GPUs**
### 3. What you expected, if not obvious.
(1) **Is the description in your guideline too brief?**
(2) **Can you provide a detailed document about the multi-machine, multi-GPU case?**
### 4. Your environment:
+ Python version: python3
+ Horovod version: 0.16.0
+ Tensorpack version: 0.9.4
### 5. My Script
- each pod contains four gpus
```
mpirun --allow-run-as-root -np 4 -mca plm_rsh_args '-p 22' --oversubscribe -mca plm_rsh_args '-p 22' --bind-to none -map-by slot -mca pml ob1 -x NCCL_IB_CUDA_SUPPORT=1 -x NCCL_IB_DISABLE=0 -x NCCL_DEBUG=INFO \
--mca btl_tcp_if_include 10.211.0.0/16 python3 imagenet-resnet-horovod.py -d 50 --data /data/ --load ${LOG_DIR}/model-134060 --eval --no-zmq-ops
``` | closed | 2019-04-16T13:47:00Z | 2019-04-23T07:26:07Z | https://github.com/tensorpack/tensorpack/issues/1150 | [
"unrelated"
] | perrynzhou | 2 |
microsoft/qlib | deep-learning | 1,819 | Will you consider improving and enhancing the Reinforcement Learning functionality and examples? | Will you consider improving and enhancing the Reinforcement Learning functionality and examples?
The current example runs slowly and has not been updated for a long time.

 | open | 2024-07-01T12:53:23Z | 2024-09-28T07:09:33Z | https://github.com/microsoft/qlib/issues/1819 | [
"question"
] | ghyzx | 1 |
pykaldi/pykaldi | numpy | 102 | There is a mismatch between fbank_feature.shape and pitch_feature.shape | 1. fbank_feature:
```python
from kaldi.feat.wave import WaveData
from kaldi.base._iostream import *
from kaldi.util.io import *
from kaldi.matrix import SubVector  # assumed import; fbank and sf_fbank are defined earlier
from numpy import mean

wavedata = WaveData()
inp = Input("BAC009S0912W0121.wav")
wavedata.read(inp.stream())
s3 = wavedata.data()
s3 = s3[:, ::int(wavedata.samp_freq / sf_fbank)]
m3 = SubVector(mean(s3, axis=0))
f3 = fbank.compute_features(m3, sf_fbank, 1.00)
print(f3.shape)
```
result: (**394**, 80)
2. pitch_feature:
```python
from kaldi.feat.mfcc import Mfcc, MfccOptions
from kaldi.feat.pitch import PitchExtractionOptions, ProcessPitchOptions
from kaldi.feat.pitch import *

pitch_opts = PitchExtractionOptions()
pitch_opts.samp_freq = 16000.0
feat_pitch = compute_kaldi_pitch(pitch_opts, m3)
processpitchoptions = ProcessPitchOptions()
f_pitch = process_pitch(processpitchoptions, feat_pitch)
print(f_pitch.shape)
```
result: (**395**, 3)
Why is the first dimension different?
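Not an answer from the pykaldi team, but a likely explanation: the frame count depends on the window length, frame shift, and `snip_edges` setting, which can differ between feature types, so an off-by-one is easy to get. Kaldi's frame-count rule can be sketched as (my reconstruction, worth double-checking against `feature-window.cc`):

```python
def num_frames(num_samples, frame_length, frame_shift, snip_edges=True):
    """Frame count following Kaldi's feature-window logic (sketch)."""
    if snip_edges:
        # only windows that fit entirely inside the signal
        if num_samples < frame_length:
            return 0
        return 1 + (num_samples - frame_length) // frame_shift
    # snip_edges=False: frames are centered and the edges are padded
    return (num_samples + frame_shift // 2) // frame_shift

# same audio length, different window lengths -> counts differ by one
print(num_frames(63300, frame_length=400, frame_shift=160))  # 394
print(num_frames(63300, frame_length=160, frame_shift=160))  # 395
```

Comparing the actual window/snip settings of the fbank and pitch extractors would confirm whether this is the cause.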
| closed | 2019-03-25T08:57:32Z | 2019-03-26T03:09:08Z | https://github.com/pykaldi/pykaldi/issues/102 | [] | liuchenbaidu | 2 |
tiangolo/uvicorn-gunicorn-fastapi-docker | pydantic | 76 | Root path is applied 2 times when using root_path | Here is error:

Python 3.7.8
fastapi 0.63.0
Run configuration:
`uvicorn.run("app:app", host="0.0.0.0", port=port, reload=True)`
I think the error is in the path; here is the console log:
`127.0.0.1:54913 - "GET /api/v1/api/v1/openapi.json HTTP/1.1" 404 Not Found`
Here is double path: /api/v1/**api/v1/**
Doc path is wrong http://localhost:8080/docs I expect it to be: http://localhost:8080/api/v1/docs
Here is the test code to reproduce it ('test_app.py'):
```
import uvicorn
import json
from fastapi import FastAPI, APIRouter, Response
from fastapi.responses import RedirectResponse
app = FastAPI(title="Root path test", root_path="/api/v1")
@app.post("/test-call ", tags=["test"])
def ping():
return Response(
json.dumps(dict(ping='pong')),
headers={'Content-Type':'application/json'})
@app.get("/")
def read_typer():
return RedirectResponse('/docs')
if __name__ == "__main__":
uvicorn.run("test_app:app", host="0.0.0.0", port=8080, reload=True)
```
| closed | 2021-02-25T15:46:02Z | 2022-11-25T00:24:13Z | https://github.com/tiangolo/uvicorn-gunicorn-fastapi-docker/issues/76 | [
"answered"
] | mindej | 2 |
healthchecks/healthchecks | django | 384 | Add a curl example in PHP example section | Suggested here: https://twitter.com/smknstd/status/1272818956076810247

Aside from the code sample, it will need some accompanying information–
- is curl support typically built in with PHP, or does it need to be installed / enabled separately?
- in the "20 retries, 5 second timeouts" example, what is the maximum amount of time it can use up?
- is there a risk of curl code throwing exceptions? Should the snippet perhaps be wrapped in try..catch?
I'm very out of touch with today's PHP, so I would appreciate any help. | closed | 2020-06-17T07:29:54Z | 2020-07-07T18:21:32Z | https://github.com/healthchecks/healthchecks/issues/384 | [] | cuu508 | 6 |
nvbn/thefuck | python | 1,500 | No fucks given for homebrew update command | ```console
% brew update go
Error: This command updates brew itself, and does not take formula names.
Use `brew upgrade go` instead.
% fuck
No fucks given
```
Thefuck should run the suggested command.
Env: The Fuck 3.32 using Python 3.13.2 and ZSH 5.9
It seems the rule doesn't work: https://github.com/nvbn/thefuck/blob/master/thefuck/rules/brew_update_formula.py | open | 2025-02-25T14:44:07Z | 2025-03-18T08:25:40Z | https://github.com/nvbn/thefuck/issues/1500 | [] | Hipska | 1 |
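For reference, the rule's intended behavior can be sketched like this (a simplification of `brew_update_formula.py`, not its exact code):

```python
def match(script, output):
    # the rule should fire when brew rejects `update <formula>`
    return ("update" in script
            and "This command updates brew itself" in output
            and "Use `brew upgrade" in output)

def get_new_command(script):
    # swap the first `update` for `upgrade`, keeping the formula name
    return script.replace("update", "upgrade", 1)

output = ("Error: This command updates brew itself, and does not take formula names.\n"
          "Use `brew upgrade go` instead.")
print(match("brew update go", output))    # True
print(get_new_command("brew update go"))  # brew upgrade go
```

Since the match condition holds for the reported error output, the question is why the installed rule does not trigger (perhaps the wording of brew's error changed between versions).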
ymcui/Chinese-LLaMA-Alpaca | nlp | 128 | Vocabulary merging question | A question for the experts: I trained a Chinese vocabulary myly.model based on [sentencepiece](https://github.com/google/sentencepiece) on in-domain Chinese corpora. How do I merge it with LLaMA's original vocabulary tokenizer.model? | closed | 2023-04-11T17:04:27Z | 2023-05-06T04:18:26Z | https://github.com/ymcui/Chinese-LLaMA-Alpaca/issues/128 | [] | jamestch | 9 |
vimalloc/flask-jwt-extended | flask | 133 | Default invalid token callback may not be secure | Hi, I'm quite new to this extension, and please correct me if I'm wrong.
While playing around with the `@jwt_protected` endpoint, and trying to customize my `invalid_token_loader` callback I noticed that we can get different error messages depending on how we tamper the token.
I understand it's reasonable to differentiate between an invalid token and an expired token and respond to them accordingly. However, when I play around with the bits in the token, I receive error messages of all kinds, including:
- Invalid header string: 'utf-8' codec can't decode byte 0x88 in position 18: invalid start byte
- Invalid crypto padding
- Signature verification failed
- Invalid payload padding
It concerns me that it might be revealing too much information than needed, and may introduce risk to the encryption. I'm not an expert in crypto, but it seems that the above design may result in a Padding Oracle Attack[ ( Wiki link here).](https://en.wikipedia.org/wiki/Padding_oracle_attack)
Do you think it's a good idea to just return a unified message such as "Token is invalid" instead? | closed | 2018-03-16T22:29:56Z | 2018-03-26T21:17:03Z | https://github.com/vimalloc/flask-jwt-extended/issues/133 | [] | CristianoYL | 2 |
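A sketch of the unified-message idea in plain Python (illustrative; flask-jwt-extended's actual callback API differs): collapse every decode failure into one generic response so a caller cannot distinguish padding errors from signature errors:

```python
def safe_decode(decode_fn, token):
    """Return (claims, None) on success, (None, generic_error) on any failure."""
    try:
        return decode_fn(token), None
    except Exception:
        # deliberately discard the specific reason (padding, signature, encoding...)
        return None, "Token is invalid"

def fake_decode(token):
    # hypothetical decoder standing in for the real JWT library
    if token != "good":
        raise ValueError("Invalid crypto padding")  # detail that would leak if surfaced
    return {"sub": "user"}

print(safe_decode(fake_decode, "good"))      # ({'sub': 'user'}, None)
print(safe_decode(fake_decode, "tampered"))  # (None, 'Token is invalid')
```

The specific reasons can still be logged server-side; the point is only that they should not reach the client.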
thp/urlwatch | automation | 232 | Identify jobs not only by URL, but also by filters and/or POST data | I want to check for changes in the "article-content" and the "table of contents" sections on the same web page. Both sections are in named classes. This is my config:
```
kind: url
name: GDPRTableOfContents
url: https://ico.org.uk/for-organisations/guide-to-the-general-data-protection-regulation-gdpr/
filter: element-by-class:toc,html2text
---
kind: url
name: Introduction
url: https://ico.org.uk/for-organisations/guide-to-the-general-data-protection-regulation-gdpr/
filter: element-by-class:article-content,html2text
---
...etc
```
However, when I run urlwatch with this it appears to get confused and reports:
```
changed: GDPRTableOfContents
--- @ Thu, 10 May 2018 16:26:04 +0100
+++ @ Thu, 10 May 2018 16:27:15 +0100
@@ -1,4 +0,0 @@
- Introduction
-The Guide to the GDPR explains the provisions of the GDPR to help organisations comply with its requirements. It is for those who have day-to-day responsibility for data protection.
-This is a living document and we are working to expand it in key areas. It includes links to relevant sections of the GDPR itself, to other ICO guidance and to guidance produced by the EU’s Article 29 Working Party. The Working Party includes representatives of the data protection authorities from each EU member state, and the ICO is the UK’s representative.
-Alongside the Guide to the GDPR, we have produced a number of tools to help organisations to prepare for the GDPR:
________________________________________
urlwatch 2.9, Copyright 2008-2018 Thomas Perl
Website: https://thp.io/2008/urlwatch/
watched 38 URLs in 3 seconds
```
when that text still exists in the page. My first check on page in the "table of contents" section is to check that no new pages have been added to the website. The second check of that page is to check that the introduction text itself has not been changed. There are then about 40 other checks on the other pages linked from the table of contents. Does the cache use the name as well as the URL to uniquely identify each page check?
| closed | 2018-05-10T15:51:46Z | 2020-07-10T13:16:18Z | https://github.com/thp/urlwatch/issues/232 | [] | cjohnsonuk | 7 |
deepset-ai/haystack | nlp | 8,524 | Allow subclassing of `Document` | I want to subclass `Document`. An issue arises when using built-in components that use `haystack.Document` as the type, because of the type checking performed in the pipeline:
```python
PipelineConnectError: Cannot connect 'cleaner.documents' with 'empty_doc_remover.documents': their declared input and output types do not match.
'cleaner':
- documents: List[Document]
'empty_doc_remover':
- documents: list[Document] (available)
```
**Describe the solution you'd like**
Allow subclasses of Document by using `issubclass()`
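A minimal sketch of the requested check with the standard `typing` helpers (illustrative only, not Haystack's actual connection-validation code):

```python
from typing import get_args, get_origin

class Document:            # stand-in for haystack.Document
    pass

class MyDocument(Document):
    pass

def list_types_compatible(sender_type, receiver_type) -> bool:
    """Accept list[Sub] -> list[Base] instead of requiring exact equality."""
    if get_origin(sender_type) is not list or get_origin(receiver_type) is not list:
        return sender_type == receiver_type
    return issubclass(get_args(sender_type)[0], get_args(receiver_type)[0])

assert list[MyDocument] != list[Document]                       # strict equality fails
assert list_types_compatible(list[MyDocument], list[Document])  # subclass-aware check passes
```

Note that accepting `list[Sub]` where `list[Base]` is expected assumes covariance, which seems reasonable for document streams but is a design choice the maintainers would need to confirm.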
| closed | 2024-11-08T14:32:23Z | 2025-03-03T15:00:24Z | https://github.com/deepset-ai/haystack/issues/8524 | [
"P3"
] | tsoernes | 0 |
littlecodersh/ItChat | api | 687 | How can the phone client still show message notifications while the script is running? | Because running itchat counts as a web login, the phone client no longer gives ringtone/vibration notifications for messages. Sometimes I still want timely ringtone and vibration alerts, though. What can I do? | closed | 2018-07-01T18:26:35Z | 2018-07-17T02:48:45Z | https://github.com/littlecodersh/ItChat/issues/687 | [] | caiqiqi | 2 |
ray-project/ray | tensorflow | 51,416 | [Ray Data | Core ] | ### What happened + What you expected to happen
I am running SAC on a custom environment, and use Ray Data to load a small csv file for training. I keep encountering the following error message about having too many positional arguments:
Exception occurred in Ray Data or Ray Core internal code. If you continue to see this error, please open an issue on the Ray project GitHub page with the full stack trace below:
https://github.com/ray-project/ray/issues/new/choose
```
Full stack trace:
Traceback (most recent call last):
File "C:\Users\[username]\AppData\Local\Programs\Python\Python312\Lib\site-packages\ray\data\exceptions.py", line 49, in handle_trace
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "C:\Users\[username]\AppData\Local\Programs\Python\Python312\Lib\site-packages\ray\data\_internal\plan.py", line 429, in execute_to_iterator
bundle_iter = itertools.chain([next(gen)], gen)
^^^^^^^^^
File "C:\Users\[username]\AppData\Local\Programs\Python\Python312\Lib\site-packages\ray\data\_internal\execution\interfaces\executor.py", line 37, in __next__
return self.get_next()
^^^^^^^^^^^^^^^
File "C:\Users\[username]\AppData\Local\Programs\Python\Python312\Lib\site-packages\ray\data\_internal\execution\legacy_compat.py", line 76, in get_next
bundle = self._base_iterator.get_next(output_split_idx)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\[username]\AppData\Local\Programs\Python\Python312\Lib\site-packages\ray\data\_internal\execution\streaming_executor.py", line 168, in get_next
self._outer.shutdown(
File "C:\Users\[username]\AppData\Local\Programs\Python\Python312\Lib\site-packages\ray\data\_internal\execution\streaming_executor.py", line 229, in shutdown
self._autoscaler.on_executor_shutdown()
File "C:\Users\[username]\AppData\Local\Programs\Python\Python312\Lib\site-packages\ray\data\_internal\execution\autoscaler\default_autoscaler.py", line 185, in on_executor_shutdown
actor.request_resources.remote({}, self._execution_id)
File "C:\Users\[username]\AppData\Local\Programs\Python\Python312\Lib\site-packages\ray\actor.py", line 206, in remote
return self._remote(args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\[username]\AppData\Local\Programs\Python\Python312\Lib\site-packages\ray\_private\auto_init_hook.py", line 21, in auto_init_wrapper
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "C:\Users\[username]\AppData\Local\Programs\Python\Python312\Lib\site-packages\ray\util\tracing\tracing_helper.py", line 422, in _start_span
return method(self, args, kwargs, *_args, **_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\[username]\AppData\Local\Programs\Python\Python312\Lib\site-packages\ray\actor.py", line 366, in _remote
return invocation(args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\[username]\AppData\Local\Programs\Python\Python312\Lib\site-packages\ray\actor.py", line 347, in invocation
return actor._actor_method_call(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\[username]\AppData\Local\Programs\Python\Python312\Lib\site-packages\ray\actor.py", line 1479, in _actor_method_call
list_args = signature.flatten_args(function_signature, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\[username]\AppData\Local\Programs\Python\Python312\Lib\site-packages\ray\_private\signature.py", line 126, in flatten_args
validate_args(signature_parameters, args, kwargs)
File "C:\Users\[username]\AppData\Local\Programs\Python\Python312\Lib\site-packages\ray\_private\signature.py", line 99, in validate_args
raise TypeError(str(exc)) from None
TypeError: too many positional arguments
```
### Versions / Dependencies
Ray: 2.43
Python: 3.12.6
OS: Windows 11
### Reproduction script
[I am not entirely sure which part of my script led to the error - I just posted below as a starting point and happy to take further hints.]
```
import os
import ray
data_dir = os.path.join(os.path.dirname(os.path.realpath(__file__)), "Data")
ticker = env_config.get("ticker")
ticker_file_stream = os.path.join(f"{data_dir}", f"{ticker}.csv")
assert os.path.isfile(
ticker_file_stream
), f"Historical data file stream not found at: {ticker_file_stream}"
ds = ray.data.read_csv(ticker_file_stream)
print("Finished loading dataset.")
```
### Issue Severity
High: It blocks me from completing my task. | open | 2025-03-17T07:11:48Z | 2025-03-18T18:35:11Z | https://github.com/ray-project/ray/issues/51416 | [
"bug",
"triage",
"data"
] | teen4ever | 0 |
plotly/dash-core-components | dash | 601 | This fix correctly handles the `x` action case - only additional fix is that it might setting one of the date props superfluously if already / still `None`. | This fix correctly handles the `x` action case - only additional fix is that it might setting one of the date props superfluously if already / still `None`.
----------
Leaving the scope of this specific issue / fix and looking into the logic of this component as a whole. Multiple scenarios seem wrongly implemented and I think we should use this opportunity to address them instead of just fixing this one case.
For example, setting `end_date` first with `updatemode=bothdates` will trigger setProps, setting `start_date` afterwards will not, even though both values are now set -- there's an assumption here that start is always set before end...; manually resetting a value with `clearable=True` is possible if only one value is set but not if both are, etc.
@alexcjohnson My feeling is we should pay the price and fix these components in depth as we work on them. I'm almost positive many other comps will exhibit deep logic flaws as we start going through them. Would rather fix them for real than make 'cosmetic' fixes that addresses whatever specific issue was logged.
[Warning: Wall of text...]
Four props are involved in the update logic: `start_date`, `end_date`, `clearable`, `updatemode`.
The possible operation / cases the user might trigger are:
With `clearable=True|False`
- set start_date w/ end_date unset
- set end_date w/ start_date unset
- set start_date w/ end_date set
- set end_date w/ start_date set
With `clearable=True`
- manually unset start_date w/ end_date unset
- manually unset end_date w/ start_date unset
- manually unset start_date w/ end_date set
- manually unset end_date w/ start_date set
- clear w/ start_date set, end_date unset
- clear w/ start_date unset, end_date set
- clear w/ start_date set, end_date set
All combinations and expected behaviors, hopefully without mistakes:
`clearable=False` + `updatemode=singledate`
- set start_date w/ end_date unset -> setProps called w/ start_date only
- set end_date w/ start_date unset -> setProps called w/ end_date only
- set start_date w/ end_date set -> setProps called w/ start_date only
- set end_date w/ start_date set -> setProps called w/ end_date only
- manually unset start_date w/ end_date unset -> does not unset
- manually unset end_date w/ start_date unset -> does not unset
- manually unset start_date w/ end_date set -> does not unset
- manually unset end_date w/ start_date set -> does not unset
- clear w/ start_date set, end_date unset -> not available
- clear w/ start_date unset, end_date set -> not available
- clear w/ start_date set, end_date set -> not available
`clearable=False` + `updatemode=bothdates`
- set start_date w/ end_date unset -> state updated, no setProps call
- set end_date w/ start_date unset -> state updated, no setProps call
- set start_date w/ end_date set -> setProps called w/ start_date and end_date
- set end_date w/ start_date set -> setProps called w/ start_date and end_date
- manually unset start_date w/ end_date unset -> does not unset
- manually unset end_date w/ start_date unset -> does not unset
- manually unset start_date w/ end_date set -> does not unset
- manually unset end_date w/ start_date set -> does not unset
- clear w/ start_date set, end_date unset -> not available
- clear w/ start_date unset, end_date set -> not available
- clear w/ start_date set, end_date set -> not available
`clearable=True` + `updatemode=singledate`
- set start_date w/ end_date unset -> setProps called w/ start_date only
- set end_date w/ start_date unset -> setProps called w/ end_date only
- set start_date w/ end_date set -> setProps called w/ start_date only
- set end_date w/ start_date set -> setProps called w/ end_date only
- manually unset start_date w/ end_date unset -> unsets, setProps called with start_date only
- manually unset end_date w/ start_date unset -> unsets, setProps called with end_date only
- manually unset start_date w/ end_date set -> unsets, setProps called with start_date only
- manually unset end_date w/ start_date set -> unsets, setProps called with end_date only
- clear w/ start_date set, end_date unset -> unsets, setProps called with start_date
- clear w/ start_date unset, end_date set -> unsets, setProps called with end_date
- clear w/ start_date set, end_date set -> unsets, setProps called with start_date and end_date
`clearable=True` + `updatemode=bothdates`
- set start_date w/ end_date unset -> state updated, no setProps call
- set end_date w/ start_date unset -> state updated, no setProps call
- set start_date w/ end_date set -> setProps called w/ start_date and end_date
- set end_date w/ start_date set -> setProps called w/ start_date and end_date
- manually unset start_date w/ end_date unset -> unsets, setProps called with start_date
- manually unset end_date w/ start_date unset -> unsets, setProps called with end_date
- manually unset start_date w/ end_date set -> unsets, no setProps call
- manually unset end_date w/ start_date set -> unsets, no setProps call
- clear w/ start_date set, end_date unset -> unsets, setProps called with start_date
- clear w/ start_date unset, end_date set -> unsets, setProps called with end_date
- clear w/ start_date set, end_date set -> unsets, setProps called with end_date and start_date
NB. The above considers that `bothdates` triggers the setProps call if either both values are defined or both values are None.
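The NB rule above is compact enough to state as code — a hypothetical model of when a value change should reach `setProps` (a sketch of the intended rule, not the component's actual implementation):

```python
def should_call_set_props(updatemode, start_date, end_date):
    """Model of the NB rule: 'singledate' pushes every change immediately;
    'bothdates' pushes only when both values are set or both are cleared."""
    if updatemode == "singledate":
        return True
    return (start_date is None) == (end_date is None)
```

Under this model, setting `end_date` first and `start_date` second both end in a push, which removes the start-before-end assumption called out above.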
_Originally posted by @Marc-Andre-Rivet in https://github.com/plotly/dash-core-components/timeline_ | closed | 2019-08-08T20:35:31Z | 2019-08-08T20:35:53Z | https://github.com/plotly/dash-core-components/issues/601 | [] | byronz | 0 |
HumanSignal/labelImg | deep-learning | 648 | Terminal error: "ZeroDivisionError: float division by zero," program crashes when attempting to create YOLO training samples on large images | I am trying to label images (with bounding boxes) with the YOLO format, but the program keeps crashing when I try to label large images (works fine with Pascal VOC samples). I have several UAV images (dimensions 4000 x 3000 pixels) taken at low altitude, and I need to label these images, as they are very high-resolution and useful for my project. However, when I splice the images into 1000 x 1000 pixel images and convert from .jpg to .png format, it will save the samples just fine. I am using a conda environment to run the code, and am relatively new to Python.
Note: Installation followed [this](https://medium.com/@sanghuynh_73086/how-to-install-labelimg-in-windows-with-anaconda-c659b27f0f) tutorial.
The following code was run to open the program in Anaconda Shell:
```
conda activate labelImg
cd D:\\myDirectory
python labelImg.py D:\imageDirectory_with_multiple_images D:\imageDirectory\text_file_with_class_specified.txt
```
Note that there is one class to classify in the text file, "weeds."
After specifying where to save the YOLO samples (D:\imageDirectory_with_multiple_images), I opened the big image (First image below) to create training samples, but when clicking the "save" or "Next Image" button, the program crashes with the following error:
```
Traceback (most recent call last):
  File "labelImg.py", line 1339, in saveFile
    self._saveFile(savedPath)
  File "labelImg.py", line 1371, in _saveFile
    if annotationFilePath and self.saveLabels(annotationFilePath):
  File "labelImg.py", line 837, in saveLabels
    self.lineColor.getRgb(), self.fillColor.getRgb())
  File "D:\labelImg\libs\labelFile.py", line 89, in saveYoloFormat
    writer.save(targetFile=filename, classList=classList)
  File "D:\labelImg\libs\yolo_io.py", line 70, in save
    classIndex, xcen, ycen, w, h = self.BndBox2YoloLine(box, classList)
  File "D:\labelImg\libs\yolo_io.py", line 37, in BndBox2YoloLine
    xcen = float((xmin + xmax)) / 2 / self.imgSize[1]
ZeroDivisionError: float division by zero
```
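The traceback bottoms out in `BndBox2YoloLine` dividing by `self.imgSize[1]`, which is apparently 0 for these large images — i.e. the image dimensions were never read correctly. A defensive rewrite of that conversion (hypothetical names, a sketch rather than a patch to labelImg itself) would at least fail with a useful message instead of crashing:

```python
def bndbox_to_yolo_line(xmin, xmax, ymin, ymax, img_w, img_h):
    """Convert a pixel-space bounding box to YOLO's normalized
    (x_center, y_center, width, height), guarding against a zero-sized image."""
    if img_w <= 0 or img_h <= 0:
        raise ValueError(
            f"invalid image size {img_w}x{img_h}; was the image loaded correctly?")
    xcen = (xmin + xmax) / 2 / img_w
    ycen = (ymin + ymax) / 2 / img_h
    return xcen, ycen, (xmax - xmin) / img_w, (ymax - ymin) / img_h
```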
However, after slicing the big photos to smaller photos with dimensions of 1000 x 1000 using the following Python code in Jupyter Notebook...
```
import glob, os
import image_slicer

# NB: the glob pattern needs a file wildcard, otherwise only the directory path itself matches
for file in glob.glob('D:\\directory_of_large_images\\*.jpg'):
    image_slicer.slice(file, row=3, col=4)
```
...the YOLO training samples save just fine. The second attached photo (below) is a smaller, sliced photo that allowed the training samples to be saved, and did not crash the program.
Let me know if any other information is needed to solve the issue. Thanks!


| closed | 2020-09-17T20:04:18Z | 2022-01-07T15:56:36Z | https://github.com/HumanSignal/labelImg/issues/648 | [] | ib124 | 1 |
pandas-dev/pandas | python | 60,928 | ENH: Control resampling at halfyear with origin | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [ ] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
s1 = pd.Series(1, pd.date_range('2025', freq='D', periods=700)).resample('2QS-JAN').sum()
s2 = pd.Series(1, pd.date_range('2025-04', freq='D', periods=700)).resample('2QS-JAN').sum()
# s1 expectedly has timestamps in january and july
# s1
# 2025-01-01 181
# 2025-07-01 184
# 2026-01-01 181
# 2026-07-01 154
# Freq: 2QS-JAN, dtype: int64 # NB frequency
# but s2 unexpectedly has timestamps in april and october
# s2
# 2025-04-01 183
# 2025-10-01 182
# 2026-04-01 183
# 2026-10-01 152
# Freq: 2QS-JAN, dtype: int64 # NB frequency
s1.index.freq == s2.index.freq # True
```
### Issue Description
It seems there is no way to force where the period boundaries are when resampling at the 2-Quarter frequency. Resampling at `2QS-APR` gives the same results for `s1` and `s2` as those shown above.
### Expected Behavior
I'd expect the index of `s2` to also have timestamps on the first of January and July.
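Until the anchor is honoured, one workaround (a sketch, not an official pandas recipe) is to build the Jan-1/Jul-1 half-year labels by hand and group on them, which makes the boundaries independent of where the series starts:

```python
import numpy as np
import pandas as pd

s2 = pd.Series(1, pd.date_range('2025-04', freq='D', periods=700))

# Label each day with the Jan-1 or Jul-1 boundary of its half-year.
starts = pd.to_datetime(pd.DataFrame({
    'year': s2.index.year,
    'month': np.where(s2.index.month <= 6, 1, 7),
    'day': 1,
}))
out = s2.groupby(starts.values).sum()
# out is now indexed on 2025-01-01, 2025-07-01, 2026-01-01, ...
```

This loses the `2QS-JAN` frequency on the result, but the bucket edges are fixed to January and July regardless of the input's first timestamp.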
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 0691c5cf90477d3503834d983f69350f250a6ff7
python : 3.10.12
python-bits : 64
OS : Linux
OS-release : 6.9.3-76060903-generic
Version : #202405300957~1738770968~22.04~d5f7c84 SMP PREEMPT_DYNAMIC Wed F
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : C.UTF-8
LOCALE : en_US.UTF-8
pandas : 2.2.3
numpy : 1.26.4
pytz : 2024.2
dateutil : 2.9.0.post0
pip : 24.3.1
Cython : None
sphinx : 7.3.7
IPython : 8.29.0
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.12.3
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : None
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : 3.1.4
lxml.etree : None
matplotlib : 3.9.2
numba : None
numexpr : None
odfpy : None
openpyxl : 3.1.5
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : None
pyreadstat : None
pytest : 8.3.3
python-calamine : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : 0.9.0
xarray : None
xlrd : None
xlsxwriter : None
zstandard : None
tzdata : 2024.2
qtpy : None
pyqt5 : None
</details>
| closed | 2025-02-13T22:52:27Z | 2025-03-03T18:21:17Z | https://github.com/pandas-dev/pandas/issues/60928 | [
"Enhancement",
"Frequency",
"Resample"
] | rwijtvliet | 8 |
alteryx/featuretools | data-science | 2,673 | Remove premium primitives from docs to be able to release it | closed | 2024-02-16T18:02:46Z | 2024-02-16T19:14:28Z | https://github.com/alteryx/featuretools/issues/2673 | [] | tamargrey | 0 | |
erdewit/ib_insync | asyncio | 542 | Error for pnlSingle | Hi,
I am using
`ib.reqPnLSingle('account', '', fill.contract.conId)`
and I am receiving:
Error for pnlSingle:
```
Traceback (most recent call last):
  File "/home/p/.pyenv/versions/3.6.4/lib/python3.6/site-packages/ib_insync/decoder.py", line 185, in handler
    for (typ, field) in zip(types, fields[skip:])]
  File "/home/p/.pyenv/versions/3.6.4/lib/python3.6/site-packages/ib_insync/decoder.py", line 185, in <listcomp>
    for (typ, field) in zip(types, fields[skip:])]
ValueError: invalid literal for int() with base 10: '2.0'
```
Could you please share the proper way of calculating loss/profit?
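The traceback itself points at a decoder problem inside `ib_insync` (the wire value `'2.0'` being parsed with `int()`), so the `reqPnLSingle` call as written looks fine. As a stop-gap while that's unresolved, realized P&L for a closed round trip can be computed directly from fill prices — a generic sketch, not an `ib_insync` API:

```python
def realized_pnl(buy_price, sell_price, qty, multiplier=1, commissions=0.0):
    """Realized profit/loss of a closed position: price difference times
    quantity and contract multiplier, net of commissions."""
    return (sell_price - buy_price) * qty * multiplier - commissions
```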
| closed | 2023-01-17T12:05:30Z | 2023-01-19T09:06:41Z | https://github.com/erdewit/ib_insync/issues/542 | [] | piotrgolawski | 1 |
ultralytics/ultralytics | python | 19,108 | Scores for all classes for each prediction box | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
I want to be able to get the scores across all classes for a prediction.
For example, if I have a picture of a car, I still want the prediction scores for the other classes I'm considering.
I don't see a way to do this after going through the documentation. I just get an output tensor which gives the top class and score.

### Additional
_No response_ | open | 2025-02-06T18:58:41Z | 2025-02-07T23:18:02Z | https://github.com/ultralytics/ultralytics/issues/19108 | [
"question",
"detect"
] | bharathsivaram10 | 6 |
ResidentMario/geoplot | matplotlib | 66 | Add dependency to doc | Hi,
I'm beginning to use your library and just wanted to share the troubles I went through to install the library from `pip`.
Maybe it would be nice to have a Dependencies section in the README of this project.
I used this information [from cartopy](https://scitools.org.uk/cartopy/docs/v0.15/installing.html#requirements) to download all the requirements, so maybe just this link can be enough.
Thanks for the wonderful work done btw. | closed | 2018-12-11T12:04:10Z | 2018-12-16T04:45:16Z | https://github.com/ResidentMario/geoplot/issues/66 | [] | SylvainLan | 2 |
dmlc/gluon-cv | computer-vision | 910 | Combine datasets in a single dataloader | I would like to experiment with combining multiple datasets to retrain object detection but I am unable to find an easy way to do so.
Using https://gluon-cv.mxnet.io/build/examples_detection/train_yolo_v3.html as a base example, my first idea would be to change
` for i, batch in enumerate(train_data):`
to handle train_data as a list of dataloaders. That, however, would prevent mixup of data from the different datasets in the minibatch.
I was wondering if there is an easy way to combine the datasets before creating the dataloader, i.e., before this line, for example:
```python
val_loader = gluon.data.DataLoader(
    val_dataset.transform(YOLO3DefaultValTransform(width, height)),
    batch_size, False, batchify_fn=val_batchify_fn, last_batch='keep', num_workers=num_workers)
```
It seems to me that the structure should be quite similar to `class ArrayDataset(Dataset):` that combines dataset-like objects.
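A minimal concat wrapper along those lines could look like this — a sketch; in Gluon you would subclass `mxnet.gluon.data.Dataset`, but only `__len__` and `__getitem__` matter here:

```python
class ConcatDataset:
    """Chain several dataset-like objects (anything with __len__/__getitem__)
    into one, so a single DataLoader can shuffle across all of them."""
    def __init__(self, *datasets):
        self._datasets = datasets
        self._lengths = [len(d) for d in datasets]

    def __len__(self):
        return sum(self._lengths)

    def __getitem__(self, idx):
        # Walk the datasets in order until the index falls inside one of them.
        for dataset, length in zip(self._datasets, self._lengths):
            if idx < length:
                return dataset[idx]
            idx -= length
        raise IndexError(idx)
```

With the Gluon `Dataset` base class this also keeps `.transform(...)` available, and a single DataLoader with `shuffle=True` will then mix samples from all source datasets in each minibatch — provided the datasets share a class index.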
Any suggestions? | closed | 2019-08-15T09:38:24Z | 2021-06-01T07:11:00Z | https://github.com/dmlc/gluon-cv/issues/910 | [
"Stale"
] | douglas125 | 2 |
google-research/bert | nlp | 476 | Crash issue when best_non_null_entry is None on SQuAD 2.0 | If the n best entries are all null, we would get 'None' for best_non_null_entry and the program will crash in the next few lines.
I made a workaround as following by assigning `score_diff = FLAGS.null_score_diff_threshold + 1.0` to fix this issue in `run_squad.py`.
Please fix it in the official release.
```
#line 885
best_non_null_entry = None
for entry in nbest:
total_scores.append(entry.start_logit + entry.end_logit)
if not best_non_null_entry:
if entry.text:
best_non_null_entry = entry
......
#line 905
if not FLAGS.version_2_with_negative:
all_predictions[example.qas_id] = nbest_json[0]["text"]
else:
# predict "" iff the null score - the score of best non-null > threshold
if best_non_null_entry:
score_diff = score_null - best_non_null_entry.start_logit - (
best_non_null_entry.end_logit)
scores_diff_json[example.qas_id] = score_diff
else:
score_diff = FLAGS.null_score_diff_threshold + 1.0
if score_diff > FLAGS.null_score_diff_threshold:
all_predictions[example.qas_id] = ""
else:
all_predictions[example.qas_id] = best_non_null_entry.text
``` | open | 2019-03-05T01:26:38Z | 2019-03-05T01:26:38Z | https://github.com/google-research/bert/issues/476 | [] | xianzhez | 0 |
PokemonGoF/PokemonGo-Bot | automation | 5,556 | Sniper is assuming false VIPs on social mode | It seems to be reporting false VIPs when under social mode. I'll investigate it tomorrow because I'm sick and my head hurts right now. You can close all the other issues about this.
| closed | 2016-09-20T01:07:10Z | 2016-09-20T02:42:17Z | https://github.com/PokemonGoF/PokemonGo-Bot/issues/5556 | [
"Bug"
] | YvesHenri | 2 |
apache/airflow | automation | 47,373 | Deferred TI object has no attribute 'next_method' | ### Apache Airflow version
3.0.0b1
### If "Other Airflow 2 version" selected, which one?
_No response_
### What happened?
RetryOperator is failing.
```
[2025-03-05T07:12:25.188823Z] ERROR - Task failed with exception logger="task" error_detail=
[{"exc_type":"AttributeError","exc_value":"'RuntimeTaskInstance' object has no attribute
'next_method'","syntax_error":null,"is_cause":false,"frames":
[{"filename":"/opt/airflow/task_sdk/src/airflow/sdk/execution_time/task_runner.py","lineno":605,"name":"run"},
{"filename":"/opt/airflow/task_sdk/src/airflow/sdk/execution_time/task_runner.py","lineno":726,"name":"_execut
e_task"},{"filename":"/opt/airflow/airflow/models/baseoperator.py","lineno":168,"name":"wrapper"}
,{"filename":"/files/dags/retry.py","lineno":17,"name":"execute"},{"filename":"/usr/local/lib/python3.9/site
-packages/pydantic/main.py","lineno":891,"name":"__getattr__"}]}]
```
### What you think should happen instead?
RetryOperator should show the same behaviour as Airflow 2.
### How to reproduce
Run the below DAG:
```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.exceptions import AirflowException
from airflow.models import BaseOperator
from airflow.triggers.testing import SuccessTrigger


class RetryOperator(BaseOperator):
    def execute(self, context):
        ti = context["ti"]
        has_next_method = bool(ti.next_method)
        try_number = ti.try_number
        self.log.info(
            f"In `execute`: has_next_method: {has_next_method}, try_number:{try_number}"
        )
        self.defer(
            trigger=SuccessTrigger(),
            method_name="next",
            kwargs={"execute_try_number": try_number},
        )

    def next(self, context, execute_try_number, event=None):
        self.log.info("In next!")
        ti = context["ti"]
        has_next_method = bool(ti.next_method)
        try_number = ti.try_number
        self.log.info(
            f"In `next`: has_next_method: {has_next_method}, try_number:{try_number}, execute_try_number: {execute_try_number}"
        )
        if try_number == 1:
            # Force a retry
            raise AirflowException("Force a retry")
        # Did we run `execute`?
        if execute_try_number != try_number:
            raise AirflowException("`execute` wasn't run during retry!")
        return None  # Success!


with DAG(
    "triggerer_retry", schedule=None, start_date=datetime(2021, 9, 13), tags=['core']
) as dag:
    RetryOperator(task_id="retry", retries=1, retry_delay=timedelta(seconds=15))
```
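A possible defensive workaround in the operator, while `RuntimeTaskInstance` doesn't expose `next_method`: probe for the attribute instead of assuming it exists. This is a sketch under the assumption that the attribute is simply absent (as the traceback suggests), not a documented Airflow 3 API:

```python
def resuming_after_deferral(ti):
    """True if this task instance is resuming from a trigger, i.e. the
    runtime exposes a non-empty `next_method`; False when the attribute
    is missing entirely (as with Airflow 3's RuntimeTaskInstance)."""
    return bool(getattr(ti, "next_method", None))
```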
### Operating System
Linux
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other
### Deployment details
_No response_
### Anything else?
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [x] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| closed | 2025-03-05T07:39:18Z | 2025-03-12T08:31:50Z | https://github.com/apache/airflow/issues/47373 | [
"kind:bug",
"priority:high",
"area:core",
"affected_version:3.0.0beta"
] | atul-astronomer | 10 |
flasgger/flasgger | rest-api | 424 | Missing git tag for 0.9.5 release | It would be nice to keep PyPI releases and git tags in sync :) | closed | 2020-08-01T05:43:04Z | 2020-08-01T15:28:29Z | https://github.com/flasgger/flasgger/issues/424 | [] | felixonmars | 1 |