repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
nolar/kopf | asyncio | 1,021 | How to invoke kopf when only operating against cluster level resources | ### Keywords
cluster only
### Problem
When kopf is invoked without either --namespace or --all-namespaces, the following warning is emitted:
```
/opt/aws-auth/.venv/lib/python3.11/site-packages/kopf/_core/reactor/running.py:176: FutureWarning: Absence of either namespaces or cluster-wide flag will become an error soon. For now, switching to the cluster-wide mode for backward compatibility.
warnings.warn("Absence of either namespaces or cluster-wide flag will become an error soon."
```
My operator only operates against cluster-level CRDs.
How should kopf be invoked in this case? | open | 2023-03-29T00:08:28Z | 2023-07-23T21:05:48Z | https://github.com/nolar/kopf/issues/1021 | [
"question"
] | iciclespider | 2 |
idealo/image-super-resolution | computer-vision | 133 | documentation not working | documentation not working
I tried to follow this prediction tutorial but I was not able to make it work at all... the class module or the weights always have problems.
https://idealo.github.io/image-super-resolution/tutorials/prediction/
please update it. | open | 2020-07-15T18:56:57Z | 2020-07-20T09:30:31Z | https://github.com/idealo/image-super-resolution/issues/133 | [] | sinanisler | 1 |
horovod/horovod | deep-learning | 3,595 | Torch test deadlock on MacOS CI | Just observed on master: https://github.com/horovod/horovod/runs/7232373749?check_suite_focus=true
```
...
[1,0]<stdout>:test_torch.py::TorchTests::test_horovod_allreduce_average [1,1]<stdout>:
[1,1]<stdout>:test_torch.py::TorchTests::test_horovod_allreduce_average [1,1]<stdout>:PASSED[1,0]<stdout>:PASSED[1,1]<stdout>:
[1,1]<stdout>:test_torch.py::TorchTests::test_horovod_allreduce_cpu_gpu_error [1,0]<stdout>:
[1,0]<stdout>:test_torch.py::TorchTests::test_horovod_allreduce_cpu_gpu_error [1,1]<stdout>:SKIPPED[1,0]<stdout>:SKIPPED[1,1]<stdout>:
[1,1]<stdout>:test_torch.py::TorchTests::test_horovod_allreduce_duplicate_name_error [1,0]<stdout>:
[1,0]<stdout>:test_torch.py::TorchTests::test_horovod_allreduce_duplicate_name_error [1,1]<stdout>:FAILED[1,1]<stdout>:
[1,0]<stderr>:[2022-07-07 12:11:45.233659: W[1,0]<stderr>: [1,0]<stderr>:/Users/runner/work/horovod/horovod/horovod/common/stall_inspector.cc:107] [1,0]<stderr>:One or more tensors were submitted to be reduced, gathered or broadcasted by subset of ranks and are waiting for remainder of ranks for more than 60 seconds. This may indicate that different ranks are trying to submit different tensors or that only subset of ranks is submitting tensors, which will cause deadlock.
[1,0]<stderr>:Missing ranks:
[1,0]<stderr>:0: [allreduce.noname.3077]
[1,0]<stderr>:1: [barrier.noname]
[1,0]<stderr>:[2022-07-07 12:12:45.234446: W /Users/runner/work/horovod/horovod/horovod/common/stall_inspector.cc:107] One or more tensors were submitted to be reduced, gathered or broadcasted by subset of ranks and are waiting for remainder of ranks for more than 60 seconds. This may indicate that different ranks are trying to submit different tensors or that only subset of ranks is submitting tensors, which will cause deadlock.
[1,0]<stderr>:Missing ranks:
[1,0]<stderr>:0: [allreduce.duplicate_name, allreduce.noname.3077]
[1,0]<stderr>:1: [barrier.noname]
...
```
Then failure by timeout. | open | 2022-07-07T13:22:39Z | 2022-10-12T11:30:16Z | https://github.com/horovod/horovod/issues/3595 | [
"bug"
] | maxhgerlach | 2 |
MagicStack/asyncpg | asyncio | 922 | TypeError: an integer is required (got type asyncpg.Record) | <!--
Thank you for reporting an issue/feature request.
If this is a feature request, please disregard this template. If this is
a bug report, please answer to the questions below.
It will be much easier for us to fix the issue if a test case that reproduces
the problem is provided, with clear instructions on how to run it.
Thank you!
-->
* **asyncpg version**: 0.25.0
* **PostgreSQL version**: psql (12.11 (Ubuntu 12.11-0ubuntu0.20.04.1), server 9.5.14) - that is a very old server, and I will address this shortly.
* **Do you use a PostgreSQL SaaS? If so, which? Can you reproduce
the issue with a local PostgreSQL install?**: This is a local PostgreSQL
* **Python version**: CPython 3.8
* **Platform**: Ubuntu 20.04
* **Do you use pgbouncer?**: No
* **Did you install asyncpg with pip?**: Yes
* **If you built asyncpg locally, which version of Cython did you use?**: This is a binary installation
* **Can the issue be reproduced under both asyncio and
[uvloop](https://github.com/magicstack/uvloop)?**: uvloop is installed in the virtualenv - would it be used automatically?
This may well be a newbie problem, but both a call to `connection.executemany` with a parameterized insert statement and a call to `connection.copy_records_to_table` produce the TypeError. I will update if moving to a PostgreSQL server 12.x resolves the issue.
```python
File "./manage.py", line 70, in make_data
await conn.copy_records_to_table(
File "/home/dan/python-envs/fastapi/lib/python3.8/site-packages/asyncpg/connection.py", line 984, in copy_records_to_table
return await self._protocol.copy_in(
File "asyncpg/protocol/protocol.pyx", line 525, in copy_in
File "asyncpg/protocol/protocol.pyx", line 453, in asyncpg.protocol.protocol.BaseProtocol.copy_in
File "asyncpg/protocol/codecs/base.pyx", line 206, in asyncpg.protocol.protocol.Codec.encode
File "asyncpg/protocol/codecs/base.pyx", line 111, in asyncpg.protocol.protocol.Codec.encode_scalar
File "asyncpg/pgproto/./codecs/int.pyx", line 54, in asyncpg.pgproto.pgproto.int4_encode
TypeError: an integer is required (got type asyncpg.Record)
```
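The traceback shows a whole `asyncpg.Record` reaching the `int4` codec. Since `conn.fetch()` returns records rather than scalars, and `asyncpg.Record` supports mapping-style access, a likely fix is to extract the `type_id` column before choosing from the list — sketched below with plain dicts standing in for `Record` objects:

```python
# Plain dicts stand in for the asyncpg.Record rows returned by conn.fetch();
# Record supports the same row["column"] access used here.
rows = [{"type_id": 1}, {"type_id": 2}, {"type_id": 3}]

# Extract the scalar column so that faker.random.choice() later picks an
# int rather than a whole Record, which the int4 codec cannot encode.
contact_type_ids = [row["type_id"] for row in rows]

print(contact_type_ids)  # → [1, 2, 3]
```

In the script below this would correspond to building `contact_types` from the `type_id` values rather than from the raw fetched records.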
```python
async def make_contacts(faker, contact_types, num_contacts=10):
for _ in range(num_contacts):
first_name = faker.first_name()
last_name = faker.last_name()
type_id = faker.random.choice(contact_types)
if faker.random.random() < 0.4:
phone_number = faker.phone_number()
else:
phone_number = None
if faker.random.random() < 0.7:
email = faker.email()
else:
email = None
yield (first_name, last_name, type_id, phone_number, email)
print('.')
async def make_data(seed=0, num_contacts=10):
settings = get_settings()
params = settings.connect_params()
faker = Faker(seed)
conn = await asyncpg.connect(**params)
sql = dedent("""\
SELECT type_id FROM contact_types ORDER BY type_id
""")
contact_types = await conn.fetch(sql)
await conn.copy_records_to_table(
'contacts',
records=make_contacts(faker, contact_types, num_contacts),
columns=[
'first_name',
'last_name',
'type_id',
'phone_number',
'email'
]
)
def command_makedata(opts):
# we will support an alternative environment file later on
asyncio.run(make_data(opts.seed, opts.contacts))
return 0
``` | closed | 2022-06-03T03:59:07Z | 2022-06-03T12:42:44Z | https://github.com/MagicStack/asyncpg/issues/922 | [] | danizen | 2 |
microsoft/nni | deep-learning | 4,910 | Accessing/regenerating the model objects used in the trials | <!-- Please only use this template for submitting enhancement requests -->
**What would you like to be added**:
The ability to programmatically instantiate a model object using the ModelSpace class given the set of choices.
**Why is this needed**:
1- For better testing! The user cannot test the ModelSpace module without passing it to an experiment.
2- Ultimately, the user needs to put the NAS experiment results to use, which is commonly achieved by training a model with the found architecture (the highest-scoring trial). If the ModelSpace could provide an instance of a model object (ready to be trained) given a dictionary of choices, the user could easily train and use the model with the best architecture found in the trials.
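The desired usage might look something like the following sketch — `ModelSpace` here is a plain-Python stand-in and `instantiate_from_dict` is the hypothetical method being proposed, not an existing nni API:

```python
# Plain-Python sketch of the proposed pattern; not real nni classes.
class ModelSpace:
    default_choices = {"depth": 3, "width": 64}

    def __init__(self, **choices):
        merged = {**self.default_choices, **choices}
        self.depth = merged["depth"]
        self.width = merged["width"]

    @classmethod
    def instantiate_from_dict(cls, choices):
        """Build a concrete, trainable model from a trial's choice dict."""
        return cls(**choices)


# e.g. the choice dict reported by the highest-scoring trial:
best_model = ModelSpace.instantiate_from_dict({"depth": 5, "width": 128})
print(best_model.depth, best_model.width)  # → 5 128
```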
**Without this feature, how does current nni work**:
To the best of my knowledge, currently, one can only do this manually by implementing a new model class based on the values of the choices found in the trial's config.
**Components that may involve changes**:
ModelSpace
RetiariiExperiment
**Brief description of your proposal if any**:
I can think of two ways of addressing this, either the experiment object provides the model objects used in the trials (or saves them somewhere as a pickle file), or it should return a dictionary of choices for each trial, that when passed to a new method in ModelSpace (such as instantiate_from_dict) it would return an instance of the model object with the given choices. | open | 2022-06-03T01:14:09Z | 2022-06-15T02:46:35Z | https://github.com/microsoft/nni/issues/4910 | [
"new feature",
"user raised",
"NAS"
] | aminabedi | 1 |
laughingman7743/PyAthena | sqlalchemy | 545 | Feature request: support for positional parameter substitution | Thank you for a wonderful library and for your work maintaining it!
One feature I hope can be considered for future development is implementing `?` style param support. Athena supports [parameterized queries](https://docs.aws.amazon.com/athena/latest/ug/querying-with-prepared-statements.html) using the `?` character for positional substitution.
For use cases where queries are run both using `pyathena` and by analysts directly, having `PyFormat`-style params is not ideal as it requires analysts to modify queries to execute them directly in their SQL clients or in the Athena console. Supporting `?` for query parameterization would decouple the query from any specific execution context, making it more portable.
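As an illustration of the requested behavior, here is a naive sketch of how `?` placeholders could be mapped onto the existing `PyFormat` style internally. This is illustration only, not PyAthena code — a real implementation would also have to skip `?` characters inside string literals and comments:

```python
def qmark_to_pyformat(sql, params):
    """Naive sketch: rewrite positional '?' placeholders as numbered
    pyformat keys. Illustration only — does not handle '?' inside
    string literals or comments."""
    out, mapping, i = [], {}, 0
    for ch in sql:
        if ch == "?":
            key = f"p{i}"
            mapping[key] = params[i]
            out.append(f"%({key})s")
            i += 1
        else:
            out.append(ch)
    return "".join(out), mapping


sql, mapping = qmark_to_pyformat("SELECT * FROM t WHERE a = ? AND b = ?", [1, "x"])
print(sql)      # → SELECT * FROM t WHERE a = %(p0)s AND b = %(p1)s
print(mapping)  # → {'p0': 1, 'p1': 'x'}
```

With a shim like this, the same `?`-style query text could run unmodified in both pyathena and the Athena console.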
Thanks for your time and consideration. | closed | 2024-05-15T17:43:40Z | 2024-12-26T07:46:56Z | https://github.com/laughingman7743/PyAthena/issues/545 | [] | paulgrow-octane | 1 |
autokey/autokey | automation | 329 | Now that Autokey google forums is closed, is there a place where we could share our scripts with others? | ## Classification:
(Pick one of: Bug, Crash/Hang/Data Loss, Performance, UI/Usability, Feature (New), Enhancement)
## Reproducibility:
(Pick one of: Always, Sometimes, Rarely, Unable, I Didn't Try)
## Version
AutoKey version:
Used GUI (Gtk, Qt, or both):
If the problem is known to be present in more than one version, please list all of those.
Installed via: (PPA, pip3, …).
Linux Distribution:
## Summary
Summary of the problem.
## Steps to Reproduce (if applicable)
- I do this
- I do that
## Expected Results
- This should happen.
## Actual Results
- Instead, this happens. :(
If helpful, submit screenshots of the issue to help debug.\
Debugging output, obtained by launching autokey via `autokey-gtk --verbose` (or `autokey-qt --verbose`, if you use the Qt interface) is also useful.\
Please upload the log somewhere accessible or put the output into a code block (enclose in triple backticks).
```
Example code block. Replace this with your log content.
```
## Notes
Describe any debugging steps you've taken yourself.
If you've found a workaround, please provide it here.
| closed | 2019-11-27T05:45:08Z | 2019-11-28T16:55:02Z | https://github.com/autokey/autokey/issues/329 | [
"documentation"
] | stepnjump | 1 |
litestar-org/litestar | api | 3,554 | Unexpected behavior from `module_to_os_path` | ### Description
The file [litestar/utils/module_loader.py](https://github.com/litestar-org/litestar/blob/84f51c8afc3203cd4914922b2ec3c1e92d5d40ba/litestar/utils/module_loader.py#L21) contains a function definition for `module_to_os_path`.
Assuming I've understood the code comments, the purpose of this code is to return the path to a **directory**, which is either the base directory of the project or (when supplied with the name of a module) the base directory of the module.
Unfortunately, if you define your Litestar object in a file named `app.py`, and there is no other module named `app`, then this function returns the path to that file. This is already an error given that the intention is to return a path to a directory.
I noticed this problem while attempting to set up my own starting configuration following the `litestar-fullstack` repository. This repository defines a `BASE_DIR` property at https://github.com/litestar-org/litestar-fullstack/blob/8e6edb90a401778741062a8383ff6e6f354b44dd/src/app/config/base.py#L23 using `module_to_os_path` which is then used to define subdirectories for alembic. In my case (using a file named `app.py`, no module named `app`), this produces invalid directory pathnames which cannot then be created. For example, when attempting to run as per the MCVE below, I see the error:
```
File "/home/ghf/projects/dappel-litestar/.venv/lib/python3.11/site-packages/alembic/command.py", line 99, in init
script._generate_template(
File "/home/ghf/projects/dappel-litestar/.venv/lib/python3.11/site-packages/alembic/script/base.py", line 593, in _generate_template
util.template_to_file(src, dest, self.output_encoding, **kw)
File "/home/ghf/projects/dappel-litestar/.venv/lib/python3.11/site-packages/alembic/util/pyfiles.py", line 41, in template_to_file
with open(dest, "wb") as f:
^^^^^^^^^^^^^^^^
NotADirectoryError: [Errno 20] Not a directory: '/home/ghf/projects/dappel-litestar/src/app.py/db/migrations/alembic.ini'
```
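A minimal illustration of the reported symptom (the paths below are hypothetical): once `module_to_os_path` returns the path of the `app.py` *file* instead of its parent directory, every subpath joined onto it is invalid:

```python
from pathlib import PurePosixPath

# Hypothetical return value illustrating the report: the file app.py
# rather than the directory that contains it.
base_dir = PurePosixPath("/home/user/project/src/app.py")

# Joining the migration subpaths produces the impossible locations
# seen in the traceback above.
migration_config = base_dir / "db" / "migrations" / "alembic.ini"
print(migration_config)  # → /home/user/project/src/app.py/db/migrations/alembic.ini
```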
### URL to code causing the issue
https://github.com/litestar-org/litestar/blob/84f51c8afc3203cd4914922b2ec3c1e92d5d40ba/litestar/utils/module_loader.py#L21
### MCVE
```python
from __future__ import annotations
from typing import TYPE_CHECKING
from advanced_alchemy.extensions.litestar import (
AlembicAsyncConfig,
AsyncSessionConfig,
SQLAlchemyPlugin,
SQLAlchemyAsyncConfig,
async_autocommit_before_send_handler,
)
import binascii
import json
import os
from dataclasses import dataclass, field
from functools import lru_cache
from pathlib import Path
from typing import TYPE_CHECKING, Any, Final
from advanced_alchemy.utils.text import slugify
from litestar.serialization import decode_json, encode_json
from litestar.utils.module_loader import module_to_os_path
from redis.asyncio import Redis
from sqlalchemy import event
from sqlalchemy.ext.asyncio import AsyncEngine, create_async_engine
from sqlalchemy.pool import NullPool
if TYPE_CHECKING:
from litestar.data_extractors import RequestExtractorField, ResponseExtractorField
DEFAULT_MODULE_NAME = "app"
BASE_DIR: Final[Path] = module_to_os_path(DEFAULT_MODULE_NAME)
TRUE_VALUES = {"True", "true", "1", "yes", "Y", "T"}
@dataclass
class DatabaseSettings:
ECHO: bool = field(
default_factory=lambda: os.getenv("DATABASE_ECHO", "False") in TRUE_VALUES,
)
"""Enable SQLAlchemy engine logs."""
ECHO_POOL: bool = field(
default_factory=lambda: os.getenv("DATABASE_ECHO_POOL", "False") in TRUE_VALUES,
)
"""Enable SQLAlchemy connection pool logs."""
POOL_DISABLED: bool = field(
default_factory=lambda: os.getenv("DATABASE_POOL_DISABLED", "False") in TRUE_VALUES,
)
"""Disable SQLAlchemy pool configuration."""
POOL_MAX_OVERFLOW: int = field(default_factory=lambda: int(os.getenv("DATABASE_MAX_POOL_OVERFLOW", "10")))
"""Max overflow for SQLAlchemy connection pool"""
POOL_SIZE: int = field(default_factory=lambda: int(os.getenv("DATABASE_POOL_SIZE", "5")))
"""Pool size for SQLAlchemy connection pool"""
POOL_TIMEOUT: int = field(default_factory=lambda: int(os.getenv("DATABASE_POOL_TIMEOUT", "30")))
"""Time in seconds for timing connections out of the connection pool."""
POOL_RECYCLE: int = field(default_factory=lambda: int(os.getenv("DATABASE_POOL_RECYCLE", "300")))
"""Amount of time to wait before recycling connections."""
POOL_PRE_PING: bool = field(
default_factory=lambda: os.getenv("DATABASE_PRE_POOL_PING", "False") in TRUE_VALUES,
)
"""Optionally ping database before fetching a session from the connection pool."""
URL: str = field(default_factory=lambda: os.getenv("DATABASE_URL", "sqlite+aiosqlite:///db.sqlite3"))
"""SQLAlchemy Database URL."""
MIGRATION_CONFIG: str = f"{BASE_DIR}/db/migrations/alembic.ini"
"""The path to the `alembic.ini` configuration file."""
MIGRATION_PATH: str = f"{BASE_DIR}/db/migrations"
"""The path to the `alembic` database migrations."""
MIGRATION_DDL_VERSION_TABLE: str = "ddl_version"
"""The name to use for the `alembic` versions table name."""
FIXTURE_PATH: str = f"{BASE_DIR}/db/fixtures"
"""The path to JSON fixture files to load into tables."""
_engine_instance: AsyncEngine | None = None
"""SQLAlchemy engine instance generated from settings."""
@property
def engine(self) -> AsyncEngine:
return self.get_engine()
def get_engine(self) -> AsyncEngine:
if self._engine_instance is not None:
return self._engine_instance
if self.URL.startswith("postgresql+asyncpg"):
engine = create_async_engine(
url=self.URL,
future=True,
json_serializer=encode_json,
json_deserializer=decode_json,
echo=self.ECHO,
echo_pool=self.ECHO_POOL,
max_overflow=self.POOL_MAX_OVERFLOW,
pool_size=self.POOL_SIZE,
pool_timeout=self.POOL_TIMEOUT,
pool_recycle=self.POOL_RECYCLE,
pool_pre_ping=self.POOL_PRE_PING,
pool_use_lifo=True, # use lifo to reduce the number of idle connections
poolclass=NullPool if self.POOL_DISABLED else None,
)
"""Database session factory.
See [`async_sessionmaker()`][sqlalchemy.ext.asyncio.async_sessionmaker].
"""
@event.listens_for(engine.sync_engine, "connect")
def _sqla_on_connect(dbapi_connection: Any, _: Any) -> Any: # pragma: no cover
"""Using msgspec for serialization of the json column values means that the
output is binary, not `str` like `json.dumps` would output.
SQLAlchemy expects that the json serializer returns `str` and calls `.encode()` on the value to
turn it to bytes before writing to the JSONB column. I'd need to either wrap `serialization.to_json` to
return a `str` so that SQLAlchemy could then convert it to binary, or do the following, which
changes the behaviour of the dialect to expect a binary value from the serializer.
See Also https://github.com/sqlalchemy/sqlalchemy/blob/14bfbadfdf9260a1c40f63b31641b27fe9de12a0/lib/sqlalchemy/dialects/postgresql/asyncpg.py#L934 pylint: disable=line-too-long
"""
def encoder(bin_value: bytes) -> bytes:
return b"\x01" + encode_json(bin_value)
def decoder(bin_value: bytes) -> Any:
# the byte is the \x01 prefix for jsonb used by PostgreSQL.
# asyncpg returns it when format='binary'
return decode_json(bin_value[1:])
dbapi_connection.await_(
dbapi_connection.driver_connection.set_type_codec(
"jsonb",
encoder=encoder,
decoder=decoder,
schema="pg_catalog",
format="binary",
),
)
dbapi_connection.await_(
dbapi_connection.driver_connection.set_type_codec(
"json",
encoder=encoder,
decoder=decoder,
schema="pg_catalog",
format="binary",
),
)
elif self.URL.startswith("sqlite+aiosqlite"):
engine = create_async_engine(
url=self.URL,
future=True,
json_serializer=encode_json,
json_deserializer=decode_json,
echo=self.ECHO,
echo_pool=self.ECHO_POOL,
pool_recycle=self.POOL_RECYCLE,
pool_pre_ping=self.POOL_PRE_PING,
)
"""Database session factory.
See [`async_sessionmaker()`][sqlalchemy.ext.asyncio.async_sessionmaker].
"""
@event.listens_for(engine.sync_engine, "connect")
def _sqla_on_connect(dbapi_connection: Any, _: Any) -> Any: # pragma: no cover
"""Override the default begin statement. The disables the built in begin execution."""
dbapi_connection.isolation_level = None
@event.listens_for(engine.sync_engine, "begin")
def _sqla_on_begin(dbapi_connection: Any) -> Any: # pragma: no cover
"""Emits a custom begin"""
dbapi_connection.exec_driver_sql("BEGIN")
else:
engine = create_async_engine(
url=self.URL,
future=True,
json_serializer=encode_json,
json_deserializer=decode_json,
echo=self.ECHO,
echo_pool=self.ECHO_POOL,
max_overflow=self.POOL_MAX_OVERFLOW,
pool_size=self.POOL_SIZE,
pool_timeout=self.POOL_TIMEOUT,
pool_recycle=self.POOL_RECYCLE,
pool_pre_ping=self.POOL_PRE_PING,
)
self._engine_instance = engine
return self._engine_instance
@dataclass
class Settings:
db: DatabaseSettings = field(default_factory=DatabaseSettings)
@classmethod
def from_env(cls, dotenv_filename: str = ".env") -> Settings:
from litestar.cli._utils import console
env_file = Path(f"{os.curdir}/{dotenv_filename}")
if env_file.is_file():
from dotenv import load_dotenv
console.print(f"[yellow]Loading environment configuration from {dotenv_filename}[/]")
load_dotenv(env_file)
return Settings()
@lru_cache(maxsize=1, typed=True)
def get_settings() -> Settings:
return Settings.from_env()
settings = get_settings()
from litestar import Litestar
#pdb.set_trace()
alchemy=SQLAlchemyPlugin(
config=SQLAlchemyAsyncConfig(
engine_instance=settings.db.get_engine(),
before_send_handler=async_autocommit_before_send_handler,
session_config=AsyncSessionConfig(expire_on_commit=False),
alembic_config=AlembicAsyncConfig(
version_table_name=settings.db.MIGRATION_DDL_VERSION_TABLE,
script_config=settings.db.MIGRATION_CONFIG,
script_location=settings.db.MIGRATION_PATH,
)
)
)
app = Litestar(
plugins=[alchemy,],
)
```
### Steps to reproduce
```bash
1. Create a new project folder and copy the code above to a file name `app.py`
2. Run `litestar database init db`
```
### Screenshots
```bash
""
```
### Logs
_No response_
### Litestar Version
2.9.0
### Platform
- [X] Linux
- [ ] Mac
- [ ] Windows
- [ ] Other (Please specify in the description above) | closed | 2024-06-09T17:27:27Z | 2025-03-20T15:54:45Z | https://github.com/litestar-org/litestar/issues/3554 | [
"Bug :bug:"
] | ghferrari | 5 |
ultralytics/ultralytics | pytorch | 19,574 | Use ray to tune | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar bug report.
### Ultralytics YOLO Component
Train
### Bug
Hi guys, I need your help with an issue I'm facing when using Ray to tune my YOLO model.
When using Ray, some processes run normally while others fail.
The error I'm encountering is:
```
Failure # 1 (occurred at 2025-03-08_15-26-26)
[36mray::ImplicitFunc.train()[39m (pid=499864, ip=192.168.5.3, actor_id=2c6a9084244fbf1b3f754eb001000000, repr=_tune)
File "/home/aiwork/anaconda3/envs/py39/lib/python3.9/site-packages/ray/tune/trainable/trainable.py", line 330, in train
raise skipped from exception_cause(skipped)
File "/home/aiwork/anaconda3/envs/py39/lib/python3.9/site-packages/ray/air/_internal/util.py", line 107, in run
self._ret = self._target(*self._args, **self._kwargs)
File "/home/aiwork/anaconda3/envs/py39/lib/python3.9/site-packages/ray/tune/trainable/function_trainable.py", line 45, in <lambda>
training_func=lambda: self._trainable_func(self.config),
File "/home/aiwork/anaconda3/envs/py39/lib/python3.9/site-packages/ray/tune/trainable/function_trainable.py", line 261, in _trainable_func
output = fn()
File "/home/aiwork/csn/Projects/ultralytics/ultralytics/utils/tuner.py", line 106, in _tune
results = model_to_train.train(**config)
File "/home/aiwork/csn/Projects/ultralytics/ultralytics/engine/model.py", line 810, in train
self.trainer.train()
File "/home/aiwork/csn/Projects/ultralytics/ultralytics/engine/trainer.py", line 203, in train
raise e
File "/home/aiwork/csn/Projects/ultralytics/ultralytics/engine/trainer.py", line 201, in train
subprocess.run(cmd, check=True)
File "/home/aiwork/anaconda3/envs/py39/lib/python3.9/subprocess.py", line 528, in run
raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['/home/aiwork/anaconda3/envs/py39/bin/python', '-m', 'torch.distributed.run', '--nproc_per_node', '2', '--master_port', '43925', '/home/aiwork/.config/Ultralytics/DDP/_temp_aqpask7_133441279378960.py']' returned non-zero exit status 1.
```
### Environment
My server configuration is as follows:
- System: Ubuntu 20.04
- CPU: 80 cores
- GPU: 2 x NVIDIA 3090
- Python: 3.9
- I'm using the latest versions of Ultralytics and Ray.
### Minimal Reproducible Example
Here's my code:
```python
# test_model_tune.py
import warnings
warnings.filterwarnings('ignore')
from ultralytics import YOLO
import ray
import os
if __name__ == '__main__':
# Initialize Ray
weights_path = os.path.abspath('./weights/yolo11n.pt')
model = YOLO(weights_path) # Need to modify
print(f"model.ckpt_path:{model.ckpt_path}")
ray.init(num_cpus=20, num_gpus=2) # Adjust according to your hardware configuration
result_grid = model.tune(
data=r'./custom_configs/dateset/image_split.yaml', # Need to modify
imgsz=2560,
epochs=10,
batch=8,
device='0,1',
optimizer='SGD',
project='runs/tune',
iterations=10,
name='exp',
use_ray=True
)
for i, result in enumerate(result_grid):
print(f"Trial #{i}: Configuration: {result.config}, Last Reported Metrics: {result.metrics}")
# Shutdown Ray
ray.shutdown()
```
```python
# tuner.py
# Ultralytics 🚀 AGPL-3.0 License - https://ultralytics.com/license
from ultralytics.cfg import TASK2DATA, TASK2METRIC, get_cfg, get_save_dir
from ultralytics.utils import DEFAULT_CFG, DEFAULT_CFG_DICT, LOGGER, NUM_THREADS, checks
def run_ray_tune(
model,
space: dict = None,
grace_period: int = 10,
gpu_per_trial: int = None,
max_samples: int = 10,
**train_args,
):
"""
Runs hyperparameter tuning using Ray Tune.
Args:
model (YOLO): Model to run the tuner on.
space (dict, optional): The hyperparameter search space. Defaults to None.
grace_period (int, optional): The grace period in epochs of the ASHA scheduler. Defaults to 10.
gpu_per_trial (int, optional): The number of GPUs to allocate per trial. Defaults to None.
max_samples (int, optional): The maximum number of trials to run. Defaults to 10.
train_args (dict, optional): Additional arguments to pass to the `train()` method. Defaults to {}.
Returns:
(dict): A dictionary containing the results of the hyperparameter search.
Example:
```python
from ultralytics import YOLO
# Load a YOLO11n model
model = YOLO("yolo11n.pt")
# Start tuning hyperparameters for YOLO11n training on the COCO8 dataset
result_grid = model.tune(data="coco8.yaml", use_ray=True)
```
"""
LOGGER.info("💡 Learn about RayTune at https://docs.ultralytics.com/integrations/ray-tune ")
if train_args is None:
train_args = {}
try:
checks.check_requirements("ray[tune]")
import ray
from ray import tune
from ray.air import RunConfig
from ray.air.integrations.wandb import WandbLoggerCallback
from ray.tune.schedulers import ASHAScheduler
except ImportError:
raise ModuleNotFoundError('Ray Tune required but not found. To install run: pip install "ray[tune]"')
try:
import wandb
assert hasattr(wandb, "__version__")
except (ImportError, AssertionError):
wandb = False
checks.check_version(ray.__version__, ">=2.0.0", "ray")
default_space = {
# 'optimizer': tune.choice(['SGD', 'Adam', 'AdamW', 'NAdam', 'RAdam', 'RMSProp']),
"lr0": tune.uniform(1e-5, 1e-1),
"lrf": tune.uniform(0.01, 1.0), # final OneCycleLR learning rate (lr0 * lrf)
"momentum": tune.uniform(0.6, 0.98), # SGD momentum/Adam beta1
"weight_decay": tune.uniform(0.0, 0.001), # optimizer weight decay 5e-4
"warmup_epochs": tune.uniform(0.0, 5.0), # warmup epochs (fractions ok)
"warmup_momentum": tune.uniform(0.0, 0.95), # warmup initial momentum
"box": tune.uniform(0.02, 0.2), # box loss gain
"cls": tune.uniform(0.2, 4.0), # cls loss gain (scale with pixels)
"hsv_h": tune.uniform(0.0, 0.1), # image HSV-Hue augmentation (fraction)
"hsv_s": tune.uniform(0.0, 0.9), # image HSV-Saturation augmentation (fraction)
"hsv_v": tune.uniform(0.0, 0.9), # image HSV-Value augmentation (fraction)
"degrees": tune.uniform(0.0, 45.0), # image rotation (+/- deg)
"translate": tune.uniform(0.0, 0.9), # image translation (+/- fraction)
"scale": tune.uniform(0.0, 0.9), # image scale (+/- gain)
"shear": tune.uniform(0.0, 10.0), # image shear (+/- deg)
"perspective": tune.uniform(0.0, 0.001), # image perspective (+/- fraction), range 0-0.001
"flipud": tune.uniform(0.0, 1.0), # image flip up-down (probability)
"fliplr": tune.uniform(0.0, 1.0), # image flip left-right (probability)
"bgr": tune.uniform(0.0, 1.0), # image channel BGR (probability)
"mosaic": tune.uniform(0.0, 1.0), # image mixup (probability)
"mixup": tune.uniform(0.0, 1.0), # image mixup (probability)
"copy_paste": tune.uniform(0.0, 1.0), # segment copy-paste (probability)
}
# Put the model in ray store
task = model.task
model_in_store = ray.put(model)
def _tune(config):
"""
Trains the YOLO model with the specified hyperparameters and additional arguments.
Args:
config (dict): A dictionary of hyperparameters to use for training.
Returns:
None
"""
model_to_train = ray.get(model_in_store) # get the model from ray store for tuning
model_to_train.reset_callbacks()
config.update(train_args)
results = model_to_train.train(**config)
if results is not None:
print(results)
return results.results_dict
else:
print("_tune::results is None")
return None
# Get search space
if not space:
space = default_space
LOGGER.warning("WARNING ⚠️ search space not provided, using default search space.")
# Get dataset
data = train_args.get("data", TASK2DATA[task])
space["data"] = data
if "data" not in train_args:
LOGGER.warning(f'WARNING ⚠️ data not provided, using default "data={data}".')
# modified by chenshining
# Define the trainable function with allocated resources
# trainable_with_resources = tune.with_resources(_tune, {"cpu": NUM_THREADS, "gpu": gpu_per_trial or 0})
trainable_with_resources = tune.with_resources(_tune, {"cpu": 4, "gpu": gpu_per_trial or 1})
# Define the ASHA scheduler for hyperparameter search
asha_scheduler = ASHAScheduler(
time_attr="epoch",
metric=TASK2METRIC[task],
mode="max",
max_t=train_args.get("epochs") or DEFAULT_CFG_DICT["epochs"] or 100,
grace_period=grace_period,
reduction_factor=3,
)
# Define the callbacks for the hyperparameter search
tuner_callbacks = [WandbLoggerCallback(project="YOLOv8-tune")] if wandb else []
# Create the Ray Tune hyperparameter search tuner
tune_dir = get_save_dir(
get_cfg(DEFAULT_CFG, train_args), name=train_args.pop("name", "tune")
).resolve() # must be absolute dir
tune_dir.mkdir(parents=True, exist_ok=True)
# modified by chenshining
tuner = tune.Tuner(
trainable_with_resources,
param_space=space,
tune_config=tune.TuneConfig(scheduler=asha_scheduler, num_samples=max_samples, max_concurrent_trials=4),
run_config=RunConfig(name="memory_optimized_tune", callbacks=tuner_callbacks, storage_path=tune_dir),
)
# Run the hyperparameter search
tuner.fit()
# Get the results of the hyperparameter search
results = tuner.get_results()
# Shut down Ray to clean up workers
ray.shutdown()
return results
```
### Additional
_No response_
### Are you willing to submit a PR?
- [x] Yes I'd like to help by submitting a PR! | closed | 2025-03-08T08:28:18Z | 2025-03-24T09:03:21Z | https://github.com/ultralytics/ultralytics/issues/19574 | [
"bug",
"enhancement"
] | csn223355 | 12 |
skypilot-org/skypilot | data-science | 4,836 | Ulimit low on MacOS, but incorrect way to update it | ```
Open file descriptor limit (256) is low. File sync to remote clusters may be slow. Consider increasing the limit using `ulimit -n <number>` or modifying system limits.
```
But on macOS 15 you cannot do that anymore
```
$ ulimit -n 4096
$ launchctl limit maxfiles
maxfiles 256 unlimited
``` | open | 2025-02-27T09:53:37Z | 2025-02-27T10:28:55Z | https://github.com/skypilot-org/skypilot/issues/4836 | [
"good first issue",
"interface/ux",
"good starter issues"
] | kesitrifork | 0 |
wandb/wandb | data-science | 8,942 | [Bug]: log_params flag in the wandb_callback() of the lightgbm integration is not working | ### Describe the bug
<!--- Describe your issue here --->
The `log_params` flag does not appear to be passed to the `_WandbCallback` constructor; I think the following fix is needed
```diff
- return _WandbCallback(define_metric)
+ return _WandbCallback(log_params, define_metric)
```
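The intent of the one-line fix can be sketched with simplified stand-ins (these are not the actual wandb classes, only the forwarding pattern):

```python
# Simplified stand-ins for the factory/callback pair; not wandb's real code.
class _WandbCallback:
    def __init__(self, log_params=True, define_metric=True):
        self.log_params = log_params
        self.define_metric = define_metric


def wandb_callback(log_params=True, define_metric=True):
    # Forward *both* flags; dropping one makes the constructor silently
    # fall back to its default, which is the bug described above.
    return _WandbCallback(log_params, define_metric)


cb = wandb_callback(log_params=False)
print(cb.log_params)  # → False
```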
See:
https://github.com/wandb/wandb/blob/4e78d9dfa4a3e86d968d89a75a7eac8eaa6e03c2/wandb/integration/lightgbm/__init__.py#L155-L185 | closed | 2024-11-25T05:41:39Z | 2024-12-03T22:31:52Z | https://github.com/wandb/wandb/issues/8942 | [
"ty:bug",
"a:app"
] | i-aki-y | 3 |
dsdanielpark/Bard-API | api | 136 | Unable to install Bard-Api - Windows 10 - Python 3.9.1 | Hello!
I downloaded the code and unzipped it to a particular folder then in a windows 10 command prompt I navigate to the folder and entered the command:
python -m pip install bard-api --user
I get this error message:
ERROR: Could not find a version that satisfies the requirement bard-api (from versions: none)
ERROR: No matching distribution found for bard-api
My Python version is 3.9.1
Can someone help me on this issue?
Thank you for your attention; I look forward to your feedback.
Cordially,
Alexandre | closed | 2023-07-24T11:04:06Z | 2023-11-26T21:57:30Z | https://github.com/dsdanielpark/Bard-API/issues/136 | [] | Alexo47 | 7 |
rthalley/dnspython | asyncio | 1,093 | with `raise_on_no_answer=False`, dns.resolver.Answer objects evaluate inconsistently depending on what sections are present in a reply | **Describe the bug**
See the code example: basically, certain cases using `raise_on_no_answer=False` produce surprising behavior when evaluating the result of the `dns.resolver.resolve()` method.
Some SOA cases do not return an answer section for the rtype, but will return SOA in an authority section and possibly a CNAME answer. In these cases, it makes sense to use `raise_on_no_answer=False`.
But it seems these replies prevent `if [dns.resolver.Answer instance]:` from evaluating to True, because truthiness is based on the `rrset`/answer section rather than on the overall answer object, which still carries valid and valuable information.
**To Reproduce**
```python
import dns.resolver, dns.exception
dns_servers = ['1.1.1.1', '1.0.0.1']
dns_resolver = dns.resolver.Resolver()
dns_resolver.nameservers = dns_servers
domain = "www.w3schools.com"
r = dns_resolver.resolve(domain, rdtype="SOA", tcp=False, raise_on_no_answer=False)
print([answer.to_text() for answer in r.response.answer], [answer.to_text() for answer in r.response.authority])
print(type(r), type(r.response), r.rrset)
# BUG: this is NOT being evaluated to true when I would expect it to be.
if r:
print("r evaluates to True!!!")
# changing to A record lookup, the result object evaluates to True
r = dns_resolver.resolve(domain, rdtype="A", tcp=False, raise_on_no_answer=False)
print([answer.to_text() for answer in r.response.answer], [answer.to_text() for answer in r.response.authority])
print(type(r), type(r.response), r.rrset)
# this will eval to true
if r:
print("r evaluates to True on A record answer!!!")
```
**Expected:** The expression `if [dns.resolver.Answer instance]:` evaluates consistently for a successful resolution, even if there is no rrset answer (when `raise_on_no_answer=False`).
"bug" is manifest at line 14 of this example. The result object instance is obviously valid for this query, but `if [resolve_object_instance]:` evaluates to false but only for queries like this that don't return a direct answer/expected RRset for the rtype.
This object corresponds to a "successful resolution" https://dnspython.readthedocs.io/en/latest/resolver-class.html#dns.resolver.Answer and is valid even if there is no answer set, so the evaluation should not be based on whether an `rrset` is present.
The workaround is to use a stricter check, such as `if type(r) == dns.resolver.Answer`
The difference in these queries is that the first query does not return an answer, but a CNAME response and an Authority field. The answer objects have no rrset in this case.
The issue can also be reproduced with an rtype 6 query to `www.google.com` which only returns an authority section.
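The pitfall can also be reproduced without any network calls using a minimal stand-in (not dnspython's actual class): an object whose `__bool__` tracks an optional attribute is falsy even when it is a perfectly valid result:

```python
class FakeAnswer:
    """Mimics an Answer whose truthiness follows its rrset."""
    def __init__(self, rrset):
        self.rrset = rrset

    def __bool__(self):
        return self.rrset is not None

no_rrset = FakeAnswer(None)            # e.g. SOA reply with only an authority section
with_rrset = FakeAnswer(["a-record"])  # ordinary answer

truthy_check = bool(no_rrset)                    # False despite a successful lookup
robust_check = isinstance(no_rrset, FakeAnswer)  # True: the object itself is valid
```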
**Context (please complete the following information):**
- dnspython version 2.6.1
- Python version 3.10.12
- OS: *
| closed | 2024-06-18T13:39:15Z | 2024-06-18T15:06:40Z | https://github.com/rthalley/dnspython/issues/1093 | [
"Will Not Fix"
] | wesinator | 2 |
zama-ai/concrete-ml | scikit-learn | 646 | Feature Request: Add support for embedding layers | Hey,
I already created [an issue](https://huggingface.co/zama-fhe/concrete-ml-encrypted-deeplearning/discussions/1) on Huggingface.
One of the issues can be closed if you need to.
I want to use `concerete-ml` for the Transformer model, such as BERT.
Do you have any resources to look at or advice you could give for this?
I already tried the [distilbert ner](https://huggingface.co/dslim/distilbert-NER) and [conll2003](https://huggingface.co/datasets/conll2003) by duplicating and modifying this model, however, I have not succeeded yet.
Thanks,
Best. | open | 2024-04-24T14:22:48Z | 2024-05-06T11:39:36Z | https://github.com/zama-ai/concrete-ml/issues/646 | [] | dopc | 4 |
pallets-eco/flask-sqlalchemy | flask | 1,366 | Relationship between classes with __bind_key__ defined throws InvalidRequestError | When defining a relationship between two classes via association class and query to one of them it raise the InvalidRequestError exception.
Define a minimal flask app, with six (three as default and three with bind key) classes and a second bind for the configuration.
```python
from flask import Flask
from flask_sqlalchemy import SQLAlchemy
from sqlalchemy.orm import Mapped, mapped_column, relationship
from sqlalchemy import ForeignKey, select
from typing import List
db = SQLAlchemy()
app = Flask(__name__)
app.config.from_mapping({
'SECRET_KEY': 'dev',
'SQLALCHEMY_DATABASE_URI': 'sqlite:///db.sqlite',
'SQLALCHEMY_BINDS': {
'auth': 'sqlite:///auth.sqlite'
}
})
class ClassA(db.Model):
id: Mapped[int] = mapped_column(primary_key=True)
text: Mapped[str] = mapped_column(unique=True, nullable=False)
    manye: Mapped[List["ClassE"]] = relationship(
        "ClassE", secondary="class_f", back_populates="manya")
class ClassB(db.Model):
__bind_key__ = "auth"
id: Mapped[int] = mapped_column(primary_key=True)
text: Mapped[str] = mapped_column(unique=True, nullable=False)
manyc: Mapped[List["ClassC"]] = relationship(
"ClassC", secondary="class_d", back_populates="manyb")
class ClassC(db.Model):
__bind_key__ = "auth"
id: Mapped[int] = mapped_column(primary_key=True)
text: Mapped[str] = mapped_column(unique=True, nullable=False)
manyb: Mapped[List["ClassB"]] = relationship(
"ClassB", secondary="class_d", back_populates="manyc")
class ClassD(db.Model):
__bind_key__ = "auth"
b_id: Mapped[int] = mapped_column(
ForeignKey('class_b.id'), primary_key=True)
c_id: Mapped[int] = mapped_column(
ForeignKey('class_c.id'), primary_key=True)
class ClassE(db.Model):
id: Mapped[int] = mapped_column(primary_key=True)
text: Mapped[str] = mapped_column(unique=True, nullable=False)
manya: Mapped[List["ClassA"]] = relationship(
"ClassA", secondary="class_f", back_populates="manye")
class ClassF(db.Model):
a_id: Mapped[int] = mapped_column(
ForeignKey('class_a.id'), primary_key=True)
e_id: Mapped[int] = mapped_column(
ForeignKey('class_e.id'), primary_key=True)
db.init_app(app)
with app.app_context():
db.drop_all()
db.create_all()
@app.route("/")
def index():
return db.session.execute(select(ClassB)).scalars().all()
@app.route("/good")
def good():
return db.session.execute(select(ClassA)).scalars().all()
if __name__ == "__main__":
app.run(debug=True)
```
Traceback:
```bash
127.0.0.1 - - [09/Aug/2024 06:22:35] "GET /good HTTP/1.1" 500 -
Traceback (most recent call last):
File "/path/to/project/.venv/lib/python3.12/site-packages/sqlalchemy/orm/clsregistry.py", line 532, in __call__
x = eval(self.arg, globals(), self._dict)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<string>", line 1, in <module>
NameError: name 'class_d' is not defined
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/path/to/project/.venv/lib/python3.12/site-packages/flask/app.py", line 1498, in __call__
return self.wsgi_app(environ, start_response)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/path/to/project/.venv/lib/python3.12/site-packages/flask/app.py", line 1476, in wsgi_app
response = self.handle_exception(e)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/path/to/project/.venv/lib/python3.12/site-packages/flask/app.py", line 1473, in wsgi_app
response = self.full_dispatch_request()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/path/to/project/.venv/lib/python3.12/site-packages/flask/app.py", line 882, in full_dispatch_request
rv = self.handle_user_exception(e)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/path/to/project/.venv/lib/python3.12/site-packages/flask/app.py", line 880, in full_dispatch_request
rv = self.dispatch_request()
^^^^^^^^^^^^^^^^^^^^^^^
File "/path/to/project/.venv/lib/python3.12/site-packages/flask/app.py", line 865, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args) # type: ignore[no-any-return]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/path/to/project/test_app.py", line 77, in good
return db.session.execute(select(ClassA)).scalars().all()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/path/to/project/.venv/lib/python3.12/site-packages/sqlalchemy/orm/scoping.py", line 778, in execute
return self._proxied.execute(
^^^^^^^^^^^^^^^^^^^^^^
File "/path/to/project/.venv/lib/python3.12/site-packages/sqlalchemy/orm/session.py", line 2362, in execute
return self._execute_internal(
^^^^^^^^^^^^^^^^^^^^^^^
File "/path/to/project/.venv/lib/python3.12/site-packages/sqlalchemy/orm/session.py", line 2247, in _execute_internal
result: Result[Any] = compile_state_cls.orm_execute_statement(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/path/to/project/.venv/lib/python3.12/site-packages/sqlalchemy/orm/context.py", line 293, in orm_execute_statement
result = conn.execute(
^^^^^^^^^^^^^
File "/path/to/project/.venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py", line 1418, in execute
return meth(
^^^^^
File "/path/to/project/.venv/lib/python3.12/site-packages/sqlalchemy/sql/elements.py", line 515, in _execute_on_connection
return connection._execute_clauseelement(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/path/to/project/.venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py", line 1632, in _execute_clauseelement
compiled_sql, extracted_params, cache_hit = elem._compile_w_cache(
^^^^^^^^^^^^^^^^^^^^^^
File "/path/to/project/.venv/lib/python3.12/site-packages/sqlalchemy/sql/elements.py", line 703, in _compile_w_cache
compiled_sql = self._compiler(
^^^^^^^^^^^^^^^
File "/path/to/project/.venv/lib/python3.12/site-packages/sqlalchemy/sql/elements.py", line 316, in _compiler
return dialect.statement_compiler(dialect, self, **kw)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/path/to/project/.venv/lib/python3.12/site-packages/sqlalchemy/sql/compiler.py", line 1429, in __init__
Compiled.__init__(self, dialect, statement, **kwargs)
File "/path/to/project/.venv/lib/python3.12/site-packages/sqlalchemy/sql/compiler.py", line 870, in __init__
self.string = self.process(self.statement, **compile_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/path/to/project/.venv/lib/python3.12/site-packages/sqlalchemy/sql/compiler.py", line 915, in process
return obj._compiler_dispatch(self, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/path/to/project/.venv/lib/python3.12/site-packages/sqlalchemy/sql/visitors.py", line 141, in _compiler_dispatch
return meth(self, **kw) # type: ignore # noqa: E501
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/path/to/project/.venv/lib/python3.12/site-packages/sqlalchemy/sql/compiler.py", line 4679, in visit_select
compile_state = select_stmt._compile_state_factory(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/path/to/project/.venv/lib/python3.12/site-packages/sqlalchemy/sql/base.py", line 683, in create_for_statement
return klass.create_for_statement(statement, compiler, **kw)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/path/to/project/.venv/lib/python3.12/site-packages/sqlalchemy/orm/context.py", line 1097, in create_for_statement
_QueryEntity.to_compile_state(
File "/path/to/project/.venv/lib/python3.12/site-packages/sqlalchemy/orm/context.py", line 2552, in to_compile_state
_MapperEntity(
File "/path/to/project/.venv/lib/python3.12/site-packages/sqlalchemy/orm/context.py", line 2632, in __init__
entity._post_inspect
File "/path/to/project/.venv/lib/python3.12/site-packages/sqlalchemy/util/langhelpers.py", line 1253, in __get__
obj.__dict__[self.__name__] = result = self.fget(obj)
^^^^^^^^^^^^^^
File "/path/to/project/.venv/lib/python3.12/site-packages/sqlalchemy/orm/mapper.py", line 2711, in _post_inspect
self._check_configure()
File "/path/to/project/.venv/lib/python3.12/site-packages/sqlalchemy/orm/mapper.py", line 2388, in _check_configure
_configure_registries({self.registry}, cascade=True)
File "/path/to/project/.venv/lib/python3.12/site-packages/sqlalchemy/orm/mapper.py", line 4204, in _configure_registries
_do_configure_registries(registries, cascade)
File "/path/to/project/.venv/lib/python3.12/site-packages/sqlalchemy/orm/mapper.py", line 4245, in _do_configure_registries
mapper._post_configure_properties()
File "/path/to/project/.venv/lib/python3.12/site-packages/sqlalchemy/orm/mapper.py", line 2405, in _post_configure_properties
prop.init()
File "/path/to/project/.venv/lib/python3.12/site-packages/sqlalchemy/orm/interfaces.py", line 584, in init
self.do_init()
File "/path/to/project/.venv/lib/python3.12/site-packages/sqlalchemy/orm/relationships.py", line 1641, in do_init
self._process_dependent_arguments()
File "/path/to/project/.venv/lib/python3.12/site-packages/sqlalchemy/orm/relationships.py", line 1682, in _process_dependent_arguments
rel_arg._resolve_against_registry(self._clsregistry_resolvers[1])
File "/path/to/project/.venv/lib/python3.12/site-packages/sqlalchemy/orm/relationships.py", line 270, in _resolve_against_registry
self.resolved = clsregistry_resolver(
^^^^^^^^^^^^^^^^^^^^^
File "/path/to/project/.venv/lib/python3.12/site-packages/sqlalchemy/orm/clsregistry.py", line 539, in __call__
self._raise_for_name(n.args[0], n)
File "/path/to/project/.venv/lib/python3.12/site-packages/sqlalchemy/orm/clsregistry.py", line 500, in _raise_for_name
raise exc.InvalidRequestError(
sqlalchemy.exc.InvalidRequestError: When initializing mapper Mapper[ClassB(class_b)], expression 'class_d' failed to locate a name ("name 'class_d' is not defined"). If this is a class name, consider adding this relationship() to the <class '__main__.ClassB'> class after both dependent classes have been defined.
```
When removing the `relationship` from classes with bind key it works.
```python
class ClassB(db.Model):
__bind_key__ = "auth"
id: Mapped[int] = mapped_column(primary_key=True)
text: Mapped[str] = mapped_column(unique=True, nullable=False)
# manyc: Mapped[List["ClassC"]] = relationship(
# "ClassC", secondary="class_d", back_populates="manyb")
class ClassC(db.Model):
__bind_key__ = "auth"
id: Mapped[int] = mapped_column(primary_key=True)
text: Mapped[str] = mapped_column(unique=True, nullable=False)
# manyb: Mapped[List["ClassB"]] = relationship(
# "ClassB", secondary="class_d", back_populates="manyc")
```
Also tested with creating tables.
Environment:
- Python version: 3.12.4
- Flask-SQLAlchemy version: 3.1.1
- SQLAlchemy version: 2.0.32
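One commonly suggested workaround (untested here) is to stop resolving the association table by string and hand `secondary` the table object directly; SQLAlchemy also accepts a callable for `secondary`, which avoids definition-order problems:

```diff
     manyc: Mapped[List["ClassC"]] = relationship(
-        "ClassC", secondary="class_d", back_populates="manyb")
+        "ClassC", secondary=lambda: ClassD.__table__, back_populates="manyb")
```

Whether the string form *should* resolve for `auth`-bound tables is the actual question for the maintainers; the callable form merely sidesteps the class-registry lookup that fails in the traceback.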
| closed | 2024-08-09T04:31:45Z | 2024-08-09T06:59:18Z | https://github.com/pallets-eco/flask-sqlalchemy/issues/1366 | [] | graedo-ogd | 1 |
PokemonGoF/PokemonGo-Bot | automation | 5,538 | [Issue] Considering all pokemon as VIP | ### Expected Behavior
Treat only unseen Pokemon, Pokemon with CP above 1500 or IV above 0.9, or explicitly listed Pokemon as VIP
### Actual Behavior
Treated Meowth and Mankey as VIP
### Your FULL config.json (remove your username, password, gmapkey and any other private info)
http://pastebin.com/m5JJJjVf
### Output when issue occurred
[2016-09-18 19:09:58] [MainThread] [PokemonCatchWorker] [INFO] _A wild Meowth appeared!_ (CP: 495) (NCP: 0.66) (Potential 0.36) (A/D/S 12/2/2)
[2016-09-18 19:10:01] [MainThread] [PokemonCatchWorker] [INFO] This is a VIP pokemon. Catch!!!
[2016-09-18 19:11:19] [MainThread] [PokemonCatchWorker] [INFO] _A wild Mankey appeared!_ (CP: 302) (NCP: 0.34) (Potential 0.18) (A/D/S 5/3/0)
[2016-09-18 19:11:22] [MainThread] [PokemonCatchWorker] [INFO] This is a VIP pokemon. Catch!!!
### Steps to Reproduce
Run bot on latest build
### Other Information
OS:
Win 10
Branch:
Master
Git Commit:
fef76945022210f4663c091b55750c57684026ec
Python Version:
2.7.6
| closed | 2016-09-19T02:17:17Z | 2016-09-19T03:39:15Z | https://github.com/PokemonGoF/PokemonGo-Bot/issues/5538 | [] | pranavperfect | 3 |
feder-cr/Jobs_Applier_AI_Agent_AIHawk | automation | 1,085 | [BUG]: <Not able to open the new window which redirects to linkedin login> | ### Describe the bug
Hi, please fix the error I am facing. I followed one of the YouTube videos (https://www.youtube.com/watch?v=gdW9wogHEUM&t=1s, around 17:01), but I am not getting the lib_resume_building and gpt_resume_builder options in the lib package, and I am using a Gemini API key. I don't know how to sort this out, even though I fixed some of the packages by installing them in Visual Studio Code, using only its terminal commands. Finally, after using my own resume.pdf and running python main.py, I get neither an error nor a redirect to another page; nothing happens. Please help me sort this out.
### Steps to reproduce
_No response_
### Expected behavior
nothing happened
### Actual behavior
no error no output
### Branch
main
### Branch name
main.py
### Python version
3.13.1
### LLM Used
Google
### Model used
Gemini API
### Additional context
_No response_ | open | 2025-02-04T15:35:00Z | 2025-02-04T15:35:39Z | https://github.com/feder-cr/Jobs_Applier_AI_Agent_AIHawk/issues/1085 | [
"bug"
] | kavuthavarapuh | 0 |
allenai/allennlp | nlp | 5,354 | the last decoding step miscalculated in the forward_loss of CopyNet? | <!--
Please fill this template entirely and do not erase any of it.
We reserve the right to close without a response bug reports which are incomplete.
If you have a question rather than a bug, please ask on [Stack Overflow](https://stackoverflow.com/questions/tagged/allennlp) rather than posting an issue here.
-->
## Checklist
<!-- To check an item on the list replace [ ] with [x]. -->
- [x] I have verified that the issue exists against the `main` branch of AllenNLP.
- [x] I have read the relevant section in the [contribution guide](https://github.com/allenai/allennlp/blob/main/CONTRIBUTING.md#bug-fixes-and-new-features) on reporting bugs.
- [x] I have checked the [issues list](https://github.com/allenai/allennlp/issues) for similar or identical bug reports.
- [x] I have checked the [pull requests list](https://github.com/allenai/allennlp/pulls) for existing proposed fixes.
- [x] I have checked the [CHANGELOG](https://github.com/allenai/allennlp/blob/main/CHANGELOG.md) and the [commit log](https://github.com/allenai/allennlp/commits/main) to find out if the bug was already fixed in the main branch.
- [x] I have included in the "Description" section below a traceback from any exceptions related to this bug.
- [ ] I have included in the "Related issues or possible duplicates" section beloew all related issues and possible duplicate issues (If there are none, check this box anyway).
- [ ] I have included in the "Environment" section below the name of the operating system and Python version that I was using when I discovered this bug.
- [ ] I have included in the "Environment" section below the output of `pip freeze`.
- [ ] I have included in the "Steps to reproduce" section below a minimally reproducible example.
## Description
<!-- Please provide a clear and concise description of what the bug is here. -->
Hi, I was recently reading through the implementation of [CopyNet](https://github.com/allenai/allennlp-models/blob/main/allennlp_models/generation/models/copynet_seq2seq.py), which is rather elegant :)
In the forward_loss of CopyNet, I find a conditional statement that determines whether the target of the current step is the END token (see Line [526](https://github.com/allenai/allennlp-models/blob/31649f5776522ae60661499c595f73e1c019d72c/allennlp_models/generation/models/copynet_seq2seq.py#L526)).
```python
# If the previous target token was copied, we use the special copy token.
# But the end target token will always be THE end token, so we know
# it was not copied.
if timestep < num_decoding_steps - 1:
# Get mask tensor indicating which instances were copied.
# shape: (batch_size,)
copied = (
(input_choices == self._oov_index) & (target_to_source.sum(-1) > 0)
).long()
# shape: (batch_size,)
input_choices = input_choices * (1 - copied) + copy_input_choices * copied
# shape: (batch_size, source_sequence_length)
target_to_source = state["source_token_ids"] == target_token_ids[
:, timestep + 1
].unsqueeze(-1)
```
However, Line [497](https://github.com/allenai/allennlp-models/blob/31649f5776522ae60661499c595f73e1c019d72c/allennlp_models/generation/models/copynet_seq2seq.py#L497) has already excluded the potential END token, so I am confused about whether the if-statement at Line 526 is correct.
```python
# The last input from the target is either padding or the end symbol.
# Either way, we don't have to process it.
num_decoding_steps = target_sequence_length - 1
```
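A plain-Python sketch of the index arithmetic (reusing the snippet's variable names) suggests the two lines are consistent rather than contradictory: the guard protects the look-ahead `target_token_ids[:, timestep + 1]`, which at the last timestep would read index `target_sequence_length - 1`, i.e. the END/padding position:

```python
# Toy target sequence: [START, w1, w2, w3, END]
target_token_ids = ["START", "w1", "w2", "w3", "END"]
target_sequence_length = len(target_token_ids)   # 5
num_decoding_steps = target_sequence_length - 1  # 4: END is never an input

# Indices read by target_token_ids[:, timestep + 1] under the guard.
lookahead_indices = [
    timestep + 1
    for timestep in range(num_decoding_steps)
    if timestep < num_decoding_steps - 1         # the condition in question
]
lookahead_tokens = [target_token_ids[i] for i in lookahead_indices]
# END (index 4) is never used as a copy target.
```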
## Related issues or possible duplicates
- None
## Environment
<!-- Provide the name of operating system below (e.g. OS X, Linux) -->
OS:
<!-- Provide the Python version you were using (e.g. 3.7.1) -->
Python version:
<details>
<summary><b>Output of <code>pip freeze</code>:</b></summary>
<p>
<!-- Paste the output of `pip freeze` in between the next two lines below -->
```
```
</p>
</details>
## Steps to reproduce
<details>
<summary><b>Example source:</b></summary>
<p>
<!-- Add a fully runnable example in between the next two lines below that will reproduce the bug -->
```
```
</p>
</details>
| closed | 2021-08-11T03:17:29Z | 2021-08-19T18:10:08Z | https://github.com/allenai/allennlp/issues/5354 | [
"bug"
] | GregoryZeng | 10 |
Yorko/mlcourse.ai | numpy | 584 | Assignment 4 fix is not published | There was a fix made at https://github.com/Yorko/mlcourse.ai/commit/e5227683e961d0a3327d0dca3b76964858b26098 but it was not merged to /jupyter_english/assignments_spring2019/assignment4_time_series.ipynb | closed | 2019-04-01T15:36:06Z | 2019-04-01T15:42:14Z | https://github.com/Yorko/mlcourse.ai/issues/584 | [] | chekan-o | 1 |
pykaldi/pykaldi | numpy | 208 | Reading UBM models in PyKaldi | Hi, it's one of the best kaldi wrappers. I want to use this to extract i-vectors, I already have the fullUBM in fubm.mdl or fubm.ubm formats (from KALDI). However, I couldn't find in the documentation how to read this type of models. @dogancan Any ideas, please? :) | open | 2020-02-28T09:46:02Z | 2020-03-02T18:09:19Z | https://github.com/pykaldi/pykaldi/issues/208 | [] | jvel07 | 1 |
InstaPy/InstaPy | automation | 6,124 | App download page | <!-- Did you know that we have a Discord channel ? Join us: https://discord.gg/FDETsht -->
<!-- Is this a Feature Request ? Please, check out our Wiki first https://github.com/timgrossmann/InstaPy/wiki -->
## Expected Behavior
Instapy logs in
## Current Behavior
A app download and login page is added between the cookie page and the email / password page. This page blocks the instagram login. If I manually press login, instapy achieves connecting to intagram.

## Possible Solution (optional)
## InstaPy configuration
| closed | 2021-03-19T08:09:10Z | 2021-07-21T05:18:36Z | https://github.com/InstaPy/InstaPy/issues/6124 | [
"wontfix"
] | RoneFRANCE | 1 |
PaddlePaddle/models | nlp | 4,846 | When using LAC for word segmentation, how do I get each word's position in the original sentence? (tokenize) | For example, in jieba segmentation you can do this:
```
jieba.tokenize(u'永和服装饰品有限公司', mode='search')
word 永和 start: 0 end:2
word 服装 start: 2 end:4
word 饰品 start: 4 end:6
word 有限 start: 6 end:8
word 公司 start: 8 end:10
word 有限公司 start: 6 end:10
```
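If LAC only hands back the segmented words, the offsets can be recovered by scanning the original sentence. This is a generic workaround sketch (`with_offsets` is a hypothetical helper, not a LAC API) and assumes a plain, non-overlapping segmentation:

```python
def with_offsets(text, words):
    """Attach (start, end) character offsets to an in-order segmentation."""
    pos, out = 0, []
    for word in words:
        start = text.find(word, pos)  # next occurrence at or after the cursor
        end = start + len(word)
        out.append((word, start, end))
        pos = end
    return out

tokens = with_offsets("永和服装饰品有限公司", ["永和", "服装", "饰品", "有限公司"])
# [('永和', 0, 2), ('服装', 2, 4), ('饰品', 4, 6), ('有限公司', 6, 10)]
```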
请问LAC中有没有这种接口?如果没有,请问哪里可以用? | open | 2020-09-11T08:27:52Z | 2024-02-26T05:10:12Z | https://github.com/PaddlePaddle/models/issues/4846 | [] | LeeYongchao | 1 |
keras-team/keras | tensorflow | 20,324 | Reason for the recently added shape restriction in MultiHeadAttention | Hello,
Wondering why there is a restriction on the input shapes of `query` and `value` to have a matching final dimension?
This blocks cross-attention to a source whose shape differs from the query's, unless an extra projection layer is added. Given that all input tensors (`query`, `key`, `value`) are immediately projected by dense layers inside `MultiHeadAttention`, I don't think any restriction on final dims is necessary.
For reference, the [pytorch doc](https://keras.io/api/layers/attention_layers/multi_head_attention/) for `MultiHeadAttention` explicitly uses three distinct variables to describe the expected dimensions of the three tensors. The TensorFlow implementation does not enforce such a restriction either.
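A shape-only sketch in plain Python (no Keras; `mha_output_shape` is a hypothetical helper) of the standard per-head projections: every dimension that has to agree is produced by the internal dense layers, so nothing in the math requires `query` and `value` to share a final dim:

```python
def mha_output_shape(q_shape, v_shape, key_dim, num_heads):
    """Propagate shapes through multi-head cross-attention projections."""
    t_q, d_q = q_shape   # query source:      (T_q, d_q)
    t_v, d_v = v_shape   # key/value source:  (T_v, d_v); d_v may differ from d_q
    q = (t_q, num_heads, key_dim)   # W_q: d_q -> key_dim
    k = (t_v, num_heads, key_dim)   # W_k: d_v -> key_dim
    v = (t_v, num_heads, key_dim)   # W_v: d_v -> key_dim
    scores = (num_heads, t_q, t_v)  # q @ k^T: only key_dim has to match
    return (t_q, d_q)               # output dense projects back to the query dim

out = mha_output_shape((7, 32), (11, 96), key_dim=16, num_heads=4)  # -> (7, 32)
```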
The restriction is enforced here: https://github.com/keras-team/keras/blob/5aa5f88dc200bbf2cd765d5a213c23c58da48e80/keras/src/layers/attention/multi_head_attention.py#L214-L219
And was added as part of the PR #19973 (in response to the issue #19769)
Thanks | closed | 2024-10-04T19:48:49Z | 2024-10-14T17:36:53Z | https://github.com/keras-team/keras/issues/20324 | [
"type:support",
"keras-team-review-pending"
] | aarbabi | 4 |
jupyter-widgets-contrib/ipycanvas | jupyter | 277 | Images with alpha fail: OSError: cannot write mode RGBA as JPEG | Trying to add an alpha channel to the [example in the docs](https://ipycanvas.readthedocs.io/en/latest/drawing_images.html?highlight=put_image_data#from-a-numpy-array) leads to an error `OSError: cannot write mode RGBA as JPEG` on Mac OS X:
```python
import numpy as np
from ipycanvas import Canvas
x = np.linspace(-1, 1, 600)
y = np.linspace(-1, 1, 600)
x_grid, y_grid = np.meshgrid(x, y)
blue_channel = np.array(np.sin(x_grid**2 + y_grid**2) * 255, dtype=np.int32)
red_channel = np.zeros_like(blue_channel) + 200
green_channel = np.zeros_like(blue_channel) + 50
alpha_channel = np.zeros_like(blue_channel) + 1
image_data = np.stack((red_channel, blue_channel, green_channel, alpha_channel), axis=2)
canvas = Canvas(width=image_data.shape[0], height=image_data.shape[1])
canvas.put_image_data(image_data, 0, 0)
canvas
```
<details><summary>Exception details: `OSError: cannot write mode RGBA as JPEG`</summary>
```
KeyError Traceback (most recent call last)
File ~/work/mmfbb/gpe-explorer/envs/super_hydro/lib/python3.9/site-packages/PIL/JpegImagePlugin.py:633, in _save(im, fp, filename)
632 try:
--> 633 rawmode = RAWMODE[im.mode]
634 except KeyError as e:
KeyError: 'RGBA'
The above exception was the direct cause of the following exception:
OSError Traceback (most recent call last)
Input In [3], in <cell line: 18>()
15 image_data = np.stack((red_channel, blue_channel, green_channel, alpha_channel), axis=2)
17 canvas = Canvas(width=image_data.shape[0], height=image_data.shape[1])
---> 18 canvas.put_image_data(image_data, 0, 0)
20 canvas
File ~/work/mmfbb/gpe-explorer/envs/super_hydro/lib/python3.9/site-packages/ipycanvas/canvas.py:1374, in Canvas.put_image_data(self, image_data, x, y)
1367 def put_image_data(self, image_data, x=0, y=0):
1368 """Draw an image on the Canvas.
1369
1370 ``image_data`` should be a NumPy array containing the image to draw and ``x`` and ``y`` the pixel position where to
1371 draw. Unlike the CanvasRenderingContext2D.putImageData method, this method **is** affected by the canvas transformation
1372 matrix, and supports transparency.
1373 """
-> 1374 image_buffer = binary_image(image_data)
1375 _CANVAS_MANAGER.send_draw_command(
1376 self, COMMANDS["putImageData"], [x, y], [image_buffer]
1377 )
File ~/work/mmfbb/gpe-explorer/envs/super_hydro/lib/python3.9/site-packages/ipycanvas/utils.py:29, in binary_image(ar, quality)
27 def binary_image(ar, quality=75):
28 f = BytesIO()
---> 29 PILImage.fromarray(ar.astype(np.uint8), "RGB" if ar.shape[2] == 3 else "RGBA").save(
30 f, "JPEG", quality=quality
31 )
32 return f.getvalue()
File ~/work/mmfbb/gpe-explorer/envs/super_hydro/lib/python3.9/site-packages/PIL/Image.py:2300, in Image.save(self, fp, format, **params)
2297 fp = builtins.open(filename, "w+b")
2299 try:
-> 2300 save_handler(self, fp, filename)
2301 except Exception:
2302 if open_fp:
File ~/work/mmfbb/gpe-explorer/envs/super_hydro/lib/python3.9/site-packages/PIL/JpegImagePlugin.py:635, in _save(im, fp, filename)
633 rawmode = RAWMODE[im.mode]
634 except KeyError as e:
--> 635 raise OSError(f"cannot write mode {im.mode} as JPEG") from e
637 info = im.encoderinfo
639 dpi = [round(x) for x in info.get("dpi", (0, 0))]
OSError: cannot write mode RGBA as JPEG
```
</details>
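The traceback bottoms out in `ipycanvas/utils.py::binary_image`, which always encodes with JPEG, a format that cannot store an alpha channel. One possible direction (a sketch only, assuming the front-end can also decode PNG payloads) is to pick the format from the mode:

```diff
 def binary_image(ar, quality=75):
     f = BytesIO()
-    PILImage.fromarray(ar.astype(np.uint8), "RGB" if ar.shape[2] == 3 else "RGBA").save(
-        f, "JPEG", quality=quality
-    )
+    mode = "RGB" if ar.shape[2] == 3 else "RGBA"
+    fmt = "JPEG" if mode == "RGB" else "PNG"  # JPEG cannot write RGBA
+    PILImage.fromarray(ar.astype(np.uint8), mode).save(f, fmt, quality=quality)
     return f.getvalue()
```

Also note that the example's `alpha_channel = np.zeros_like(blue_channel) + 1` is an alpha of 1/255, i.e. almost fully transparent; use 255 for opaque pixels.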
I installed this in a clean conda environment:
```bash
conda create -n tst python=3.9
conda activate tst
python3 -m pip install numpy notebook ipycanvas
```
<details><summary>Environment creation details</summary>
```bash
$ conda create -n tst python=3.9
Collecting package metadata (current_repodata.json): done
Solving environment: done
## Package Plan ##
environment location: /Users/mforbes/.conda/envs/tst
added / updated specs:
- python=3.9
The following NEW packages will be INSTALLED:
ca-certificates pkgs/main/osx-64::ca-certificates-2022.4.26-hecd8cb5_0
certifi pkgs/main/osx-64::certifi-2022.5.18.1-py39hecd8cb5_0
libcxx pkgs/main/osx-64::libcxx-12.0.0-h2f01273_0
libffi pkgs/main/osx-64::libffi-3.3-hb1e8313_2
ncurses pkgs/main/osx-64::ncurses-6.3-hca72f7f_2
openssl pkgs/main/osx-64::openssl-1.1.1o-hca72f7f_0
pip pkgs/main/osx-64::pip-21.2.4-py39hecd8cb5_0
python pkgs/main/osx-64::python-3.9.12-hdfd78df_1
readline pkgs/main/osx-64::readline-8.1.2-hca72f7f_1
setuptools pkgs/main/osx-64::setuptools-61.2.0-py39hecd8cb5_0
sqlite pkgs/main/osx-64::sqlite-3.38.3-h707629a_0
tk pkgs/main/osx-64::tk-8.6.12-h5d9f67b_0
tzdata pkgs/main/noarch::tzdata-2022a-hda174b7_0
wheel pkgs/main/noarch::wheel-0.37.1-pyhd3eb1b0_0
xz pkgs/main/osx-64::xz-5.2.5-hca72f7f_1
zlib pkgs/main/osx-64::zlib-1.2.12-h4dc903c_2
Proceed ([y]/n)? y
Preparing transaction: done
Verifying transaction: done
Executing transaction: done
#
# To activate this environment, use
#
# $ conda activate tst
#
# To deactivate an active environment, use
#
# $ conda deactivate
$ conda activate tst
(tst) $ python3 -m pip install numpy notebook ipycanvas
Collecting numpy
Using cached numpy-1.22.4-cp39-cp39-macosx_10_14_x86_64.whl (17.7 MB)
Collecting notebook
Using cached notebook-6.4.12-py3-none-any.whl (9.9 MB)
Collecting ipycanvas
Using cached ipycanvas-0.12.0-py2.py3-none-any.whl (256 kB)
Collecting ipython-genutils
Using cached ipython_genutils-0.2.0-py2.py3-none-any.whl (26 kB)
Collecting jinja2
Using cached Jinja2-3.1.2-py3-none-any.whl (133 kB)
Collecting jupyter-client>=5.3.4
Using cached jupyter_client-7.3.4-py3-none-any.whl (132 kB)
Collecting nest-asyncio>=1.5
Using cached nest_asyncio-1.5.5-py3-none-any.whl (5.2 kB)
Collecting jupyter-core>=4.6.1
Using cached jupyter_core-4.10.0-py3-none-any.whl (87 kB)
Collecting ipykernel
Using cached ipykernel-6.15.0-py3-none-any.whl (133 kB)
Collecting argon2-cffi
Using cached argon2_cffi-21.3.0-py3-none-any.whl (14 kB)
Collecting terminado>=0.8.3
Using cached terminado-0.15.0-py3-none-any.whl (16 kB)
Collecting traitlets>=4.2.1
Using cached traitlets-5.2.2.post1-py3-none-any.whl (106 kB)
Collecting nbformat
Using cached nbformat-5.4.0-py3-none-any.whl (73 kB)
Collecting prometheus-client
Using cached prometheus_client-0.14.1-py3-none-any.whl (59 kB)
Collecting Send2Trash>=1.8.0
Using cached Send2Trash-1.8.0-py3-none-any.whl (18 kB)
Collecting pyzmq>=17
Using cached pyzmq-23.1.0-cp39-cp39-macosx_10_9_x86_64.whl (1.3 MB)
Collecting nbconvert>=5
Using cached nbconvert-6.5.0-py3-none-any.whl (561 kB)
Collecting tornado>=6.1
Using cached tornado-6.1-cp39-cp39-macosx_10_9_x86_64.whl (416 kB)
Collecting ipywidgets>=7.6.0
Using cached ipywidgets-7.7.0-py2.py3-none-any.whl (123 kB)
Collecting pillow>=6.0
Using cached Pillow-9.1.1-cp39-cp39-macosx_10_10_x86_64.whl (3.1 MB)
Collecting ipython>=4.0.0
Using cached ipython-8.4.0-py3-none-any.whl (750 kB)
Collecting widgetsnbextension~=3.6.0
Using cached widgetsnbextension-3.6.0-py2.py3-none-any.whl (1.6 MB)
Collecting jupyterlab-widgets>=1.0.0
Using cached jupyterlab_widgets-1.1.0-py3-none-any.whl (245 kB)
Collecting packaging
Using cached packaging-21.3-py3-none-any.whl (40 kB)
Collecting matplotlib-inline>=0.1
Using cached matplotlib_inline-0.1.3-py3-none-any.whl (8.2 kB)
Collecting debugpy>=1.0
Using cached debugpy-1.6.0-py2.py3-none-any.whl (4.1 MB)
Collecting appnope
Using cached appnope-0.1.3-py2.py3-none-any.whl (4.4 kB)
Collecting psutil
Using cached psutil-5.9.1-cp39-cp39-macosx_10_9_x86_64.whl (239 kB)
Collecting pickleshare
Using cached pickleshare-0.7.5-py2.py3-none-any.whl (6.9 kB)
Collecting pygments>=2.4.0
Using cached Pygments-2.12.0-py3-none-any.whl (1.1 MB)
Collecting decorator
Using cached decorator-5.1.1-py3-none-any.whl (9.1 kB)
Collecting backcall
Using cached backcall-0.2.0-py2.py3-none-any.whl (11 kB)
Requirement already satisfied: setuptools>=18.5 in ./.conda/envs/tst/lib/python3.9/site-packages (from ipython>=4.0.0->ipywidgets>=7.6.0->ipycanvas) (61.2.0)
Collecting stack-data
Using cached stack_data-0.3.0-py3-none-any.whl (23 kB)
Collecting pexpect>4.3
Using cached pexpect-4.8.0-py2.py3-none-any.whl (59 kB)
Collecting jedi>=0.16
Using cached jedi-0.18.1-py2.py3-none-any.whl (1.6 MB)
Collecting prompt-toolkit!=3.0.0,!=3.0.1,<3.1.0,>=2.0.0
Using cached prompt_toolkit-3.0.29-py3-none-any.whl (381 kB)
Collecting parso<0.9.0,>=0.8.0
Using cached parso-0.8.3-py2.py3-none-any.whl (100 kB)
Collecting entrypoints
Using cached entrypoints-0.4-py3-none-any.whl (5.3 kB)
Collecting python-dateutil>=2.8.2
Using cached python_dateutil-2.8.2-py2.py3-none-any.whl (247 kB)
Collecting tinycss2
Using cached tinycss2-1.1.1-py3-none-any.whl (21 kB)
Collecting defusedxml
Using cached defusedxml-0.7.1-py2.py3-none-any.whl (25 kB)
Collecting beautifulsoup4
Using cached beautifulsoup4-4.11.1-py3-none-any.whl (128 kB)
Collecting jupyterlab-pygments
Using cached jupyterlab_pygments-0.2.2-py2.py3-none-any.whl (21 kB)
Collecting nbclient>=0.5.0
Using cached nbclient-0.6.4-py3-none-any.whl (71 kB)
Collecting pandocfilters>=1.4.1
Using cached pandocfilters-1.5.0-py2.py3-none-any.whl (8.7 kB)
Collecting mistune<2,>=0.8.1
Using cached mistune-0.8.4-py2.py3-none-any.whl (16 kB)
Collecting bleach
Using cached bleach-5.0.0-py3-none-any.whl (160 kB)
Collecting MarkupSafe>=2.0
Using cached MarkupSafe-2.1.1-cp39-cp39-macosx_10_9_x86_64.whl (13 kB)
Collecting jsonschema>=2.6
Using cached jsonschema-4.6.0-py3-none-any.whl (80 kB)
Collecting fastjsonschema
Using cached fastjsonschema-2.15.3-py3-none-any.whl (22 kB)
Collecting attrs>=17.4.0
Using cached attrs-21.4.0-py2.py3-none-any.whl (60 kB)
Collecting pyrsistent!=0.17.0,!=0.17.1,!=0.17.2,>=0.14.0
Using cached pyrsistent-0.18.1-cp39-cp39-macosx_10_9_universal2.whl (81 kB)
Collecting ptyprocess>=0.5
Using cached ptyprocess-0.7.0-py2.py3-none-any.whl (13 kB)
Collecting wcwidth
Using cached wcwidth-0.2.5-py2.py3-none-any.whl (30 kB)
Collecting six>=1.5
Using cached six-1.16.0-py2.py3-none-any.whl (11 kB)
Collecting argon2-cffi-bindings
Using cached argon2_cffi_bindings-21.2.0-cp38-abi3-macosx_10_9_universal2.whl (53 kB)
Collecting cffi>=1.0.1
Using cached cffi-1.15.0-cp39-cp39-macosx_10_9_x86_64.whl (178 kB)
Collecting pycparser
Using cached pycparser-2.21-py2.py3-none-any.whl (118 kB)
Collecting soupsieve>1.2
Using cached soupsieve-2.3.2.post1-py3-none-any.whl (37 kB)
Collecting webencodings
Using cached webencodings-0.5.1-py2.py3-none-any.whl (11 kB)
Collecting pyparsing!=3.0.5,>=2.0.2
Using cached pyparsing-3.0.9-py3-none-any.whl (98 kB)
Collecting pure-eval
Using cached pure_eval-0.2.2-py3-none-any.whl (11 kB)
Collecting asttokens
Using cached asttokens-2.0.5-py2.py3-none-any.whl (20 kB)
Collecting executing
Using cached executing-0.8.3-py2.py3-none-any.whl (16 kB)
Installing collected packages: traitlets, six, pyrsistent, attrs, wcwidth, tornado, pyzmq, python-dateutil, pycparser, pure-eval, ptyprocess, parso, nest-asyncio, jupyter-core, jsonschema, fastjsonschema, executing, entrypoints, asttokens, webencodings, stack-data, soupsieve, pyparsing, pygments, prompt-toolkit, pickleshare, pexpect, nbformat, matplotlib-inline, MarkupSafe, jupyter-client, jedi, decorator, cffi, backcall, appnope, tinycss2, psutil, pandocfilters, packaging, nbclient, mistune, jupyterlab-pygments, jinja2, ipython, defusedxml, debugpy, bleach, beautifulsoup4, argon2-cffi-bindings, terminado, Send2Trash, prometheus-client, nbconvert, ipython-genutils, ipykernel, argon2-cffi, notebook, widgetsnbextension, jupyterlab-widgets, pillow, numpy, ipywidgets, ipycanvas
Successfully installed MarkupSafe-2.1.1 Send2Trash-1.8.0 appnope-0.1.3 argon2-cffi-21.3.0 argon2-cffi-bindings-21.2.0 asttokens-2.0.5 attrs-21.4.0 backcall-0.2.0 beautifulsoup4-4.11.1 bleach-5.0.0 cffi-1.15.0 debugpy-1.6.0 decorator-5.1.1 defusedxml-0.7.1 entrypoints-0.4 executing-0.8.3 fastjsonschema-2.15.3 ipycanvas-0.12.0 ipykernel-6.15.0 ipython-8.4.0 ipython-genutils-0.2.0 ipywidgets-7.7.0 jedi-0.18.1 jinja2-3.1.2 jsonschema-4.6.0 jupyter-client-7.3.4 jupyter-core-4.10.0 jupyterlab-pygments-0.2.2 jupyterlab-widgets-1.1.0 matplotlib-inline-0.1.3 mistune-0.8.4 nbclient-0.6.4 nbconvert-6.5.0 nbformat-5.4.0 nest-asyncio-1.5.5 notebook-6.4.12 numpy-1.22.4 packaging-21.3 pandocfilters-1.5.0 parso-0.8.3 pexpect-4.8.0 pickleshare-0.7.5 pillow-9.1.1 prometheus-client-0.14.1 prompt-toolkit-3.0.29 psutil-5.9.1 ptyprocess-0.7.0 pure-eval-0.2.2 pycparser-2.21 pygments-2.12.0 pyparsing-3.0.9 pyrsistent-0.18.1 python-dateutil-2.8.2 pyzmq-23.1.0 six-1.16.0 soupsieve-2.3.2.post1 stack-data-0.3.0 terminado-0.15.0 tinycss2-1.1.1 tornado-6.1 traitlets-5.2.2.post1 wcwidth-0.2.5 webencodings-0.5.1 widgetsnbextension-3.6.0
```
</details> | closed | 2022-06-15T23:37:23Z | 2022-08-19T07:51:14Z | https://github.com/jupyter-widgets-contrib/ipycanvas/issues/277 | [] | mforbes | 0 |
deepset-ai/haystack | nlp | 8,818 | LLM-based Evaluators ask for an API key when run with local LLMs | **Describe the bug**
When I run the LLM-based evaluators with local LLMs, I get an error saying that the OpenAI API key is missing, even though no key is needed.
**Error message**
```bash
...
None of the following authentication environment variables are set: ('OPENAI_API_KEY',)
```
**Expected behavior**
Maybe the `api_key` param should be optional?
It works when I initialize the evaluator like this:
```python
evaluator = FaithfulnessEvaluator(api_key=Secret.from_token("just-a-placeholder"),
api_params={"api_base_url": local_endpoint, "model": "llama3"})
```
or like this:
```python
evaluator = FaithfulnessEvaluator(api_key=Secret.from_env_var("...", strict=False),
api_params={"api_base_url": local_endpoint, "model": "llama3"})
```
**To Reproduce**
Run the code below. It's taken from the [docs](https://docs.haystack.deepset.ai/docs/model-based-evaluation#using-local-llms)
```python
from haystack.components.evaluators import FaithfulnessEvaluator
questions = ["Who created the Python language?"]
contexts = [
[(
"Python, created by Guido van Rossum in the late 1980s, is a high-level general-purpose programming "
"language. Its design philosophy emphasizes code readability, and its language constructs aim to help "
"programmers write clear, logical code for both small and large-scale software projects."
)],
]
predicted_answers = [
"Python is a high-level general-purpose programming language that was created by George Lucas."
]
local_endpoint = "http://localhost:11434/v1"
evaluator = FaithfulnessEvaluator(api_params={"api_base_url": local_endpoint, "model": "llama3"})
result = evaluator.run(questions=questions, contexts=contexts, predicted_answers=predicted_answers)
```
**FAQ Check**
- [x] Have you had a look at [our new FAQ page](https://docs.haystack.deepset.ai/docs/faq)?
**System:**
- OS: Not relevant
- GPU/CPU: CPU
- Haystack version (commit or version number): 2.9.0
| closed | 2025-02-05T15:47:31Z | 2025-02-24T08:59:54Z | https://github.com/deepset-ai/haystack/issues/8818 | [
"type:bug",
"type:documentation",
"P2"
] | bilgeyucel | 3 |
3b1b/manim | python | 2,110 | Export with transparency on Windows | ### Describe the bug
I have spent a couple of days working on my animations and exporting them as .mp4 for testing purposes. I thought switching to a .mov format with transparent background would be as easy as adding the -t flag to my manim command, but apparently it’s not.
`manim -qh myfile.py BounceImage
`
works. It produces a .mp4 and I can see my animation (a simple image entering the screen with a bouncing effect, see code below). But with
`manim -tqh myfile.py BounceImage
`
it produces a .mov file that can only be opened in VLC, not in Premiere ("File import failure: The file has no audio or video stream") nor in Windows Media Player, Media Player, or Movies & TV ("The file is encoded in rle format which isn't supported").

Plus, when opened in VLC, I do not see the bouncing animation; I just see my image at its final position.
I’m using Manim Community v0.18.0 inside a Conda environment on Windows if it matters.
If it’s a known bug, what would be the best workaround? Can ffmpeg remove the solid background from a .mp4 for instance? Or is it better to export as a series of transparent .png and then use ffmpeg to merge them into a .mov file that can be read by Premiere (which is the final editing software I will be importing into)?
My bouncing effect:
```
from manim import *
import numpy as np

class BounceImage(Scene):
def __init__(self, **kwargs):
super().__init__(**kwargs)
self.img_path = "path/to/image.jpg"
def construct(self):
img = ImageMobject(self.img_path)
video_width = config.frame_width
img_width = video_width * 0.4
img.scale_to_fit_width(img_width)
margin = 0.02 * video_width
final_pos = np.array([-config.frame_width / 2 + margin + img_width / 2, config.frame_height / 2 - margin - img.height / 2, 0])
img.move_to(final_pos + 5*UP)
img_start = img.copy()
points = [
final_pos + 5*UP,
final_pos + 2*UP,
final_pos + 0.3*UP,
final_pos - 0.15*UP,
final_pos + 0.15*UP,
final_pos + 0.05*UP,
final_pos,
final_pos - 0.075*UP,
final_pos + 0.075*UP,
final_pos - 0.05*UP,
final_pos
]
path = VMobject().set_points_smoothly(points)
self.play(MoveAlongPath(img_start, path, rate_func=smooth), run_time=1)
self.wait(10)
```
| open | 2024-03-16T13:35:05Z | 2024-03-16T13:35:05Z | https://github.com/3b1b/manim/issues/2110 | [
"bug"
] | stephanedebove | 0 |
hzwer/ECCV2022-RIFE | computer-vision | 289 | RIFE-large trained model link?? | Where can I find the RIFE-large pretrained model? I can only see RIFE and RIFE-m models in the github repository. | open | 2022-11-10T18:50:18Z | 2022-11-15T03:45:46Z | https://github.com/hzwer/ECCV2022-RIFE/issues/289 | [] | abhishri-medewar | 3 |
matplotlib/matplotlib | data-visualization | 28,813 | [Bug]: pip install matplotlib fails | ### Bug summary
Cannot install matplotlib
### Code for reproduction
```Python
python3 -m pip install matplotlib --user --verbose --no-build-isolation
Using pip 24.2 from /usr/local/lib/python3.9/dist-packages/pip (python 3.9)
Collecting matplotlib
Using cached matplotlib-3.9.2.tar.gz (36.1 MB)
Running command Preparing metadata (pyproject.toml)
+ meson setup /tmp/pip-install-j03xwpgu/matplotlib_9bd21a6ab101489e99b984ba190ab967 /tmp/pip-install-j03xwpgu/matplotlib_9bd21a6ab101489e99b984ba190ab967/.mesonpy-re6h5qg_ -Dbuildtype=release -Db_ndebug=if-release -Db_vscrt=md --native-file=/tmp/pip-install-j03xwpgu/matplotlib_9bd21a6ab101489e99b984ba190ab967/.mesonpy-re6h5qg_/meson-python-native-file.ini
The Meson build system
Version: 1.5.1
Source dir: /tmp/pip-install-j03xwpgu/matplotlib_9bd21a6ab101489e99b984ba190ab967
Build dir: /tmp/pip-install-j03xwpgu/matplotlib_9bd21a6ab101489e99b984ba190ab967/.mesonpy-re6h5qg_
Build type: native build
Program python3 found: YES (/usr/bin/python3)
Project name: matplotlib
Project version: 3.9.2
C compiler for the host machine: cc (gcc 9.4.0 "cc (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0")
C linker for the host machine: cc ld.bfd 2.34
C++ compiler for the host machine: c++ (gcc 9.4.0 "c++ (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0")
C++ linker for the host machine: c++ ld.bfd 2.34
Host machine cpu family: arm
Host machine cpu: arm
Program python found: YES (/usr/bin/python3)
Found pkg-config: YES (/usr/bin/pkg-config) 0.29.1
Run-time dependency python found: YES 3.9
pybind11-config found: YES (/home/pi/.local/bin/pybind11-config) 2.13.5
Run-time dependency pybind11 found: YES 2.13.5
Downloading freetype-2.6.1 source from https://download.savannah.gnu.org/releases/freetype/freetype-old/freetype-2.6.1.tar.gz
Executing subproject freetype-2.6.1
freetype-2.6.1| Project name: freetype2
freetype-2.6.1| Project version: 2.6.1
freetype-2.6.1| C compiler for the host machine: cc (gcc 9.4.0 "cc (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0")
freetype-2.6.1| C linker for the host machine: cc ld.bfd 2.34
freetype-2.6.1| Has header "unistd.h" : YES
freetype-2.6.1| Has header "fcntl.h" : YES
freetype-2.6.1| Has header "stdint.h" : YES
freetype-2.6.1| Configuring ftconfig.h using configuration
freetype-2.6.1| Configuring ftoption.h using configuration
freetype-2.6.1| Build targets in project: 3
freetype-2.6.1| Subproject freetype-2.6.1 finished.
Downloading qhull source from https://github.com/qhull/qhull/archive/v8.0.2/qhull-8.0.2.tar.gz
Executing subproject qhull
qhull| Project name: qhull
qhull| Project version: 8.0.2
qhull| C compiler for the host machine: cc (gcc 9.4.0 "cc (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0")
qhull| C linker for the host machine: cc ld.bfd 2.34
qhull| Build targets in project: 4
qhull| Subproject qhull finished.
Run-time dependency dl found: YES
Configuring _version.py using configuration
Program /tmp/pip-install-j03xwpgu/matplotlib_9bd21a6ab101489e99b984ba190ab967/tools/generate_matplotlibrc.py found: YES (/tmp/pip-install-j03xwpgu/matplotlib_9bd21a6ab101489e99b984ba190ab967/tools/generate_matplotlibrc.py)
Build targets in project: 14
matplotlib 3.9.2
Subprojects
freetype-2.6.1: YES
qhull : YES
User defined options
Native files : /tmp/pip-install-j03xwpgu/matplotlib_9bd21a6ab101489e99b984ba190ab967/.mesonpy-re6h5qg_/meson-python-native-file.ini
buildtype : release
b_ndebug : if-release
b_vscrt : md
Found ninja-1.11.1.git.kitware.jobserver-1 at /home/pi/.local/bin/ninja
+ /home/pi/.local/bin/ninja
[1/101] Compiling C++ object extern/agg24-svn/libagg.a.p/src_agg_trans_affine.cpp.o
[2/101] Compiling C++ object extern/agg24-svn/libagg.a.p/src_agg_image_filters.cpp.o
[3/101] Compiling C++ object extern/agg24-svn/libagg.a.p/src_agg_vcgen_dash.cpp.o
[4/101] Compiling C++ object extern/agg24-svn/libagg.a.p/src_agg_bezier_arc.cpp.o
[5/101] Compiling C++ object extern/agg24-svn/libagg.a.p/src_agg_vcgen_contour.cpp.o
[6/101] Compiling C++ object extern/agg24-svn/libagg.a.p/src_agg_curves.cpp.o
[7/101] Compiling C++ object extern/agg24-svn/libagg.a.p/src_agg_vpgen_segmentator.cpp.o
[8/101] Compiling C++ object extern/ttconv/libttconv.a.p/ttutil.cpp.o
[9/101] Compiling C object subprojects/freetype-2.6.1/libfreetype.a.p/src_base_ftbbox.c.o
[10/101] Compiling C++ object extern/agg24-svn/libagg.a.p/src_agg_vcgen_stroke.cpp.o
[11/101] Linking static target extern/agg24-svn/libagg.a
[12/101] Compiling C object subprojects/freetype-2.6.1/libfreetype.a.p/src_base_ftbdf.c.o
[13/101] Compiling C object subprojects/freetype-2.6.1/libfreetype.a.p/src_base_ftbitmap.c.o
[14/101] Compiling C object subprojects/freetype-2.6.1/libfreetype.a.p/src_base_ftcid.c.o
[15/101] Compiling C object subprojects/freetype-2.6.1/libfreetype.a.p/src_autofit_autofit.c.o
[16/101] Compiling C object subprojects/freetype-2.6.1/libfreetype.a.p/src_base_ftfntfmt.c.o
[17/101] Compiling C object subprojects/freetype-2.6.1/libfreetype.a.p/src_base_ftgasp.c.o
[18/101] Compiling C object subprojects/freetype-2.6.1/libfreetype.a.p/src_base_ftfstype.c.o
[19/101] Compiling C object subprojects/freetype-2.6.1/libfreetype.a.p/src_base_ftglyph.c.o
[20/101] Compiling C object subprojects/freetype-2.6.1/libfreetype.a.p/src_base_ftgxval.c.o
[21/101] Compiling C object subprojects/freetype-2.6.1/libfreetype.a.p/src_base_ftinit.c.o
[22/101] Compiling C object subprojects/freetype-2.6.1/libfreetype.a.p/src_base_ftlcdfil.c.o
[23/101] Compiling C object subprojects/freetype-2.6.1/libfreetype.a.p/src_base_ftmm.c.o
[24/101] Compiling C object subprojects/freetype-2.6.1/libfreetype.a.p/src_base_ftotval.c.o
[25/101] Compiling C object subprojects/freetype-2.6.1/libfreetype.a.p/src_base_ftbase.c.o
[26/101] Compiling C object subprojects/freetype-2.6.1/libfreetype.a.p/src_base_ftpatent.c.o
[27/101] Compiling C object subprojects/freetype-2.6.1/libfreetype.a.p/src_base_ftpfr.c.o
[28/101] Compiling C object subprojects/freetype-2.6.1/libfreetype.a.p/src_base_ftsynth.c.o
[29/101] Compiling C object subprojects/freetype-2.6.1/libfreetype.a.p/src_base_fttype1.c.o
[30/101] Compiling C object subprojects/freetype-2.6.1/libfreetype.a.p/src_base_ftsystem.c.o
[31/101] Compiling C object subprojects/freetype-2.6.1/libfreetype.a.p/src_base_ftwinfnt.c.o
[32/101] Compiling C object subprojects/freetype-2.6.1/libfreetype.a.p/src_base_ftstroke.c.o
[33/101] Compiling C object subprojects/freetype-2.6.1/libfreetype.a.p/src_bzip2_ftbzip2.c.o
[34/101] Compiling C++ object extern/ttconv/libttconv.a.p/pprdrv_tt.cpp.o
[35/101] Compiling C object subprojects/freetype-2.6.1/libfreetype.a.p/src_cache_ftcache.c.o
[36/101] Compiling C object subprojects/freetype-2.6.1/libfreetype.a.p/src_bdf_bdf.c.o
[37/101] Compiling C object subprojects/freetype-2.6.1/libfreetype.a.p/src_cid_type1cid.c.o
[38/101] Compiling C object subprojects/freetype-2.6.1/libfreetype.a.p/src_lzw_ftlzw.c.o
[39/101] Compiling C++ object extern/ttconv/libttconv.a.p/pprdrv_tt2.cpp.o
[40/101] Linking static target extern/ttconv/libttconv.a
[41/101] Compiling C object subprojects/freetype-2.6.1/libfreetype.a.p/src_gzip_ftgzip.c.o
[42/101] Compiling C object subprojects/freetype-2.6.1/libfreetype.a.p/src_pcf_pcf.c.o
[43/101] Compiling C object subprojects/freetype-2.6.1/libfreetype.a.p/src_pfr_pfr.c.o
[44/101] Compiling C object subprojects/freetype-2.6.1/libfreetype.a.p/src_cff_cff.c.o
[45/101] Compiling C object subprojects/freetype-2.6.1/libfreetype.a.p/src_pshinter_pshinter.c.o
[46/101] Compiling C object subprojects/freetype-2.6.1/libfreetype.a.p/src_psaux_psaux.c.o
[47/101] Compiling C object subprojects/freetype-2.6.1/libfreetype.a.p/src_raster_raster.c.o
[48/101] Compiling C object subprojects/freetype-2.6.1/libfreetype.a.p/src_smooth_smooth.c.o
[49/101] Compiling C object subprojects/freetype-2.6.1/libfreetype.a.p/src_psnames_psnames.c.o
[50/101] Compiling C object subprojects/freetype-2.6.1/libfreetype.a.p/src_base_ftdebug.c.o
[51/101] Compiling C object subprojects/freetype-2.6.1/libfreetype.a.p/src_winfonts_winfnt.c.o
[52/101] Compiling C object subprojects/freetype-2.6.1/libfreetype.a.p/src_type1_type1.c.o
[53/101] Compiling C object subprojects/freetype-2.6.1/libfreetype.a.p/src_type42_type42.c.o
[54/101] Compiling C object subprojects/freetype-2.6.1/libfreetype.a.p/src_sfnt_sfnt.c.o
[55/101] Compiling C object subprojects/qhull-8.0.2/libqhull_r.a.p/src_libqhull_r_geom_r.c.o
[56/101] Compiling C object subprojects/qhull-8.0.2/libqhull_r.a.p/src_libqhull_r_geom2_r.c.o
[57/101] Compiling C object subprojects/qhull-8.0.2/libqhull_r.a.p/src_libqhull_r_mem_r.c.o
[58/101] Compiling C object subprojects/qhull-8.0.2/libqhull_r.a.p/src_libqhull_r_global_r.c.o
[59/101] Compiling C object subprojects/freetype-2.6.1/libfreetype.a.p/src_truetype_truetype.c.o
[60/101] Compiling C object subprojects/qhull-8.0.2/libqhull_r.a.p/src_libqhull_r_libqhull_r.c.o
[61/101] Linking static target subprojects/freetype-2.6.1/libfreetype.a
[62/101] Compiling C object subprojects/qhull-8.0.2/libqhull_r.a.p/src_libqhull_r_io_r.c.o
[63/101] Compiling C object subprojects/qhull-8.0.2/libqhull_r.a.p/src_libqhull_r_qset_r.c.o
[64/101] Compiling C object subprojects/qhull-8.0.2/libqhull_r.a.p/src_libqhull_r_random_r.c.o
[65/101] Compiling C object subprojects/qhull-8.0.2/libqhull_r.a.p/src_libqhull_r_poly_r.c.o
[66/101] Compiling C object subprojects/qhull-8.0.2/libqhull_r.a.p/src_libqhull_r_usermem_r.c.o
[67/101] Compiling C object subprojects/qhull-8.0.2/libqhull_r.a.p/src_libqhull_r_userprintf_r.c.o
[68/101] Compiling C object subprojects/qhull-8.0.2/libqhull_r.a.p/src_libqhull_r_userprintf_rbox_r.c.o
[69/101] Compiling C object subprojects/qhull-8.0.2/libqhull_r.a.p/src_libqhull_r_rboxlib_r.c.o
[70/101] Compiling C object subprojects/qhull-8.0.2/libqhull_r.a.p/src_libqhull_r_merge_r.c.o
[71/101] Compiling C object subprojects/qhull-8.0.2/libqhull_r.a.p/src_libqhull_r_poly2_r.c.o
[72/101] Compiling C object subprojects/qhull-8.0.2/libqhull_r.a.p/src_libqhull_r_stat_r.c.o
[73/101] Compiling C object subprojects/qhull-8.0.2/libqhull_r.a.p/src_libqhull_r_user_r.c.o
[74/101] Linking static target subprojects/qhull-8.0.2/libqhull_r.a
[75/101] Compiling C++ object src/_backend_agg.cpython-39-arm-linux-gnueabihf.so.p/py_converters.cpp.o
[76/101] Compiling C++ object src/_backend_agg.cpython-39-arm-linux-gnueabihf.so.p/_backend_agg.cpp.o
[77/101] Compiling C++ object src/ft2font.cpython-39-arm-linux-gnueabihf.so.p/ft2font_wrapper.cpp.o
[78/101] Compiling C++ object src/ft2font.cpython-39-arm-linux-gnueabihf.so.p/ft2font.cpp.o
[79/101] Compiling C++ object src/ft2font.cpython-39-arm-linux-gnueabihf.so.p/py_converters.cpp.o
[80/101] Compiling C++ object src/_path.cpython-39-arm-linux-gnueabihf.so.p/py_converters.cpp.o
[81/101] Compiling C++ object src/_backend_agg.cpython-39-arm-linux-gnueabihf.so.p/_backend_agg_wrapper.cpp.o
[82/101] Compiling C++ object src/_c_internal_utils.cpython-39-arm-linux-gnueabihf.so.p/_c_internal_utils.cpp.o
[83/101] Compiling C++ object src/_image.cpython-39-arm-linux-gnueabihf.so.p/py_converters_11.cpp.o
[84/101] Compiling C++ object src/_path.cpython-39-arm-linux-gnueabihf.so.p/py_converters_11.cpp.o
[85/101] Linking target src/_c_internal_utils.cpython-39-arm-linux-gnueabihf.so
[86/101] Compiling C++ object src/_image.cpython-39-arm-linux-gnueabihf.so.p/_image_wrapper.cpp.o
[87/101] Compiling C++ object src/_path.cpython-39-arm-linux-gnueabihf.so.p/_path_wrapper.cpp.o
FAILED: src/_path.cpython-39-arm-linux-gnueabihf.so.p/_path_wrapper.cpp.o
c++ -Isrc/_path.cpython-39-arm-linux-gnueabihf.so.p -Isrc -I../src -I../../../../home/pi/.local/lib/python3.9/site-packages/numpy/_core/include -I../extern/agg24-svn/include -I/usr/include/python3.9 -I/usr/include/arm-linux-gnueabihf/python3.9 -I/home/pi/.local/lib/python3.9/site-packages/pybind11/include -fvisibility=hidden -fvisibility-inlines-hidden -flto=4 -fdiagnostics-color=always -DNDEBUG -D_FILE_OFFSET_BITS=64 -Wall -Winvalid-pch -std=c++17 -O3 -fPIC -DNPY_NO_DEPRECATED_API=NPY_1_7_API_VERSION -D__STDC_FORMAT_MACROS=1 -DPY_ARRAY_UNIQUE_SYMBOL=MPL__path_ARRAY_API -MD -MQ src/_path.cpython-39-arm-linux-gnueabihf.so.p/_path_wrapper.cpp.o -MF src/_path.cpython-39-arm-linux-gnueabihf.so.p/_path_wrapper.cpp.o.d -o src/_path.cpython-39-arm-linux-gnueabihf.so.p/_path_wrapper.cpp.o -c ../src/_path_wrapper.cpp
c++: fatal error: Killed signal terminated program cc1plus
compilation terminated.
[88/101] Compiling C++ object src/_qhull.cpython-39-arm-linux-gnueabihf.so.p/_qhull_wrapper.cpp.o
[89/101] Linking target src/_backend_agg.cpython-39-arm-linux-gnueabihf.so
[90/101] Compiling C++ object src/_tkagg.cpython-39-arm-linux-gnueabihf.so.p/_tkagg.cpp.o
[91/101] Linking target src/_image.cpython-39-arm-linux-gnueabihf.so
[92/101] Linking target src/ft2font.cpython-39-arm-linux-gnueabihf.so
ninja: build stopped: subcommand failed.
error: subprocess-exited-with-error
× Preparing metadata (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
full command: /usr/bin/python3 /usr/local/lib/python3.9/dist-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py prepare_metadata_for_build_wheel /tmp/tmp8b22hwsc
cwd: /tmp/pip-install-j03xwpgu/matplotlib_9bd21a6ab101489e99b984ba190ab967
Preparing metadata (pyproject.toml) ... error
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
```
### Actual outcome
I don't know what to do
### Expected outcome
please help
### Additional information
This is a pi board in an FDM 3D printer with Klipper
The goal is to generate resonance charts according to https://www.klipper3d.org/Measuring_Resonances.html#__code_19
### Operating system
Ubuntu 20.04.6 LTS (GNU/Linux 4.9.191 aarch64)
### Matplotlib Version
3.9.2
### Matplotlib Backend
_No response_
### Python version
2.7.18 and 3.9.20
### Jupyter version
_No response_
### Installation
pip | closed | 2024-09-12T23:56:20Z | 2024-09-13T21:43:37Z | https://github.com/matplotlib/matplotlib/issues/28813 | [] | tokamac | 10 |
2noise/ChatTTS | python | 809 | Achieving an Over-3x Speedup with TensorRT on Windows | I accelerated ChatTTS to over 3x speed with TensorRT, and it can be used with a simple one-click extraction on Windows.
[ChatTTSPlus](https://github.com/warmshao/ChatTTSPlus) | closed | 2024-10-31T00:43:41Z | 2024-11-03T11:58:57Z | https://github.com/2noise/ChatTTS/issues/809 | [
"ad"
] | warmshao | 0 |
neuml/txtai | nlp | 654 | Update HFTrainer to add PEFT support | This change will add additional configuration to train models using the [PEFT](https://github.com/huggingface/peft) library and [bitsandbytes](https://github.com/TimDettmers/bitsandbytes).
After this change, the trainer will support any of the methods supported by PEFT such as [LoRA](https://arxiv.org/abs/2106.09685), [QLoRA](https://arxiv.org/abs/2305.14314) and [LoftQ](https://arxiv.org/abs/2310.08659). | closed | 2024-01-31T15:15:50Z | 2024-01-31T18:53:04Z | https://github.com/neuml/txtai/issues/654 | [] | davidmezzetti | 0 |
howie6879/owllook | asyncio | 64 | Redis password problem | REDIS_PASSWORD= 990990
doesn't work
REDIS_PASSWORD='990990'
doesn't work
REDIS_PASSWORD= '990990'
doesn't work
REDIS_PASSWORD= '990990'
doesn't work
All of the above were tested and give a wrong-password error.
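(A note in case it helps: if the setting lives in a plain Python config module, the password would normally be a string literal with no stray whitespace inside the quotes. Whether this resolves the wrong-password error above is an assumption; the cause could also be on the Redis server side.)

```python
# Sketch of a plain-Python config assignment; the variable name is taken
# from the attempts above, and the value is the password being tested.
REDIS_PASSWORD = "990990"
```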
| closed | 2019-04-14T14:56:52Z | 2019-04-22T05:43:20Z | https://github.com/howie6879/owllook/issues/64 | [] | anson1007 | 1 |
ymcui/Chinese-LLaMA-Alpaca | nlp | 167 | Has anyone quantized the 13B model and can provide a download link? | Thank you for using the issue template. Please follow the steps below to provide the relevant information. Issues with relatively complete information will be handled first; thank you for your cooperation.
*Hint: put an x inside the [ ] to tick a checkbox.*
### Pre-submission checklist
- [ ] Since the related dependencies are updated frequently, please make sure you followed the relevant steps in [README.md](https://github.com/ymcui/Chinese-LLaMA-Alpaca)
- [x] I have searched the existing issues and found no similar problem or solution
- [ ] I have read the [FAQ section](https://github.com/ymcui/Chinese-LLaMA-Alpaca#faq) of the README and found no similar problem or solution
- [ ] Third-party plugin problem: e.g. [llama.cpp](https://github.com/ggerganov/llama.cpp), [text-generation-webui](https://github.com/oobabooga/text-generation-webui), etc.; it is also recommended to look for a solution in the corresponding project
### Problem type
Base model:
- [ ] LLaMA
- [x] Alpaca
Issue type:
- [x] Download problem
- [ ] Model conversion and merging problem
- [ ] Model inference problem (🤗 transformers)
- [ ] Model quantization and deployment problem (llama.cpp, text-generation-webui)
- [ ] Output quality problem
- [ ] Other problem
### Detailed description of the problem
Because the RAM and disk space of the free Colab tier are far from enough to process the 13B model, my many attempts all failed, and Hugging Face doesn't offer a 13B download either. Would anyone be willing to share an already-quantized 13B model? Thanks~
### Screenshot or log
(If necessary) please provide a text log or screenshot so that we can better understand the details of the problem.
| closed | 2023-04-16T14:03:37Z | 2023-05-23T22:02:46Z | https://github.com/ymcui/Chinese-LLaMA-Alpaca/issues/167 | [
"stale"
] | lianggaoquan | 22 |
allenai/allennlp | pytorch | 4,785 | GumbelMaxSampler should return examples in descending order of log probability | Should be a simple one-line fix somewhere. | closed | 2020-11-12T00:38:08Z | 2020-11-12T06:19:21Z | https://github.com/allenai/allennlp/issues/4785 | [] | epwalsh | 0 |
mwaskom/seaborn | matplotlib | 2,937 | documentation is not reachable | "NET::ERR_CERT_COMMON_NAME_INVALID" | closed | 2022-08-05T08:46:10Z | 2022-08-05T23:03:35Z | https://github.com/mwaskom/seaborn/issues/2937 | [] | dryguz | 4 |
HIT-SCIR/ltp | nlp | 244 | Help with pyltp dependency parsing | Hello teachers, after running Chinese dependency parsing with pyltp, I want to obtain both the head and the tail of each arc, but pyltp only exposes two pieces of arc-related information, head and relation. Could anyone help me solve this?
For example, for the arc 系统 -> 流畅, through pyltp (arc.head) I can only obtain 流畅; I also want to obtain 系统, i.e. I want to obtain both the head and the tail of the whole arc.
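In LTP's usual convention (worth double-checking against the pyltp version in use), `arcs` is a list parallel to the segmented words, and `arc.head` is the 1-based index of the head word (0 stands for the pseudo-root), so the other endpoint of the arc is simply the word at the arc's own position. A self-contained sketch with mocked parse output; the words, indices and labels below are a made-up example, not real pyltp results:

```python
from collections import namedtuple

# Stand-in for pyltp's parse result: arc.head is the 1-based index of the
# head word in the sentence (0 means the pseudo-root of the HED arc), and
# arc.relation is the dependency label.
Arc = namedtuple("Arc", ["head", "relation"])

words = ["系统", "很", "流畅"]  # "system", "very", "smooth"
arcs = [Arc(3, "SBV"), Arc(3, "ADV"), Arc(0, "HED")]

pairs = []
for i, arc in enumerate(arcs):
    head_word = "ROOT" if arc.head == 0 else words[arc.head - 1]
    pairs.append((head_word, words[i], arc.relation))  # (head, dependent, label)

for head, dep, rel in pairs:
    print(f"{head} -> {dep} ({rel})")
```

With real output, `words` would come from the segmentor and `arcs` from the dependency parser; the pairing logic stays the same.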
Hoping for your help, many thanks! | closed | 2017-08-02T08:25:17Z | 2020-06-25T11:20:37Z | https://github.com/HIT-SCIR/ltp/issues/244 | [] | licunlin2012 | 20
sinaptik-ai/pandas-ai | data-science | 591 | Code not showing on Databricks notebook | ### 🐛 Describe the bug
Hi, you can see in the image below that when I try to use `show_code = True`, there is no code shown in Databricks.
How could I solve this?
Thanks
Francesco

| closed | 2023-09-25T09:07:22Z | 2023-09-30T20:32:25Z | https://github.com/sinaptik-ai/pandas-ai/issues/591 | [] | FrancescoRettondini | 6 |
Miserlou/Zappa | django | 1,502 | Tag release commits with version in repository for easier code browsing/inspection | ## Context
This issue is git / github / repository / release process related, rather than a bug in zappa code.
## Expected Behavior
Most projects tag their release commits with the version so that changes/diffs/checkouts of code is made easy for users. E.g. if I want to see code changes made between versions (or between the latest released version and 'master') using the `git diff` command, I can use tags rather than commit hashes (a lot more user friendly).
## Actual Behavior
Github shows no tags on this repository.
## Possible Fix
Please tag historical release commits with version and push. Update release process to include tagging new versions.
| closed | 2018-05-10T08:40:22Z | 2018-08-08T15:05:01Z | https://github.com/Miserlou/Zappa/issues/1502 | [
"good-idea"
] | mattaustin | 3 |
stanfordnlp/stanza | nlp | 1,280 | Missing language model for Livvi | **Describe the bug**
The [doc](https://stanfordnlp.github.io/stanza/available_models.html) lists Livvi as a supported language, but there is no language model available to download.
**To Reproduce**
```
import stanza
stanza.download('olo')
Downloading https://raw.githubusercontent.com/stanfordnlp/stanza-resources/main/resources_1.5.0.json: 216kB [00:00, 36.0MB/s]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "D:\Python\lib\site-packages\stanza\resources\common.py", line 551, in download
raise UnknownLanguageError(lang)
stanza.resources.common.UnknownLanguageError: Unknown language requested: olo
```
**Expected behavior**
Add a language model for Livvi, or remove Livvi as a supported language in the doc.
**Environment (please complete the following information):**
 - OS: Windows 11 x64
 - Python version: Python 3.8.12
 - Stanza version: 1.5.0
| closed | 2023-08-31T21:05:33Z | 2023-09-17T13:37:26Z | https://github.com/stanfordnlp/stanza/issues/1280 | [
"bug"
] | BLKSerene | 13 |
oegedijk/explainerdashboard | dash | 6 | Seperating global vs local explanation and adding what-if analysis under local explanation? | As the title suggests, is there a plan of adding such functionality? | closed | 2020-10-02T12:49:21Z | 2020-10-08T10:17:01Z | https://github.com/oegedijk/explainerdashboard/issues/6 | [] | rezacsedu | 4 |
horovod/horovod | machine-learning | 3,303 | Is there an example of distributed training using only CPU | I want to try distributed training with multiple machines using CPU. What command should I use to start it? Is there an example for reference | closed | 2021-12-08T10:08:25Z | 2021-12-08T10:33:15Z | https://github.com/horovod/horovod/issues/3303 | [] | liiitleboy | 0 |
lepture/authlib | flask | 359 | Why is refresh_token method unavailable on FASTAPI/Starlette client? How to enable it? | **Describe the bug**
Unable to access ```refresh_token``` method from FAST API starlette client
```python
from authlib.integrations.starlette_client import OAuth
oauth = OAuth()
provider = oauth.register(
"cognito",
client_id=cognito_config.client_id,
client_secret=cognito_config.client_secret,
server_metadata_url=cognito_config.auth_urls.discovery_url,
client_kwargs={"scope": "openid email"},
)
# method unrecognised
provider.refresh_token('url', 'refresh_token', 'body', None, None)
```
**Error Stacks**
```
StarletteOAuth2App object has no attribute 'refresh_token'
```
**To Reproduce**
```python
from authlib.integrations.starlette_client import OAuth
oauth = OAuth()
provider = oauth.register(
"cognito",
client_id=cognito_config.client_id,
client_secret=cognito_config.client_secret,
server_metadata_url=cognito_config.auth_urls.discovery_url,
client_kwargs={"scope": "openid email"},
)
provider.refresh_token('url', 'refresh_token', 'body', None, None)
```
From the [documentation](https://docs.authlib.org/en/latest/client/index.html) I'm struggling to understand why the `refresh_token` method is unavailable on the Starlette/FastAPI `OAuth` client, yet it is offered on ```requests_client.OAuth2Session``` or ```httpx_client.AsyncOAuth2Client```.
I'm also confused about what I should be using ```httpx_client.AsyncOAuth2Client``` / ```requests_client.OAuth2Session``` for. For example:
- What is the difference between this and ```OAuth``` classes?
- Why is ```refresh_token``` not available on ```OAuth```? This means that I will have to initialise ```httpx_client.AsyncOAuth2Client``` / ```requests_client.OAuth2Session``` just to use refresh_token functionality?
**Expected behaviour**
Should be able to access refresh_token method for FASTAPI/ Starlette client. How do I get the refresh_token method so that I can make the request to the cognito IDP?
**Environment:**
- OS: macos Catalina 10.15.7 (19H2)
- Python Version: 3.8.9
- Authlib Version: 1.0.0a1
| closed | 2021-06-22T15:16:08Z | 2021-11-02T21:49:24Z | https://github.com/lepture/authlib/issues/359 | [
"bug"
] | dcs3spp | 2 |
google-research/bert | tensorflow | 594 | How much data does it require to do a sentence pair classification task? | Hi, my task is sentence pair semantic classification: the label is 1 if the two sentences have the same meaning and 0 if they do not.
For example,
I want to eat breakfast, I need to take my breakfast, 1.
How much data should I prepare for this task? Thank you. | open | 2019-04-22T07:19:47Z | 2019-04-23T01:52:22Z | https://github.com/google-research/bert/issues/594 | [] | leolle | 1 |
psf/requests | python | 6,871 | "I use the same proxy for requests to 300 websites from China, and httpx and requests behave completely differently." | same code, same proxies!
success rate verry diffrrence
this is requests success rate!
<img width="988" alt="image" src="https://github.com/user-attachments/assets/8e3c3338-c71b-4818-8d59-1e2a339f1738" />
<img width="1209" alt="image" src="https://github.com/user-attachments/assets/6f791157-e544-4d4c-a213-7d1839c69194" />
this is httpx success rate!
<img width="1073" alt="image" src="https://github.com/user-attachments/assets/533fd631-a090-4851-9fbe-974629d66b2e" />
I am a user from China. I suspect that when using a proxy with requests, the process of obtaining the DNS IP does not go through the proxy, or it directly uses the system's DNS to obtain the IP. At this point, it has already been intercepted by the Great Firewall of China, leading to the retrieval of an incorrect IP and ultimately causing a SOCKSHTTP timeout. However, httpx does not have this issue. I think it might be a bug in requests, but I can't find the cause. I also don't know how to fix it.
Since our project extensively uses the requests library, we cannot quickly switch to httpx. I hope you can help me solve this bug. We have debugged up to the point of establishing the SOCKS connection and found that both are the same, but the final success rates are completely different. httpx can achieve a 90% success rate, while requests only reaches a 20% success rate.
I would prefer to know and fix this bug.
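If the suspicion about local DNS resolution is right, one detail worth checking (an assumption on my part — this applies when the proxy is SOCKS5 and requests is installed with PySocks) is the proxy URL scheme: with `socks5://` the hostname is resolved locally before the proxy is contacted, while `socks5h://` hands the hostname to the proxy for remote resolution, which would bypass local DNS interception. A minimal sketch (the proxy address is a placeholder):

```python
from urllib.parse import urlparse

# socks5://  -> hostname resolved locally (exposed to local DNS interception)
# socks5h:// -> hostname sent to the proxy and resolved remotely
proxies = {
    "http": "socks5h://127.0.0.1:1080",   # placeholder proxy address
    "https": "socks5h://127.0.0.1:1080",
}

# requests.get("https://example.com", proxies=proxies, timeout=10)  # needs a live proxy

print(urlparse(proxies["https"]).scheme)  # socks5h
```

If requests was being handed `socks5://` URLs while httpx did remote resolution, that alone could explain the gap in success rates.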
| closed | 2025-01-15T05:46:49Z | 2025-01-20T13:15:32Z | https://github.com/psf/requests/issues/6871 | [] | wjsw1781 | 2 |
tqdm/tqdm | jupyter | 1,072 | NameError: name 'IProgress' is not defined | Jupyter notebook. Downloading MNIST. Installed ipywidgets. _Can_ import IntProgress from ipywidgets. _Can't_ import IProgress from ipywidgets, because there's no such thing.
I have no idea how to fix this. The error message says to update jupyter and ipywidgets, but they're both freshly installed and fully up to date. It's been a known problem for a long time. The only resolution I found on other bugs with "NameError: name 'IProgress' is not defined" is installing ipywidgets... | closed | 2020-11-10T00:06:52Z | 2022-02-28T15:58:26Z | https://github.com/tqdm/tqdm/issues/1072 | [
"p0-bug-critical ☢",
"question/docs ‽",
"submodule-notebook 📓",
"c1-quick 🕐"
] | mercertom | 3 |
pytest-dev/pytest-cov | pytest | 238 | --no-cov on pytest 4.0.0 produces warning | Using pytest-cov 2.6.0 and pytest 4.0.0, running `py.test --no-cov` logs a long traceback with a warning that `config.warn` is deprecated:
```
% pipenv run py.test --no-cov
======================================================================================================== test session starts ========================================================================================================
platform linux -- Python 3.6.6, pytest-4.0.0, py-1.7.0, pluggy-0.8.0 -- /home/sybren/.virtualenvs/demo-VU2p87vc/bin/python3
cachedir: .pytest_cache
rootdir: /home/sybren/tmp/demo, inifile: setup.cfg
plugins: cov-2.6.0
collected 1 item
test_dummy.py::test_this PASSED [100%]
WARNING: Coverage disabled via --no-cov switch!
Traceback (most recent call last):
File "/home/sybren/.virtualenvs/demo-VU2p87vc/bin/py.test", line 11, in <module>
sys.exit(main())
File "/home/sybren/.virtualenvs/demo-VU2p87vc/lib/python3.6/site-packages/_pytest/config/__init__.py", line 77, in main
return config.hook.pytest_cmdline_main(config=config)
File "/home/sybren/.virtualenvs/demo-VU2p87vc/lib/python3.6/site-packages/pluggy/hooks.py", line 284, in __call__
return self._hookexec(self, self.get_hookimpls(), kwargs)
File "/home/sybren/.virtualenvs/demo-VU2p87vc/lib/python3.6/site-packages/pluggy/manager.py", line 67, in _hookexec
return self._inner_hookexec(hook, methods, kwargs)
File "/home/sybren/.virtualenvs/demo-VU2p87vc/lib/python3.6/site-packages/pluggy/manager.py", line 61, in <lambda>
firstresult=hook.spec.opts.get("firstresult") if hook.spec else False,
File "/home/sybren/.virtualenvs/demo-VU2p87vc/lib/python3.6/site-packages/pluggy/callers.py", line 208, in _multicall
return outcome.get_result()
File "/home/sybren/.virtualenvs/demo-VU2p87vc/lib/python3.6/site-packages/pluggy/callers.py", line 80, in get_result
raise ex[1].with_traceback(ex[2])
File "/home/sybren/.virtualenvs/demo-VU2p87vc/lib/python3.6/site-packages/pluggy/callers.py", line 187, in _multicall
res = hook_impl.function(*args)
File "/home/sybren/.virtualenvs/demo-VU2p87vc/lib/python3.6/site-packages/_pytest/main.py", line 218, in pytest_cmdline_main
return wrap_session(config, _main)
File "/home/sybren/.virtualenvs/demo-VU2p87vc/lib/python3.6/site-packages/_pytest/main.py", line 211, in wrap_session
session=session, exitstatus=session.exitstatus
File "/home/sybren/.virtualenvs/demo-VU2p87vc/lib/python3.6/site-packages/pluggy/hooks.py", line 284, in __call__
return self._hookexec(self, self.get_hookimpls(), kwargs)
File "/home/sybren/.virtualenvs/demo-VU2p87vc/lib/python3.6/site-packages/pluggy/manager.py", line 67, in _hookexec
return self._inner_hookexec(hook, methods, kwargs)
File "/home/sybren/.virtualenvs/demo-VU2p87vc/lib/python3.6/site-packages/pluggy/manager.py", line 61, in <lambda>
firstresult=hook.spec.opts.get("firstresult") if hook.spec else False,
File "/home/sybren/.virtualenvs/demo-VU2p87vc/lib/python3.6/site-packages/pluggy/callers.py", line 203, in _multicall
gen.send(outcome)
File "/home/sybren/.virtualenvs/demo-VU2p87vc/lib/python3.6/site-packages/_pytest/terminal.py", line 639, in pytest_sessionfinish
terminalreporter=self, exitstatus=exitstatus
File "/home/sybren/.virtualenvs/demo-VU2p87vc/lib/python3.6/site-packages/pluggy/hooks.py", line 284, in __call__
return self._hookexec(self, self.get_hookimpls(), kwargs)
File "/home/sybren/.virtualenvs/demo-VU2p87vc/lib/python3.6/site-packages/pluggy/manager.py", line 67, in _hookexec
return self._inner_hookexec(hook, methods, kwargs)
File "/home/sybren/.virtualenvs/demo-VU2p87vc/lib/python3.6/site-packages/pluggy/manager.py", line 61, in <lambda>
firstresult=hook.spec.opts.get("firstresult") if hook.spec else False,
File "/home/sybren/.virtualenvs/demo-VU2p87vc/lib/python3.6/site-packages/pluggy/callers.py", line 208, in _multicall
return outcome.get_result()
File "/home/sybren/.virtualenvs/demo-VU2p87vc/lib/python3.6/site-packages/pluggy/callers.py", line 80, in get_result
raise ex[1].with_traceback(ex[2])
File "/home/sybren/.virtualenvs/demo-VU2p87vc/lib/python3.6/site-packages/pluggy/callers.py", line 187, in _multicall
res = hook_impl.function(*args)
File "/home/sybren/.virtualenvs/demo-VU2p87vc/lib/python3.6/site-packages/pytest_cov/plugin.py", line 248, in pytest_terminal_summary
terminalreporter.config.warn(code='COV-1', message=message)
File "/home/sybren/.virtualenvs/demo-VU2p87vc/lib/python3.6/site-packages/_pytest/config/__init__.py", line 662, in warn
lineno=lineno,
_pytest.warning_types.RemovedInPytest4Warning: config.warn has been deprecated, use warnings.warn instead
```
This can be reproduced by taking the attached [demo.tar.gz](https://github.com/pytest-dev/pytest-cov/files/2585373/demo.tar.gz) and running `pipenv run py.test --no-cov`. I used Python 3.6.6 and Ubuntu 18.04. Note that `py.test` options are set in the `setup.cfg` file. | closed | 2018-11-15T13:00:21Z | 2018-11-15T14:43:24Z | https://github.com/pytest-dev/pytest-cov/issues/238 | [
"bug"
] | sybrenstuvel | 1 |
opengeos/leafmap | jupyter | 482 | Toolbar frozen with Geemap and Leafmap | <!-- Please search existing issues to avoid creating duplicates. -->
### Environment Information
- leafmap version: 0.22.0
- Python version: 3.10
- Operating System: Mac OS
### Description
When I go to use the toolbar it freezes and makes me unable to use the toolbar functionality. Anyone else having this issue?
### What I Did
```
import leafmap
m = leafmap.Map(center=[40, -100], zoom=4)
naip_url = 'https://www.mrlc.gov/geoserver/mrlc_display/NLCD_2019_Land_Cover_L48/wms?'
m.add_wms_layer(
url=naip_url,
layers='NLCD_2019_Land_Cover_L48',
name='NLCD 2019',
attribution='MRLC',
format='image/png',
shown=True,
)
m.add_legend(title='NLCD Land Cover Type', builtin_legend='NLCD')
m
```

| closed | 2023-06-22T19:03:56Z | 2023-06-29T03:50:20Z | https://github.com/opengeos/leafmap/issues/482 | [
"bug"
] | codefean | 1 |
blacklanternsecurity/bbot | automation | 2,075 | Badsecrets erroring upon a specific URL | **Describe the bug**
Badsecrets dies upon hitting a certain URL.
**Expected behavior**
Process the event successfully.
**BBOT Command**
\<Redacted\>
**OS, BBOT Installation Method + Version**
\<Redacted\>
**BBOT Config**
\<Redacted\>
**Logs**
[DBUG] badsecrets.finished: False
[DBUG] running: True
[DBUG] tasks:
[DBUG] - badsecrets.handle_event(HTTP_RESPONSE("{'url': '\<Redacted\>', 'timestamp': '2...", module=httpx, tags={'in-scope', '\<Redacted\>', 'dir', 'ip-\<Redacted\>, 'status-200'})) running for 2 minutes, 6 seconds:
[DBUG] incoming_queue_size: 19969
[DBUG] outgoing_queue_size: 0
**Screenshots**
N/A | closed | 2024-12-09T16:48:45Z | 2024-12-10T05:06:22Z | https://github.com/blacklanternsecurity/bbot/issues/2075 | [
"bug"
] | ausmaster | 10 |
jina-ai/clip-as-service | pytorch | 588 | Is there a way to get the model to output the attention weights too? | Is there a way to get the model to output the attention weights too? | open | 2020-09-11T11:43:27Z | 2020-09-11T11:43:52Z | https://github.com/jina-ai/clip-as-service/issues/588 | [] | lucas0 | 0 |
huggingface/transformers | deep-learning | 36,012 | [feature request] Callback handler event after forward pass in Trainer | While working on training (LORA) some model, I wanted to track memory usage in reference with meaningful events:
1. after forward pass - to track activation memory consumption
2. after backward pass - to track gradient and optimizer memory consumption
3. after zero grad - to make sure everything is released and we go back to only model weights consuming memory.
It was very easy to set up 2 and 3; however, I noticed that there is no event related to the forward pass. To inject the wanted behavior I had to override `compute_loss`, even though the loss computation stayed exactly the same. I think an event related to the forward pass would greatly reduce the need to override `compute_loss` to inject custom behavior (that is not related to loss computation), and would enable cleaner, more robust code by allowing a callback to be used in such cases.
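To make the request concrete, here is a minimal plain-Python sketch (not the real `transformers` API — the `on_forward_end` event name is hypothetical) of the kind of dispatch I have in mind: the trainer fires an event right after the forward pass, so a memory probe can hook in without overriding `compute_loss`:

```python
class CallbackHandler:
    """Dispatches named events to any callback that defines a matching method."""
    def __init__(self, callbacks):
        self.callbacks = callbacks

    def fire(self, event, **kwargs):
        for cb in self.callbacks:
            handler = getattr(cb, event, None)
            if handler is not None:
                handler(**kwargs)

class MemoryProbe:
    def __init__(self):
        self.events = []

    def on_forward_end(self, step):            # hypothetical new event
        self.events.append(("forward", step))

    def on_step_end(self, step):               # mirrors the existing event
        self.events.append(("zero_grad", step))

def training_step(handler, step):
    loss = 0.0                                  # stand-in for model(**inputs).loss
    handler.fire("on_forward_end", step=step)   # <- the requested event
    # ... backward + optimizer.step() would happen here ...
    handler.fire("on_step_end", step=step)
    return loss

probe = MemoryProbe()
handler = CallbackHandler([probe])
training_step(handler, step=0)
print(probe.events)  # [('forward', 0), ('zero_grad', 0)]
```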
It might not be relevant to the feature request, but here is the callback I used to log memory metrics after backward and zero grad:
```
import os

import torch
from transformers import (TrainerCallback, TrainerControl, TrainerState,
                          TrainingArguments)

# get_used_gpu_memory is my own helper, defined elsewhere

class MemoryStatsCallback(TrainerCallback):
def __init__(self, filepath, step_log_interval):
super().__init__()
self.filepath = filepath
os.makedirs(os.path.dirname(self.filepath), exist_ok=True)
if os.path.exists(self.filepath):
os.remove(self.filepath)
self.global_step = -1
self.step_log_interval = step_log_interval
self.log_flag = False
def _increment_step_counter(self):
self.global_step += 1
if self.global_step % self.step_log_interval == 0:
self.log_flag = True
else:
self.log_flag = False
def write_memory_stats_to_file(self, event: str):
if not self.log_flag:
return
allocated_memory = get_used_gpu_memory()
with open(self.filepath, 'a') as f:
f.write(f'{event} {allocated_memory}\n')
def on_step_begin(self, args: TrainingArguments, state: TrainerState, control: TrainerControl, **kwargs):
self._increment_step_counter()
return control
def on_substep_end(self, args: TrainingArguments, state: TrainerState, control: TrainerControl, **kwargs):
# substep is forward-backward during gradient accumulation (if not finished the grad accum steps)
self.write_memory_stats_to_file(event='backward')
return control
def on_pre_optimizer_step(self, args: TrainingArguments, state: TrainerState, control: TrainerControl, **kwargs):
# after backward before doing optimizer.step
self.write_memory_stats_to_file(event='backward')
return control
def on_step_end(self, args: TrainingArguments, state: TrainerState, control: TrainerControl, **kwargs):
# after zero grad
self.write_memory_stats_to_file(event='zero_grad')
torch.cuda.empty_cache()
return control
```
To also account for memory metrics after the forward pass, I overrode `compute_loss`. | closed | 2025-02-03T10:07:43Z | 2025-03-18T18:03:08Z | https://github.com/huggingface/transformers/issues/36012 | [] | yanadrdr | 8 |
cookiecutter/cookiecutter-django | django | 5,184 | Add ngrok for developing locally with HTTPS | ## Description
Change the documentation for developing locally with HTTPS to use ngrok.
## Rationale
The existing documentation no longer works as described.
## Implementation
ngrok allows a user to connect localhost to the internet for testing applications and APIs
1. Install ngrok
- `brew install ngrok/ngrok/ngrok`
or
- `npm install -g ngrok`
2. Register for an account - get AUTH_TOKEN
3. `ngrok config add-authtoken <AUTH_TOKEN>`
4. `ngrok http localhost:8000` - note the forwarding URL
5. In config/settings/local.py
- add the ngrok domain to ALLOWED_HOSTS
- `ALLOWED_HOSTS = ["localhost", "0.0.0.0", "127.0.0.1", ".ngrok-free.app"]`
- add HTTPS configuration
- `CSRF_TRUSTED_ORIGINS = ['https://*.ngrok-free.app']`
- `SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')`
- `SESSION_COOKIE_SECURE = True`
- `CSRF_COOKIE_SECURE = True`
ngrok URL should now serve localhost
| closed | 2024-07-02T09:55:00Z | 2024-08-05T19:40:14Z | https://github.com/cookiecutter/cookiecutter-django/issues/5184 | [
"enhancement"
] | hugomoran159 | 8 |
man-group/notebooker | jupyter | 13 | Add ability to add custom mongo connection logic | Usually you won't have a plaintext password in an environment variable (I hope) so we need to allow users to specify their own connection methods. This in future should be extendable to other storage mechanisms, e.g. postgres | closed | 2020-10-14T23:10:32Z | 2020-12-07T10:28:27Z | https://github.com/man-group/notebooker/issues/13 | [
"enhancement"
] | jonbannister | 0 |
iperov/DeepFaceLab | deep-learning | 665 | Xseg split can't work | I use these scripts, but "Xseg split" can't work correctly.
5) data_dst extract whole_face MANUAL.bat
5.XSeg) data_dst split.bat
The console shows:
Processing: 100%|#####################################################################################################################################################| 22/22 [00:00<00:00, 203.69it/s]
Images processed: 0
Done.
Version: DeepFaceLab_NVIDIA_build_03_18_2020 | closed | 2020-03-20T23:44:04Z | 2020-03-21T08:26:57Z | https://github.com/iperov/DeepFaceLab/issues/665 | [] | allen651212 | 2 |
plotly/plotly.py | plotly | 4,700 | add diagonals for `go.Splom` | I'd like to point out that there is still interest in the feature discussed in plotly/plotly_express#42. Now that `plotly/plotly_express` is archived, this belongs here IMHO.
| open | 2024-07-29T15:44:33Z | 2024-08-13T13:27:10Z | https://github.com/plotly/plotly.py/issues/4700 | [
"feature",
"P3"
] | johannes-mueller | 1 |
ets-labs/python-dependency-injector | asyncio | 811 | RecursionError: maximum recursion depth exceeded while calling a Python object | File "src/dependency_injector/providers.pyx", line 4920, in dependency_injector.providers.deepcopy
File "/usr/lib/python3.11/copy.py", line 146, in deepcopy
y = copier(x, memo)
^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/copy.py", line 211, in _deepcopy_tuple
y = [deepcopy(a, memo) for a in x]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/copy.py", line 211, in <listcomp>
y = [deepcopy(a, memo) for a in x]
^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/copy.py", line 153, in deepcopy
y = copier(memo)
^^^^^^^^^^^^
File "src/dependency_injector/providers.pyx", line 1046, in dependency_injector.providers.DependenciesContainer.__deepcopy__
File "src/dependency_injector/providers.pyx", line 4920, in dependency_injector.providers.deepcopy
File "/usr/lib/python3.11/copy.py", line 146, in deepcopy
y = copier(x, memo)
^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/copy.py", line 231, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/copy.py", line 153, in deepcopy
y = copier(memo)
^^^^^^^^^^^^
File "src/dependency_injector/providers.pyx", line 790, in dependency_injector.providers.Dependency.__deepcopy__
RecursionError: maximum recursion depth exceeded while calling a Python object
Exception ignored in: 'dependency_injector.providers.Provider._copy_overridings'
Traceback (most recent call last):
File "src/dependency_injector/providers.pyx", line 4920, in dependency_injector.providers.deepcopy
File "/usr/lib/python3.11/copy.py", line 146, in deepcopy
y = copier(x, memo)
^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/copy.py", line 211, in _deepcopy_tuple
y = [deepcopy(a, memo) for a in x]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/copy.py", line 211, in <listcomp>
y = [deepcopy(a, memo) for a in x]
^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/copy.py", line 153, in deepcopy
y = copier(memo)
^^^^^^^^^^^^
File "src/dependency_injector/providers.pyx", line 1046, in dependency_injector.providers.DependenciesContainer.__deepcopy__
File "src/dependency_injector/providers.pyx", line 4920, in dependency_injector.providers.deepcopy
File "/usr/lib/python3.11/copy.py", line 146, in deepcopy
y = copier(x, memo)
^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/copy.py", line 231, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/copy.py", line 153, in deepcopy
y = copier(memo)
^^^^^^^^^^^^
File "src/dependency_injector/providers.pyx", line 790, in dependency_injector.providers.Dependency.__deepcopy__
RecursionError: maximum recursion depth exceeded while calling a Python object
Traceback (most recent call last):
File "src/dependency_injector/providers.pyx", line 4920, in dependency_injector.providers.deepcopy
File "/usr/lib/python3.11/copy.py", line 146, in deepcopy
File "/usr/lib/python3.11/copy.py", line 211, in _deepcopy_tuple
File "/usr/lib/python3.11/copy.py", line 211, in <listcomp>
RecursionError: maximum recursion depth exceeded
Exception ignored in: 'dependency_injector.providers.Provider._copy_overridings'
Traceback (most recent call last):
File "src/dependency_injector/providers.pyx", line 4920, in dependency_injector.providers.deepcopy
File "/usr/lib/python3.11/copy.py", line 146, in deepcopy
File "/usr/lib/python3.11/copy.py", line 211, in _deepcopy_tuple
File "/usr/lib/python3.11/copy.py", line 211, in <listcomp>
RecursionError: maximum recursion depth exceeded
Traceback (most recent call last):
File "src/dependency_injector/providers.pyx", line 4920, in dependency_injector.providers.deepcopy
File "/usr/lib/python3.11/copy.py", line 146, in deepcopy
File "/usr/lib/python3.11/copy.py", line 211, in _deepcopy_tuple
File "/usr/lib/python3.11/copy.py", line 211, in <listcomp>
RecursionError: maximum recursion depth exceeded
Exception ignored in: 'dependency_injector.providers.Provider._copy_overridings'
Traceback (most recent call last):
File "src/dependency_injector/providers.pyx", line 4920, in dependency_injector.providers.deepcopy
File "/usr/lib/python3.11/copy.py", line 146, in deepcopy
File "/usr/lib/python3.11/copy.py", line 211, in _deepcopy_tuple
File "/usr/lib/python3.11/copy.py", line 211, in <listcomp>
RecursionError: maximum recursion depth exceeded
Traceback (most recent call last):
File "src/dependency_injector/providers.pyx", line 4920, in dependency_injector.providers.deepcopy
File "/usr/lib/python3.11/copy.py", line 146, in deepcopy
File "/usr/lib/python3.11/copy.py", line 211, in _deepcopy_tuple
File "/usr/lib/python3.11/copy.py", line 211, in <listcomp>
RecursionError: maximum recursion depth exceeded
Exception ignored in: 'dependency_injector.providers.Provider._copy_overridings'
Traceback (most recent call last):
File "src/dependency_injector/providers.pyx", line 4920, in dependency_injector.providers.deepcopy
File "/usr/lib/python3.11/copy.py", line 146, in deepcopy
File "/usr/lib/python3.11/copy.py", line 211, in _deepcopy_tuple
File "/usr/lib/python3.11/copy.py", line 211, in <listcomp>
RecursionError: maximum recursion depth exceeded
Traceback (most recent call last):
File "src/dependency_injector/providers.pyx", line 4920, in dependency_injector.providers.deepcopy
File "/usr/lib/python3.11/copy.py", line 146, in deepcopy
y = copier(x, memo)
^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/copy.py", line 211, in _deepcopy_tuple
y = [deepcopy(a, memo) for a in x]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/copy.py", line 211, in <listcomp>
y = [deepcopy(a, memo) for a in x]
^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/copy.py", line 153, in deepcopy
y = copier(memo)
^^^^^^^^^^^^
File "src/dependency_injector/providers.pyx", line 1044, in dependency_injector.providers.DependenciesContainer.__deepcopy__
File "src/dependency_injector/providers.pyx", line 5051, in dependency_injector.providers._memorized_duplicate
File "src/dependency_injector/providers.pyx", line 1034, in dependency_injector.providers.DependenciesContainer.__init__
File "src/dependency_injector/providers.pyx", line 461, in dependency_injector.providers.Object.__init__
File "src/dependency_injector/providers.pyx", line 211, in dependency_injector.providers.Provider.__init__
RecursionError: maximum recursion depth exceeded while calling a Python object
Exception ignored in: 'dependency_injector.providers.Provider._copy_overridings'
Traceback (most recent call last):
File "src/dependency_injector/providers.pyx", line 4920, in dependency_injector.providers.deepcopy
File "/usr/lib/python3.11/copy.py", line 146, in deepcopy
y = copier(x, memo)
^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/copy.py", line 211, in _deepcopy_tuple
y = [deepcopy(a, memo) for a in x]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/copy.py", line 211, in <listcomp>
y = [deepcopy(a, memo) for a in x]
^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/copy.py", line 153, in deepcopy
y = copier(memo)
^^^^^^^^^^^^
File "src/dependency_injector/providers.pyx", line 1044, in dependency_injector.providers.DependenciesContainer.__deepcopy__
File "src/dependency_injector/providers.pyx", line 5051, in dependency_injector.providers._memorized_duplicate
File "src/dependency_injector/providers.pyx", line 1034, in dependency_injector.providers.DependenciesContainer.__init__
File "src/dependency_injector/providers.pyx", line 461, in dependency_injector.providers.Object.__init__
File "src/dependency_injector/providers.pyx", line 211, in dependency_injector.providers.Provider.__init__
RecursionError: maximum recursion depth exceeded while calling a Python object | open | 2024-08-12T09:56:57Z | 2024-08-28T07:00:51Z | https://github.com/ets-labs/python-dependency-injector/issues/811 | [] | AlexandrIllarionov | 2 |
Miserlou/Zappa | flask | 1,358 | Zip file - Windows paths |
## Context
When deploying a Django app (over 50mb) from a Windows 10 machine the tarball retains the Windows directory separators '\\\\', when deployed to Lambda this causes the error "No module named 'django.core.wsgi': ModuleNotFoundError"
## Expected Behavior
1. tarball should keep Unix directory separators
## Actual Behavior
1. tarball retains Windows directory separators
## Possible Fix
In core.py, line 683 can be replaced with:
`tarinfo = tarfile.TarInfo(posixpath.join(root.replace(temp_project_path, '').lstrip(os.sep).replace('\\', '/'), filename))`
Which fixed it for me but is quite hacky and probably not that robust.
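A somewhat less hacky variant (a sketch, assuming the same inputs as the `core.py` line above; `PureWindowsPath` is used here only so the example also runs off-Windows — in place, `PurePath` on the native paths would do) is to let `pathlib` normalize the separators:

```python
import posixpath
from pathlib import PureWindowsPath

def tar_member_name(root, temp_project_path, filename):
    # Relative part of `root` under the temp project dir, emitted with forward slashes
    rel = PureWindowsPath(root).relative_to(temp_project_path)
    return posixpath.join(rel.as_posix(), filename)

print(tar_member_name(r"C:\tmp\proj\pkg\sub", r"C:\tmp\proj", "mod.py"))  # pkg/sub/mod.py
```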
## Steps to Reproduce
1.` zappa deploy dev ` on Windows 10 machine with app over 50mb
## Your Environment
* Zappa version used: 0.45.1
* Operating System and Python version: Windows 10, Python 3.6
* Your `zappa_settings.py`:
```
{
"dev": {
"aws_region": "us-east-2",
"django_settings": "<redacted>",
"profile_name": "default",
"project_name": "<redacted>",
"runtime": "python3.6",
"s3_bucket": "<redacted>",
"exclude": ["*.env", "*.jpg", "*.png", "media*", "archive*", "node_*", ],
"slim_handler": true,
"timeout_seconds": 300,
}
}
```
| open | 2018-01-22T19:57:03Z | 2019-07-11T19:11:10Z | https://github.com/Miserlou/Zappa/issues/1358 | [
"bug",
"windows",
"easy-fix"
] | pgpgpg | 2 |
pyppeteer/pyppeteer | automation | 375 | Title: Test fail. | I have been working to build and test this package for amd64 and arm64 architectures. While testing this package with “tox”, I am getting errors from the **poetry install** command for both architectures; it fails to install m2r (0.2.1) and shows the error below.
**Error:**
```
Command ['/pyppeteer/venv38/bin/pip', 'install', '--no-deps', 'file:///root/.cache/pypoetry/artifacts/4f/27/1c/8c4108008bcf8c4bb68f981912836a9bdefd1e6b91408a2291566af273/m2r-0.2.1.tar.gz'] errored with the following return code 1, and output:
Processing /root/.cache/pypoetry/artifacts/4f/27/1c/8c4108008bcf8c4bb68f981912836a9bdefd1e6b91408a2291566af273/m2r-0.2.1.tar.gz
Preparing metadata (setup.py): started
Preparing metadata (setup.py): finished with status 'error'
error: subprocess-exited-with-error
× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [8 lines of output]
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "<pip-setuptools-caller>", line 34, in <module>
File "/tmp/pip-req-build-q_b8pahy/setup.py", line 14, in <module>
from m2r import parse_from_file
File "/tmp/pip-req-build-q_b8pahy/m2r.py", line 59, in <module>
class RestBlockGrammar(mistune.BlockGrammar):
AttributeError: module 'mistune' has no attribute 'BlockGrammar'
[end of output]
note: This error originates from a subprocess and is likely not a problem with pip.
error: metadata-generation-failed
```
- I followed this [issue](https://github.com/Tribler/tribler/issues/6624#issue-1072155143) and changed the mistune version from 2.0.0 to 0.8.4, now getting the RuntimeError for mistune.
- I found another [issue](https://github.com/miyakogi/m2r/issues/66#issuecomment-988617920) for m2r; it looks like that repository is not being maintained anymore.
- I also installed m2r using the `apt-get -y install m2r` command; it installed m2r (0.2.1-3) successfully. After editing the m2r version mentioned in the poetry.lock file (0.2.1 to 0.2.1-3), it shows a RuntimeError:
`Error: Unable to find installation candidates for m2r (0.2.1-3)`
**Error Log for your reference:** [pyppeteer_test_result.txt](https://github.com/pyppeteer/pyppeteer/files/8423754/pyppeteer_test_result.txt)
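One note that may help: 0.2.1-3 is a Debian package version string, not a PyPI release, so Poetry has no matching installation candidate to fetch — hand-editing `poetry.lock` this way can't work. If pinning is acceptable, the usual route is to put the constraint in `pyproject.toml` (mistune's 2.x line removed the `BlockGrammar` API that m2r 0.2.x uses) and regenerate the lock, e.g. (a sketch — the exact bound is an assumption):

```toml
# pyproject.toml fragment — keep mistune on the 0.8.x line that m2r 0.2.x expects
[tool.poetry.dependencies]
mistune = "<2.0"
```

followed by `poetry lock` and `poetry install`.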
Could you please provide your feedback regarding this. | open | 2022-04-06T04:59:32Z | 2022-05-06T08:32:09Z | https://github.com/pyppeteer/pyppeteer/issues/375 | [] | odidev | 3 |
RobertCraigie/prisma-client-py | asyncio | 850 | Option to generate camelcase python methods/function names. | ## Problem
Interacting with models named `MyModule` is done via `db.mymodule`. I'd like to ask for an option to generate `db.my_module` instead.
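For what it's worth, the rename such an option would perform is a standard camel-to-snake conversion; a sketch of the mapping (plain Python, not the actual client generator):

```python
import re

def camel_to_snake(name):
    # "MyModule" -> "my_module", "HTTPServer" -> "http_server"
    s1 = re.sub(r"(.)([A-Z][a-z]+)", r"\1_\2", name)
    return re.sub(r"([a-z0-9])([A-Z])", r"\1_\2", s1).lower()

print(camel_to_snake("MyModule"))    # my_module
print(camel_to_snake("HTTPServer"))  # http_server
```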
| open | 2023-12-04T10:49:33Z | 2024-05-17T14:44:47Z | https://github.com/RobertCraigie/prisma-client-py/issues/850 | [] | marcovc | 1 |
autogluon/autogluon | data-science | 4,439 | [publications] awesome.md vs readme.md, paper proposal: https://arxiv.org/abs/2408.14817 | ### Describe the issue linked to the documentation
1. Today I found an interesting paper that cites AutoGluon a lot: https://arxiv.org/abs/2408.14817. The question is that it is not clear which section it fits in: readme or awesome?
2. It may be right in front of my eyes, but I just can't see readme.md and awesome.md content on https://auto.gluon.ai/stable/index.html
### Suggest a potential alternative/fix
_No response_ | open | 2024-08-28T07:01:24Z | 2024-08-29T05:51:40Z | https://github.com/autogluon/autogluon/issues/4439 | [
"API & Doc",
"enhancement",
"priority: 0"
] | mglowacki100 | 3 |
erdewit/ib_insync | asyncio | 378 | Symbol Name Pending Order | Hi,
First, thank you for the great lib!
Can you tell me how I can get the symbol name of my pending orders? I found reqAllOpenOrders but I don't get the symbol name :(
Thanks.
Regards
Ludo.
| closed | 2021-05-31T16:34:18Z | 2021-06-11T11:23:24Z | https://github.com/erdewit/ib_insync/issues/378 | [] | LinuxpowerLudo | 4 |
deezer/spleeter | deep-learning | 443 | [Bug] Command for evaluating metrics from a trained (downloaded) model | Hi! I try to evaluate the metrics of the pre-trained model using the command `spleeter evaluate -p spleeter:4stems --mus_dir /home/wuxuechao/spleeter/musdb18_stem/ -o eval_output` but get a `nan` result. I have downloaded the musdb dataset and run the training stage successfully, but I still can't get the results.
By the way, I found that the `_MIXTURE = 'mixture.wav'` assignment at line 46 of the `spleeter/commands/evaluate.py` file is a wrong path for the musdb dataset; maybe there is some misunderstanding in this place?
Thanks a lot if there is any help. | open | 2020-07-07T13:18:51Z | 2020-07-07T13:19:29Z | https://github.com/deezer/spleeter/issues/443 | [
"bug",
"invalid"
] | dakenan1 | 0 |
aimhubio/aim | data-visualization | 2,433 | Add a community discord link in the sidebar | ## 🚀 Feature
Add a community discord link in the `Sidebar`
### Motivation
Provide users the ability to easily navigate to the `Aim discord` community channel from the `Sidebar`
### Pitch
Display a link to the discord with an icon in the `Sidebar`.
| closed | 2022-12-15T11:37:57Z | 2023-01-31T11:13:59Z | https://github.com/aimhubio/aim/issues/2433 | [
"type / enhancement",
"area / Web-UI",
"phase / shipped"
] | arsengit | 0 |
graphql-python/graphene-django | graphql | 671 | Feature request: way to require only_fields | There doesn't seem to be a way to enforce `only_fields` on `DjangoObjectType`s. This is a serious security issue, since all fields default to being accessible, including Django's automatically created reverse relation fields.
I tried to introspect this value, but it gets erased at class creation time. We only end up with `MyType._meta.fields`, which is a value computed from several inputs including `only_fields`. Possible solutions:
- Copy `only_fields` onto `_meta`
- Don't delete the `Meta` attribute from the class in `SubclassWithMeta`
- Official support for requiring `only_fields`, in the form of a configuration setting
Related: #516 | closed | 2019-06-11T15:16:42Z | 2020-01-29T15:41:01Z | https://github.com/graphql-python/graphene-django/issues/671 | [
"wontfix"
] | reverie | 9 |
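A sketch of what "official support for requiring `only_fields`" could check at class-creation time: compare the fields a type actually exposes against the declared allow-list and surface anything extra. `undeclared_fields` is a hypothetical helper, not graphene-django API; a strict mode could raise whenever the returned list is non-empty.

```python
def undeclared_fields(exposed_fields, only_fields):
    """Return fields a type exposes beyond its declared allow-list (sketch)."""
    return sorted(set(exposed_fields) - set(only_fields))

# A reverse relation like 'posts' leaking past the allow-list would be caught:
assert undeclared_fields(["id", "name"], ["id", "name", "email"]) == []
assert undeclared_fields(["id", "name", "posts"], ["id", "name"]) == ["posts"]
```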
lukas-blecher/LaTeX-OCR | pytorch | 118 | macOS with pixel doubling causes an ImageGrab error | On macOS, turning on pixel doubling causes a difference between the real resolution and the displayed resolution, which affects ImageGrab's screenshots.
For me, my monitor's real resolution is 3840x2160; after pixel doubling it becomes 1920x1080, so using `ImageGrab.grab(bbox=(x1, y1, x2, y2))` caused `PIL.UnidentifiedImageError`.
To solve the problem,
1. Check your monitor's real resolution. (For me is 3840x2160)
2. Check your displayed resolution. (For me is 1920x1080)
```python
import tkinter as tk
root = tk.Tk()
print(root.winfo_screenwidth())
print(root.winfo_screenheight())
root.destroy()
```
3. Use `ImageGrab.grab()` for a whole-screen screenshot, then crop it.
`gui.py`, line 266
```python
factor = 3840 / 1920  # replace with your real resolution / displayed resolution
img = ImageGrab.grab()
img = img.crop((x1*factor, y1*factor, x2*factor, y2*factor))
``` | closed | 2022-04-11T11:22:25Z | 2022-04-15T08:19:16Z | https://github.com/lukas-blecher/LaTeX-OCR/issues/118 | [
"bug",
"macOS"
] | backtraxe | 3 |
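The coordinate scaling in the workaround above reduces to a small pure-arithmetic helper. `scale_bbox` is a hypothetical name; the factor is the real horizontal resolution divided by the displayed one, exactly as in step 3:

```python
def scale_bbox(bbox, real_res, displayed_res):
    """Map a bbox from logical (displayed) coordinates to physical pixels."""
    factor = real_res[0] / displayed_res[0]
    x1, y1, x2, y2 = bbox
    return (int(x1 * factor), int(y1 * factor), int(x2 * factor), int(y2 * factor))

# A 3840x2160 panel shown at 1920x1080 doubles every coordinate:
assert scale_bbox((10, 10, 20, 20), (3840, 2160), (1920, 1080)) == (20, 20, 40, 40)
```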
pandas-dev/pandas | pandas | 60,692 | ENH: Make pd.Timestamp.astimezone() default to local timezone | ### Pandas version checks
- [X] I have checked that this issue has not already been reported.
- [X] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [ ] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```py
import pandas as pd
pd.Timestamp.now().astimezone()
```
However, for datetime objects this is no problem, it uses the local timezone as default:
```py
from datetime import datetime
datetime.now().astimezone()
```
### Issue Description
It would be great if `Timestamp.astimezone()` would work like it does for the original `datetime`, so that e.g. a function that accepts a datetime doesn't have to treat pd.Timestamp (which inherits datetime) differently
### Expected Behavior
`Timestamp.astimezone` should work like it does for the original `datetime` (choosing the local timezone)
### Installed Versions
<details>
pandas : 2.2.2
numpy : 1.26.4
pytz : 2022.7.1
dateutil : 2.8.2
Cython : 3.0.11
pytest : 7.4.0
</details>
| open | 2025-01-11T12:06:50Z | 2025-01-27T22:07:39Z | https://github.com/pandas-dev/pandas/issues/60692 | [
"Enhancement",
"API Design",
"Needs Discussion",
"Closing Candidate",
"Localization"
] | powellnorma | 2 |
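For reference, the stdlib behavior the report asks pandas to match can be demonstrated with `datetime` alone: calling `astimezone()` with no argument attaches or converts to the local zone, never raising.

```python
from datetime import datetime, timezone

naive = datetime(2025, 1, 11, 12, 0, 0)   # no tzinfo attached
aware = naive.astimezone()                # stdlib default: interpret as local time, attach local zone
assert aware.tzinfo is not None

utc_noon = datetime(2025, 1, 11, 12, 0, tzinfo=timezone.utc)
local = utc_noon.astimezone()             # convert to local zone, again with no argument
assert local == utc_noon                  # same instant, different zone
```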
flasgger/flasgger | api | 266 | uiversion 3 with blank url after initializing default data | I overwrite the default data with the following code:
```
# config.py
template = {
"swagger": "2.0",
"info": {
"title": "KG Service API & Algorithm API",
"description": "API for Knowledge Hub & Algorithm Hub",
"contact": {
"responsibleOrganization": "ME",
"responsibleDeveloper": "Me",
"email": "me@me.com",
"url": "www.privacy.com",
},
"termsOfService": "",
"version": "0.0.1"
},
"host": "mysite.com", # overrides localhost:500
"basePath": "/", # base bash for blueprint registration
"schemes": [
"http"
],
"operationId": "gskg_service_api"
}
# __init__.py
from flasgger import Flasgger
flask_swagger = Flasgger()
flask_swagger.init_app(app)
app.swag.template = template
```
Because I use the factory pattern, I set the template with `app.swag.template` after looking up the source code.
The appearance on uiversion 3 looks strange, as the screenshot below shows.
The problem is that the `the developer - Website` link points to a blank page instead of `www.privacy.com`.
<img width="623" alt="screen shot 2018-11-21 at 11 29 45 pm" src="https://user-images.githubusercontent.com/12616602/48851292-9d784a00-ede5-11e8-88f2-65fc585ecaad.png">
On version 2, it looks good
<img width="618" alt="screen shot 2018-11-21 at 11 40 27 pm" src="https://user-images.githubusercontent.com/12616602/48851798-dfee5680-ede6-11e8-981e-2e7c95847175.png">
I suppose this is a problem in the frontend. Please help; I like the more modern uiversion 3 style. | closed | 2018-11-21T15:45:15Z | 2020-06-16T07:12:12Z | https://github.com/flasgger/flasgger/issues/266 | [] | huanghe314 | 1 |
tiangolo/uwsgi-nginx-flask-docker | flask | 23 | Add additional Alpine base image for Python 2.7 | Add Alpine base image for Python 2.7
Some users are fans of Alpine Linux, so it would be nice to have an additional base image based on Alpine.
This would depend on: https://github.com/tiangolo/uwsgi-nginx-docker/issues/10 being solved first. | closed | 2017-09-30T15:39:54Z | 2018-01-15T10:18:02Z | https://github.com/tiangolo/uwsgi-nginx-flask-docker/issues/23 | [
"Hacktoberfest"
] | tiangolo | 1 |
influxdata/influxdb-client-python | jupyter | 533 | Query stops when encountering a field containing a string (query_csv) | ### Specifications
* Client Version: 1.30.00
* InfluxDB Version: 2.4
* Platform: Windows
### Code sample to reproduce problem
```python
from influxdb_client import InfluxDBClient
# You can generate a Token from the "Tokens Tab" in the UI
token = token
org = "my-org"
bucket = "bucket"
with InfluxDBClient(url="http://192.168.1.1:8086", token=token, org=org) as client:
query = """option v = {timeRangeStart: -1m, timeRangeStop: now()}
from(bucket: "bucket") |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
|> filter(fn: (r) => r["_measurement"] == "computer1")
|> filter(fn: (r) => r["device"] == "device1")"""
csv = client.query_api().query_csv(query, org=org)
input_list = list(csv)
for row in input_list:
if(len(row) < 2 ):
break
print(row)
```
### Expected behavior
When I query this online, I get 9 tables. 7 of them have doubles/ints as `_value` and 2 have strings. The same data is returned when I query using Python with the following code:
tables = client.query_api().query(query, org=org)
for table in tables:
for record in table.records:
print(record)
### Actual behavior
However, when I query with
client.query_api().query_csv(query, org=org)
I get only 7 tables, without any error; the two string-valued tables are missing.
### Additional info
_No response_ | closed | 2022-11-24T15:31:54Z | 2022-12-07T14:21:07Z | https://github.com/influxdata/influxdb-client-python/issues/533 | [
"enhancement"
] | Olgidos | 11 |
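When post-processing CSV results yourself, one way to keep string-valued tables visible is to group rows by the `table` column with the stdlib `csv` module. This is an illustration under simplified assumptions (real Flux annotated CSV also carries `#datatype`/`#group` annotation rows), not the client's actual parser:

```python
import csv, io

SAMPLE = """table,_field,_value
0,temperature,21.5
0,temperature,21.7
1,status,ok
"""

def split_tables(text):
    """Group rows of a (simplified) Flux result by their 'table' column."""
    tables = {}
    for row in csv.DictReader(io.StringIO(text)):
        tables.setdefault(row["table"], []).append(row)
    return tables

tables = split_tables(SAMPLE)
assert set(tables) == {"0", "1"}
assert tables["1"][0]["_value"] == "ok"   # string-valued fields survive grouping
```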
quokkaproject/quokka | flask | 12 | recommendation backend based on search | closed | 2013-08-11T20:11:55Z | 2018-02-06T13:46:25Z | https://github.com/quokkaproject/quokka/issues/12 | [
"enhancement",
"MEDIUM",
"ready"
] | rochacbruno | 2 | |
tensorflow/tensor2tensor | deep-learning | 1,455 | AttributeError: 'NoneType' object has no attribute 'startswith' when using t2t_decoder | How do I fix this?
(magenta) usuario@Strix:~/Escritorio/train$ t2t_decoder --decode_hparams="${DECODE_HPARAMS}" --decode_interactive --hparams="sampling_method=random" --hparams_set=${HPARAMS_SET} --model=${MODEL} --problem=${PROBLEM} --output_dir=${TRAIN_DIR}
/home/usuario/.conda/envs/magenta/lib/python2.7/site-packages/sklearn/linear_model/base.py:35: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88
from ..utils.seq_dataset import ArrayDataset, CSRDataset
/home/usuario/.conda/envs/magenta/lib/python2.7/site-packages/sklearn/linear_model/least_angle.py:23: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88
from ..utils import arrayfuncs, as_float_array, check_X_y, deprecated
/home/usuario/.conda/envs/magenta/lib/python2.7/site-packages/sklearn/utils/random.py:10: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88
from ._random import sample_without_replacement
/home/usuario/.conda/envs/magenta/lib/python2.7/site-packages/sklearn/linear_model/coordinate_descent.py:30: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88
from . import cd_fast
/home/usuario/.conda/envs/magenta/lib/python2.7/site-packages/sklearn/linear_model/__init__.py:22: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88
from .sgd_fast import Hinge, Log, ModifiedHuber, SquaredLoss, Huber
/home/usuario/.conda/envs/magenta/lib/python2.7/site-packages/sklearn/linear_model/__init__.py:22: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88
from .sgd_fast import Hinge, Log, ModifiedHuber, SquaredLoss, Huber
/home/usuario/.conda/envs/magenta/lib/python2.7/site-packages/sklearn/linear_model/sag.py:12: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88
from .sag_fast import sag
/home/usuario/.conda/envs/magenta/lib/python2.7/site-packages/sklearn/svm/base.py:8: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88
from . import libsvm, liblinear
/home/usuario/.conda/envs/magenta/lib/python2.7/site-packages/sklearn/svm/base.py:8: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88
from . import libsvm, liblinear
/home/usuario/.conda/envs/magenta/lib/python2.7/site-packages/sklearn/svm/base.py:9: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88
from . import libsvm_sparse
/home/usuario/.conda/envs/magenta/lib/python2.7/site-packages/sklearn/neighbors/__init__.py:6: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88
from .ball_tree import BallTree
/home/usuario/.conda/envs/magenta/lib/python2.7/site-packages/sklearn/neighbors/__init__.py:6: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88
from .ball_tree import BallTree
/home/usuario/.conda/envs/magenta/lib/python2.7/site-packages/sklearn/neighbors/__init__.py:6: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88
from .ball_tree import BallTree
/home/usuario/.conda/envs/magenta/lib/python2.7/site-packages/sklearn/neighbors/__init__.py:7: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88
from .kd_tree import KDTree
/home/usuario/.conda/envs/magenta/lib/python2.7/site-packages/sklearn/decomposition/online_lda.py:28: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88
from ._online_lda import (mean_change, _dirichlet_expectation_1d,
/home/usuario/.conda/envs/magenta/lib/python2.7/site-packages/sklearn/utils/graph.py:16: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88
from .graph_shortest_path import graph_shortest_path # noqa
/home/usuario/.conda/envs/magenta/lib/python2.7/site-packages/sklearn/isotonic.py:11: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88
from ._isotonic import _inplace_contiguous_isotonic_regression, _make_unique
/home/usuario/.conda/envs/magenta/lib/python2.7/site-packages/sklearn/manifold/t_sne.py:26: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88
from . import _utils
/home/usuario/.conda/envs/magenta/lib/python2.7/site-packages/sklearn/manifold/t_sne.py:27: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88
from . import _barnes_hut_tsne
/home/usuario/.conda/envs/magenta/lib/python2.7/site-packages/sklearn/manifold/t_sne.py:27: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88
from . import _barnes_hut_tsne
/home/usuario/.conda/envs/magenta/lib/python2.7/site-packages/sklearn/tree/tree.py:40: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88
from ._criterion import Criterion
/home/usuario/.conda/envs/magenta/lib/python2.7/site-packages/sklearn/tree/tree.py:40: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88
from ._criterion import Criterion
/home/usuario/.conda/envs/magenta/lib/python2.7/site-packages/sklearn/tree/tree.py:40: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88
from ._criterion import Criterion
/home/usuario/.conda/envs/magenta/lib/python2.7/site-packages/sklearn/tree/tree.py:40: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88
from ._criterion import Criterion
/home/usuario/.conda/envs/magenta/lib/python2.7/site-packages/sklearn/cluster/k_means_.py:37: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88
from . import _k_means
/home/usuario/.conda/envs/magenta/lib/python2.7/site-packages/sklearn/cluster/k_means_.py:38: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88
from ._k_means_elkan import k_means_elkan
/home/usuario/.conda/envs/magenta/lib/python2.7/site-packages/sklearn/cluster/hierarchical.py:23: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88
from . import _hierarchical
/home/usuario/.conda/envs/magenta/lib/python2.7/site-packages/sklearn/cluster/hierarchical.py:23: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88
from . import _hierarchical
/home/usuario/.conda/envs/magenta/lib/python2.7/site-packages/sklearn/cluster/dbscan_.py:20: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88
from ._dbscan_inner import dbscan_inner
/home/usuario/.conda/envs/magenta/lib/python2.7/site-packages/sklearn/feature_extraction/hashing.py:14: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88
from ._hashing import transform as _hashing_transform
Traceback (most recent call last):
File "/home/usuario/.conda/envs/magenta/bin/t2t_decoder", line 10, in
sys.exit(console_entry_point())
File "/home/usuario/.conda/envs/magenta/lib/python2.7/site-packages/magenta/tensor2tensor/t2t_decoder.py", line 34, in console_entry_point
tf.app.run(main)
File "/home/usuario/.conda/envs/magenta/lib/python2.7/site-packages/tensorflow/python/platform/app.py", line 125, in run
_sys.exit(main(argv))
File "/home/usuario/.conda/envs/magenta/lib/python2.7/site-packages/magenta/tensor2tensor/t2t_decoder.py", line 29, in main
t2t_decoder.main(argv)
File "/home/usuario/.conda/envs/magenta/lib/python2.7/site-packages/tensor2tensor/bin/t2t_decoder.py", line 182, in main
hp = create_hparams()
File "/home/usuario/.conda/envs/magenta/lib/python2.7/site-packages/tensor2tensor/bin/t2t_decoder.py", line 67, in create_hparams
data_dir=os.path.expanduser(FLAGS.data_dir),
File "/home/usuario/.conda/envs/magenta/lib/python2.7/posixpath.py", line 254, in expanduser
if not path.startswith('~'):
AttributeError: 'NoneType' object has no attribute 'startswith' | open | 2019-02-19T00:51:24Z | 2019-02-24T22:05:06Z | https://github.com/tensorflow/tensor2tensor/issues/1455 | [] | aletote | 1 |
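The traceback bottoms out in `os.path.expanduser(FLAGS.data_dir)` with `data_dir` unset, i.e. `None`. A tiny sketch of the failure mode with a hypothetical guard (the practical fix is simply passing `--data_dir` on the command line):

```python
import os

def safe_expanduser(path):
    # os.path.expanduser(None) raises AttributeError:
    # 'NoneType' object has no attribute 'startswith'
    if path is None:
        return None
    return os.path.expanduser(path)

assert safe_expanduser(None) is None
assert safe_expanduser("data").endswith("data")
```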
deepinsight/insightface | pytorch | 1,740 | Can you help me, I have an error! Please! | When I start
!CUDA_VISIBLE_DEVICES='0' python -u train_softmax.py --network r100 --loss arcface --dataset emore
I get this error:
config.imageshape: 3
prefix ./models/r100-arcface-emore/model
image_size [112, 112]
num_classes 4
Called with argument: Namespace(batch_size=16, ckpt=2, ctx_num=1, dataset='emore', frequent=20, image_channel=3, kvstore='device', loss='arcface', lr=0.01, lr_steps='100000,160000,220000', models_root='./models', mom=0.9, network='r100', per_batch_size=16, pretrained='../models/arcface_r100_v1/model', pretrained_epoch=0, rescale_threshold=0, verbose=2000, wd=0.0005) {'bn_mom': 0.9, 'workspace': 256, 'emb_size': 512, 'ckpt_embedding': True, 'net_se': 0, 'net_act': 'prelu', 'net_unit': 3, 'net_input': 1, 'net_blocks': [1, 4, 6, 2], 'net_output': 'E', 'net_multiplier': 1.0, 'val_targets': ['lfw', 'cfp_fp', 'agedb_30'], 'ce_loss': True, 'fc7_lr_mult': 1.0, 'fc7_wd_mult': 1.0, 'fc7_no_bias': False, 'max_steps': 0, 'data_rand_mirror': True, 'data_cutoff': False, 'data_color': 0, 'data_images_filter': 0, 'count_flops': True, 'memonger': False, 'loss_name': 'margin_softmax', 'loss_s': 64.0, 'loss_m1': 1.0, 'loss_m2': 0.5, 'loss_m3': 0.0, 'net_name': 'fresnet', 'num_layers': 100, 'dataset': 'emore', 'dataset_path': '../src/data/dataset/4sv', 'num_classes': 4, 'image_shape': [112, 112, 3], 'loss': 'arcface', 'network': 'r100', 'num_workers': 1, 'batch_size': 16, 'per_batch_size': 16}
loading ../models/arcface_r100_v1/model 0
[04:50:10] src/nnvm/legacy_json_util.cc:209: Loading symbol saved by previous version v1.0.0. Attempting to upgrade...
[04:50:10] src/nnvm/legacy_json_util.cc:217: Symbol successfully upgraded!
0 1 E 3 prelu False
config.loss_name: margin_softmax
config.loss_s: 64.0
Network FLOPs: 24.2G
mx.gpu(): gpu(0)
triplet <= 0.
INFO:root:loading recordio ../src/data/dataset/4sv/train.rec...
loading idx... ../src/data/dataset/4sv/train.idx
Traceback (most recent call last):
File "train_softmax.py", line 480, in <module>
main()
File "train_softmax.py", line 476, in main
train_net(args)
File "train_softmax.py", line 332, in train_net
images_filter=config.data_images_filter
File "/content/drive/My Drive/khoaluantn/recognition/image_iter.py", line 40, in __init__
s = self.imgrec.read_idx(0)
File "/usr/local/lib/python3.7/dist-packages/mxnet/recordio.py", line 317, in read_idx
self.seek(idx)
File "/usr/local/lib/python3.7/dist-packages/mxnet/recordio.py", line 279, in seek
pos = ctypes.c_size_t(self.idx[idx])
KeyError: 0 | open | 2021-09-04T04:52:06Z | 2021-09-04T09:13:00Z | https://github.com/deepinsight/insightface/issues/1740 | [] | HauTC-DevXamarin-MAUI | 1 |
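The `KeyError: 0` comes from `read_idx(0)`: the `.idx` file evidently has no entry for key 0 (insightface's reader expects a special record at index 0, which suggests the `.rec` was likely built without one). The `.idx` sidecar is plain text, one `key<TAB>offset` pair per line; a small sketch (`parse_idx` is a hypothetical helper) shows how a file starting at key 1 reproduces the error:

```python
def parse_idx(text):
    """Parse the textual MXNet .idx format: one 'key<TAB>offset' pair per line (sketch)."""
    idx = {}
    for line in text.strip().splitlines():
        key, offset = line.split("\t")
        idx[int(key)] = int(offset)
    return idx

idx = parse_idx("1\t0\n2\t5432\n3\t10864")
assert 0 not in idx   # read_idx(0) on such a file raises KeyError: 0
```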
axnsan12/drf-yasg | django | 585 | URL order on the main page | Hi there,
Swagger sorts URLs alphabetically; can I show them in the order of my URL patterns instead? | open | 2020-05-03T15:13:08Z | 2025-03-07T12:14:00Z | https://github.com/axnsan12/drf-yasg/issues/585 | [
"triage"
] | sae13 | 1 |
lucidrains/vit-pytorch | computer-vision | 98 | About the vit pos embedding | Where are the sin and cos embeddings? | closed | 2021-04-28T03:58:03Z | 2021-07-20T05:02:15Z | https://github.com/lucidrains/vit-pytorch/issues/98 | [] | ShiMinghao0208 | 3 |
deezer/spleeter | tensorflow | 145 | [Bug] Illegal instruction (core dumped) | Hello.
When I try to use spleeter, I get this error: Illegal instruction (core dumped)
All installations completed without any errors, but I can't figure out how to solve this. | closed | 2019-11-28T11:13:49Z | 2019-11-29T08:39:05Z | https://github.com/deezer/spleeter/issues/145 | [
"bug",
"invalid",
"wontfix",
"RTMP"
] | stolicamedia | 7 |
PeterL1n/RobustVideoMatting | computer-vision | 51 | [BUG Report] Inference.py | The file inference.py has a small bug.
When I call convert_video as shown below:
```python
convert_video(
    model,                                         # The loaded model, can be on any device (cpu or cuda).
    input_source=input_folder,                     # A video file or an image sequence directory.
    downsample_ratio=None,                         # [Optional] If None, make downsampled max size be 512px.
    output_type='png_sequence',                    # Choose "video" or "png_sequence"
    output_composition=output_folder+'/com',       # File path if video; directory path if png sequence.
    output_alpha=output_folder+'/alpha',           # [Optional] Output the raw alpha prediction.
    output_foreground=output_folder+'/foreground', # [Optional] Output the raw foreground prediction.
    # output_video_mbps=4,                         # Output video mbps. Not needed for png sequence.
    seq_chunk=1,                                   # Process n frames at once for better parallelism.
    num_workers=0,                                 # Only for image sequence input. Reader threads.
    progress=True                                  # Print conversion progress.
)
```
it fails with the following error:
```
.cache/torch/hub/PeterL1n_RobustVideoMatting_master/inference_utils.py", line 33, in __init__
    self.container = av.open(path, mode='w')
  File "av/container/core.pyx", line 364, in av.container.core.open
  File "av/container/core.pyx", line 146, in av.container.core.Container.__cinit__
ValueError: Could not determine output format
```
I've traced back to inference.py, and the issue is in lines 104 and 106:
```python
    else:
        if output_composition is not None:
            writer_com = ImageSequenceWriter(output_composition, 'png')
        if output_alpha is not None:
            writer_pha = VideoWriter(output_alpha, 'png')
        if output_foreground is not None:
            writer_fgr = VideoWriter(output_foreground, 'png')
```
It should be:
```python
    else:
        if output_composition is not None:
            writer_com = ImageSequenceWriter(output_composition, 'png')
        if output_alpha is not None:
            writer_pha = ImageSequenceWriter(output_alpha, 'png')
        if output_foreground is not None:
            writer_fgr = ImageSequenceWriter(output_foreground, 'png')
```
| closed | 2021-09-27T21:45:57Z | 2021-09-27T21:56:58Z | https://github.com/PeterL1n/RobustVideoMatting/issues/51 | [] | SamHSlva | 1 |
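The corrected branch boils down to selecting the writer class from `output_type`. A self-contained sketch with stub classes (not the repo's real implementations) shows the routing the fix establishes:

```python
class VideoWriter:
    def __init__(self, path, codec):
        self.path, self.codec = path, codec

class ImageSequenceWriter:
    def __init__(self, path, ext):
        self.path, self.ext = path, ext

def make_writer(path, output_type):
    # 'png_sequence' outputs must go through ImageSequenceWriter; routing them
    # to VideoWriter is what raised "Could not determine output format" in av.
    if output_type == "png_sequence":
        return ImageSequenceWriter(path, "png")
    return VideoWriter(path, "h264")

assert isinstance(make_writer("out/alpha", "png_sequence"), ImageSequenceWriter)
assert isinstance(make_writer("out.mp4", "video"), VideoWriter)
```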
vastsa/FileCodeBox | fastapi | 195 | Uploading a 100 MB file fails with a 'file too large' message | **Describe the bug**
Hello, first of all thank you for your software. I have started trying it out and have already raised my transfer file size limit to 5 GB, but even a file of a bit over 100 MB cannot be sent; an error message appears saying the file is too large. Could you help me check whether something is wrong?

| closed | 2024-08-15T15:06:08Z | 2024-11-29T07:44:18Z | https://github.com/vastsa/FileCodeBox/issues/195 | [] | Andersonong-github | 10 |
fohrloop/dash-uploader | dash | 101 | small file upload: division by zero.. | pre-release version of dash-uploader 0.7.0, trying to upload a small jpeg image:
File "dash_uploader/callbacks.py", line 39, in wrapper
status = UploadStatus(
File "dash_uploader/uploadstatus.py", line 67, in __init__
self.progress = uploaded_size_mb / total_size_mb
>>> ZeroDivisionError: division by zero | closed | 2022-09-19T12:33:41Z | 2025-01-11T12:21:50Z | https://github.com/fohrloop/dash-uploader/issues/101 | [] | afkrause | 1 |
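A sketch of the kind of guard that avoids this crash: when the reported total rounds to zero megabytes, return a completed progress instead of dividing. (`progress_fraction` is a hypothetical helper, not dash-uploader's API.)

```python
def progress_fraction(uploaded_mb, total_mb):
    """Return upload progress in [0, 1], tolerating zero-sized totals."""
    if total_mb == 0:
        # a file smaller than the MB rounding granularity reports total 0;
        # treat the (instant) upload as complete instead of dividing by zero
        return 1.0
    return min(uploaded_mb / total_mb, 1.0)

assert progress_fraction(0, 0) == 1.0
assert progress_fraction(5, 10) == 0.5
```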
AUTOMATIC1111/stable-diffusion-webui | deep-learning | 15,324 | [Bug]: When using "prompt" as the filename, the filename gets truncated. | ### Checklist
- [X] The issue exists after disabling all extensions
- [X] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [X] The issue exists in the current version of the webui
- [ ] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
macOS M2 with webUI 1.8.
I use prompts as file names, but the image file names always get truncated (the total length of the truncated file name is 165 characters, with the remaining prompt being 132 characters). I tried changing `max_filename_part_length = 128` to `max_filename_part_length = 256` in modules/image.py and restarted webUI, but the file names are still being truncated at the same place. Is this a bug or by design?
I often use "Prompts from file or textbox" to generate many images in bulk, and these images will be moved around, so knowing each image's corresponding prompt is very important. Does anyone know how to remove this limitation?
### Steps to reproduce the problem
Every time a image is generated, it happens like this.
### What should have happened?
The file name should be able to display the complete prompts, provided that the file name does not exceed 256 characters.
### What browsers do you use to access the UI ?
Apple Safari
### Sysinfo
{
"Platform": "macOS-14.4-arm64-arm-64bit",
"Python": "3.10.13",
"Version": "v1.8.0",
"Commit": "bef51aed032c0aaa5cfd80445bc4cf0d85b408b5",
"Script path": "/Users/paul/stable-diffusion-webui",
"Data path": "/Users/paul/stable-diffusion-webui",
"Extensions dir": "/Users/paul/stable-diffusion-webui/extensions",
"Checksum": "c93867aa0bf80aa80cc4fbe2d190dade91ed4a1c069f3faa55ba4864794e5c52",
"Commandline": [
"launch.py",
"--listen",
"--no-gradio-queue",
"--skip-torch-cuda-test",
"--upcast-sampling",
"--no-half-vae",
"--use-cpu",
"interrogate"
],
"Torch env info": {
"torch_version": "2.1.0",
"is_debug_build": "False",
"cuda_compiled_version": null,
"gcc_version": null,
"clang_version": "15.0.0 (clang-1500.3.9.4)",
"cmake_version": "version 3.28.3",
"os": "macOS 14.4 (arm64)",
"libc_version": "N/A",
"python_version": "3.10.13 (main, Aug 24 2023, 12:59:26) [Clang 15.0.0 (clang-1500.1.0.2.5)] (64-bit runtime)",
"python_platform": "macOS-14.4-arm64-arm-64bit",
"is_cuda_available": "False",
"cuda_runtime_version": null,
"cuda_module_loading": "N/A",
"nvidia_driver_version": null,
"nvidia_gpu_models": null,
"cudnn_version": null,
"pip_version": "pip3",
"pip_packages": [
"numpy==1.26.2",
"open-clip-torch==2.20.0",
"pytorch-lightning==1.9.4",
"torch==2.1.0",
"torchdiffeq==0.2.3",
"torchmetrics==1.3.1",
"torchsde==0.2.6",
"torchvision==0.16.0"
],
"conda_packages": null,
"hip_compiled_version": "N/A",
"hip_runtime_version": "N/A",
"miopen_runtime_version": "N/A",
"caching_allocator_config": "",
"is_xnnpack_available": "True",
"cpu_info": "Apple M2"
},
"Exceptions": [],
"CPU": {
"model": "arm",
"count logical": 8,
"count physical": 8
},
"RAM": {
"total": "24GB",
"used": "11GB",
"free": "712MB",
"active": "10GB",
"inactive": "10GB"
},
"Extensions": [
{
"name": "sd-webui-controlnet",
"path": "/Users/paul/stable-diffusion-webui/extensions/sd-webui-controlnet",
"version": "aa2aa812",
"branch": "main",
"remote": "https://github.com/Mikubill/sd-webui-controlnet"
}
],
"Inactive extensions": [],
"Environment": {
"COMMANDLINE_ARGS": "--skip-torch-cuda-test --upcast-sampling --no-half-vae --use-cpu interrogate",
"GIT": "git",
"GRADIO_ANALYTICS_ENABLED": "False",
"TORCH_COMMAND": "pip install torch==2.1.0 torchvision==0.16.0"
},
### Console logs
```Shell
################################################################
Install script for stable-diffusion + Web UI
Tested on Debian 11 (Bullseye), Fedora 34+ and openSUSE Leap 15.4 or newer.
################################################################
################################################################
Running on paul user
################################################################
################################################################
Repo already cloned, using it as install directory
################################################################
################################################################
Create and activate python venv
################################################################
################################################################
Launching launch.py...
################################################################
Python 3.10.13 (main, Aug 24 2023, 12:59:26) [Clang 15.0.0 (clang-1500.1.0.2.5)]
Version: v1.8.0
Commit hash: bef51aed032c0aaa5cfd80445bc4cf0d85b408b5
ControlNet init warning: Unable to install insightface automatically. Please try run `pip install insightface` manually.
Launching Web UI with arguments: --listen --no-gradio-queue --skip-torch-cuda-test --upcast-sampling --no-half-vae --use-cpu interrogate
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
Warning: caught exception 'Torch not compiled with CUDA enabled', memory monitor disabled
Traceback (most recent call last):
File "/Users/paul/stable-diffusion-webui/launch.py", line 48, in <module>
main()
File "/Users/paul/stable-diffusion-webui/launch.py", line 44, in main
start()
File "/Users/paul/stable-diffusion-webui/modules/launch_utils.py", line 465, in start
import webui
File "/Users/paul/stable-diffusion-webui/webui.py", line 13, in <module>
initialize.imports()
File "/Users/paul/stable-diffusion-webui/modules/initialize.py", line 39, in imports
from modules import processing, gradio_extensons, ui # noqa: F401
File "/Users/paul/stable-diffusion-webui/modules/processing.py", line 18, in <module>
import modules.sd_hijack
File "/Users/paul/stable-diffusion-webui/modules/sd_hijack.py", line 5, in <module>
from modules import devices, sd_hijack_optimizations, shared, script_callbacks, errors, sd_unet, patches
File "/Users/paul/stable-diffusion-webui/modules/sd_hijack_optimizations.py", line 13, in <module>
from modules.hypernetworks import hypernetwork
File "/Users/paul/stable-diffusion-webui/modules/hypernetworks/hypernetwork.py", line 13, in <module>
from modules import devices, sd_models, shared, sd_samplers, hashes, sd_hijack_checkpoint, errors
File "/Users/paul/stable-diffusion-webui/modules/sd_samplers.py", line 1, in <module>
from modules import sd_samplers_kdiffusion, sd_samplers_timesteps, sd_samplers_lcm, shared
File "/Users/paul/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 4, in <module>
from modules import sd_samplers_common, sd_samplers_extra, sd_samplers_cfg_denoiser
File "/Users/paul/stable-diffusion-webui/modules/sd_samplers_common.py", line 6, in <module>
from modules import devices, images, sd_vae_approx, sd_samplers, sd_vae_taesd, shared, sd_models
File "/Users/paul/stable-diffusion-webui/modules/images.py", line 692
oversize = image.width > opts.target_side_length or image.height > opts.target_side_length
IndentationError: unexpected indent
paul@zhiguangs-Mac-mini stable-diffusion-webui % ./webui.sh --listen --no-gradio-queue
################################################################
Install script for stable-diffusion + Web UI
Tested on Debian 11 (Bullseye), Fedora 34+ and openSUSE Leap 15.4 or newer.
################################################################
################################################################
Running on paul user
################################################################
################################################################
Repo already cloned, using it as install directory
################################################################
################################################################
Create and activate python venv
################################################################
################################################################
Launching launch.py...
################################################################
Python 3.10.13 (main, Aug 24 2023, 12:59:26) [Clang 15.0.0 (clang-1500.1.0.2.5)]
Version: v1.8.0
Commit hash: bef51aed032c0aaa5cfd80445bc4cf0d85b408b5
ControlNet init warning: Unable to install insightface automatically. Please try run `pip install insightface` manually.
Launching Web UI with arguments: --listen --no-gradio-queue --skip-torch-cuda-test --upcast-sampling --no-half-vae --use-cpu interrogate
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
Warning: caught exception 'Torch not compiled with CUDA enabled', memory monitor disabled
==============================================================================
You are running torch 2.1.0.
The program is tested to work with torch 2.1.2.
To reinstall the desired version, run with commandline flag --reinstall-torch.
Beware that this will cause a lot of large files to be downloaded, as well as
there are reports of issues with training tab on the latest version.
Use --skip-version-check commandline argument to disable this check.
==============================================================================
ControlNet preprocessor location: /Users/paul/stable-diffusion-webui/extensions/sd-webui-controlnet/annotator/downloads
2024-03-19 19:31:45,549 - ControlNet - INFO - ControlNet v1.1.441
2024-03-19 19:31:45,596 - ControlNet - INFO - ControlNet v1.1.441
```
### Additional information
_No response_ | closed | 2024-03-19T12:02:51Z | 2024-03-20T03:01:15Z | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/15324 | [
"bug-report"
] | wangpao | 3 |
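Independent of webui's own `max_filename_part_length`, most filesystems cap a single filename at 255 bytes, so some truncation is unavoidable for very long prompts. A standalone sketch (not webui code) of clamping by bytes while keeping the extension:

```python
def clamp_filename(stem, ext=".png", max_bytes=255):
    """Trim a prompt-derived stem so stem+ext fits a typical 255-byte filename cap."""
    room = max_bytes - len(ext.encode("utf-8"))
    # slice by bytes, then drop any partially cut multibyte character
    trimmed = stem.encode("utf-8")[:room].decode("utf-8", errors="ignore")
    return trimmed + ext

name = clamp_filename("a" * 300)
assert len(name.encode("utf-8")) <= 255 and name.endswith(".png")
```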
BeanieODM/beanie | asyncio | 236 | Activating validate_on_save without defining an id field errors | Creating a document without defining an `id` field appears to generate the following error during the save of a new document when `validate_on_save` is true in the settings.
```
E pydantic.error_wrappers.ValidationError: 1 validation error for TestDocument
E id
E none is not an allowed value (type=type_error.none.not_allowed)
```
I don't think it's directly relevant, but for additional context: the document is defined as an extension of an existing pydantic model, so we don't duplicate the common field definitions; we also don't define an `id` field, since we won't ever use it directly with this data and want to let the db handle it.
Likely related to #202 but different error / circumstances, so creating new issue.
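For what it's worth, the error looks consistent with the (not-yet-assigned) `id` being validated as a required field before the database assigns one. A minimal pydantic-only sketch of the required-vs-optional difference (model names are mine; this is not Beanie's internal model, and pydantic is assumed installed):

```python
from typing import Optional

from pydantic import BaseModel, ValidationError


class WithRequiredId(BaseModel):
    id: int


class WithOptionalId(BaseModel):
    id: Optional[int] = None


try:
    WithRequiredId(id=None)  # mirrors validating before the db assigns an id
except ValidationError:
    print("required id rejects None")

print(WithOptionalId().id)  # None is accepted once the field is Optional
```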
| closed | 2022-04-11T08:16:34Z | 2024-10-20T13:43:30Z | https://github.com/BeanieODM/beanie/issues/236 | [
"Stale"
] | infinityredux | 4 |
miguelgrinberg/python-socketio | asyncio | 225 | Client cannot connect to server when transports is websocket | Server demo:
```python
from aiohttp import web
import socketio
sio = socketio.AsyncServer(async_mode='aiohttp')
app = web.Application()
sio.attach(app)
async def index(request):
pass
@sio.on('connection')
async def on_connect():
print('client connected')
@sio.on('schedule')
async def ping(sid):
await sio.emit('schedule', {
'data': 'ok'
})
app.router.add_get('/', index)
if __name__ == '__main__':
web.run_app(app)
```
Client code (default transports):
```python
import socketio
sio = socketio.Client()
@sio.on('connect')
def on_connect():
print('connected to server')
sio.emit('schedule')
@sio.on('schedule')
def on_pong(data):
print(data)
if __name__ == '__main__':
sio.connect('http://localhost:8080')
sio.wait()
```
When `transports` is default (`['polling', 'websocket']`), everything is fine, output is expected:
```
connected to server
{'data': 'ok'}
```
But when `transports` is set to `websocket`, the client exits without any message and the server does not receive anything.
```python
sio.connect('http://localhost:8080', transports='websocket')
``` | closed | 2018-12-26T09:05:18Z | 2018-12-26T12:26:01Z | https://github.com/miguelgrinberg/python-socketio/issues/225 | [
"question"
] | sdvcrx | 3 |
cookiecutter/cookiecutter-django | django | 4,778 | python: can't open file '/app/manage.py': [Errno 2] No such file or directory | Hello, I'm writing because I'm experiencing a problem that I've been trying to solve for several days but am unable to overcome using cookiecutter. I'm following all the steps as explained in the documentation and in the few videos I've seen on the internet. I set the parameters with Docker and Postgres 15, but when I run the command:
```shell
docker compose -f local.yml up -d
```
Then the django and docs containers don't launch, and I get the following in the logs:
```shell
Waiting for PostgreSQL to become available... 2024-01-08T18:56:08.078170950Z PostgreSQL is available 2024-01-08T18:56:08.115577061Z python: can't open file '/app/manage.py': [Errno 2] No such file or directory
``` | closed | 2024-01-08T19:04:40Z | 2024-01-08T19:10:09Z | https://github.com/cookiecutter/cookiecutter-django/issues/4778 | [] | phoenixhackt | 0 |
openapi-generators/openapi-python-client | fastapi | 266 | Invalid python generated when using nested dictionaries | **Describe the bug**
Using `openapi-python-client` on a valid FastAPI/pydantic API generates an invalid client library on 0.7.2 when there is a somewhat complex set of nested `Dict` and `Union` types.
**To Reproduce**
Put the following in a main.py
```
from typing import Any, Dict, List, Union
from fastapi import FastAPI
from pydantic import (
BaseModel,
StrictBool,
StrictInt,
StrictFloat,
StrictStr,
)
app = FastAPI()
JSONValue = Union[
Dict[str, Any], List[Any], StrictBool, StrictFloat, StrictInt, StrictStr, None
]
JSONDict = Dict[str, JSONValue]
class ItemMapResource(BaseModel):
items: JSONDict
@app.get("/", response_model=ItemMapResource)
def read_item():
return ItemMapResource(items={})
```
Run it like this:
```
uvicorn main:app --reload --port 4000
```
Generate a client library like this
```
openapi-python-client generate --url http://localhost:4000/openapi.json
```
**Expected behavior**
Should generate a valid Python library.
**Actual behavior**
Generates code with broken indents which fails to execute:
```
@attr.s(auto_attribs=True)
class ItemMapResourceItems:
""" """
additional_properties: Dict[str, Union[ItemMapResourceItemsAdditionalProperty, List[None], bool, float, int, str]] = attr.ib(init=False, factory=dict)
def to_dict(self) -> Dict[str, Any]:
field_dict: Dict[str, Any] = {}
for prop_name, prop in self.additional_properties.items():
if isinstance(prop, ItemMapResourceItemsAdditionalProperty):
field_dict[prop_name] = prop.to_dict()
elif isinstance(prop, List[None]):
field_dict[prop_name] = []
for additional_property_item_data in prop:
additional_property_item = None
field_dict[prop_name].append(additional_property_item)
elif isinstance(prop, bool):
field_dict[prop_name] = prop
elif isinstance(prop, float):
field_dict[prop_name] = prop
elif isinstance(prop, int):
field_dict[prop_name] = prop
else:
field_dict[prop_name] = prop
field_dict.update({
})
return field_dict
```
Note the line `if isinstance(prop, ItemMapResourceItemsAdditionalProperty):` is indented incorrectly.
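To make the failure mode concrete: compiling an equivalently mis-indented fragment (a simplified stand-in for the generated `to_dict`, not the generator's actual output) shows why the module cannot even be imported:

```python
import textwrap

# The first `if` has been dedented out of the `for` body, as in the
# generated code above; Python rejects this at compile time.
bad = textwrap.dedent("""\
    for prop_name, prop in props.items():
    if isinstance(prop, dict):
        out[prop_name] = prop
""")

try:
    compile(bad, "<generated>", "exec")
except IndentationError as exc:
    print(type(exc).__name__)  # IndentationError
```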
**OpenAPI Spec File**
```
{"openapi":"3.0.2","info":{"title":"FastAPI","version":"0.1.0"},"paths":{"/":{"get":{"summary":"Read Item","operationId":"read_item__get","responses":{"200":{"description":"Successful Response","content":{"application/json":{"schema":{"$ref":"#/components/schemas/ItemMapResource"}}}}}}}},"components":{"schemas":{"ItemMapResource":{"title":"ItemMapResource","required":["items"],"type":"object","properties":{"items":{"title":"Items","type":"object","additionalProperties":{"anyOf":[{"type":"object"},{"type":"array","items":{}},{"type":"boolean"},{"type":"number"},{"type":"integer"},{"type":"string"}]}}}}}}}
```
**Desktop (please complete the following information):**
- OS: Ubuntu 20.04
- Python Version: 3.8.3
- openapi-python-client version 0.7.2
**Additional context**
This worked fine with 0.7.0, so I suspect a regression from #252 | closed | 2020-12-15T01:06:45Z | 2020-12-21T18:00:14Z | https://github.com/openapi-generators/openapi-python-client/issues/266 | [
"🐞bug"
] | joshzana | 1 |
chatanywhere/GPT_API_free | api | 371 | Add gpt-4o-mini-tts |
Add gpt-4o-mini-tts | open | 2025-03-21T00:52:12Z | 2025-03-24T01:52:14Z | https://github.com/chatanywhere/GPT_API_free/issues/371 | [] | sujianqingfeng | 3 |
seleniumbase/SeleniumBase | web-scraping | 3,395 | How can I block specific images and CSS files, not all of them | How can I block specific images and CSS files, not all of them? | closed | 2025-01-06T18:23:21Z | 2025-01-07T01:59:47Z | https://github.com/seleniumbase/SeleniumBase/issues/3395 | [
"question",
"UC Mode / CDP Mode"
] | aboessa | 9 |
ijl/orjson | numpy | 442 | Serialization of Pandas' `Timestamp` | I was wondering why serialization of Pandas' `Timestamp` objects is not possible even though they inherit from `datetime.datetime`.
```python
>>> import orjson
>>> orjson.__version__
'3.9.10'
>>> import pandas as pd
>>> import datetime
dt = datetime.datetime(2023, 11, 23)
>>> orjson.dumps(dt)
b'"2023-11-23T00:00:00"'
>>> ts = pd.Timestamp(dt)
>>> isinstance(ts, datetime.datetime)
True
>>> orjson.dumps(ts)
TypeError: Type is not JSON serializable: Timestamp
```
Wouldn't it be better to check first whether the object to serialize is a subclass of a serializable type?
It already seems to be the case for classes inheriting from `dict`, but not from `datetime.datetime`.
```python
>>> class CustomDT(datetime.datetime):
>>> pass
>>> cdt = CustomDT(2023, 11, 23)
>>> orjson.dumps(cdt)
TypeError: Type is not JSON serializable: CustomDT
>>> class CustomDict(dict):
>>> pass
>>> cdict = CustomDict([('key', 'value')])
>>> orjson.dumps(cdict)
b'{"key":"value"}'
```
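In the meantime, a subclass-aware `default` hook works around this. The sketch below uses the stdlib `json` module so it is self-contained, and `orjson.dumps()` accepts the same `default=` keyword:

```python
import datetime
import json


class CustomDT(datetime.datetime):
    pass


def default(obj):
    # isinstance matches subclasses, so this also covers pandas.Timestamp
    if isinstance(obj, datetime.datetime):
        return obj.isoformat()
    raise TypeError(f"Type is not JSON serializable: {type(obj).__name__}")


print(json.dumps({"when": CustomDT(2023, 11, 23)}, default=default))
# -> {"when": "2023-11-23T00:00:00"}
```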
| closed | 2023-11-23T12:32:06Z | 2023-12-03T08:02:06Z | https://github.com/ijl/orjson/issues/442 | [
"Stale"
] | odelmarcelle | 0 |
aiortc/aiortc | asyncio | 405 | When trying to connect from another host, I got an error | 405 method not allowed
Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at http://localhost:8000/offer. (Reason: CORS preflight response did not succeed).
Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at http://localhost:8000/offer. (Reason: CORS request did not succeed). | closed | 2020-08-17T13:53:50Z | 2021-01-29T08:37:29Z | https://github.com/aiortc/aiortc/issues/405 | [
"invalid"
] | rahooftkh | 2 |
autogluon/autogluon | scikit-learn | 4,442 | Contributing to Model Monitoring and Interpretability | Hello there! :wave:
My name is Guilherme and I’m a software engineering student at UNIPAMPA, a university in southern Brazil. I’m currently developing my undergraduate thesis and I’m very interested in working on improving AutoGluon in terms of features such as model monitoring and interpretability.
I see from the roadmap that these points are still open, so I would be happy to collaborate on this great project.
Thank you! | open | 2024-08-28T23:33:36Z | 2024-08-28T23:33:36Z | https://github.com/autogluon/autogluon/issues/4442 | [
"enhancement"
] | guijasss | 0 |
saulpw/visidata | pandas | 2,085 | [input help] Please add input help back. | I already miss visidata/ddw/input.ddw. I prefer that detailed help message over the new message.
| closed | 2023-10-25T20:53:51Z | 2023-10-27T22:28:05Z | https://github.com/saulpw/visidata/issues/2085 | [
"wishlist",
"wish granted"
] | frosencrantz | 6 |
tqdm/tqdm | jupyter | 1,486 | Threading | Multiprocessing Visual Error | - [ ] I have marked all applicable categories:
+ [ ] exception-raising bug
+ [x] visual output bug
- [x] I have visited the [source website], and in particular
read the [known issues]
- [x] I have searched through the [issue tracker] for duplicates
- [x] I have mentioned version numbers, operating system and
environment, where applicable:
```python
import tqdm, sys
print(tqdm.__version__, sys.version, sys.platform)
4.65.0 3.10.10 | packaged by conda-forge | (main, Mar 24 2023, 20:08:06) [GCC 11.3.0] linux
```
[source website]: https://github.com/tqdm/tqdm/
[known issues]: https://github.com/tqdm/tqdm/#faq-and-known-issues
[issue tracker]: https://github.com/tqdm/tqdm/issues?q=
I have stumbled upon a case where tqdm bars need to be updated manually. Specifically, I am trying to track a process that is launched by a Fortran library. In the code below I have reproduced the behavior, sparing the case-specific details. The problem arises when a process completes: the cursor position changes, so the bars move relative to their initial position, leaving behind the previous iteration. It also happens that if a bar tries to update itself while another bar is changing, its position is recalculated from there, leaving a mess. I believe this behavior has to do with the absence of a lock, or of a way to lock changes to the bars. I haven't found anything in the repository, so I am filing this issue! It should be possible to solve the issue with queues, but I would prefer not to pass tqdm objects around since they are unpicklable.
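As a stdlib-only sketch of the locking idea (tqdm itself exposes a class-level lock via `tqdm.get_lock()` / `tqdm.set_lock()`, which may be the hook to use here), serializing all bar updates behind one shared lock looks like this:

```python
import threading

lock = threading.Lock()
updates = []

def worker(name, n):
    for i in range(n):
        # Every "refresh" happens under the shared lock, so two bars can
        # never interleave a single write to the terminal.
        with lock:
            updates.append(f"{name}: {i}")

threads = [threading.Thread(target=worker, args=(f"bar{k}", 3)) for k in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(updates))  # 6 -- every update recorded exactly once
```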
```
from concurrent.futures import ThreadPoolExecutor, as_completed
import random
from tqdm.auto import tqdm
from time import sleep
from threading import Thread
def latest_time(dir:str):
return random.randint(0,100), 'a', False
def monitor_run(
dir:str,
name: str,
position: int,
max_iter: int,
refresh_progress: float=2,
) -> None:
sleep(1 + (position+1)/10)
with tqdm(
total = max_iter,
bar_format="{l_bar}{bar:30}{r_bar}",
desc=f"\t\t{name}: 0.0 Progress",
position= position,
ncols = 100,
leave = True,
ascii= True,
colour ='#00ff00',
) as pbar:
desc_prev: float = 0
while True:
sleep(refresh_progress)
time, desc, error = latest_time(dir)
if desc is None:
desc: float | None = desc_prev
else:
desc_prev = desc
pbar.desc = f"\t\t {name}: {desc} Progress"
if error:
pbar.write(f"Encountered Error at {desc}")
break
if time is None:
continue
pbar.n = int(time)
pbar.refresh(nolock=True)
if time>=max_iter:
pbar.close()
break
def serial_monitor(
dir:str,
position: int,
max_iter: int,
refresh_progress: float = 2,
)-> None:
monitor_run(dir,str(position), position ,max_iter, refresh_progress)
def serial_monitor_star(args)-> None:
serial_monitor(*args)
def parallel_monitor(
dirs: list[str],
max_iter: int,
refresh_progress: float =2,
) -> None:
args_list = [
[
dir, position+1, max_iter, refresh_progress
] for position, dir in enumerate(dirs)
]
# tqdm.write("\t\tStarting:")
# thread_map(
# serial_monitor_star, args_list, tqdm_class = tqdm ,max_workers = len(reynolds)
# )
# tqdm.write("\t\tCompleted")
with tqdm(total=2*len(dirs)):
with ThreadPoolExecutor(max_workers= len(dirs)) as ex:
futures = [
ex.submit(
serial_monitor_star,
args
) for args in args_list
]
for future in as_completed(futures):
result = future.result()
if __name__ =="__main__":
parallel_monitor(
['','','','','','',''],
100,
2
)
``` | open | 2023-07-25T10:56:55Z | 2023-12-06T05:04:01Z | https://github.com/tqdm/tqdm/issues/1486 | [] | trifwn | 2 |
huggingface/datasets | deep-learning | 7,067 | Convert_to_parquet fails for datasets with multiple configs | If the dataset has multiple configs, when using the `datasets-cli convert_to_parquet` command to avoid issues with the data viewer caused by loading scripts, the conversion process only successfully converts the data corresponding to the first config. When it starts converting the second config, it throws an error:
```
Traceback (most recent call last):
File "/opt/anaconda3/envs/dl/bin/datasets-cli", line 8, in <module>
sys.exit(main())
File "/opt/anaconda3/envs/dl/lib/python3.10/site-packages/datasets/commands/datasets_cli.py", line 41, in main
service.run()
File "/opt/anaconda3/envs/dl/lib/python3.10/site-packages/datasets/commands/convert_to_parquet.py", line 83, in run
dataset.push_to_hub(
File "/opt/anaconda3/envs/dl/lib/python3.10/site-packages/datasets/dataset_dict.py", line 1713, in push_to_hub
api.create_branch(repo_id, branch=revision, token=token, repo_type="dataset", exist_ok=True)
File "/opt/anaconda3/envs/dl/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
File "/opt/anaconda3/envs/dl/lib/python3.10/site-packages/huggingface_hub/hf_api.py", line 5503, in create_branch
hf_raise_for_status(response)
File "/opt/anaconda3/envs/dl/lib/python3.10/site-packages/huggingface_hub/utils/_errors.py", line 358, in hf_raise_for_status
raise BadRequestError(message, response=response) from e
huggingface_hub.utils._errors.BadRequestError: (Request ID: Root=1-669fc665-7c2e80d75f4337496ee95402;731fcdc7-0950-4eec-99cf-ce047b8d003f)
Bad request:
Invalid reference for a branch: refs/pr/1
``` | closed | 2024-07-23T15:09:33Z | 2024-07-30T10:51:02Z | https://github.com/huggingface/datasets/issues/7067 | [] | HuangZhen02 | 3 |
scikit-image/scikit-image | computer-vision | 6,784 | Error introduced in version 0.20.0 when using PyInstaller | ### Description:
The executable produced by [PyInstaller](https://pyinstaller.org/en/stable/) is producing an error introduced by scikit-image v0.20.0.

### Way to reproduce:
_No response_
### Traceback or output:
```Shell
Traceback (most recent call last):
File "__main__.py", line 4, in <module>
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
File "PyInstaller\loader\pyimod02_importers.py", line 352, in exec_module
File "views\main_view.py", line 9, in <module>
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
File "PyInstaller\loader\pyimod02_importers.py", line 352, in exec_module
File "services\project_service.py", line 5, in <module>
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
File "PyInstaller\loader\pyimod02_importers.py", line 352, in exec_module
File "analysis\classify_fullcore.py", line 6, in <module>
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
File "PyInstaller\loader\pyimod02_importers.py", line 352, in exec_module
File "skimage\__init__.py", line 74, in <module>
File "lazy_loader\__init__.py", line 243, in attach_stub
ValueError: Cannot load imports from non-existent stub 'D:\\Users\\Rodrigo\\GitHub\\mosis\\dist\\mosis\\skimage\\__init__.pyci'
```
### Version information:
```Shell
3.10.10 (tags/v3.10.10:aad5f6a, Feb 7 2023, 17:20:36) [MSC v.1929 64 bit (AMD64)]
Windows-10-10.0.19045-SP0
scikit-image version: 0.20.0
numpy version: 1.24.2
```
| open | 2023-03-07T17:03:26Z | 2023-09-16T14:09:18Z | https://github.com/scikit-image/scikit-image/issues/6784 | [
":bug: Bug"
] | rodrigomologni | 2 |
httpie/cli | api | 989 | Feature request: support "http(s) request" from raw requests file | From https://httpie.io/docs#offline-mode,
> Generating raw requests that can be sent with any other client:
>
> # 1. save a raw request to a file:
> $ http --offline POST httpbin.org/post hello=world > request.http
> # 2. send it over the wire with, for example, the fantastic netcat tool:
> $ nc httpbin.org 80 < request.http
## Expected
one can run `http < request.http` or `https < request.https` to reproduce some results.
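To illustrate the request: most of the work would be parsing the dump plus one socket write. A parsing-only sketch over an inlined raw request (the content is made up for the demo; no network involved):

```python
# Sketch: what replaying a saved raw request could involve.
# Parses the request line and headers from an --offline-style dump.
raw = (
    "POST /post HTTP/1.1\r\n"
    "Host: httpbin.org\r\n"
    "Content-Type: application/json\r\n"
    "\r\n"
    '{"hello": "world"}'
)

head, _, body = raw.partition("\r\n\r\n")
request_line, *header_lines = head.split("\r\n")
method, path, version = request_line.split(" ")
headers = dict(line.split(": ", 1) for line in header_lines)

print(method, path, headers["Host"], body)
```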
| open | 2020-11-11T14:55:03Z | 2024-01-30T14:34:03Z | https://github.com/httpie/cli/issues/989 | [
"enhancement",
"needs product design"
] | snowman | 4 |
biolab/orange3 | data-visualization | 6,083 | widget 'Save Data' doesn't give options described in the documentation | Hello,
First of all, I hope I am using the appropriate tracker for this report; I hesitated somewhat...
According to the documentation, the 'Save Data' widget has options to save data with various extensions.
However, I could not access the window with these options as described in the documentation; the only available option is the pkl extension.
Warm thanks for your attention to this ticket.
Michel Souweine
**What's wrong?**
<!-- Be specific, clear, and concise. Include screenshots if relevant. -->
<!-- If you're getting an error message, copy it, and enclose it with three backticks (```). -->
**How can we reproduce the problem?**
<!-- Upload a zip with the .ows file and data. -->
<!-- Describe the steps (open this widget, click there, then add this...) -->
**What's your environment?**
<!-- To find your Orange version, see "Help → About → Version" or `Orange.version.full_version` in code -->
- Operating system:
- Orange version:
- How you installed Orange:
| closed | 2022-07-31T06:31:26Z | 2022-08-01T07:19:07Z | https://github.com/biolab/orange3/issues/6083 | [
"bug report"
] | michous | 1 |
vaexio/vaex | data-science | 2,202 | [BUG-REPORT] Error converting from csv file to hdf5 file with | **Description**
ArrowInvalid: Failed casting from large_string to string
Code: vaex.from_csv("/data/transactions.csv",convert=True,chunk_size=10000000)
When I tried to call the from_csv function to convert the csv file to hdf5, each small hdf5 file was generated smoothly, but when aggregating each small file, an error occurred. I suspect if some of my fields are too long for vaex compatibility.
**Software information**
- Vaex version (`import vaex; vaex.__version__)`:
{'vaex-core': '4.12.0',
'vaex-viz': '0.5.3',
'vaex-hdf5': '0.12.3',
'vaex-server': '0.8.1',
'vaex-astro': '0.9.1',
'vaex-jupyter': '0.8.0',
'vaex-ml': '0.18.0'}
- Vaex was installed via: conda-forge
- OS: Ubuntu LTS 20.04
**Additional information**
- screenshots


- data

| open | 2022-09-07T01:58:42Z | 2022-09-07T05:39:50Z | https://github.com/vaexio/vaex/issues/2202 | [] | zhiyongm | 1 |
Yorko/mlcourse.ai | numpy | 76 | week 3 workbooks seed issue | Right now code in https://github.com/Yorko/mlcourse_open/blob/master/jupyter_notebooks/topic03_decision_trees_knn/topic3_trees_knn.ipynb `In[3]` is `np.seed = 7` but this seems to be typo and should be `np.random.seed(7)`? | closed | 2017-09-21T06:20:13Z | 2017-09-21T14:10:38Z | https://github.com/Yorko/mlcourse.ai/issues/76 | [
"minor_fix"
] | sudodoki | 1 |
521xueweihan/HelloGitHub | python | 1,950 | Project self-recommendation | Vue Color Avatar, an avatar generator website implemented purely on the front end 🧑🦱 | ## Project Recommendation
- Project URL: [https://github.com/Codennnn/vue-color-avatar](https://github.com/Codennnn/vue-color-avatar)
- Category: JS
- Planned updates: enrich the avatar assets and allow customizing asset colors
- Project description:
	- A vector-style **avatar generator** website implemented purely on the front end; you can combine different asset components to generate your own personalized avatar
- Why recommended: built with Vite + Vue 3, it helps front-end beginners learn Vue 3 syntax and how to set up a project
- Screenshots:



| closed | 2021-10-31T04:23:39Z | 2021-11-26T01:00:27Z | https://github.com/521xueweihan/HelloGitHub/issues/1950 | [
"已发布",
"JavaScript 项目"
] | Codennnn | 1 |
pallets/flask | flask | 4,728 | `Flask test` does not work (tables not created) after 2.2.0 update | With Flask 2.1.0 :
```bash
$ flask test
=========================================================================== test session starts ============================================================================
platform darwin -- Python 3.10.3, pytest-7.1.2, pluggy-1.0.0 -- /Users/antoine/Documents/Git/portail/venv/bin/python3
cachedir: .pytest_cache
rootdir: /Users/antoine/Documents/Git/portail
collected 29 items
...
===================================================================== 29 passed, 5 warnings in 15.65s ======================================================================
```
With Flask 2.2.0 :
```bash
=========================================================================== test session starts ============================================================================
platform darwin -- Python 3.10.3, pytest-7.1.2, pluggy-1.0.0 -- /Users/antoine/Documents/Git/portail/venv/bin/python3
cachedir: .pytest_cache
rootdir: /Users/antoine/Documents/Git/portail
collected 29 items
test/test_accessibility.py::TestAccessibility::test_admin_accessibility ERROR
test/test_accessibility.py::TestAccessibility::test_client_accessibility ERROR
...
========================================================================= short test summary info ==========================================================================
ERROR test/test_accessibility.py::TestAccessibility::test_admin_accessibility - sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) no such table: cre_module
ERROR test/test_accessibility.py::TestAccessibility::test_client_accessibility - sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) no such table: cre_module
```
My tables are not created with Flask 2.2.0.
However, I didn't change anything in my tests, and I don't see anything in the changelog that could affect them.
The errors happen only when I change the version of Flask (2.1.3 --> 2.2.0).
Here is the code that creates the tables (which works in 2.1.3 but not in 2.2.0).
```python
import pytest

from portail import create_app
from portail.modules import PORTAIL_MODULES, detect_modules
from portail.database import db as _db
from portail.settings import TestConfig
@pytest.fixture()
def app():
"""An application for the tests."""
_app = create_app(TestConfig())
with _app.app_context():
_db.create_all()
modules = detect_modules()
for m in modules:
m.save(False)
_db.session.commit()
for m in modules:
PORTAIL_MODULES[m.id] = m
m.expunge()
ctx = _app.test_request_context()
ctx.push()
yield _app
ctx.pop()
with _app.app_context():
_db.drop_all()
@pytest.fixture()
def db(app):
"""A database for the tests."""
_db.app = app
with app.app_context():
_db.create_all()
yield _db
# Explicitly close DB connection
_db.session.close()
_db.drop_all()
```
Has `app.app_context()` been modified or something I use for database setup?
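One thing worth ruling out (an assumption, since TestConfig isn't shown): if the test database is SQLite `:memory:`, each new connection gets its own empty database, so any change in when Flask/Flask-SQLAlchemy opens connections between 2.1 and 2.2 would surface exactly as `no such table`. The stdlib demonstrates the per-connection behavior:

```python
import sqlite3

conn_a = sqlite3.connect(":memory:")
conn_a.execute("CREATE TABLE cre_module (id INTEGER PRIMARY KEY)")

conn_b = sqlite3.connect(":memory:")  # a second, *independent* empty database
try:
    conn_b.execute("SELECT * FROM cre_module")
except sqlite3.OperationalError as exc:
    print(exc)  # no such table: cre_module
```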
Environment:
- Python version: 3.10.3
- Flask version: 2.2.0 | closed | 2022-08-02T13:24:08Z | 2022-08-17T00:06:03Z | https://github.com/pallets/flask/issues/4728 | [] | probance-antoine | 1 |
Avaiga/taipy | automation | 2,286 | [🐛 BUG] Applying width to a specific column in a tgb.table does not work | ### What went wrong? 🤔
A customer requested that specific columns in his tgb.table have a bigger width. The column names have spaces in them, so using tgb.table(width__column_name="200px") won't work.
This example is supposed to work but applies the style to all columns instead:
```python
from taipy.gui import Gui
import taipy.gui.builder as tgb
import pandas as pd
data = pd.DataFrame({"A C": [1, 2, 3], "B C": [4, 5, 6], "C C": [7, 8, 9]})
properties_table = {"width[B C]": "500px"}
with tgb.Page() as page:
tgb.table(data="{data}", properties="{properties_table}")
Gui(page=page).run()
```

### Acceptance Criteria
- [ ] A unit test reproducing the bug is added.
- [ ] Any new code is covered by a unit tested.
- [ ] Check code coverage is at least 90%.
- [ ] The bug reporter validated the fix.
- [ ] Related issue(s) in taipy-doc are created for documentation and Release Notes are updated.
### Code of Conduct
- [X] I have checked the [existing issues](https://github.com/Avaiga/taipy/issues?q=is%3Aissue+).
- [ ] I am willing to work on this issue (optional) | closed | 2024-11-27T22:04:22Z | 2024-12-23T10:50:08Z | https://github.com/Avaiga/taipy/issues/2286 | [
"💥Malfunction",
"🟨 Priority: Medium",
"🔒 Staff only",
"GUI: Front-End"
] | AlexandreSajus | 6 |
Farama-Foundation/Gymnasium | api | 692 | [Proposal] Observation Space in FrozenLake environment | ### Proposal
This is a detail but it's confusing, and repeated in HF's RL course.
Right now the [observation space](https://gymnasium.farama.org/environments/toy_text/frozen_lake/#observation-space), and more precisely the player's position in FrozenLake is defined as:
> current_row * nrows + current_col (where both the row and col start at 0)
Although it would not change anything in terms of computations, I believe describing it as current_row * ncols + current_col would make more sense, as this formula would generalize to non-square environments (grids).
### Motivation
Using the library, I have confirmed that the player position 1 is (x, y) = (0, 1). If this is true, then the current formula does not generalize to non-square grids.
Example with a 2*3 grid. Using the current formula, the player positions would be:
```
[[0, 1, 2],
[2, 3, 4]]
```
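A quick check of both formulas on that 2x3 example (plain Python, matching the grid above):

```python
nrows, ncols = 2, 3  # 2 rows, 3 columns

current = [[r * nrows + c for c in range(ncols)] for r in range(nrows)]
proposed = [[r * ncols + c for c in range(ncols)] for r in range(nrows)]

print(current)   # [[0, 1, 2], [2, 3, 4]] -- index 2 is ambiguous
print(proposed)  # [[0, 1, 2], [3, 4, 5]] -- unique, standard row-major order
```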
### Pitch
_No response_
### Alternatives
_No response_
### Additional context
_No response_
### Checklist
- [X] I have checked that there is no similar [issue](https://github.com/Farama-Foundation/Gymnasium/issues) in the repo
| closed | 2023-08-27T11:19:36Z | 2023-08-28T13:27:58Z | https://github.com/Farama-Foundation/Gymnasium/issues/692 | [
"enhancement"
] | PierreCounathe | 3 |
mwaskom/seaborn | pandas | 3,728 | Incorrect plotting of exactly overlapping scatter with `hue` and `hue_order` | While working with `sns.scatterplot` for representing locations on a grid, I discovered an issue where using `hue` and `hue_order` produces an incorrect plot: markers that should be perfectly overlapping—they have identical (`x`, `y`) coordinates—are drawn at a small offset, such that the edge of one can be seen intersecting the other. Here's a minimal example that reproduces the issue with `matplotlib 3.9.1` and `seaborn 0.13.2`:
```python
import matplotlib
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
df = pd.DataFrame.from_dict({
'x': [6.3, 6.3, 6.3, 6.3, 6.633333, 6.633333, 6.633333, 6.633333, 33.48, 33.48, 33.48, 33.48, 33.813333, 33.813333, 33.813333, 33.813333],
'y': [-12.42, -12.42, -4.0, -4.0, -12.42, -12.42, -4.0, -4.0, -12.42, -12.42, -4.0, -4.0, -12.42, -12.42, -4.0, -4.0],
'locid': ['loc1', 'loc1', 'loc1', 'loc1', 'loc2', 'loc2', 'loc2', 'loc2', 'loc1', 'loc1', 'loc1', 'loc1', 'loc2', 'loc2', 'loc2', 'loc2']
})
sns.scatterplot(
data=df,
x='x',
y='y',
marker="o",
hue='locid',
hue_order=['loc1'],
)
print('Pandas version: ', pd.__version__) # 2.2.2
print('Matplotlib version: ', matplotlib.__version__) # 3.9.1
print('Seaborn version: ', sns.__version__) # 0.13.2
```
That code produces the following plot:

where, at each corner, the edge of the second marker is clearly seen to intersect the face of the first.
From my brief dive into this problem:
1. As in the example, it doesn't matter whether a tall stack of markers is made to overlap: there are only two points with the exact (6.3, -12.42) coordinates and the problem is already there.
2. The issue is seaborn-specific. Using matplotlib's `plt.scatter` does yield a correct plot.
3. Both `hue` and `hue_order` need to be used in order for the issue to appear. Slicing the data with `df[df.locid == 'loc1']` makes a correct plot.
4. The problem persists even with `marker='.' `, `marker='s'`, `marker='v'` and `marker='d'`, but not with `marker='x'`. | open | 2024-07-12T14:04:03Z | 2024-07-15T15:23:55Z | https://github.com/mwaskom/seaborn/issues/3728 | [] | eloyvallinaes | 3 |
ultralytics/yolov5 | machine-learning | 13,204 | AttributeError: 'DetectMultiBackend' object has no attribute 'input_details' | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
Hello, I am doing a YOLOv5 project with DeepSORT added to give each detected object a unique ID. I use yolov5n.pt as the weights to test a short video and it all works well. However, when I try to use my own trained weights file to test the video, it gives me this error: `AttributeError: 'DetectMultiBackend' object has no attribute 'input_details'`. Could anyone help me with this problem?


### Additional
_No response_ | open | 2024-07-21T09:16:21Z | 2024-07-21T12:57:09Z | https://github.com/ultralytics/yolov5/issues/13204 | [
"question"
] | Kelly02140 | 1 |