| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
mage-ai/mage-ai | data-science | 5,221 | [BUG] Setting cache_block_output_in_memory breaking pipeline on K8s (EKS) | ### Mage version
0.9.72
### Describe the bug
All pipelines that we configure as explained in [the docs](https://docs.mage.ai/design/data-pipeline-management#cache-block-output-in-memory) to keep block output in memory instead of writing it to the EFS filesystem are passing an empty dataframe between blocks.
We set both required settings:
```yaml
cache_block_output_in_memory: true
run_pipeline_in_one_process: true
```
Thus, the pipeline is run in one single k8s job.
However, only the first block finishes successfully.
The direct downstream block does not receive any data.
When I set `cache_block_output_in_memory: false`, everything works as expected.
### To reproduce
1. Set `cache_block_output_in_memory: true` and `run_pipeline_in_one_process: true`
2. Create a pipeline run using the "Run@once" trigger
3. Check the pipeline run's logs
### Expected behavior
I expect data to be passed between blocks even if data is kept fully in memory and not spilled out to disk.
### Screenshots
_No response_
### Operating system
- AWS EKS (K8s)
- EFS
### Additional context
_No response_ | open | 2024-06-23T16:20:34Z | 2024-06-23T16:20:34Z | https://github.com/mage-ai/mage-ai/issues/5221 | [
"bug"
] | MartinLoeper | 0 |
iperov/DeepFaceLab | machine-learning | 675 | Newest update not being able to use Full Face images... | Giving me this error when i use the aligned src that i always use, full face. This is the latest dfl with xseg and whole face. Previous versions works fine.

| closed | 2020-03-25T00:54:31Z | 2020-03-25T22:37:17Z | https://github.com/iperov/DeepFaceLab/issues/675 | [] | mpmo10 | 2 |
deeppavlov/DeepPavlov | nlp | 853 | ODQA inference speed very very slow | Running the default configuration and model on a EC2 p2.xlarge instance (60~GB Ram and Nvidia K80 GPU) and inference for simple questions take 40 seconds to 5 minutes.
Sometimes, no result even after 10 minutes.
<img width="1093" alt="MobaXterm_2019-05-27_16-36-13" src="https://user-images.githubusercontent.com/3790163/58415912-98020200-809d-11e9-936e-022089c5aba3.png">
| closed | 2019-05-27T11:06:48Z | 2020-05-21T10:05:58Z | https://github.com/deeppavlov/DeepPavlov/issues/853 | [] | shubhank008 | 12 |
PaddlePaddle/PaddleHub | nlp | 2,320 | yolov3_darknet53_pedestrian re-downloads on every load, which takes a very long time, and then inference fails | Welcome to report PaddleHub usage issues; thank you very much for your contribution to PaddleHub!
When filing your issue, please also provide the following information:
- Version and environment information
  1) PaddleHub and PaddlePaddle versions: please provide your PaddleHub and PaddlePaddle version numbers, e.g. PaddleHub 1.4.1, PaddlePaddle 1.6.2
  2) System environment: please describe the system type (e.g. Linux/Windows/MacOS) and the Python version
- Reproduction information: if reporting an error, please give the environment and steps to reproduce it
```python
import paddlehub as hub
import cv2

pedestrian_detector = hub.Module(name="yolov3_darknet53_pedestrian")
result = pedestrian_detector.object_detection(images=[cv2.imread('/home/ai02/test/people/8.jpg')])
print(result)
```
```
/bin/python3 /home/ai02/test/people/test.py
Download https://bj.bcebos.com/paddlehub/paddlehub_dev/yolov3_darknet53_pedestrian_1_1_0.zip
[##################################################] 100.00%
Decompress /home/ai02/.paddlehub/tmp/tmp7dw1rgc9/yolov3_darknet53_pedestrian_1_1_0.zip
Traceback (most recent call last):
  File "/home/ai02/test/people/test.py", line 4, in <module>
    pedestrian_detector = hub.Module(name="yolov3_darknet53_pedestrian")
  File "/home/ai02/.local/lib/python3.8/site-packages/paddlehub/module/module.py", line 388, in __new__
    module = cls.init_with_name(
  File "/home/ai02/.local/lib/python3.8/site-packages/paddlehub/module/module.py", line 487, in init_with_name
    user_module_cls = manager.install(
  File "/home/ai02/.local/lib/python3.8/site-packages/paddlehub/module/manager.py", line 190, in install
    return self._install_from_name(name, version, ignore_env_mismatch)
  File "/home/ai02/.local/lib/python3.8/site-packages/paddlehub/module/manager.py", line 265, in _install_from_name
    return self._install_from_url(item['url'])
  File "/home/ai02/.local/lib/python3.8/site-packages/paddlehub/module/manager.py", line 258, in _install_from_url
    return self._install_from_archive(file)
  File "/home/ai02/.local/lib/python3.8/site-packages/paddlehub/module/manager.py", line 374, in _install_from_archive
    for path, ds, ts in xarfile.unarchive_with_progress(archive, _tdir):
  File "/home/ai02/.local/lib/python3.8/site-packages/paddlehub/utils/xarfile.py", line 225, in unarchive_with_progress
    with open(name, mode='r') as file:
  File "/home/ai02/.local/lib/python3.8/site-packages/paddlehub/utils/xarfile.py", line 162, in open
    return XarFile(name, mode, **kwargs)
  File "/home/ai02/.local/lib/python3.8/site-packages/paddlehub/utils/xarfile.py", line 91, in __init__
    if self.arctype in ['tar.gz', 'tar.bz2', 'tar.xz', 'tar', 'tgz', 'txz']:
AttributeError: 'XarFile' object has no attribute 'arctype'
Exception ignored in: <function XarFile.__del__ at 0x7fd670f1fb80>
Traceback (most recent call last):
  File "/home/ai02/.local/lib/python3.8/site-packages/paddlehub/utils/xarfile.py", line 101, in __del__
    self._archive_fp.close()
AttributeError: 'XarFile' object has no attribute '_archive_fp'
```
| closed | 2024-02-27T02:21:25Z | 2024-03-17T05:39:36Z | https://github.com/PaddlePaddle/PaddleHub/issues/2320 | [] | sun-rabbit | 4 |
microsoft/qlib | machine-learning | 1,193 | How to adapt LSTM to DDG-DA | I want to adapt LSTM to DDG-DA; how can I do that?
What I have tried:
1. modify rolling_benchmark.py to fit with LSTM parameters
2. fix the bug caused by changing the dataset object to TSDatasetH.
change the file in qlib > contrib > meta > data_selection > model.py:
```
def reweight(self, data: Union[pd.DataFrame, pd.Series]):
    # TODO: handling TSDataSampler
    if isinstance(data, pd.DataFrame):
        idx = data.index
    else:
        idx = data.get_index()
    w_s = pd.Series(1.0, index=idx)
    for k, w in self.time_weight.items():
        w_s.loc[slice(*k)] = w
    logger.info(f"Reweighting result: {w_s}")
    return w_s
```
However, the validation loss remains the same across epochs and I don't know why.
| closed | 2022-07-12T11:53:33Z | 2022-10-21T15:05:41Z | https://github.com/microsoft/qlib/issues/1193 | [
"question",
"stale"
] | Xxiaoting | 2 |
taverntesting/tavern | pytest | 787 | How can I get coverage after running pytest? | How can I get coverage after running:
```
pytest -v test_01_init_gets.tavern.yaml --html=all.html
```
| closed | 2022-06-09T10:17:46Z | 2022-06-15T09:37:31Z | https://github.com/taverntesting/tavern/issues/787 | [] | iakirago | 2 |
sqlalchemy/alembic | sqlalchemy | 1,246 | Minor typing issue for alembic.context.configure in 1.11.0 | **Describe the bug**
The signature of the `alembic.context.configure` function changed in 1.11.0: the `compare_server_default` argument uses `Column` classes, which should be generics. This kind of definition produces type checker warnings.
**Expected behavior**
No warnings reported by static type checkers
**To Reproduce**
E.g. using VSCode with Pyright in "strict" mode (usually it is a part of `env.py`):
```py
from alembic.context import configure
# Throws: Type of "configure" is partially unknown
```
**Versions.**
- OS: MacOS
- Python: 3.11.3
- Alembic: 1.11.0
- SQLAlchemy: 2.0.13
- Database: Postgres 15
- DBAPI: asyncpg
**Additional context**
**Have a nice day!**
| closed | 2023-05-16T18:00:39Z | 2023-05-17T15:15:23Z | https://github.com/sqlalchemy/alembic/issues/1246 | [
"bug",
"pep 484"
] | AlexanderPodorov | 2 |
graphql-python/graphene-django | django | 840 | Validate Meta.fields and Meta.exclude on DjangoObjectType | tl;dr: DjangoObjectType ignores all unknown values in `Meta.fields`. It should compare the fields list with the available Model's fields instead.
---
I'm in the process of rewriting a DRF-based backend to graphene-django, and I was surprised when my graphene-django generated schema was silently missing the fields I specified in `fields`.
(I'm copy-pasting `fields` from DRF serializers to DjangoObjectType's Meta class).
Turns out some of these fields were implemented as properties or methods on models, and I'm ok with writing custom resolvers for those (otherwise there's no way to detect types, at least in the absence of type hints), but I didn't expect DjangoObjectType to quietly accept unknown values.
I believe the reason for this is that `graphene_django.types.construct_fields` iterates over model's fields, but it could/should iterate over `only_fields` too.
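A model-agnostic sketch of the kind of validation being proposed (a hypothetical helper, not actual graphene-django code) could look like this:

```python
def validate_meta_fields(model_field_names, only_fields=None, exclude=None):
    """Reject unknown names in Meta.fields / Meta.exclude instead of ignoring them."""
    known = set(model_field_names)
    for option, names in (("fields", only_fields), ("exclude", exclude)):
        for name in names or ():
            if name not in known:
                raise ValueError(
                    f"Meta.{option} contains '{name}', which is not a field on the model"
                )

validate_meta_fields({"id", "name"}, only_fields=["id", "name"])  # silently passes
try:
    validate_meta_fields({"id", "name"}, only_fields=["id", "nmae"])  # typo
except ValueError as err:
    caught = str(err)  # the typo is reported instead of being ignored
```

In the real library, the set of known names would also need to include model properties and custom resolvers, which is part of what makes this check non-trivial.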
Implementing the same check for `exclude` also seems like a good idea to me (otherwise you could make a typo in `exclude`, but never notice it until it's too late). | closed | 2019-12-29T11:45:02Z | 2019-12-31T13:55:46Z | https://github.com/graphql-python/graphene-django/issues/840 | [] | berekuk | 1 |
NullArray/AutoSploit | automation | 798 | Divided by zero exception68 | Error: Attempted to divide by zero.68 | closed | 2019-04-19T16:00:55Z | 2019-04-19T16:37:44Z | https://github.com/NullArray/AutoSploit/issues/798 | [] | AutosploitReporter | 0 |
seleniumbase/SeleniumBase | web-scraping | 2,216 | Setting a `user_data_dir` while using Chrome extensions | The first time, I use SeleniumBase to open Chrome without adding extensions (like editcookies), and I do set `user-data-dir`, which generates a dedicated profile folder; at the end I call `driver.quit()`. The second time, I just open SeleniumBase with `user-data-dir=<that folder path>`, but I can't find the installed extensions in Chrome. Did I do something wrong? Or how can I see the extensions installed during the first run after the second startup?
Awaiting your reply, thanks. | closed | 2023-10-28T11:04:25Z | 2023-10-29T06:36:57Z | https://github.com/seleniumbase/SeleniumBase/issues/2216 | [
"question"
] | SiTu-JIanying | 1 |
scikit-learn/scikit-learn | python | 30,753 | ⚠️ CI failed on Linux_Runs.pylatest_conda_forge_mkl (last failure: Feb 03, 2025) ⚠️ | **CI failed on [Linux_Runs.pylatest_conda_forge_mkl](https://dev.azure.com/scikit-learn/scikit-learn/_build/results?buildId=73883&view=logs&j=dde5042c-7464-5d47-9507-31bdd2ee0a3a)** (Feb 03, 2025)
- Test Collection Failure | closed | 2025-02-03T02:34:16Z | 2025-02-03T16:44:29Z | https://github.com/scikit-learn/scikit-learn/issues/30753 | [] | scikit-learn-bot | 1 |
globaleaks/globaleaks-whistleblowing-software | sqlalchemy | 3,256 | Export failure when users have configured a language that has been disabled | **Describe the bug**
The 'Save/Export' submission function returns an error.
**To Reproduce**
Steps to reproduce the behavior:
1. Recipient login
2. Go to 'Submissions'
3. Click on 'save/export' in the list of submissions

4. An error is shown on a new page: `{"error_message": "InternalServerError [Unexpected]", "error_code": 1, "arguments": ["Unexpected"]}`
5. Enter the specific submission
6. Click on 'save/export' at the top of the report page

7. An error is shown on a new page: `{"error_message": "InternalServerError [Unexpected]", "error_code": 1, "arguments": ["Unexpected"]}`
**Expected behavior**
A zip file should be downloaded.
**Desktop:**
- OS: windows 10
- Browser: Edge
- Version [103.0.1264.77]
**Additional context**
Email sent to admin with this content:
Version: 4.9.9
KeyError Mapping key not found.
```
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/twisted/internet/defer.py", line 1416, in _inlineCallbacks
    result = result.throwExceptionIntoGenerator(g)
  File "/usr/lib/python3/dist-packages/twisted/python/failure.py", line 512, in throwExceptionIntoGenerator
    return g.throw(self.type, self.value, self.tb)
  File "/usr/lib/python3/dist-packages/globaleaks/handlers/export.py", line 130, in get
    files = yield prepare_tip_export(self.session.cc, tip_export)
  File "/usr/lib/python3/dist-packages/twisted/internet/defer.py", line 1418, in _inlineCallbacks
    result = g.send(result)
  File "/usr/lib/python3/dist-packages/globaleaks/handlers/export.py", line 109, in prepare_tip_export
    export_template = Templating().format_template(tip_export['notification']['export_template'], tip_export).encode()
KeyError: 'export_template'
```
| open | 2022-08-04T14:12:42Z | 2022-08-05T10:18:36Z | https://github.com/globaleaks/globaleaks-whistleblowing-software/issues/3256 | [
"T: Bug",
"C: Backend"
] | zangels | 8 |
collerek/ormar | sqlalchemy | 980 | pytest DatabaseBackend is not running | **Describe the bug**
When trying to run tests with `pytest`, I get an exception `DatabaseBackend is not running`.
I think that `pytest` is using `BaseMeta`'s database, which is not the test database.
This is the setup code:
```python
TEST_DATABASE_URL_WITH_DB = f"postgresql://....."
# tried with postgresql+asyncpg as well
database = databases.Database(TEST_DATABASE_URL_WITH_DB)

@pytest.fixture()
def engine():
    return sqlalchemy.create_engine(DATABASE_URL_WITH_DB)
    # yield engine
    # engine.sync_engine.dispose()

@pytest.fixture(autouse=True)
def create_test_database(engine):
    metadata = BaseMeta.metadata
    metadata.drop_all(engine)
    metadata.create_all(engine)
    # await database.connect()
    yield
    # await database.disconnect()
    metadata.drop_all(engine)

@pytest.mark.asyncio
async def test_actual_logic():
    await database.connect()
    async with database:
        org = await Org.objects.create(name="test-org", auth0_id="test-org-auth0-id")
```
The models:
```python
database = databases.Database(PROD_DATABASE_URL)

class BaseMeta(ormar.ModelMeta):
    metadata = metadata
    database = database

class Org(ormar.Model):
    id = ormar.Integer(primary_key=True)
    public_id: str = ormar_postgres_extensions.UUID(
        index=True, unique=True, nullable=False, default=uuid.uuid4()
    )
    name: str = ormar.Text(max_length=320, index=True)
    auth0_id: str = ormar.Text(max_length=320, index=True, nullable=True)

    class Meta(BaseMeta):
        tablename = "orgs"
```
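The suspected cause (the model classes keep a reference to the database object they were defined with, so connecting a *different* `Database` instance in the tests never helps) can be illustrated with a toy sketch. These are fake classes, not the real ormar/databases API:

```python
class FakeDatabase:
    """Stand-in for databases.Database: just tracks a URL and connection state."""
    def __init__(self, url):
        self.url = url
        self.is_connected = False

    def connect(self):
        self.is_connected = True

prod_db = FakeDatabase("postgresql://prod-host/db")  # bound inside BaseMeta at import time
test_db = FakeDatabase("postgresql://test-host/db")  # the one the test module connects

class Org:
    class Meta:
        database = prod_db  # the model keeps this reference

test_db.connect()
# The model still points at the never-connected prod database -> "not running".
assert Org.Meta.database is prod_db and not Org.Meta.database.is_connected

# One possible fix: rebind the model's database to the one the tests connect.
Org.Meta.database = test_db
assert Org.Meta.database.is_connected
```

In other words, a test fixture would either have to connect the exact `Database` object referenced by `BaseMeta`, or point the models at the test database before use.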
Stack trace:
```bash
@pytest.mark.asyncio
async def test_actual_logic():
await database.connect()
> org = await Org.objects.create(name="test-org", auth0_id="test-org-auth0-id")
tests/efforts/test_service.py:57:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../../../.venvs/core/lib/python3.11/site-packages/ormar/queryset/queryset.py:1121: in create
instance = await instance.save()
../../../../.venvs/core/lib/python3.11/site-packages/ormar/models/model.py:94: in save
pk = await self.Meta.database.execute(expr)
../../../../.venvs/core/lib/python3.11/site-packages/databases/core.py:164: in execute
async with self.connection() as connection:
../../../../.venvs/core/lib/python3.11/site-packages/databases/core.py:235: in __aenter__
raise e
../../../../.venvs/core/lib/python3.11/site-packages/databases/core.py:232: in __aenter__
await self._connection.acquire()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <databases.backends.postgres.PostgresConnection object at 0x106afa7d0>
async def acquire(self) -> None:
print("acquire")
print(self._database._pool)
assert self._connection is None, "Connection is already acquired"
> assert self._database._pool is not None, "DatabaseBackend is not running"
E AssertionError: DatabaseBackend is not running
../../../../.venvs/core/lib/python3.11/site-packages/databases/backends/postgres.py:180: AssertionError
```
**Versions (please complete the following information):**
- Database backend used (mysql/sqlite/postgress): **postgres 14.1**
- Python version: 3.11
- `ormar` version: 0.12.0
- if applicable `fastapi` version 0.88 | closed | 2023-01-08T13:21:39Z | 2023-01-09T18:22:27Z | https://github.com/collerek/ormar/issues/980 | [
"bug"
] | AdamGold | 2 |
Gozargah/Marzban | api | 1,543 | Node Data Limit | I believe it would be useful to be able to specify a Node Data Limit, as some servers don't have a traffic limit of their own, so we must be careful that the server's traffic usage doesn't exceed the allowed limit. | closed | 2024-12-27T23:18:05Z | 2024-12-28T14:42:00Z | https://github.com/Gozargah/Marzban/issues/1543 | [] | iamtheted | 0 |
modin-project/modin | data-science | 7,465 | BUG: Series.rename_axis raises AttributeError | ### Modin version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the latest released version of Modin.
- [x] I have confirmed this bug exists on the main branch of Modin. (In order to do this you can follow [this guide](https://modin.readthedocs.io/en/stable/getting_started/installation.html#installing-from-the-github-main-branch).)
### Reproducible Example
```python
import modin.pandas as pd
s = pd.Series(["dog", "cat", "monkey"])
s.rename_axis("animal")
```
### Issue Description
`Series.rename_axis` should rename the index of the series, but currently raises due to a missing method.
Found in Snowpark pandas: https://github.com/snowflakedb/snowpark-python/pull/3040
### Expected Behavior
Does not raise and renames the index.
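For reference, this is how stock pandas behaves; a minimal check (assumes pandas is installed):

```python
import pandas as pd

s = pd.Series(["dog", "cat", "monkey"])
renamed = s.rename_axis("animal")

assert renamed.index.name == "animal"  # the index gains a name
assert s.index.name is None            # the original is untouched (not inplace)
```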
### Error Logs
<details>
```python-traceback
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/joshi/code/modin/modin/logging/logger_decorator.py", line 149, in run_and_log
result = obj(*args, **kwargs)
File "/Users/joshi/code/modin/modin/pandas/series.py", line 1701, in rename_axis
return super().rename_axis(
File "/Users/joshi/code/modin/modin/logging/logger_decorator.py", line 149, in run_and_log
result = obj(*args, **kwargs)
File "/Users/joshi/code/modin/modin/pandas/base.py", line 2565, in rename_axis
return self._set_axis_name(mapper, axis=axis, inplace=inplace)
File "/Users/joshi/code/modin/modin/pandas/series.py", line 358, in __getattr__
raise err
File "/Users/joshi/code/modin/modin/pandas/series.py", line 354, in __getattr__
return _SERIES_EXTENSIONS_.get(key, object.__getattribute__(self, key))
AttributeError: 'Series' object has no attribute '_set_axis_name'. Did you mean: '_get_axis_number'?
```
</details>
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : c114e7b0a38ff025c5f69ff752510a62ede6506f
python : 3.10.13.final.0
python-bits : 64
OS : Darwin
OS-release : 24.3.0
Version : Darwin Kernel Version 24.3.0: Thu Jan 2 20:24:23 PST 2025; root:xnu-11215.81.4~3/RELEASE_ARM64_T6020
machine : arm64
processor : arm
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
Modin dependencies
------------------
modin : 0.32.0+19.gc114e7b0.dirty
ray : 2.34.0
dask : 2024.8.1
distributed : 2024.8.1
pandas dependencies
-------------------
pandas : 2.2.2
numpy : 1.26.4
pytz : 2023.3.post1
dateutil : 2.8.2
setuptools : 68.0.0
pip : 23.3
Cython : None
pytest : 8.3.2
hypothesis : None
sphinx : 5.3.0
blosc : None
feather : None
xlsxwriter : None
lxml.etree : 5.3.0
html5lib : None
pymysql : None
psycopg2 : 2.9.9
jinja2 : 3.1.4
IPython : 8.17.2
pandas_datareader : None
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.12.2
bottleneck : None
dataframe-api-compat : None
fastparquet : 2024.5.0
fsspec : 2024.6.1
gcsfs : None
matplotlib : 3.9.2
numba : None
numexpr : 2.10.1
odfpy : None
openpyxl : 3.1.5
pandas_gbq : 0.23.1
pyarrow : 17.0.0
pyreadstat : None
python-calamine : None
pyxlsb : None
s3fs : 2024.6.1
scipy : 1.14.1
sqlalchemy : 2.0.32
tables : 3.10.1
tabulate : None
xarray : 2024.7.0
xlrd : 2.0.1
zstandard : None
tzdata : 2023.3
qtpy : None
pyqt5 : None
</details>
| closed | 2025-03-11T21:12:35Z | 2025-03-20T20:56:20Z | https://github.com/modin-project/modin/issues/7465 | [
"bug 🦗",
"P3"
] | sfc-gh-joshi | 0 |
microsoft/nni | data-science | 5,038 | Can I use NetAdapt with YOLOv5? | **Describe the issue**:
**Environment**:
- NNI version:
- Training service (local|remote|pai|aml|etc):
- Client OS:
- Server OS (for remote mode only):
- Python version:
- PyTorch/TensorFlow version:
- Is conda/virtualenv/venv used?:
- Is running in Docker?:
**Configuration**:
- Experiment config (remember to remove secrets!):
- Search space:
**Log message**:
- nnimanager.log:
- dispatcher.log:
- nnictl stdout and stderr:
<!--
Where can you find the log files:
LOG: https://github.com/microsoft/nni/blob/master/docs/en_US/Tutorial/HowToDebug.md#experiment-root-director
STDOUT/STDERR: https://nni.readthedocs.io/en/stable/reference/nnictl.html#nnictl-log-stdout
-->
**How to reproduce it?**: | open | 2022-08-01T10:26:41Z | 2022-08-04T01:49:59Z | https://github.com/microsoft/nni/issues/5038 | [] | mumu1431 | 1 |
samuelcolvin/watchfiles | asyncio | 330 | Expose `follow_links` | ### Description
Notifications for linked files seem to be deduplicated at the `notify` level, which leads to issues like https://github.com/Aider-AI/aider/issues/3315.
I believe this could be solved by exposing `notify`'s `follow_links` and then setting it to `False` in the client program.
### Example Code
```Python
```
### Watchfiles Output
```Text
```
### Operating System & Architecture
Linux-6.13.2-zen1-1-zen-x86_64-with-glibc2.41
#1 ZEN SMP PREEMPT_DYNAMIC Sat, 08 Feb 2025 18:54:38 +0000
### Environment
_No response_
### Python & Watchfiles Version
python: 3.12.8 (main, Jan 3 2025, 17:16:36) [GCC 14.2.1 20240910], watchfiles: 1.0.4
### Rust & Cargo Version
_No response_ | open | 2025-02-27T09:06:11Z | 2025-02-27T09:06:11Z | https://github.com/samuelcolvin/watchfiles/issues/330 | [
"bug"
] | bard | 0 |
sgl-project/sglang | pytorch | 4,410 | [Bug] support gemma3 | ### Describe the bug
I get this error:
```
ValueError: The checkpoint you are trying to load has model type `gemma3` but Transformers does not recognize this architecture. This could be because of an issue with the checkpoint, or because your version of Transformers is out of date.
```
After updating Transformers to `transformers-4.49.0`, I got:
```
File "/opt/my-venv/lib/python3.12/site-packages/transformers/models/auto/auto_factory.py", line 833, in register
raise ValueError(f"'{key}' is already used by a Transformers model.")
ValueError: '<class 'sglang.srt.configs.qwen2_5_vl_config.Qwen2_5_VLConfig'>' is already used by a Transformers model.
```
### Reproduction
```
python -m sglang.launch_server --model-path /opt/model/models--google--gemma-3-27b-it/snapshots/dfb98f29ff907e391ceed2be3834ca071ea260f1 --served-model-name gemma-3-27b-it --mem-fraction-static 0.7 --tp 2 --host 0.0.0.0 --port 8000
```
### Environment
Ubuntu, with 2 RTX A6000 GPUs connected by an NVLink bridge
```
sglang[all]>=0.4.4.post1
Driver Version: 570.124.04 CUDA Version: 12.8
``` | closed | 2025-03-14T04:46:15Z | 2025-03-18T19:01:16Z | https://github.com/sgl-project/sglang/issues/4410 | [] | Liusuqing | 4 |
OFA-Sys/Chinese-CLIP | nlp | 20 | `import cn_clip` fails with UnicodeDecodeError: 'gbk' codec can't decode byte 0x81 in position 1564: illegal multibyte sequence | import cn_clip.clip as clip
Exception raised: UnicodeDecodeError
```
Traceback (most recent call last):
  File "D:\develop\anaconda3\lib\runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "D:\develop\anaconda3\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "c:\Users\saizong\.vscode\extensions\ms-python.python-2022.4.1\pythonFiles\lib\python\debugpy\__main__.py", line 45, in <module>
    cli.main()
  File "c:\Users\saizong\.vscode\extensions\ms-python.python-2022.4.1\pythonFiles\lib\python\debugpy/..\debugpy\server\cli.py", line 444, in main
    run()
  File "c:\Users\saizong\.vscode\extensions\ms-python.python-2022.4.1\pythonFiles\lib\python\debugpy/..\debugpy\server\cli.py", line 285, in run_file
    runpy.run_path(target_as_str, run_name=compat.force_str("__main__"))
  File "D:\develop\anaconda3\lib\runpy.py", line 263, in run_path
    pkg_name=pkg_name, script_name=fname)
  File "D:\develop\anaconda3\lib\runpy.py", line 96, in _run_module_code
    mod_name, mod_spec, pkg_name, script_name)
  File "D:\develop\anaconda3\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "d:\develop\workspace\today_video\clipcn.py", line 5, in <module>
    import cn_clip.clip as clip
  File "D:\develop\anaconda3\Lib\site-packages\cn_clip\clip\__init__.py", line 3, in <module>
    _tokenizer = FullTokenizer()
  File "D:\develop\anaconda3\Lib\site-packages\cn_clip\clip\bert_tokenizer.py", line 170, in __init__
    self.vocab = load_vocab(vocab_file)
  File "D:\develop\anaconda3\Lib\site-packages\cn_clip\clip\bert_tokenizer.py", line 132, in load_vocab
    token = convert_to_unicode(reader.readline())
UnicodeDecodeError: 'gbk' codec can't decode byte 0x81 in position 1564: illegal multibyte sequence
```
How should I handle this? Thank you! | closed | 2022-11-28T08:07:28Z | 2022-12-13T11:37:28Z | https://github.com/OFA-Sys/Chinese-CLIP/issues/20 | [] | bigmarten | 13 |
huggingface/datasets | machine-learning | 6,935 | Support for pathlib.Path in datasets 2.19.0 | ### Describe the bug
After the recent update of `datasets`, Dataset.save_to_disk does not accept a pathlib.Path anymore. It was supported in 2.18.0 and previous versions. Is this intentional? Was it supported before only because of a Python duck-typing miracle?
### Steps to reproduce the bug
```
from datasets import Dataset
import pathlib
path = pathlib.Path("./my_out_path")
Dataset.from_dict(
    {"text": ["hello world"], "label": [777], "split": ["train"]}
).save_to_disk(path)
```
This results in an error when using datasets 2.19:
```
Traceback (most recent call last):
File "<stdin>", line 3, in <module>
File "/Users/jb/scratch/venv/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 1515, in save_to_disk
fs, _ = url_to_fs(dataset_path, **(storage_options or {}))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jb/scratch/venv/lib/python3.11/site-packages/fsspec/core.py", line 383, in url_to_fs
chain = _un_chain(url, kwargs)
^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jb/scratch/venv/lib/python3.11/site-packages/fsspec/core.py", line 323, in _un_chain
if "::" in path
^^^^^^^^^^^^
TypeError: argument of type 'PosixPath' is not iterable
```
Converting to str works, however.
```
Dataset.from_dict(
    {"text": ["hello world"], "label": [777], "split": ["train"]}
).save_to_disk(str(path))
```
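Until this is fixed upstream, callers can coerce path-like objects themselves; a small sketch using the standard `os.fspath` protocol:

```python
import os
import pathlib

def to_str_path(path):
    """Coerce any os.PathLike (e.g. pathlib.Path) or str to a plain string path."""
    return os.fspath(path)

print(to_str_path(pathlib.Path("./my_out_path")))  # 'my_out_path' (Path drops the './')
print(to_str_path("my_out_path"))                  # strings pass through unchanged
```

Libraries that want to accept both types typically apply exactly this coercion at the top of the function before handing the value to fsspec.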
### Expected behavior
My dataset gets saved to disk without an error.
### Environment info
aiohttp==3.9.5
aiosignal==1.3.1
attrs==23.2.0
certifi==2024.2.2
charset-normalizer==3.3.2
datasets==2.19.0
dill==0.3.8
filelock==3.14.0
frozenlist==1.4.1
fsspec==2024.3.1
huggingface-hub==0.23.2
idna==3.7
multidict==6.0.5
multiprocess==0.70.16
numpy==1.26.4
packaging==24.0
pandas==2.2.2
pyarrow==16.1.0
pyarrow-hotfix==0.6
python-dateutil==2.9.0.post0
pytz==2024.1
PyYAML==6.0.1
requests==2.32.3
six==1.16.0
tqdm==4.66.4
typing_extensions==4.12.0
tzdata==2024.1
urllib3==2.2.1
xxhash==3.4.1
yarl==1.9.4 | open | 2024-05-30T12:53:36Z | 2025-01-14T11:50:22Z | https://github.com/huggingface/datasets/issues/6935 | [] | lamyiowce | 2 |
ultralytics/ultralytics | pytorch | 19,425 | KeyError: 'ratio_pad' | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
When I started training on my self-built dataset, the following error was reported during the first validation. Please help me solve this problem; I can't figure out where it comes from. Thank you.
> Traceback (most recent call last):
File "D:\XXX\XXX\ultralytics-8.3.78\train.py", line 9, in <module>
results = model.train(data=r"./data.yaml",
File "D:\XXX\XXX\ultralytics-8.3.78\ultralytics\engine\model.py", line 810, in train
self.trainer.train()
File "D:\XXX\XXX\ultralytics-8.3.78\ultralytics\engine\trainer.py", line 208, in train
self._do_train(world_size)
File "D:\XXX\XXX\ultralytics-8.3.78\ultralytics\engine\trainer.py", line 433, in _do_train
self.metrics, self.fitness = self.validate()
File "D:\XXX\XXX\ultralytics-8.3.78\ultralytics\engine\trainer.py", line 607, in validate
metrics = self.validator(self)
File "D:\XXX\XXX\venv\lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
File "D:\XXX\XXX\ultralytics-8.3.78\ultralytics\engine\validator.py", line 193, in __call__
self.update_metrics(preds, batch)
File "D:\XXX\XXX\ultralytics-8.3.78\ultralytics\models\yolo\detect\val.py", line 139, in update_metrics
pbatch = self._prepare_batch(si, batch)
File "D:\XXX\XXX\ultralytics-8.3.78\ultralytics\models\yolo\detect\val.py", line 115, in _prepare_batch
ratio_pad = batch["ratio_pad"][si]
KeyError: 'ratio_pad'
### Additional
_No response_ | closed | 2025-02-25T16:42:35Z | 2025-02-27T06:48:48Z | https://github.com/ultralytics/ultralytics/issues/19425 | [
"question",
"detect"
] | ywWang-coder | 6 |
mlfoundations/open_clip | computer-vision | 827 | How to get hidden states from every layer of the ViT in the OpenCLIP vision encoder? | If you could solve my problem, thanks a lot! | open | 2024-02-24T07:03:10Z | 2024-04-12T19:50:25Z | https://github.com/mlfoundations/open_clip/issues/827 | [] | jzssz | 2 |
jupyter/docker-stacks | jupyter | 1,969 | [ENH] - /home/jovyan/work is confusing (documentation) | ### What docker image(s) is this feature applicable to?
scipy-notebook
### What change(s) are you proposing?
User Guide documentation suggests mounting a local directory (`$PWD`, etc.) to `/home/jovyan/work` to persist notebooks. An excellent suggestion, let's keep our data around.
But at no point in _Quick Start_, _Selecting an Image_, _Running a Container_, or _Common Features_ does the documentation instruct you that by default, the notebook will save to `/home/jovyan`.
### How does this affect the user?
The user will discover that their data didn't persist only upon running a new container.
Further, there's no immediately available troubleshooting topic or search query that will reveal that you didn't click "work" in the left sidebar of jupyter-server. The natural inclination is to click the large, friendly Python logo to get a Python notebook, since after all, that's why you're here.
But that notebook ends up in `/home/jovyan`.
### Anything else?
I was running:
```
docker run --rm --name=jupyter \
-p 8888:8888 -v $(pwd):/home/jovyan/work \
-e RESTARTABLE=yes \
jupyter/scipy-notebook:python-3.11.4 "$@"
```
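A variant of the command above that mounts the working directory over `/home/jovyan` itself, so notebooks saved to the default location persist (a hedged sketch: it also exposes jovyan's dotfiles on the host):

```shell
# Same invocation as above, but mounting onto /home/jovyan itself so that
# notebooks saved to the default location survive container restarts.
docker run --rm --name=jupyter \
  -p 8888:8888 -v "$(pwd)":/home/jovyan \
  -e RESTARTABLE=yes \
  jupyter/scipy-notebook:python-3.11.4 "$@"
```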
I'm using jupyter for instructional purposes. My resolution is to mount `$PWD` to `/home/jovyan` (without the `work` folder) and accept that I end up with extra files on the host from jovyan's `$HOME` (which I presume could be an issue when selecting another image at a later date). Not a problem, since I'm primarily concerned with making sure I don't lose any .ipynb files. | closed | 2023-08-17T13:55:59Z | 2023-08-18T17:16:50Z | https://github.com/jupyter/docker-stacks/issues/1969 | [
"type:Enhancement"
] | 4kbyte | 2 |
simple-login/app | flask | 2,188 | Wrong unsubscribe link format? | To me it looks like that the way the original unsubscribe links are encoded does not match the way simple-login would handle them.
In `app/handler/unsubscribe_encoder.py`, line 100:
`return f"{config.URL}/dashboard/unsubscribe/encoded?data={encoded}"`
In `app/dashboard/views/unsubscribe.py`, line 76:
`@dashboard_bp.route("/unsubscribe/encoded/<encoded_request>", methods=["GET"])`
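The mismatch is easy to check with a small helper that rewrites the generated query-style link into the path style the route expects (the hostname here is hypothetical):

```python
from urllib.parse import urlsplit, parse_qs

def normalize_unsubscribe_url(url):
    """Rewrite .../unsubscribe/encoded?data=DATA into .../unsubscribe/encoded/DATA."""
    parts = urlsplit(url)
    data = parse_qs(parts.query).get("data")
    if data and parts.path.endswith("/unsubscribe/encoded"):
        return f"{parts.scheme}://{parts.netloc}{parts.path}/{data[0]}"
    return url

print(normalize_unsubscribe_url(
    "https://sl.example.com/dashboard/unsubscribe/encoded?data=DATA"
))  # https://sl.example.com/dashboard/unsubscribe/encoded/DATA
```

The real fix would of course be inside the app (emit the path form in `unsubscribe_encoder.py`, or add a route accepting the `data` query parameter); this just demonstrates the two formats.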
I.e. the links did not work for me, unless I changed the URL format from `/dashboard/unsubscribe/encoded?data=DATA` to `/dashboard/unsubscribe/encoded/DATA`. | open | 2024-08-19T06:02:53Z | 2024-12-21T18:55:35Z | https://github.com/simple-login/app/issues/2188 | [] | a-bali | 1 |
iMerica/dj-rest-auth | rest-api | 333 | RegisterView complete_signup receives HttpReuest instead of a Request | I needed to access `Request.data` inside the `AccountAdapter` and it worked until I tested it with `raw` JSON body.
By examing `perform_create()` at [RegisterView](https://github.com/iMerica/dj-rest-auth/blob/b72a55f86b2667e0fa10070485967f5e42588e3b/dj_rest_auth/registration/views.py#L76)
```python
def perform_create(self, serializer):
user = serializer.save(self.request)
if allauth_settings.EMAIL_VERIFICATION != \
allauth_settings.EmailVerificationMethod.MANDATORY:
if getattr(settings, 'REST_USE_JWT', False):
self.access_token, self.refresh_token = jwt_encode(user)
else:
create_token(self.token_model, user, serializer)
complete_signup(
self.request._request, user,
allauth_settings.EMAIL_VERIFICATION,
None,
)
return user
```
I noticed that `complete_signup` receives `self.request._request`, which is the Django `HttpRequest`.
My code accessed request data as follows:
```python
def send_confirmation_mail(self, request, email_confirmation, signup): # noqa: D102
request.POST['value_passed_to_email_template']
```
Everything was fine; even tests using `rest_framework.test.APIClient` passed.
Until I tried to POST raw json body.
`http POST localhost:8000/auth/registration/ value_passed_to_email_template=a_value`
This caused `KeyError` because `request.POST` was empty.
It took me some time to notice that the `request` inside `send_confirmation_mail` is `WSGIRequest (django HttpRequest)` and not `rest_framework.request.Request`.
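This matches how Django populates request data: `HttpRequest.POST` is only filled for form-encoded (and multipart) bodies, so a raw JSON body has to be read from `request.body` instead. A minimal content-type-aware sketch (names hypothetical, stdlib only):

```python
import json
from urllib.parse import parse_qs

def get_field(content_type: str, body: bytes, key: str):
    """Return a field from either a form-encoded or a JSON request body."""
    if content_type.startswith("application/json"):
        # request.POST stays empty for JSON; the payload lives in request.body
        return json.loads(body.decode())[key]
    # x-www-form-urlencoded: roughly what Django parses into request.POST
    return parse_qs(body.decode())[key][0]

form_body = b"value_passed_to_email_template=a_value"
json_body = b'{"value_passed_to_email_template": "a_value"}'

print(get_field("application/x-www-form-urlencoded", form_body,
                "value_passed_to_email_template"))  # a_value
print(get_field("application/json", json_body,
                "value_passed_to_email_template"))  # a_value
```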
Then I did some tests.
1. POST body as `x-www-form-urlencoded` -> PASS
2. POST body as `form-data` -> PASS
3. POST body as `raw` -> FAIL | open | 2021-11-25T10:58:16Z | 2021-11-25T10:58:16Z | https://github.com/iMerica/dj-rest-auth/issues/333 | [] | 1oglop1 | 0 |
exaloop/codon | numpy | 195 | compile on mac failed when linking libcodonc.dylib | Mac OS: Catalina
version: 10.15
llvm: [clang+llvm-15.0.7-x86_64-apple-darwin21.0.tar.xz](https://github.com/llvm/llvm-project/releases/download/llvmorg-15.0.7/clang+llvm-15.0.7-x86_64-apple-darwin21.0.tar.xz)
cmake version 3.24.3
codon: v0.15.5
```
[build] [ 95%] Linking CXX shared library libcodonc.dylib
[build] Undefined symbols for architecture x86_64:
[build] "typeinfo for llvm::ErrorInfoBase", referenced from:
[build] typeinfo for llvm::ErrorInfo<codon::error::ParserErrorInfo, llvm::ErrorInfoBase> in compiler.cpp.o
[build] typeinfo for llvm::ErrorInfo<llvm::ErrorList, llvm::ErrorInfoBase> in jit.cpp.o
[build] typeinfo for llvm::ErrorInfo<codon::error::ParserErrorInfo, llvm::ErrorInfoBase> in jit.cpp.o
[build] typeinfo for llvm::ErrorInfo<codon::error::RuntimeErrorInfo, llvm::ErrorInfoBase> in jit.cpp.o
[build] typeinfo for llvm::ErrorInfo<llvm::ErrorList, llvm::ErrorInfoBase> in memory_manager.cpp.o
[build] typeinfo for llvm::ErrorInfo<llvm::jitlink::JITLinkError, llvm::ErrorInfoBase> in memory_manager.cpp.o
[build] typeinfo for llvm::ErrorInfo<codon::error::PluginErrorInfo, llvm::ErrorInfoBase> in plugins.cpp.o
[build] ...
[build] "typeinfo for llvm::JITEventListener", referenced from:
[build] typeinfo for codon::DebugListener in debug_listener.cpp.o
[build] "typeinfo for llvm::SectionMemoryManager", referenced from:
[build] typeinfo for codon::BoehmGCMemoryManager in memory_manager.cpp.o
[build] "typeinfo for llvm::cl::GenericOptionValue", referenced from:
[build] typeinfo for llvm::cl::OptionValueCopy<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > in gpu.cpp.o
[build] "typeinfo for llvm::orc::ObjectLinkingLayer::Plugin", referenced from:
[build] typeinfo for codon::DebugPlugin in debug_listener.cpp.o
[build] "typeinfo for llvm::detail::format_adapter", referenced from:
[build] typeinfo for llvm::detail::provider_format_adapter<unsigned long long> in memory_manager.cpp.o
[build] "typeinfo for llvm::jitlink::JITLinkMemoryManager::InFlightAlloc", referenced from:
[build] typeinfo for codon::BoehmGCJITLinkMemoryManager::IPInFlightAlloc in memory_manager.cpp.o
[build] "typeinfo for llvm::jitlink::JITLinkMemoryManager", referenced from:
[build] typeinfo for codon::BoehmGCJITLinkMemoryManager in memory_manager.cpp.o
[build] ld: symbol(s) not found for architecture x86_64
[build] clang-15: error: linker command failed with exit code 1 (use -v to see invocation)
[build] make[2]: *** [libcodonc.dylib] Error 1
[build] make[1]: *** [CMakeFiles/codonc.dir/all] Error 2
[build] make: *** [all] Error 2
```
```
libcodonc.dylib dependency list from build/CMakeFiles/codonc.dir/build.make:
libcodonc.dylib: CMakeFiles/codonc.dir/codon/compiler/compiler.cpp.o
libcodonc.dylib: CMakeFiles/codonc.dir/codon/compiler/debug_listener.cpp.o
libcodonc.dylib: CMakeFiles/codonc.dir/codon/compiler/engine.cpp.o
libcodonc.dylib: CMakeFiles/codonc.dir/codon/compiler/error.cpp.o
libcodonc.dylib: CMakeFiles/codonc.dir/codon/compiler/jit.cpp.o
libcodonc.dylib: CMakeFiles/codonc.dir/codon/compiler/memory_manager.cpp.o
libcodonc.dylib: CMakeFiles/codonc.dir/codon/dsl/plugins.cpp.o
libcodonc.dylib: CMakeFiles/codonc.dir/codon/parser/ast/expr.cpp.o
libcodonc.dylib: CMakeFiles/codonc.dir/codon/parser/ast/stmt.cpp.o
libcodonc.dylib: CMakeFiles/codonc.dir/codon/parser/ast/types/type.cpp.o
libcodonc.dylib: CMakeFiles/codonc.dir/codon/parser/ast/types/link.cpp.o
libcodonc.dylib: CMakeFiles/codonc.dir/codon/parser/ast/types/class.cpp.o
libcodonc.dylib: CMakeFiles/codonc.dir/codon/parser/ast/types/function.cpp.o
libcodonc.dylib: CMakeFiles/codonc.dir/codon/parser/ast/types/union.cpp.o
libcodonc.dylib: CMakeFiles/codonc.dir/codon/parser/ast/types/static.cpp.o
libcodonc.dylib: CMakeFiles/codonc.dir/codon/parser/ast/types/traits.cpp.o
libcodonc.dylib: CMakeFiles/codonc.dir/codon/parser/cache.cpp.o
libcodonc.dylib: CMakeFiles/codonc.dir/codon/parser/common.cpp.o
libcodonc.dylib: CMakeFiles/codonc.dir/codon/parser/peg/peg.cpp.o
libcodonc.dylib: CMakeFiles/codonc.dir/codon/parser/visitors/doc/doc.cpp.o
libcodonc.dylib: CMakeFiles/codonc.dir/codon/parser/visitors/format/format.cpp.o
libcodonc.dylib: CMakeFiles/codonc.dir/codon/parser/visitors/simplify/simplify.cpp.o
libcodonc.dylib: CMakeFiles/codonc.dir/codon/parser/visitors/simplify/ctx.cpp.o
libcodonc.dylib: CMakeFiles/codonc.dir/codon/parser/visitors/simplify/assign.cpp.o
libcodonc.dylib: CMakeFiles/codonc.dir/codon/parser/visitors/simplify/basic.cpp.o
libcodonc.dylib: CMakeFiles/codonc.dir/codon/parser/visitors/simplify/call.cpp.o
libcodonc.dylib: CMakeFiles/codonc.dir/codon/parser/visitors/simplify/class.cpp.o
libcodonc.dylib: CMakeFiles/codonc.dir/codon/parser/visitors/simplify/collections.cpp.o
libcodonc.dylib: CMakeFiles/codonc.dir/codon/parser/visitors/simplify/cond.cpp.o
libcodonc.dylib: CMakeFiles/codonc.dir/codon/parser/visitors/simplify/function.cpp.o
libcodonc.dylib: CMakeFiles/codonc.dir/codon/parser/visitors/simplify/access.cpp.o
libcodonc.dylib: CMakeFiles/codonc.dir/codon/parser/visitors/simplify/import.cpp.o
libcodonc.dylib: CMakeFiles/codonc.dir/codon/parser/visitors/simplify/loops.cpp.o
libcodonc.dylib: CMakeFiles/codonc.dir/codon/parser/visitors/simplify/op.cpp.o
libcodonc.dylib: CMakeFiles/codonc.dir/codon/parser/visitors/simplify/error.cpp.o
libcodonc.dylib: CMakeFiles/codonc.dir/codon/parser/visitors/translate/translate.cpp.o
libcodonc.dylib: CMakeFiles/codonc.dir/codon/parser/visitors/translate/translate_ctx.cpp.o
libcodonc.dylib: CMakeFiles/codonc.dir/codon/parser/visitors/typecheck/typecheck.cpp.o
libcodonc.dylib: CMakeFiles/codonc.dir/codon/parser/visitors/typecheck/infer.cpp.o
libcodonc.dylib: CMakeFiles/codonc.dir/codon/parser/visitors/typecheck/ctx.cpp.o
libcodonc.dylib: CMakeFiles/codonc.dir/codon/parser/visitors/typecheck/assign.cpp.o
libcodonc.dylib: CMakeFiles/codonc.dir/codon/parser/visitors/typecheck/basic.cpp.o
libcodonc.dylib: CMakeFiles/codonc.dir/codon/parser/visitors/typecheck/call.cpp.o
libcodonc.dylib: CMakeFiles/codonc.dir/codon/parser/visitors/typecheck/class.cpp.o
libcodonc.dylib: CMakeFiles/codonc.dir/codon/parser/visitors/typecheck/collections.cpp.o
libcodonc.dylib: CMakeFiles/codonc.dir/codon/parser/visitors/typecheck/cond.cpp.o
libcodonc.dylib: CMakeFiles/codonc.dir/codon/parser/visitors/typecheck/function.cpp.o
libcodonc.dylib: CMakeFiles/codonc.dir/codon/parser/visitors/typecheck/access.cpp.o
libcodonc.dylib: CMakeFiles/codonc.dir/codon/parser/visitors/typecheck/loops.cpp.o
libcodonc.dylib: CMakeFiles/codonc.dir/codon/parser/visitors/typecheck/op.cpp.o
libcodonc.dylib: CMakeFiles/codonc.dir/codon/parser/visitors/typecheck/error.cpp.o
libcodonc.dylib: CMakeFiles/codonc.dir/codon/parser/visitors/visitor.cpp.o
libcodonc.dylib: CMakeFiles/codonc.dir/codon/cir/attribute.cpp.o
libcodonc.dylib: CMakeFiles/codonc.dir/codon/cir/analyze/analysis.cpp.o
libcodonc.dylib: CMakeFiles/codonc.dir/codon/cir/analyze/dataflow/capture.cpp.o
libcodonc.dylib: CMakeFiles/codonc.dir/codon/cir/analyze/dataflow/cfg.cpp.o
libcodonc.dylib: CMakeFiles/codonc.dir/codon/cir/analyze/dataflow/dominator.cpp.o
libcodonc.dylib: CMakeFiles/codonc.dir/codon/cir/analyze/dataflow/reaching.cpp.o
libcodonc.dylib: CMakeFiles/codonc.dir/codon/cir/analyze/module/global_vars.cpp.o
libcodonc.dylib: CMakeFiles/codonc.dir/codon/cir/analyze/module/side_effect.cpp.o
libcodonc.dylib: CMakeFiles/codonc.dir/codon/cir/base.cpp.o
libcodonc.dylib: CMakeFiles/codonc.dir/codon/cir/const.cpp.o
libcodonc.dylib: CMakeFiles/codonc.dir/codon/cir/dsl/nodes.cpp.o
libcodonc.dylib: CMakeFiles/codonc.dir/codon/cir/flow.cpp.o
libcodonc.dylib: CMakeFiles/codonc.dir/codon/cir/func.cpp.o
libcodonc.dylib: CMakeFiles/codonc.dir/codon/cir/instr.cpp.o
libcodonc.dylib: CMakeFiles/codonc.dir/codon/cir/llvm/gpu.cpp.o
libcodonc.dylib: CMakeFiles/codonc.dir/codon/cir/llvm/llvisitor.cpp.o
libcodonc.dylib: CMakeFiles/codonc.dir/codon/cir/llvm/optimize.cpp.o
libcodonc.dylib: CMakeFiles/codonc.dir/codon/cir/module.cpp.o
libcodonc.dylib: CMakeFiles/codonc.dir/codon/cir/transform/cleanup/canonical.cpp.o
libcodonc.dylib: CMakeFiles/codonc.dir/codon/cir/transform/cleanup/dead_code.cpp.o
libcodonc.dylib: CMakeFiles/codonc.dir/codon/cir/transform/cleanup/global_demote.cpp.o
libcodonc.dylib: CMakeFiles/codonc.dir/codon/cir/transform/cleanup/replacer.cpp.o
libcodonc.dylib: CMakeFiles/codonc.dir/codon/cir/transform/folding/const_fold.cpp.o
libcodonc.dylib: CMakeFiles/codonc.dir/codon/cir/transform/folding/const_prop.cpp.o
libcodonc.dylib: CMakeFiles/codonc.dir/codon/cir/transform/folding/folding.cpp.o
libcodonc.dylib: CMakeFiles/codonc.dir/codon/cir/transform/lowering/imperative.cpp.o
libcodonc.dylib: CMakeFiles/codonc.dir/codon/cir/transform/lowering/pipeline.cpp.o
libcodonc.dylib: CMakeFiles/codonc.dir/codon/cir/transform/manager.cpp.o
libcodonc.dylib: CMakeFiles/codonc.dir/codon/cir/transform/parallel/openmp.cpp.o
libcodonc.dylib: CMakeFiles/codonc.dir/codon/cir/transform/parallel/schedule.cpp.o
libcodonc.dylib: CMakeFiles/codonc.dir/codon/cir/transform/pass.cpp.o
libcodonc.dylib: CMakeFiles/codonc.dir/codon/cir/transform/pythonic/dict.cpp.o
libcodonc.dylib: CMakeFiles/codonc.dir/codon/cir/transform/pythonic/generator.cpp.o
libcodonc.dylib: CMakeFiles/codonc.dir/codon/cir/transform/pythonic/io.cpp.o
libcodonc.dylib: CMakeFiles/codonc.dir/codon/cir/transform/pythonic/list.cpp.o
libcodonc.dylib: CMakeFiles/codonc.dir/codon/cir/transform/pythonic/str.cpp.o
libcodonc.dylib: CMakeFiles/codonc.dir/codon/cir/types/types.cpp.o
libcodonc.dylib: CMakeFiles/codonc.dir/codon/cir/util/cloning.cpp.o
libcodonc.dylib: CMakeFiles/codonc.dir/codon/cir/util/format.cpp.o
libcodonc.dylib: CMakeFiles/codonc.dir/codon/cir/util/inlining.cpp.o
libcodonc.dylib: CMakeFiles/codonc.dir/codon/cir/util/irtools.cpp.o
libcodonc.dylib: CMakeFiles/codonc.dir/codon/cir/util/matching.cpp.o
libcodonc.dylib: CMakeFiles/codonc.dir/codon/cir/util/outlining.cpp.o
libcodonc.dylib: CMakeFiles/codonc.dir/codon/cir/util/side_effect.cpp.o
libcodonc.dylib: CMakeFiles/codonc.dir/codon/cir/util/visitor.cpp.o
libcodonc.dylib: CMakeFiles/codonc.dir/codon/cir/value.cpp.o
libcodonc.dylib: CMakeFiles/codonc.dir/codon/cir/var.cpp.o
libcodonc.dylib: CMakeFiles/codonc.dir/codon/util/common.cpp.o
libcodonc.dylib: CMakeFiles/codonc.dir/extra/jupyter/jupyter.cpp.o
libcodonc.dylib: CMakeFiles/codonc.dir/codon_rules.cpp.o
libcodonc.dylib: CMakeFiles/codonc.dir/omp_rules.cpp.o
libcodonc.dylib: CMakeFiles/codonc.dir/build.make
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMAArch64AsmParser.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMAMDGPUAsmParser.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMARMAsmParser.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMAVRAsmParser.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMBPFAsmParser.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMHexagonAsmParser.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMLanaiAsmParser.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMMipsAsmParser.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMMSP430AsmParser.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMPowerPCAsmParser.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMRISCVAsmParser.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMSparcAsmParser.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMSystemZAsmParser.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMVEAsmParser.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMWebAssemblyAsmParser.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMX86AsmParser.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMAArch64CodeGen.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMAMDGPUCodeGen.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMARMCodeGen.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMAVRCodeGen.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMBPFCodeGen.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMHexagonCodeGen.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMLanaiCodeGen.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMMipsCodeGen.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMMSP430CodeGen.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMNVPTXCodeGen.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMPowerPCCodeGen.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMRISCVCodeGen.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMSparcCodeGen.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMSystemZCodeGen.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMVECodeGen.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMWebAssemblyCodeGen.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMX86CodeGen.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMXCoreCodeGen.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMAArch64Desc.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMAMDGPUDesc.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMARMDesc.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMAVRDesc.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMBPFDesc.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMHexagonDesc.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMLanaiDesc.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMMipsDesc.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMMSP430Desc.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMNVPTXDesc.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMPowerPCDesc.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMRISCVDesc.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMSparcDesc.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMSystemZDesc.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMVEDesc.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMWebAssemblyDesc.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMX86Desc.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMXCoreDesc.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMAArch64Info.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMAMDGPUInfo.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMARMInfo.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMAVRInfo.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMBPFInfo.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMHexagonInfo.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMLanaiInfo.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMMipsInfo.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMMSP430Info.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMNVPTXInfo.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMPowerPCInfo.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMRISCVInfo.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMSparcInfo.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMSystemZInfo.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMVEInfo.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMWebAssemblyInfo.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMX86Info.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMXCoreInfo.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMAggressiveInstCombine.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMAnalysis.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMAsmParser.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMBitWriter.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMCodeGen.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMCore.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMExtensions.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMipo.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMIRReader.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMInstCombine.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMInstrumentation.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMMC.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMMCJIT.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMObjCARCOpts.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMOrcJIT.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMRemarks.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMScalarOpts.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMSupport.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMSymbolize.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMTarget.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMTransformUtils.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMVectorize.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMPasses.a
libcodonc.dylib: _deps/fmt-build/libfmtd.a
libcodonc.dylib: libcodonrt.dylib
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMAArch64Utils.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMAMDGPUUtils.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMMIRParser.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMARMUtils.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMHexagonAsmParser.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMHexagonDesc.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMHexagonInfo.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMLanaiAsmParser.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMLanaiDesc.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMLanaiInfo.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMWebAssemblyUtils.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMGlobalISel.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMCFGuard.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMAsmPrinter.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMSelectionDAG.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMCodeGen.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libPolly.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libPollyISL.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMPasses.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMObjCARCOpts.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMCoroutines.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMipo.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMBitWriter.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMIRReader.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMAsmParser.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMInstrumentation.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMVectorize.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMFrontendOpenMP.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMLinker.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMScalarOpts.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMAggressiveInstCombine.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMInstCombine.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMTransformUtils.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMMCDisassembler.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMExecutionEngine.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMTarget.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMAnalysis.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMProfileData.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMSymbolize.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMDebugInfoPDB.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMDebugInfoMSF.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMDebugInfoDWARF.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMRuntimeDyld.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMJITLink.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMObject.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMMCParser.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMMC.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMDebugInfoCodeView.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMBitReader.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMCore.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMRemarks.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMBitstreamReader.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMTextAPI.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMBinaryFormat.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMOrcTargetProcess.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMOrcShared.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMSupport.a
libcodonc.dylib: /Users/robot/Projects/llvm/lib/libLLVMDemangle.a
libcodonc.dylib: /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/lib/libz.tbd
libcodonc.dylib: /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/lib/libcurses.tbd
libcodonc.dylib: CMakeFiles/codonc.dir/link.txt
@$(CMAKE_COMMAND) -E cmake_echo_color --switch=$(COLOR) --green --bold --progress-dir=/Users/robot/GitHub/codon/build/CMakeFiles --progress-num=$(CMAKE_PROGRESS_106) "Linking CXX shared library libcodonc.dylib"
$(CMAKE_COMMAND) -E cmake_link_script CMakeFiles/codonc.dir/link.txt --verbose=$(VERBOSE)
``` | closed | 2023-02-12T05:17:33Z | 2024-11-08T18:43:29Z | https://github.com/exaloop/codon/issues/195 | [] | dipadipa | 2 |
plotly/dash | data-visualization | 2,691 | [Feature Request] Validate Arguments to components | If I break my Dash app by supplying, for example, the wrong type for `marks` when instantiating a `dcc.Slider`, the error message is not useful: "Error loading layout" is displayed in the browser, and nothing at all is logged on the back end.
What I'd hope for, and to some extent expect, is a helpful error message pointing to the issue, something like:
```python
Slider(marks={year: year for year in df["Year"].unique()}, ...)
is an invalid type, as what's required is dict[str, str | dict].
```
Or I guess the actual values which were fed into `Slider`, I think you get what I mean.
**Describe alternatives you've considered**
The only alternative is to revert (hopefully) your most recent changes which broke the app; otherwise, to go bug-hunting.
**Additional context**
Possibly something like `pydantic` could be useful here, and if type hints were added to the components, either directly or via Pydantic models, there might not be a lot of additional work to implement such a validation feature. I haven't dug too deeply into the React-side props validation and how you've implemented "React Component" -> "Python Class", but maybe it would make sense to tackle it from that end and generate the Python component classes from that.
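As a rough illustration of what such validation could look like (plain Python rather than pydantic, and the accepted types are only an assumption based on the error message above), a prop validator could catch the bad `marks` dict before it ever reaches the front end:

```python
def validate_marks(marks):
    """Sketch of a dcc.Slider-style check: marks should map str/int keys
    to str or dict values. Not the real Dash API, just an illustration."""
    if not isinstance(marks, dict):
        raise TypeError(f"marks must be a dict, got {type(marks).__name__}")
    for key, value in marks.items():
        if not isinstance(key, (str, int)):
            raise TypeError(
                f"marks key {key!r} has invalid type {type(key).__name__}; "
                "keys must be str (or int coercible to str)"
            )
        if not isinstance(value, (str, dict)):
            raise TypeError(
                f"marks[{key!r}] has invalid type {type(value).__name__}; "
                "values must be str or dict"
            )

validate_marks({"2002": "2002", "2007": {"label": "2007"}})  # OK

try:
    # e.g. int values, as in the marks={year: year ...} example above
    validate_marks({2002: 2002})
except TypeError as exc:
    print(exc)
```

The point is that the `TypeError` message names the offending key and value, instead of the opaque "Error loading layout" in the browser.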
I'm happy to implement this, by the way!
Cheers,
Zev | closed | 2023-11-13T12:08:23Z | 2023-12-16T12:23:50Z | https://github.com/plotly/dash/issues/2691 | [] | zevaverbach | 7 |
scikit-learn-contrib/metric-learn | scikit-learn | 185 | [DOC] Calibration example | It would be nice to have an example in the doc which demonstrates how to calibrate the pairwise metric learners with respect to several scores as introduced in #168, as well as the use of CalibratedClassifierCV (once this is properly tested, see #173) | open | 2019-03-14T16:11:44Z | 2021-04-22T21:25:32Z | https://github.com/scikit-learn-contrib/metric-learn/issues/185 | [
"documentation"
] | bellet | 0 |
ets-labs/python-dependency-injector | flask | 335 | Unable to inject dependencies in Django Graphene project | Hi, I have tried to setup dependecy-injector in order to use in a project with Django and Graphql using [Graphene](https://graphene-python.org/). but I am get `Provide' object has no attribute 'execute_strategy`, I follow these steps [https://python-dependency-injector.ets-labs.org/examples/django.html](url) for Django setup, however the dependency doesn't work... I have somenthing like:

Where `resolve_get_dashboard_data` is a tipycal Graphql resolver | closed | 2020-12-14T15:04:10Z | 2020-12-14T16:48:33Z | https://github.com/ets-labs/python-dependency-injector/issues/335 | [
"question"
] | juanmarin96 | 2 |
InstaPy/InstaPy | automation | 6,530 | like_by_tags not working! pls suggest if any xpath is changed |
```
Traceback (most recent call last):
  File "C:/Scarper/insta2.py", line 62, in <module>
    session.like_by_tags(smart_hashtags, amount=random.randint(5, 6))
  File "C:\Users\CJ\miniconda3\envs\Scarper\lib\site-packages\instapy-0.6.16-py3.7.egg\instapy\instapy.py", line 1995, in like_by_tags
    self.browser, self.max_likes, self.min_likes, self.logger
  File "C:\Users\CJ\miniconda3\envs\Scarper\lib\site-packages\instapy-0.6.16-py3.7.egg\instapy\like_util.py", line 933, in verify_liking
    likes_count = post_page["items"][0]["like_count"]
KeyError: 'items'
```
---------------------------------------------------------
**like_util.py**, line 933:

```python
def verify_liking(browser, maximum, minimum, logger):
    """Get the amount of existing existing likes and compare it against maximum
    & minimum values defined by user"""
    post_page = get_additional_data(browser)
    # DEF: 22jan
    print(post_page)
    likes_count = post_page["items"][0]["like_count"]
    if not likes_count:
        likes_count = 0
```
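This won't fix the underlying change in what Instagram returns, but as a stopgap a defensive lookup avoids the `KeyError` when `items` is missing (assuming the same `post_page` payload shape as above):

```python
def extract_like_count(post_page: dict) -> int:
    """Return like_count from the additional-data payload, defaulting to 0
    when 'items' or 'like_count' is absent."""
    items = post_page.get("items") or [{}]
    return items[0].get("like_count") or 0

print(extract_like_count({"items": [{"like_count": 42}]}))  # 42
print(extract_like_count({}))                               # 0
```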
| open | 2022-03-02T08:57:24Z | 2022-03-02T08:58:05Z | https://github.com/InstaPy/InstaPy/issues/6530 | [] | charan89 | 0 |
s3rius/FastAPI-template | graphql | 4 | Change aioschedule to aioscheduler | Currently, in schedule.py, I use the Aioschedule lib, but there is another high-performance lib called Aioscheduler.
We need to change aioschedule to the new [scheduler lib](https://pypi.org/project/aioscheduler/). | closed | 2020-11-15T12:51:38Z | 2021-08-30T01:25:07Z | https://github.com/s3rius/FastAPI-template/issues/4 | [] | s3rius | 1 |
Yorko/mlcourse.ai | seaborn | 758 | Proofread topic 7 | - Fix issues
- Fix typos
- Correct the translation where needed
- Add images where necessary | closed | 2023-10-24T07:41:55Z | 2024-08-25T08:10:28Z | https://github.com/Yorko/mlcourse.ai/issues/758 | [
"enhancement",
"articles"
] | Yorko | 2 |
pennersr/django-allauth | django | 4,070 | ModuleNotFoundError: No module named 'allauth.socialaccount.providers.linkedin' | Seems like allauth.socialaccount.providers.linkedin is not yet implemented; only linkedin_oauth2 is implemented, even though the documentation says "linkedin_oauth2" is now deprecated.
Current latest version of django-allauth: **64.1.0**
| closed | 2024-08-24T16:32:00Z | 2024-08-24T18:50:27Z | https://github.com/pennersr/django-allauth/issues/4070 | [] | takuonline | 1 |
tableau/server-client-python | rest-api | 1,520 | Retry for request in use_server_version | **Describe the bug**
We are indexing data from multiple Tableau instances as a service provider integrating with Tableau.
We observed flaky requests on some instances:
```
2024-11-01T08:42:57.565777616Z stderr F INFO 2024-11-01 08:42:57,565 server 14 140490986871680 Could not get version info from server: <class 'tableauserverclient.server.endpoint.exceptions.InternalServerError'>
2024-11-01T08:42:57.565846529Z stderr F
2024-11-01T08:42:57.565851784Z stderr F Internal error 504 at https://XXXX/selectstar/api/2.4/serverInfo
2024-11-01T08:42:57.565860665Z stderr F b'<html>\r\n<head><title>504 Gateway Time-out</title></head>\r\n<body>\r\n<center><h1>504 Gateway Time-out</h1></center>\r\n<hr><center>nginx/1.25.3</center>\r\n</body>\r\n</html>\r\n'
2024-11-01T08:42:57.566048361Z stderr F INFO 2024-11-01 08:42:57,565 server 14 140490986871680 versions: None, 2.4
```
We observed that the request in `use_server_version` does not apply retry with exponential backoff, which is a good practice in such scenarios. There is no easy way to implement it, as this is an implicit call in `__init__`.
**Versions**
Details of your environment, including:
- Tableau Server version (or note if using Tableau Online)
- Python version
- TSC library version
**To Reproduce**
Steps to reproduce the behavior. Please include a code snippet where possible.
The issue is transient.
1/ Initialize the SDK:
```python
self._server = TSC.Server(
base_url,
use_server_version=True,
http_options={"timeout": self.REQUEST_TIMEOUT},
)
```
2/ Ensure that network connectivity to Tableau is unreliable and may drop connection.
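Since the version probe happens inside `Server.__init__`, the only workaround today is to wrap construction itself. A generic retry-with-exponential-backoff sketch (pure stdlib; retry counts and delays are arbitrary, and the `TSC.Server` usage line is only indicative):

```python
import time

def retry_with_backoff(fn, retries=4, base_delay=0.1, retriable=(Exception,)):
    """Call fn(), retrying transient failures with exponential backoff."""
    for attempt in range(retries):
        try:
            return fn()
        except retriable:
            if attempt == retries - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))

# Hypothetical usage: wrap the whole constructor, since the serverInfo
# request is implicit in __init__:
# server = retry_with_backoff(lambda: TSC.Server(base_url, use_server_version=True))

# Demo with a fake endpoint that fails twice before succeeding:
calls = {"n": 0}
def flaky_server_info():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("504 Gateway Time-out")
    return "3.24"

print(retry_with_backoff(flaky_server_info, base_delay=0.01))  # 3.24
```

Having this built into the SDK (or at least a hook to inject it) would avoid every integrator re-implementing the wrapper.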
**Results**
What are the results or error messages received?
See exception above. | open | 2024-11-01T19:57:41Z | 2025-01-03T23:54:02Z | https://github.com/tableau/server-client-python/issues/1520 | [
"Design Proposal",
"docs"
] | ad-m-ss | 2 |
donnemartin/data-science-ipython-notebooks | numpy | 33 | "Error 503 No healthy backends" | Hello,
When I try to open the hyperlinks which should direct me to the correct ipython notebook, it returns me "Error 503 No healthy backends"
"No healthy backends
Guru Mediation:
Details: cache-fra1236-FRA 1462794681 3780339426
Varnish cache server"
<img width="833" alt="capture" src="https://cloud.githubusercontent.com/assets/14320144/15112809/3e3a020c-15f9-11e6-9440-bfed7debac08.PNG">
<img width="350" alt="capture2" src="https://cloud.githubusercontent.com/assets/14320144/15112808/3e391b62-15f9-11e6-86b0-cf57a5d2e16e.PNG">
Thanks
Jiahong Wang
| closed | 2016-05-09T12:19:03Z | 2016-05-10T09:55:50Z | https://github.com/donnemartin/data-science-ipython-notebooks/issues/33 | [
"question"
] | wangjiahong | 1 |
iterative/dvc | machine-learning | 10,234 | gc: keep last `n` versions of data files, while ignoring commits with only code changes | Suppose I have the following commits in my project (from newest to oldest):
```
sha | changes
------------------------------
a01 | only dvc files changed
a02 | only code files changed
a03 | only dvc files changed
a04 | both dvc and code files changed
```
Now, suppose I'd like to keep the last 2 versions of dvc tracked files. Using this command:
```
dvc gc -w --cloud -r my-remote --num 2 --rev a01
```
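What I effectively need is for `--num` to count only commits whose dvc-tracked outputs changed, skipping code-only commits. A sketch of that selection logic (pure Python; the commit data mirrors the example table above, and the boolean flag is a stand-in for checking which files a commit touched):

```python
def last_dvc_revs(commits, num):
    """commits: newest-first list of (sha, dvc_changed). Return the shas of
    the last `num` commits where dvc-tracked outputs changed."""
    kept = [sha for sha, dvc_changed in commits if dvc_changed]
    return kept[:num]

history = [
    ("a01", True),   # only dvc files changed
    ("a02", False),  # only code files changed
    ("a03", True),   # only dvc files changed
    ("a04", True),   # both dvc and code files changed
]

print(last_dvc_revs(history, 2))  # ['a01', 'a03']
```

In practice the `dvc_changed` flag would be derived from something like `git log --name-only`, filtering for `*.dvc` / `dvc.lock` paths.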
it would only consider commits `a01` and `a02` and therefore **only the last version of files are kept** (whereas I need to keep the files in the `a03` commit as well). This is especially important if we would like to do this in an automated script on a regular interval, say every week (and hence we don't know about the history of commits to tune the command arguments). | closed | 2024-01-12T11:10:04Z | 2024-03-05T01:58:07Z | https://github.com/iterative/dvc/issues/10234 | [
"p3-nice-to-have",
"A: gc"
] | mkaze | 5 |
gradio-app/gradio | deep-learning | 10,813 | ERROR: Exception in ASGI application after downgrading pydantic to 2.10.6 | ### Describe the bug
There were reports of the same error in https://github.com/gradio-app/gradio/issues/10662, and the suggestion is to downgrade pydantic, but even after I downgraded pydantic, I am still seeing the same error.
I am running my code on Kaggle, and here is the error:
```
ERROR: Exception in ASGI application
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/uvicorn/protocols/http/h11_impl.py", line 403, in run_asgi
result = await app( # type: ignore[func-returns-value]
File "/usr/local/lib/python3.10/dist-packages/uvicorn/middleware/proxy_headers.py", line 60, in __call__
return await self.app(scope, receive, send)
File "/usr/local/lib/python3.10/dist-packages/fastapi/applications.py", line 1054, in __call__
await super().__call__(scope, receive, send)
File "/usr/local/lib/python3.10/dist-packages/starlette/applications.py", line 112, in __call__
await self.middleware_stack(scope, receive, send)
File "/usr/local/lib/python3.10/dist-packages/starlette/middleware/errors.py", line 187, in __call__
raise exc
File "/usr/local/lib/python3.10/dist-packages/starlette/middleware/errors.py", line 165, in __call__
await self.app(scope, receive, _send)
File "/usr/local/lib/python3.10/dist-packages/gradio/route_utils.py", line 789, in __call__
await self.app(scope, receive, send)
File "/usr/local/lib/python3.10/dist-packages/starlette/middleware/exceptions.py", line 62, in __call__
await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
File "/usr/local/lib/python3.10/dist-packages/starlette/_exception_handler.py", line 53, in wrapped_app
raise exc
File "/usr/local/lib/python3.10/dist-packages/starlette/_exception_handler.py", line 42, in wrapped_app
await app(scope, receive, sender)
File "/usr/local/lib/python3.10/dist-packages/starlette/routing.py", line 714, in __call__
await self.middleware_stack(scope, receive, send)
File "/usr/local/lib/python3.10/dist-packages/starlette/routing.py", line 734, in app
await route.handle(scope, receive, send)
File "/usr/local/lib/python3.10/dist-packages/starlette/routing.py", line 288, in handle
await self.app(scope, receive, send)
File "/usr/local/lib/python3.10/dist-packages/starlette/routing.py", line 76, in app
await wrap_app_handling_exceptions(app, request)(scope, receive, send)
File "/usr/local/lib/python3.10/dist-packages/starlette/_exception_handler.py", line 53, in wrapped_app
raise exc
File "/usr/local/lib/python3.10/dist-packages/starlette/_exception_handler.py", line 42, in wrapped_app
await app(scope, receive, sender)
File "/usr/local/lib/python3.10/dist-packages/starlette/routing.py", line 73, in app
response = await f(request)
File "/usr/local/lib/python3.10/dist-packages/fastapi/routing.py", line 301, in app
raw_response = await run_endpoint_function(
File "/usr/local/lib/python3.10/dist-packages/fastapi/routing.py", line 214, in run_endpoint_function
return await run_in_threadpool(dependant.call, **values)
File "/usr/local/lib/python3.10/dist-packages/starlette/concurrency.py", line 37, in run_in_threadpool
return await anyio.to_thread.run_sync(func)
File "/usr/local/lib/python3.10/dist-packages/anyio/to_thread.py", line 33, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "/usr/local/lib/python3.10/dist-packages/anyio/_backends/_asyncio.py", line 877, in run_sync_in_worker_thread
return await future
File "/usr/local/lib/python3.10/dist-packages/anyio/_backends/_asyncio.py", line 807, in run
result = context.run(func, *args)
File "/usr/local/lib/python3.10/dist-packages/gradio/routes.py", line 584, in main
gradio_api_info = api_info(request)
File "/usr/local/lib/python3.10/dist-packages/gradio/routes.py", line 615, in api_info
api_info = utils.safe_deepcopy(app.get_blocks().get_api_info())
File "/usr/local/lib/python3.10/dist-packages/gradio/blocks.py", line 3019, in get_api_info
python_type = client_utils.json_schema_to_python_type(info)
File "/usr/local/lib/python3.10/dist-packages/gradio_client/utils.py", line 931, in json_schema_to_python_type
type_ = _json_schema_to_python_type(schema, schema.get("$defs"))
File "/usr/local/lib/python3.10/dist-packages/gradio_client/utils.py", line 985, in _json_schema_to_python_type
des = [
File "/usr/local/lib/python3.10/dist-packages/gradio_client/utils.py", line 986, in <listcomp>
f"{n}: {_json_schema_to_python_type(v, defs)}{get_desc(v)}"
File "/usr/local/lib/python3.10/dist-packages/gradio_client/utils.py", line 993, in _json_schema_to_python_type
f"str, {_json_schema_to_python_type(schema['additionalProperties'], defs)}"
File "/usr/local/lib/python3.10/dist-packages/gradio_client/utils.py", line 939, in _json_schema_to_python_type
type_ = get_type(schema)
File "/usr/local/lib/python3.10/dist-packages/gradio_client/utils.py", line 898, in get_type
if "const" in schema:
TypeError: argument of type 'bool' is not iterable
```
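The bottom of the traceback can be reproduced in isolation: JSON Schema allows a bare boolean as a schema (e.g. `"additionalProperties": true`), and `get_type` does a membership test that assumes a dict. A minimal sketch of the failing check:

```python
schema = True  # JSON Schema permits a plain boolean here ("additionalProperties": true)

try:
    "const" in schema  # what gradio_client's get_type() effectively does
    message = None
except TypeError as exc:
    message = str(exc)

print(message)  # argument of type 'bool' is not iterable
```

This suggests the crash is in the schema walker rather than in pydantic itself, which would explain why downgrading pydantic does not help.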
### Have you searched existing issues? 🔎
- [x] I have searched and found no existing issues
### Reproduction
```
!pip install -Uqq fastai
!pip uninstall gradio -y
!pip uninstall pydantic -y
!pip cache purge
!pip install pydantic==2.10.6
!pip install gradio
import gradio as gr
from fastai.learner import load_learner
from fastai.vision.core import PILImage  # PILImage.create is used below

learn = load_learner('export.pkl')
labels = learn.dls.vocab

def predict(img):
    img = PILImage.create(img)
    pred, pred_idx, probs = learn.predict(img)
    return {labels[i]: float(probs[i].item()) for i in range(len(labels))}

gr.Interface(
    fn=predict,
    inputs=gr.Image(),
    outputs=gr.Label()
).launch(share=True)
```
### Screenshot
_No response_
### Logs
```shell
```
### System Info
```shell
Gradio Environment Information:
------------------------------
Operating System: Linux
gradio version: 5.21.0
gradio_client version: 1.7.2
------------------------------------------------
gradio dependencies in your environment:
aiofiles: 22.1.0
anyio: 3.7.1
audioop-lts is not installed.
fastapi: 0.115.11
ffmpy: 0.5.0
gradio-client==1.7.2 is not installed.
groovy: 0.1.2
httpx: 0.28.1
huggingface-hub: 0.29.0
jinja2: 3.1.4
markupsafe: 2.1.5
numpy: 1.26.4
orjson: 3.10.12
packaging: 24.2
pandas: 2.2.3
pillow: 11.0.0
pydantic: 2.10.6
pydub: 0.25.1
python-multipart: 0.0.20
pyyaml: 6.0.2
ruff: 0.11.0
safehttpx: 0.1.6
semantic-version: 2.10.0
starlette: 0.46.1
tomlkit: 0.13.2
typer: 0.15.1
typing-extensions: 4.12.2
urllib3: 2.3.0
uvicorn: 0.34.0
authlib; extra == 'oauth' is not installed.
itsdangerous; extra == 'oauth' is not installed.
gradio_client dependencies in your environment:
fsspec: 2024.12.0
httpx: 0.28.1
huggingface-hub: 0.29.0
packaging: 24.2
typing-extensions: 4.12.2
websockets: 14.1
```
### Severity
Blocking usage of gradio | open | 2025-03-15T15:27:56Z | 2025-03-17T18:26:54Z | https://github.com/gradio-app/gradio/issues/10813 | [
"bug"
] | yumengzhao92 | 1 |
nltk/nltk | nlp | 2,818 | WordNetLemmatizer in nltk.stem module | What's the parameter of WordNetLemmatizer.lemmatize() in nltk.stem module?
Turning to the documentation, what are the candidate values of the parameter **'pos'**?

The default value is 'Noun'. But when I use the function pos_tag() to get the POS of a word, the value appears to come from several other options. | closed | 2021-09-26T02:44:43Z | 2021-09-27T08:20:53Z | https://github.com/nltk/nltk/issues/2818 | [
"documentation"
] | Beliefuture | 3 |
clovaai/donut | computer-vision | 188 | key information extraction with DonUT on hand-written documents? | Hi everyone,
Has anyone tried fine-tuning DonUT for key information extraction on a corpus with documents half-digital and half-handwritten? Specifically, I am wondering if anyone has any evidence on how it performs on handwritten text, given that all the suggestions on generating a synthetic dataset with SynthDoG for pre-training point to selecting appropriate fonts of the digital text.
I have a private corpus of invoices similar to CORD in nature (with slightly more variability in shape, size and format), but some of them **may** have sections of handwritten text from time to time in addition to or in place of digital text. | open | 2023-05-09T14:38:18Z | 2023-05-09T18:50:13Z | https://github.com/clovaai/donut/issues/188 | [] | DiTo97 | 2 |
Esri/arcgis-python-api | jupyter | 2,231 | Setting a value with no color | I have an imagery layer; in ArcGIS Pro the value 31 is set to no colour by default. But when I add it to a map widget with the Python API, the value 31 has a colour. How should I set it to no colour? I've looked at the docs but couldn't figure it out.
This is the layer on Living Atlas: https://www.arcgis.com/home/item.html?id=87f875a0e4ac4400bad9063c18520f9a | closed | 2025-03-03T00:01:48Z | 2025-03-05T19:10:14Z | https://github.com/Esri/arcgis-python-api/issues/2231 | [] | hieutrn1205 | 5 |
joke2k/django-environ | django | 113 | MySQL Socket for Host | For the host I need to use a path to a socket, but it doesn't seem to be working. Is this supported? | open | 2017-03-17T22:54:26Z | 2021-09-04T21:16:56Z | https://github.com/joke2k/django-environ/issues/113 | [
"question",
"documentation"
] | chadsaun | 1 |
kiwicom/pytest-recording | pytest | 20 | Throw an error if pytest-vcr is installed | Otherwise, it could lead to incompatibilities on the fixture level (they will be mixed) | closed | 2019-10-21T15:32:59Z | 2019-10-21T16:35:30Z | https://github.com/kiwicom/pytest-recording/issues/20 | [] | Stranger6667 | 1 |
xinntao/Real-ESRGAN | pytorch | 387 | Conda Install BasicSR | Is there a way to install basicsr on a conda environment?
I tried installing it with pip, but the package doesn't show up in the conda environment, so I am not able to run the model.
Thanks. | open | 2022-07-11T22:51:59Z | 2022-07-20T21:24:26Z | https://github.com/xinntao/Real-ESRGAN/issues/387 | [] | AvirupJU | 1 |
allenai/allennlp | nlp | 5,259 | Initialization of InterleavingDatasetReader from Jsonnet | **Is your feature request related to a problem? Please describe.**
It may be that it's possible to do this already, but it's unclear to me whether an `InterleavingDatasetReader` can be fully initialized from a Jsonnet config file, as it seems the `readers` parameter expects a dictionary whose values are already-constructed `DatasetReader`s.
**Describe the solution you'd like**
It would be nice if you could specify the config for the component readers of the `InterleavingDatasetReader` in the Jsonnet itself and have the `from_params` logic construct those component readers first, then use them to initialize `InterleavingDatasetReader`. So the config might look like the following:
```
...
dataset_reader: {
type: interleaving,
readers: {
"reader1": {
type: "reader1_type",
...more reader1 config
},
"reader2": {
type: "reader2_type",
...more reader2 config
},
....
},
scheme: "round_robin",
...
}
```
**Describe alternatives you've considered**
Subclassing `InterleavingDatasetReader` for my own purposes to do basically just what I describe above.
**Additional context**
N/A
| closed | 2021-06-14T19:45:31Z | 2021-06-15T15:35:24Z | https://github.com/allenai/allennlp/issues/5259 | [
"Feature request"
] | wgantt | 1 |
microsoft/Bringing-Old-Photos-Back-to-Life | pytorch | 149 | What's the ratio of each losses when training mapping T | In my case, the G_Feat_L2 (lambda=60) is much larger than the other losses with your script. Below are the first 1200 iters:
```
(epoch: 1, iters: 24, time: 1.745 lr: 0.00020) G_Feat_L2: 71.198 G_GAN: 6.186 G_GAN_Feat: 15.838 G_VGG: 11.113 D_real: 6.164 D_fake: 5.030
(epoch: 1, iters: 600, time: 0.069 lr: 0.00020) G_Feat_L2: 66.881 G_GAN: 12.149 G_GAN_Feat: 12.813 G_VGG: 10.724 D_real: 10.091 D_fake: 11.878
(epoch: 1, iters: 1200, time: 0.068 lr: 0.00020) G_Feat_L2: 65.914 G_GAN: 4.420 G_GAN_Feat: 8.283 G_VGG: 9.541 D_real: 4.062 D_fake: 4.153
```
Maybe I should lower the weight of l2_feat to bring all losses to the same level? | closed | 2021-04-12T05:38:06Z | 2021-04-20T02:19:02Z | https://github.com/microsoft/Bringing-Old-Photos-Back-to-Life/issues/149 | [] | syfbme | 2 |
oegedijk/explainerdashboard | dash | 129 | joblib.dump(explainer,explainer_path) fails with KernelExplainer | Hi Oege,
and thanks for this project. It's very helpful!
This applies to both joblib.dump and explainer.dump().
This happens only with self.shap == 'kernel', which provides the model_predict function to shap.KernelExplainer().
Here is the error:
`_pickle.PicklingError: Can't pickle <function BaseExplainer.shap_explainer.<locals>.model_predict at 0x00000228AA27AAF8>: it's not found as explainerdashboard.explainers.BaseExplainer.shap_explainer.<locals>.model_predict`
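For context, this error is the generic Python limitation that functions defined inside another function can't be pickled, since pickle serialises functions by qualified name; `model_predict` is created inside `shap_explainer`, hence the `<locals>` in the message. A standalone reproduction of the mechanism (stand-in names, not the actual explainerdashboard code):

```python
import pickle

def make_model_predict():
    # a function defined inside another function is a "local object":
    # pickle cannot look it up again by name at load time
    def model_predict(data):
        return data
    return model_predict

try:
    pickle.dumps(make_model_predict())
    failed = False
except Exception:  # AttributeError or PicklingError, depending on the pickler
    failed = True

print(failed)  # True
```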
| closed | 2021-07-01T10:51:33Z | 2021-07-01T13:46:16Z | https://github.com/oegedijk/explainerdashboard/issues/129 | [] | tunayokumus | 4 |
MycroftAI/mycroft-core | nlp | 2,701 | Extract existing audioservices, STT and TTS engines for new plugin system | As we are moving to a new [plugin system for audioservices, STT and TTS engines](https://github.com/MycroftAI/mycroft-core/pull/2594) we need to create a plugin for each of the services that will no longer be included by default in core.
Examples are provided in the PR #2594
We also need to explore the best ways to surface the available plugins. Most likely an extension of the Selene Marketplace. | closed | 2020-09-24T04:22:30Z | 2024-09-08T08:33:51Z | https://github.com/MycroftAI/mycroft-core/issues/2701 | [
"Type: Enhancement - roadmapped",
"Breaking change"
] | krisgesling | 3 |
ading2210/poe-api | graphql | 137 | timeout error | socket timeout
------------
```
File "/home/huyremy/.local/lib/python3.7/site-packages/poe.py", line 502, in send_message
    raise RuntimeError("Response timed out.")
RuntimeError: Response timed out.
```
------------
If I stop and restart, it runs well, but it times out again a few minutes later.
------------
Please check and correct it. Thanks | closed | 2023-07-01T09:07:50Z | 2023-07-04T08:54:06Z | https://github.com/ading2210/poe-api/issues/137 | [
"bug"
] | huyremy | 4 |
mljar/mljar-supervised | scikit-learn | 690 | mljar should not configure logging level | Hi Piotr,
First, I wanted to let you know you are doing a great job!
We are trying to use mljar-supervised as a library in a large application. However, when trying to get log messages, we see that your code sets the default log level to ERROR, for example in exceptions.py and automl.py. Calling basicConfig a second time does not affect the default logger and subsequent loggers. This makes it hard to use mljar as a library: we must make sure we call basicConfig first and then import AutoML... It feels like a race :-)
When running as part of an application, a library should leave the logging level alone and only use a logger (e.g. ```logger = logging.getLogger(__name__)```). Let the running application set the desired logging level. Libraries should set logging levels only in unit tests or CLIs.
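For reference, the conventional library-side pattern is just a module logger plus a `NullHandler`, with no `basicConfig()` call and no level set (sketch with a hypothetical logger name):

```python
import logging

# Library modules only create a logger and attach a NullHandler; they never
# call basicConfig() or set a level, leaving both to the host application.
logger = logging.getLogger("mylib.automl")   # hypothetical library logger name
logger.addHandler(logging.NullHandler())

def fit():
    logger.info("fitting started")           # silent unless the app configures logging

fit()
print(logger.level)  # 0 (NOTSET): the library imposed no level of its own
```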
In addition, mljar prints messages about its current status using the `print` command instead of logging. This makes it hard to follow when running the application in a logging-managed environment (like cloud providers). Can you please make the change? As those print messages come mostly from the ```verbose_print``` method, it looks like there is a single place to replace.
Thanks!
Haim | open | 2024-01-08T08:22:13Z | 2024-01-31T10:32:40Z | https://github.com/mljar/mljar-supervised/issues/690 | [
"enhancement",
"help wanted"
] | haim-cohen-moonactive | 3 |
WeblateOrg/weblate | django | 14,101 | Highlight string page number on click | ### Describe the problem
When you're translating and want to hop to a string page you remember the number of, you've got to click once on the page number, and then you have to manually delete the digits and replace them with the desired number. It's a small grievance, but it can add up pretty quickly and feels unnecessarily cumbersome.
### Describe the solution you would like
When the user clicks on the string page number, make it so the input area's number is automatically highlighted so that you can type right away without having to click again.
### Describe alternatives you have considered
_No response_
### Screenshots
My screenshot app inexplicably crashes these days, so I'm not able to provide any.
### Additional context
_No response_ | closed | 2025-03-04T11:54:12Z | 2025-03-19T16:07:30Z | https://github.com/WeblateOrg/weblate/issues/14101 | [
"enhancement",
"Area: UX"
] | Cwpute | 5 |
keras-team/keras | python | 20,030 | Are different "set of batches" selected at each epoch when using `steps_per_epoch` ? | This fits the model using 10 batches of 64 samples per epoch:
```py
model.fit(train_data, epochs=5, steps_per_epoch=10)
```
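For what it's worth, with a `tf.data` input Keras typically keeps a single iterator alive across epochs, so each epoch resumes where the previous one stopped instead of re-reading the first 10 batches (behaviour may vary by TF version; illustrated below with a plain Python iterator standing in for the dataset):

```python
# Plain-Python stand-in: 2000 samples, batches of 64, 10 steps per "epoch".
samples = list(range(2000))
batches = [samples[i:i + 64] for i in range(0, len(samples), 64)]
it = iter(batches)  # one iterator shared by all epochs, as Keras does

seen = []
for epoch in range(3):
    for step in range(10):             # steps_per_epoch=10
        try:
            seen.append(next(it)[0])   # first sample of each consumed batch
        except StopIteration:          # dataset exhausted: start over
            it = iter(batches)
            seen.append(next(it)[0])

print(seen[10])  # 640: epoch 2 picks up at sample 640, not back at 0
```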
If the Dataset is `.batch`ed with 64 samples, but has more than 640 samples (say 2000), are all those remaining samples used at all ? | closed | 2024-07-23T11:21:43Z | 2024-07-24T17:44:07Z | https://github.com/keras-team/keras/issues/20030 | [
"type:support"
] | newresu | 3 |
koaning/scikit-lego | scikit-learn | 426 | [FEATURE] Time Series Grouped Predictor including predictions from last lag | Hi!
I am finding the 'GroupedPredictor' meta-estimator really useful.
I sometimes deal with a similar problem at work and have my own sketchy implementation. But after finding 'GroupedPredictor', I believe there might be a better way to solve it.
I deal with supervised-learning time series. Let's say I want to predict some feature for the next months [1,12]. What I do that helps is fitting a model per month instead of one generic model.
What helps my model even more is to include, when predicting month N, the predictions that the month N-1 model made. Example: when predicting March, use the predictions made for February.
Also, what I find very helpful is getting the feature relevance for each model. (There might already be a way to get this at the moment.)
The feature that I have in mind is adding the predictions of the N-1 group when fitting the N model.
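A minimal sketch of that chained per-group scheme (hypothetical helper, not part of scikit-lego; toy models stand in for real estimators):

```python
def fit_chained(groups, fit_one, predict_one):
    """groups: ordered {month: (X, y)}; fit one model per month, feeding each
    month's predictions in as an extra feature for the next month's model."""
    models, preds, prev = {}, {}, None
    for month, (X, y) in groups.items():
        if prev is not None:
            X = [row + [p] for row, p in zip(X, prev)]  # append month N-1 preds
        models[month] = fit_one(X, y)
        prev = predict_one(models[month], X)
        preds[month] = prev
    return models, preds

# toy "model": memorise the mean of y, predict it for every row
fit_one = lambda X, y: sum(y) / len(y)
predict_one = lambda m, X: [m] * len(X)

groups = {1: ([[0.0], [1.0]], [1, 3]), 2: ([[2.0], [3.0]], [5, 7])}
models, preds = fit_chained(groups, fit_one, predict_one)
print(preds[2])  # [6.0, 6.0]; month 2 rows carried month 1's prediction as a feature
```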
| closed | 2020-12-08T10:03:31Z | 2020-12-18T09:45:40Z | https://github.com/koaning/scikit-lego/issues/426 | [
"enhancement"
] | cmougan | 2 |
marshmallow-code/flask-smorest | rest-api | 444 | UploadFile converter overrides custom converters | When adding a custom converter for API spec fields, the UploadFile converter resets any previous changes.
The converter should not do this. | closed | 2023-01-17T15:57:19Z | 2023-01-17T16:06:15Z | https://github.com/marshmallow-code/flask-smorest/issues/444 | [] | arthurvanduynhoven | 1 |
donnemartin/data-science-ipython-notebooks | pandas | 82 | Ipython notebook | open | 2021-03-07T14:02:35Z | 2023-03-16T10:41:21Z | https://github.com/donnemartin/data-science-ipython-notebooks/issues/82 | [
"needs-review"
] | alfa0977 | 0 | |
deepfakes/faceswap | deep-learning | 670 | train failed | 03/15/2019 22:08:06 MainProcess training_0 multithreading run DEBUG Error in thread (training_0): OOM when allocating tensor with shape[16384,1024] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc\n [[{{node training_1/Adam/mul_43}} = Mul[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:GPU:0"](Adam/beta_2/read, training_1/Adam/Variable_30/read)]]\nHint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.\n\n [[{{node loss_1/mul/_401}} = _Recv[[[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_1638_loss_1/mul", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]\nHint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.\n
03/15/2019 22:08:06 MainProcess MainThread train monitor_console DEBUG Thread error detected
03/15/2019 22:08:06 MainProcess MainThread train monitor_console DEBUG Closed Console Monitor
03/15/2019 22:08:06 MainProcess MainThread train end_thread DEBUG Ending Training thread
03/15/2019 22:08:06 MainProcess MainThread train end_thread CRITICAL Error caught! Exiting...
03/15/2019 22:08:06 MainProcess MainThread multithreading join DEBUG Joining Threads: 'training'
03/15/2019 22:08:06 MainProcess MainThread multithreading join DEBUG Joining Thread: 'training_0'
03/15/2019 22:08:06 MainProcess MainThread multithreading join ERROR Caught exception in thread: 'training_0'
Traceback (most recent call last):
File "C:\Users\jinyi\faceswap\lib\cli.py", line 107, in execute_script
process.process()
File "C:\Users\jinyi\faceswap\scripts\train.py", line 101, in process
self.end_thread(thread, err)
File "C:\Users\jinyi\faceswap\scripts\train.py", line 126, in end_thread
thread.join()
File "C:\Users\jinyi\faceswap\lib\multithreading.py", line 443, in join
raise thread.err[1].with_traceback(thread.err[2])
File "C:\Users\jinyi\faceswap\lib\multithreading.py", line 381, in run
self._target(*self._args, **self._kwargs)
File "C:\Users\jinyi\faceswap\scripts\train.py", line 152, in training
raise err
File "C:\Users\jinyi\faceswap\scripts\train.py", line 142, in training
self.run_training_cycle(model, trainer)
File "C:\Users\jinyi\faceswap\scripts\train.py", line 214, in run_training_cycle
trainer.train_one_step(viewer, timelapse)
File "C:\Users\jinyi\faceswap\plugins\train\trainer\_base.py", line 139, in train_one_step
loss[side] = batcher.train_one_batch(do_preview)
File "C:\Users\jinyi\faceswap\plugins\train\trainer\_base.py", line 214, in train_one_batch
loss = self.model.predictors[self.side].train_on_batch(*batch)
File "D:\PC_apps\Anaconda3\envs\faceswap\lib\site-packages\keras\engine\training.py", line 1217, in train_on_batch
outputs = self.train_function(ins)
File "D:\PC_apps\Anaconda3\envs\faceswap\lib\site-packages\keras\backend\tensorflow_backend.py", line 2715, in __call__
return self._call(inputs)
File "D:\PC_apps\Anaconda3\envs\faceswap\lib\site-packages\keras\backend\tensorflow_backend.py", line 2675, in _call
fetched = self._callable_fn(*array_vals)
File "D:\PC_apps\Anaconda3\envs\faceswap\lib\site-packages\tensorflow\python\client\session.py", line 1439, in __call__
run_metadata_ptr)
File "D:\PC_apps\Anaconda3\envs\faceswap\lib\site-packages\tensorflow\python\framework\errors_impl.py", line 528, in __exit__
c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[16384,1024] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[{{node training_1/Adam/mul_43}} = Mul[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:GPU:0"](Adam/beta_2/read, training_1/Adam/Variable_30/read)]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
[[{{node loss_1/mul/_401}} = _Recv[[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_1638_loss_1/mul", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. | closed | 2019-03-15T14:34:28Z | 2019-03-18T17:44:52Z | https://github.com/deepfakes/faceswap/issues/670 | [] | Nostalgia1990 | 11 |
ned2/slapdash | plotly | 31 | Input('url','pathname') Not Working | Converting some existing code from another project over into slapdash, it seems like `Input('url','pathname')` cannot be used?
Am I missing something? ...Is there another way to use the url in a callback?
```
@app.callback(Output('api-connections', 'children'),
              [Input('submit-settings-button', 'n_clicks'),
               Input('url','pathname')])
def update_api_connection_status(n_clicks, pathname):
    if n_clicks and n_clicks > 0:
        print(pathname)
        return html.Div(children=[
            html.Div(className='twelve columns', children=[check_oura_connection()]),
            html.Div(className='twelve columns', children=[check_strava_connection()]),
            html.Div(className='twelve columns', children=[check_withings_connection()])
        ])
``` | closed | 2020-03-05T21:30:15Z | 2020-03-06T12:45:10Z | https://github.com/ned2/slapdash/issues/31 | [] | ethanopp | 2 |
mwaskom/seaborn | matplotlib | 2,924 | next gen usage question | How can I plot all (or a subset of) the columns of a pandas dataframe, using the index as x-axis, with the new object-based interface? | closed | 2022-07-26T11:18:56Z | 2022-07-28T14:17:38Z | https://github.com/mwaskom/seaborn/issues/2924 | [
"question",
"objects-plot"
] | bdch1234 | 6 |
thunlp/OpenPrompt | nlp | 253 | How to use openprompt in an In-context learning setting? | Is there a way to use OpenPrompt in an in-context learning setting (i.e., adding examples to the prompt)?
scikit-optimize/scikit-optimize | scikit-learn | 1,107 | gp_minimize returns lowest found point, not minimum of surrogate model | I am not sure if this is expected behavior or not, so this is a question and only potentially an actual issue:
`gp_minimize` returns the lowest seen value. However, for very noisy data, this is very unlikely to be the best estimate of the minimum.
As far as I can see, there is no option to instead have the minimum of the surrogate model returned, which in many cases would make more sense.
Minimum working example below:
```
from scipy.optimize import minimize_scalar
from skopt import gp_minimize
import numpy as np
import matplotlib.pyplot as plt
from skopt.plots import plot_gaussian_process
def f(x):
    x = x[0]
    return (x - 1.5)**2 + np.random.randn()

for _ in range(1000):
    x = 5 * np.random.random()
    plt.plot([x], [f([x])], 'ko', alpha=0.2)

bound = (0, 5.0)
res = gp_minimize(f, [bound], n_calls=50)
print(res)
plot_gaussian_process(res)

def loss(x0):
    # the GP surrogate lives in skopt's transformed (normalized) space
    return res['models'][-1].predict(np.asarray(x0).reshape(-1, 1))

min_fun_res = minimize_scalar(loss, bounds=(0, 1), method='bounded').x
# map the transformed-space minimiser back to the original search space
true_x0 = res['space'].inverse_transform(min_fun_res.reshape(1, 1))
print('SURROGATE MINIMUM =', true_x0)
plt.show()
``` | open | 2022-03-02T11:02:44Z | 2023-03-10T18:51:36Z | https://github.com/scikit-optimize/scikit-optimize/issues/1107 | [] | juliusbierk | 3 |
open-mmlab/mmdetection | pytorch | 11,409 | Multi-class MOT in QDTrack | Thanks for your error report and we appreciate it a lot.
**Checklist**
1. I have searched related issues but cannot get the expected help.
2. I have read the [FAQ documentation](https://mmdetection.readthedocs.io/en/latest/faq.html) but cannot get the expected help.
3. The bug has not been fixed in the latest version.
**Describe the bug**
When using QDTrack, I found that all the category labels of ground truths are set to 0 in `mmdet/models/mot/qdtrack.py`, func `loss` of class `QDTrack`, line `138-139`:
```python
key_data_sample.gt_instances.labels = \
torch.zeros_like(key_data_sample.gt_instances.labels)
```
However, it is not suitable for training on multi-class datasets like BDD100K or VisDrone. I wonder whether this is a bug or whether it serves another purpose?
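In effect those two lines make the tracker class-agnostic. A hypothetical guard for the multi-class case could look like this (plain lists stand in for tensors; this is not the actual mmdet code):

```python
def maybe_zero_labels(labels, class_agnostic):
    """Collapse all ground-truth labels to 0 only when tracking class-agnostically."""
    return [0] * len(labels) if class_agnostic else labels

print(maybe_zero_labels([3, 1, 7], class_agnostic=True))   # [0, 0, 0]
print(maybe_zero_labels([3, 1, 7], class_agnostic=False))  # [3, 1, 7]
```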
**Reproduction**
1. What command or script did you run?
```none
CUDA_VISIBLE_DEVICES=3 python tools/train.py configs/qdtrack/qdtrack_visdrone_baseline.py
```
2. Did you make any modifications on the code or config? Did you understand what you have modified?
None, except my own-defined datasets.
3. What dataset did you use?
VisDrone-MOT
**Bug fix**
I commented out those lines, and everything seems okay.
| open | 2024-01-19T14:04:56Z | 2024-01-19T14:05:12Z | https://github.com/open-mmlab/mmdetection/issues/11409 | [] | JackWoo0831 | 0 |
httpie/cli | python | 1,599 | Request to server is very slow | ## Checklist
- [x] I've searched for similar issues.
- [x] I'm using the latest version of HTTPie.
---
## Minimal reproduction code and steps
1. Download the latest version of HTTPie (brew or pip).
2. Make a GET request to `https://repro.pacemakr.at`.
3. Wait.
## Current result
Request takes several seconds or even minutes.
## Expected result
Return (pretty much) immediately.
---
## Debug output
```bash
$ https --debug GET repro.pacemakr.at
HTTPie 3.2.3
Requests 2.31.0
Pygments 2.18.0
Python 3.12.5 (main, Aug 6 2024, 19:08:49) [Clang 15.0.0 (clang-1500.3.9.4)]
/opt/homebrew/Cellar/httpie/3.2.3/libexec/bin/python
Darwin 23.6.0
<Environment {'apply_warnings_filter': <function Environment.apply_warnings_filter at 0x101f0c7c0>,
'args': Namespace(),
'as_silent': <function Environment.as_silent at 0x101f0c680>,
'colors': 256,
'config': {'default_options': []},
'config_dir': PosixPath('/Users/guri/.config/httpie'),
'devnull': <property object at 0x101ef9cb0>,
'is_windows': False,
'log_error': <function Environment.log_error at 0x101f0c720>,
'program_name': 'https',
'quiet': 0,
'rich_console': <functools.cached_property object at 0x101e81cd0>,
'rich_error_console': <functools.cached_property object at 0x100e0fec0>,
'show_displays': True,
'stderr': <_io.TextIOWrapper name='<stderr>' mode='w' encoding='utf-8'>,
'stderr_isatty': True,
'stdin': <_io.TextIOWrapper name='<stdin>' mode='r' encoding='utf-8'>,
'stdin_encoding': 'utf-8',
'stdin_isatty': True,
'stdout': <_io.TextIOWrapper name='<stdout>' mode='w' encoding='utf-8'>,
'stdout_encoding': 'utf-8',
'stdout_isatty': True}>
<PluginManager {'adapters': [],
'auth': [<class 'httpie.plugins.builtin.BasicAuthPlugin'>,
<class 'httpie.plugins.builtin.DigestAuthPlugin'>,
<class 'httpie.plugins.builtin.BearerAuthPlugin'>],
'converters': [],
'formatters': [<class 'httpie.output.formatters.headers.HeadersFormatter'>,
<class 'httpie.output.formatters.json.JSONFormatter'>,
<class 'httpie.output.formatters.xml.XMLFormatter'>,
<class 'httpie.output.formatters.colors.ColorFormatter'>]}>
>>> requests.request(**{'auth': None,
'data': RequestJSONDataDict(),
'headers': <HTTPHeadersDict('User-Agent': b'HTTPie/3.2.3')>,
'method': 'get',
'params': <generator object MultiValueOrderedDict.items at 0x10212e4d0>,
'url': 'https://repro.pacemakr.at'})
HTTP/1.1 200 OK
Access-Control-Allow-Origin: *
Connection: keep-alive
Content-Type: application/json
Date: Sat, 07 Sep 2024 07:43:57 GMT
Server: nginx/1.26.2
Transfer-Encoding: chunked
Via: kong/2.8.1
X-Kong-Proxy-Latency: 2
X-Kong-Upstream-Latency: 85
content-encoding: gzip
vary: Accept-Encoding
"Hello World!"
```
## Additional information, screenshots, or code examples
I've tested the endpoint on three different machines (Windows, Ubuntu and Mac), including the server running the service itself. Clients I've tested with are HTTPie CLI and desktop app, cURL and another HTTP client as well as the browser. Each returns in less than 500ms except the CLI. This happens regardless of how the program was installed (pip or brew). | open | 2024-09-07T08:01:05Z | 2024-09-07T08:01:05Z | https://github.com/httpie/cli/issues/1599 | [
"bug",
"new"
] | gurbindersingh | 0 |
iMerica/dj-rest-auth | rest-api | 331 | How do I redirect after logout? | I'm working on a project with React and Django. I want to be redirected to the main page when I log out. Please help me ... | open | 2021-11-22T10:47:01Z | 2021-11-27T18:08:18Z | https://github.com/iMerica/dj-rest-auth/issues/331 | [] | wopa7210 | 1 |
JaidedAI/EasyOCR | deep-learning | 878 | Fine-Tuning Dataset Size | I wish to fine-tune easyocr to detect text from signs in the wild. I was wondering if there has been any research, or if there are any general rules, to help me estimate how many images I will need to achieve > 95% accuracy?
Thanks! | open | 2022-10-27T17:49:59Z | 2022-10-27T17:49:59Z | https://github.com/JaidedAI/EasyOCR/issues/878 | [] | jamesSmith54 | 0 |
dpgaspar/Flask-AppBuilder | rest-api | 2,188 | user_registration error | Hi,
It is my first time web development with fab.
I want to make a login page with self-registration, and I inspected the user_registration example in the examples folder. But it is not working correctly. I used a test reCAPTCHA key, because the default key didn't work. I always get registerDbModelview's error message:
Not possible to register you at the moment, try again later
log:
ERROR:flask_appbuilder.security.registerviews:Send email exception: (535, b'5.7.8 Username and Password not accepted. For more information, go to\n5.7.8 https://support.google.com/mail/?p=BadCredentials eh19-20020a0564020f9300b0055ffe74e39dsm575562edb.85 - gsmtp')
How can i fix it or run the example correctly? | closed | 2024-02-07T11:12:27Z | 2024-02-12T07:17:53Z | https://github.com/dpgaspar/Flask-AppBuilder/issues/2188 | [] | ayseelifvural-aras | 0 |
yzhao062/pyod | data-science | 443 | How can I use PyOD with image and text data along with other information present in tabular data? | Hi,
I have product data, and for each product I have images, descriptions (text data), and product attributes like color, size, etc. How can I use PyOD to do anomaly detection with such mixed data? Also, what would be the right steps to convert the data into a suitable format so that I can use PyOD on this mixed data to perform anomaly detection?
Thanks | open | 2022-09-21T20:43:19Z | 2022-09-24T11:46:27Z | https://github.com/yzhao062/pyod/issues/443 | [] | karndeepsingh | 6 |
nsidnev/fastapi-realworld-example-app | fastapi | 48 | Update test running guide | I think this needs a guide for beginners on how to run the unit tests under the `test` dir.
It would be very helpful. Thanks a lot! | closed | 2020-06-08T06:30:44Z | 2020-06-11T16:18:32Z | https://github.com/nsidnev/fastapi-realworld-example-app/issues/48 | [] | JasonLee-crypto | 6
AutoViML/AutoViz | scikit-learn | 109 | Bar Charts customization and skipping WordArt. | 
1. Could I adjust the 'Counts' displayed in the bar plots based on selected variables, or can I manually input two variables to compare in a bar chart for 'depVar'? The bars generated seem to lack meaningful insights.
```
from autoviz import AutoViz_Class
AV = AutoViz_Class()
filename = "C:\\Users\\gxu\\Desktop\\Book1.csv"
target_variable = "Administered Time"
dft = AV.AutoViz(
filename,
sep=",",
depVar=target_variable,
dfte=None,
header=0,
verbose=2,
lowess=False,
chart_format ='html',
max_rows_analyzed=300000,
max_cols_analyzed=30,
save_plot_dir="C:\\Users\\gxu\\Desktop\\"
)
```
2. Also, is there a way to skip generating the WordCloud?
3. For chart_format='server', some of the charts are too small; can I adjust the size of the charts?

Anyway, Great work on this Python tool for visualization! It's incredibly helpful and has saved me a lot of time in getting an overview. I'll definitely be checking back regularly for updates. | closed | 2024-04-28T06:05:29Z | 2024-04-29T18:31:26Z | https://github.com/AutoViML/AutoViz/issues/109 | [] | jackfood | 1 |
jina-ai/serve | machine-learning | 5,575 | Implementing load balancing with Jina: do I need to set up the Executor functionality myself based on my requirements? | **Describe your proposal/problem**
<!-- A clear and concise description of what the proposal is. -->
---
<!-- Optional, but really help us locate the problem faster -->
I have a microservice whose parameter is a list of strings, and it uses a GPU. I want to use Jina to implement load balancing: split the string list into several fixed-length lists, call the microservice for each of them, and then concatenate the results. Can a Flow do this automatically?
**Environment**
<!-- Run `jina --version-full` and copy paste the output here -->
**Screenshots**
<!-- If applicable, add screenshots to help explain your problem. -->
| closed | 2023-01-05T09:20:09Z | 2023-02-01T06:42:01Z | https://github.com/jina-ai/serve/issues/5575 | [] | fqzhao-win | 11 |
jumpserver/jumpserver | django | 14,443 | [Bug] When adding an account key via the API, the key content must be manually joined into a single line with newline characters (\n) for the call to succeed | ### Product version
3.10.1
### Version type
- [ ] Community edition
- [X] Enterprise edition
- [ ] Enterprise trial edition
### Installation method
- [ ] Online installation (one-command install)
- [X] Offline package installation
- [ ] All-in-One
- [ ] 1Panel
- [ ] Kubernetes
- [ ] Source installation
### Environment information
The JumpServer version is v3.10.15
### 🐛 Bug description
When calling the add-account-key API, passing the key content copied in directly as the key parameter returns an error.
### Steps to reproduce
1. Call the API that creates an account key
2. Passing the key content directly as the key parameter returns an error
3. Append a newline character (\n) to the end of each line of the key content, pass that as the key parameter, and the call succeeds

### Expected result
_No response_
### Additional information
_No response_
### Attempted solutions
_No response_ | closed | 2024-11-13T07:39:28Z | 2024-11-13T07:43:42Z | https://github.com/jumpserver/jumpserver/issues/14443 | [
"🐛 Bug",
"💡 FAQ"
] | hedanhedan | 1 |
koaning/scikit-lego | scikit-learn | 551 | [FEATURE] - Grid search across model parameters AND thresholds with Thresholder() without refitting | Thanks for this great set of extensions to sklearn.
The Thresholder() model is quite close to something I've been looking for for a while.
I'm looking to include threshold optimisation as part of a *broader* parameter search.
I can perhaps best describe the desired behaviour as follows
```
for each parameters in grid:
fit model with parameters
for each threshold in thresholds:
evaluate model
```
However, if I pass a model that has not yet been fit to Thresholder(), then even with `refit=False` the same model is fit again for each and every threshold.
Is there an easy way around this? Thinking about this the best way to achieve this would be tinkering with the GridSearchCV code, but perhaps you have an idea and would also find this interesting?
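For clarity, here is a tiny numpy-only sketch of the behaviour I'm after: the probabilities come from a single fit, and only the thresholding step is repeated (hypothetical helper, not sklego API):

```python
import numpy as np

def sweep_thresholds(probas, y_true, thresholds):
    """The model was fit once elsewhere; here we only re-apply thresholds."""
    def accuracy(y, preds):
        return float((y == preds).mean())
    return {t: accuracy(y_true, (probas >= t).astype(int)) for t in thresholds}

# probas would come from model.predict_proba(X)[:, 1] after a single fit
probas = np.array([0.1, 0.4, 0.6, 0.9])
y_true = np.array([0, 0, 1, 1])
scores = sweep_thresholds(probas, y_true, [0.3, 0.5, 0.7])
best = max(scores, key=scores.get)
```

Plugging something like this into a grid search would then only need a refit when the model's own hyperparameters change, not per threshold.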
Thanks!
| open | 2022-11-16T16:05:08Z | 2023-09-26T14:54:13Z | https://github.com/koaning/scikit-lego/issues/551 | [
"enhancement"
] | mcallaghan | 3 |
pydata/xarray | numpy | 10,099 | Timedelta64 data cannot be round-tripped to netCDF files without a warning | ### What is your issue?
We added a future warning about not decoding time units to timedelta64 in https://github.com/pydata/xarray/pull/9966 (cc @spencerkclark, @kmuehlbauer).
Unfortunately, this warning is raised by default when reading timedelta64 serialized data to disk. This makes it much harder to use this dtype (which is quite useful for storing the "lead time" dimension in weather forecasts), and means that if we ever do finalize this deprecation warning it will break a lot of users.
I would love to see special handling of `timedelta64` data, similar to what I described here: https://github.com/pydata/xarray/issues/1621#issuecomment-339116478. In particular, we could write a `dtype='timedelta64'` attribute (possibly also with a specified precision) when writing a dataset to disk, which could be interpreted as np.timedelta64 data when reading the data with Xarray. This would allow us to at least ensure that datasets with timedelta64 data that are written to Zarr/netCDF now will always be able to be read faithfullly in the future.
To reproduce:
```python
import xarray
import numpy as np
deltas = np.array([1, 2, 3], dtype='timedelta64[D]').astype('timedelta64[ns]')
ds = xarray.Dataset({'lead_time': deltas})
xarray.open_dataset(ds.to_netcdf())
```
This issues:
`FutureWarning: In a future version of xarray decode_timedelta will default to False rather than None. To silence this warning, set decode_timedelta to True, False, or a 'CFTimedeltaCoder' instance.`
| open | 2025-03-05T18:20:05Z | 2025-03-06T14:32:36Z | https://github.com/pydata/xarray/issues/10099 | [] | shoyer | 3 |
marcomusy/vedo | numpy | 1,206 | Jupyter backends problems (trame, ipyvtk, k3d) | ### k3d
```python
"""Create a Volume from a numpy array"""
import numpy as np
from vedo import Volume, show, settings
settings.default_backend = "k3d"
data_matrix = np.zeros([70, 80, 90], dtype=np.uint8)
data_matrix[ 0:30, 0:30, 0:30] = 1
data_matrix[30:50, 30:60, 30:70] = 2
data_matrix[50:70, 60:80, 70:90] = 3
vol = Volume(data_matrix)
vol.cmap(['white','b','g','r']).mode(1)
vol.add_scalarbar()
show(vol, __doc__, axes=1)
```
```
Error displaying widget: model not found
```
### ipyvtk
```python
"""Create a Volume from a numpy array"""
import numpy as np
from vedo import Volume, show, settings
settings.default_backend = "ipyvtk"
data_matrix = np.zeros([70, 80, 90], dtype=np.uint8)
data_matrix[ 0:30, 0:30, 0:30] = 1
data_matrix[30:50, 30:60, 30:70] = 2
data_matrix[50:70, 60:80, 70:90] = 3
vol = Volume(data_matrix)
vol.cmap(['white','b','g','r']).mode(1)
vol.add_scalarbar()
show(vol, __doc__, axes=1)
```
```
file: plotter.py
-> 663 x, y = screensize
ValueError: too many values to unpack (expected 2)
```
### trame
```python
"""Create a Volume from a numpy array"""
import numpy as np
import vedo
from vedo import Volume, show, settings
settings.default_backend = "trame"
data_matrix = np.zeros([70, 80, 90], dtype=np.uint8)
data_matrix[ 0:30, 0:30, 0:30] = 1
data_matrix[30:50, 30:60, 30:70] = 2
data_matrix[50:70, 60:80, 70:90] = 3
vol = Volume(data_matrix)
vol.cmap(['white','b','g','r']).mode(1)
vol.add_scalarbar()
show(vol, __doc__, axes=1)
```
```
file: vue2.py
-> 16 raise TypeError(
17 f"Server using client_type='{client_type}' while we expect 'vue2'"
TypeError: Server using client_type='vue3' while we expect 'vue2'
```
__I tried to manually change the `client_type` to `vue2`, but the error remains__
```python
...
from trame import app
server = app.get_server()
server.client_type = "vue2"
settings.default_backend = "trame"
...
```
### Env
```
Package Version
--------------------------------- --------------
absl-py 2.1.0
aiohappyeyeballs 2.4.3
aiohttp 3.10.10
aiosignal 1.3.1
aiosqlite 0.20.0
alabaster 1.0.0
altair 5.4.1
altair_pandas 0.1.0.dev0
annotated-types 0.7.0
anyio 3.7.1
appnope 0.1.4
appscript 1.3.0
argon2-cffi 23.1.0
argon2-cffi-bindings 21.2.0
arrow 1.3.0
astroid 3.3.5
asttokens 2.4.1
astunparse 1.6.3
async-lru 2.0.4
attrs 24.2.0
autopep8 2.0.4
babel 2.16.0
beautifulsoup4 4.12.3
black 24.10.0
bleach 6.1.0
bokeh 3.6.0
build 1.2.2.post1
CacheControl 0.14.0
cattrs 24.1.2
certifi 2024.8.30
cffi 1.17.1
charset-normalizer 3.4.0
cleo 2.1.0
click 8.1.7
cloudpickle 3.1.0
cmocean 4.0.3
colorcet 3.1.0
colour-science 0.4.6
comm 0.2.2
contourpy 1.3.0
crashtest 0.4.1
curio 1.6
cycler 0.12.1
dask 2024.10.0
dataclasses-json 0.6.7
debugpy 1.8.7
decorator 5.1.1
deepmerge 2.0
defusedxml 0.7.1
dill 0.3.9
distlib 0.3.9
distributed 2024.10.0
docrepr 0.2.0
docstring-to-markdown 0.15
docutils 0.21.2
dulwich 0.21.7
et-xmlfile 1.1.0
exceptiongroup 1.2.2
executing 2.1.0
faiss-cpu 1.8.0
fastjsonschema 2.20.0
filelock 3.16.1
flake8 7.1.1
flatbuffers 24.3.25
fonttools 4.54.1
fqdn 1.5.1
frozenlist 1.5.0
fsspec 2024.10.0
grpcio 1.67.0
h11 0.14.0
httpcore 1.0.6
httpx 0.27.2
httpx-sse 0.4.0
huggingface-hub 0.26.2
idna 3.10
imageio 2.36.0
imagesize 1.4.1
importlib_metadata 8.5.0
iniconfig 2.0.0
installer 0.7.0
intersphinx_registry 0.2411.25
ipycanvas 0.13.3
ipyevents 2.0.2
ipyflow-core 0.0.204
ipykernel 6.29.5
ipympl 0.9.4
ipyparallel 9.0.0
ipython 8.30.0
ipython-genutils 0.2.0
ipyvtklink 0.2.3
ipywidgets 7.8.5
isoduration 20.11.0
isort 5.13.2
jaraco.classes 3.4.0
jax 0.4.35
jaxlib 0.4.35
jedi 0.19.1
Jinja2 3.1.4
joblib 1.4.2
json5 0.9.25
jsonpatch 1.33
jsonpath-ng 1.7.0
jsonpointer 3.0.0
jsonschema 4.23.0
jsonschema-specifications 2024.10.1
jupyter_ai 2.28.0
jupyter_ai_magics 2.28.0
jupyter_bokeh 4.0.5
jupyter_client 8.6.3
jupyter-console 6.6.3
jupyter_core 5.7.2
jupyter-events 0.10.0
jupyter-lsp 2.2.5
jupyter-resource-usage 1.1.0
jupyter_server 2.14.2
jupyter_server_proxy 4.4.0
jupyter_server_terminals 0.5.3
jupyterlab 4.2.5
jupyterlab_cell_flash 0.4.0
jupyterlab_code_formatter 3.0.2
jupyterlab_execute_time 3.2.0
jupyterlab-lsp 5.1.0
jupyterlab_pygments 0.3.0
jupyterlab-rainbow-brackets 0.1.0
jupyterlab_server 2.27.3
jupyterlab-spellchecker 0.8.4
jupyterlab-spreadsheet 0.4.2
jupyterlab-spreadsheet-editor 0.7.2
jupyterlab-unfold 0.3.2
jupyterlab_widgets 1.1.11
jupyterlabcodetoc 4.0.1
jupytext 1.16.4
k3d 2.16.1
keyring 24.3.1
kiwisolver 1.4.7
langchain 0.2.17
langchain-community 0.2.18
langchain-core 0.2.43
langchain-mistralai 0.1.13
langchain-text-splitters 0.2.4
langsmith 0.1.141
lazy_loader 0.4
lckr_jupyterlab_variableinspector 3.2.4
linkify-it-py 2.0.3
locket 1.0.0
lsprotocol 2023.0.1
lxml 5.3.0
Markdown 3.7
markdown-it-py 3.0.0
MarkupSafe 3.0.2
marshmallow 3.23.1
matplotlib 3.9.2
matplotlib-inline 0.1.7
mccabe 0.7.0
mdit-py-plugins 0.4.2
mdurl 0.1.2
mediapipe 0.10.15
meshio 5.3.5
mistune 3.0.2
ml_dtypes 0.5.0
more-itertools 10.5.0
mpmath 1.3.0
msgpack 1.1.0
multidict 6.1.0
mypy-extensions 1.0.0
narwhals 1.10.0
nbclassic 1.1.0
nbclient 0.10.0
nbconvert 7.16.4
nbformat 5.10.4
nest-asyncio 1.6.0
networkx 3.4.2
notebook 7.2.2
notebook_shim 0.2.4
numpy 1.26.4
opencv-contrib-python 4.10.0.84
openpyxl 3.1.5
opt_einsum 3.4.0
orjson 3.10.11
outcome 1.3.0.post0
overrides 7.7.0
packaging 24.1
pandas 2.2.3
pandas-flavor 0.6.0
pandocfilters 1.5.1
panel 1.5.3
param 2.1.1
parso 0.8.4
partd 1.4.2
pathspec 0.12.1
patsy 0.5.6
pexpect 4.9.0
pickleshare 0.7.5
pillow 11.0.0
pingouin 0.5.5
pip 24.3.1
pkginfo 1.11.2
platformdirs 4.3.6
pluggy 1.5.0
ply 3.11
poetry 1.8.4
poetry-core 1.9.1
poetry-plugin-export 1.8.0
pooch 1.8.2
prometheus_client 0.21.0
prompt_toolkit 3.0.48
propcache 0.2.0
protobuf 4.25.5
psutil 5.9.8
ptyprocess 0.7.0
pure_eval 0.2.3
pyccolo 0.0.67
pycodestyle 2.12.1
pycparser 2.22
pydantic 2.9.2
pydantic_core 2.23.4
pydocstyle 6.3.0
pyflakes 3.2.0
pygls 1.3.1
Pygments 2.18.0
pyinstrument 5.0.0
pylint 3.3.1
pyparsing 3.2.0
pyproject_hooks 1.2.0
PySide6 6.8.0.2
PySide6_Addons 6.8.0.2
PySide6_Essentials 6.8.0.2
pytest 8.3.3
pytest-asyncio 0.21.2
python-dateutil 2.9.0.post0
python-json-logger 2.0.7
python-lsp-jsonrpc 1.1.2
python-lsp-server 1.12.0
pytoolconfig 1.3.1
pytz 2024.2
pyvista 0.44.2
pyviz_comms 3.0.3
PyYAML 6.0.2
pyzmq 26.2.0
qtconsole 5.6.1
QtPy 2.4.2
RapidFuzz 3.10.1
referencing 0.35.1
requests 2.32.3
requests-toolbelt 1.0.0
rfc3339-validator 0.1.4
rfc3986-validator 0.1.1
rich 13.9.4
rope 1.13.0
rpds-py 0.20.0
SciencePlots 2.1.1
scikit-image 0.24.0
scikit-learn 1.5.2
scikit-posthocs 0.10.0
scipy 1.14.1
scooby 0.10.0
seaborn 0.13.2
selectivesearch 0.4
Send2Trash 1.8.3
setuptools 75.2.0
shellingham 1.5.4
shiboken6 6.8.0.2
simpervisor 1.0.0
six 1.16.0
sniffio 1.3.1
snowballstemmer 2.2.0
sortedcontainers 2.4.0
sounddevice 0.5.1
soupsieve 2.6
Sphinx 8.1.3
sphinx-rtd-theme 3.0.2
sphinxcontrib-applehelp 2.0.0
sphinxcontrib-devhelp 2.0.0
sphinxcontrib-htmlhelp 2.1.0
sphinxcontrib-jquery 4.1
sphinxcontrib-jsmath 1.0.1
sphinxcontrib-qthelp 2.0.0
sphinxcontrib-serializinghtml 2.0.0
SQLAlchemy 2.0.36
stack-data 0.6.3
statsmodels 0.14.4
sympy 1.13.1
tabulate 0.9.0
tblib 3.0.0
tenacity 8.5.0
tensorboard 2.18.0
tensorboard-data-server 0.7.2
terminado 0.18.1
testpath 0.6.0
threadpoolctl 3.5.0
tifffile 2024.9.20
tinycss2 1.4.0
tokenizers 0.20.3
tomli 2.0.2
tomlkit 0.13.2
toolz 1.0.0
torch 2.5.0
torchaudio 2.5.0
torchsummary 1.5.1
torchvision 0.20.0
tornado 6.4.1
tqdm 4.66.5
traitlets 5.14.3
traittypes 0.2.1
trame 3.7.0
trame-client 3.5.0
trame-server 3.2.3
trame-vtk 2.8.12
trame-vuetify 2.7.2
trio 0.27.0
trove-classifiers 2024.10.21.16
types-python-dateutil 2.9.0.20241003
typing_extensions 4.12.2
typing-inspect 0.9.0
tzdata 2024.2
uc-micro-py 1.0.3
ujson 5.10.0
uri-template 1.3.0
urllib3 2.2.3
vedo 2024.5.2
virtualenv 20.27.0
voila 0.5.8
vtk 9.3.1
wcwidth 0.2.13
webcolors 24.8.0
webencodings 0.5.1
websocket-client 1.8.0
websockets 13.1
Werkzeug 3.0.5
whatthepatch 1.0.6
wheel 0.44.0
widgetsnbextension 3.6.10
wslink 2.2.1
xarray 2024.10.0
xattr 1.1.0
xlwings 0.33.3
xyzservices 2024.9.0
yapf 0.40.2
yarl 1.17.1
zict 3.0.0
zipp 3.20.2
```
| closed | 2024-11-30T09:19:47Z | 2024-12-28T03:27:30Z | https://github.com/marcomusy/vedo/issues/1206 | [] | YongcaiHuang | 2 |
newpanjing/simpleui | django | 192 | Generic list and detail views | **What feature would you like to see added?**
1. Add range (interval) search for numeric values
**Leave your contact information so we can get in touch with you**
QQ: xxxxx
Email: 153221318@qq.com | closed | 2019-12-03T08:54:24Z | 2019-12-04T02:40:34Z | https://github.com/newpanjing/simpleui/issues/192 | [
| closed | 2019-12-03T08:54:24Z | 2019-12-04T02:40:34Z | https://github.com/newpanjing/simpleui/issues/192 | [
"enhancement"
] | mn6538 | 0 |
aio-libs/aiomysql | sqlalchemy | 589 | aiomysql does not support TLS on Python 3.8 on Windows | Due to Python 3.8 changing the default event loop on Windows to the proactor loop, `start_tls` does not work, and therefore you cannot connect to a server using TLS.
As per https://github.com/tornadoweb/tornado/issues/2608 and https://github.com/aio-libs/aiohttp/issues/4536, this limitation should probably be documented somewhere.
The solution, change the event loop policy before the event loop is created.
```py
async def main():
# Do stuff
pass
if __name__ == "__main__":
policy = asyncio.WindowsSelectorEventLoopPolicy()
asyncio.set_event_loop_policy(policy)
asyncio.run(main())
``` | open | 2021-06-04T21:53:24Z | 2022-01-22T23:09:24Z | https://github.com/aio-libs/aiomysql/issues/589 | [
"bug",
"docs"
] | huwcbjones | 0 |
widgetti/solara | jupyter | 243 | Internal Server Error caused by KeyError: 'load_extensions' | Thank you for this great library.
I tried to implement an auth0-protected private site using Docker + Poetry and got an internal server error; after looking at the code, it seems there is no way to work around it.
Apparently I should be able to set use_nbextensions=False when calling read_root in server.py, but I can't set it from the places that call it in flask.py or starlette.py.
Indeed, when I set use_nbextensions=False at the server.read_root call site in starlette.py, it worked fine.
If possible, I think it would be better to be able to set this at startup, like the port, for example:
`solara run sol.py --no_use_jupyter_notebook`
Log
```
Hoge | Solara server is starting at http://0.0.0.0:8080
Hoge | ERROR: Exception in ASGI application
Hoge | Traceback (most recent call last):
Hoge | File "/usr/local/lib/python3.11/site-packages/uvicorn/protocols/http/h11_impl.py", line 408, in run_asgi
Hoge | result = await app( # type: ignore[func-returns-value]
Hoge | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Hoge | File "/usr/local/lib/python3.11/site-packages/uvicorn/middleware/proxy_headers.py", line 84, in __call__
Hoge | return await self.app(scope, receive, send)
Hoge | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Hoge | File "/usr/local/lib/python3.11/site-packages/starlette/applications.py", line 116, in __call__
Hoge | await self.middleware_stack(scope, receive, send)
Hoge | File "/usr/local/lib/python3.11/site-packages/starlette/middleware/errors.py", line 186, in __call__
Hoge | raise exc
Hoge | File "/usr/local/lib/python3.11/site-packages/starlette/middleware/errors.py", line 164, in __call__
Hoge | await self.app(scope, receive, _send)
Hoge | File "/usr/local/lib/python3.11/site-packages/starlette/middleware/gzip.py", line 24, in __call__
Hoge | await responder(scope, receive, send)
Hoge | File "/usr/local/lib/python3.11/site-packages/starlette/middleware/gzip.py", line 44, in __call__
Hoge | await self.app(scope, receive, self.send_with_gzip)
Hoge | File "/usr/local/lib/python3.11/site-packages/solara_enterprise/auth/middleware.py", line 127, in __call__
Hoge | await self.app(scope, receive, send_wrapper)
Hoge | File "/usr/local/lib/python3.11/site-packages/starlette/middleware/authentication.py", line 48, in __call__
Hoge | await self.app(scope, receive, send)
Hoge | File "/usr/local/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 62, in __call__
Hoge | await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
Hoge | File "/usr/local/lib/python3.11/site-packages/starlette/_exception_handler.py", line 55, in wrapped_app
Hoge | raise exc
Hoge | File "/usr/local/lib/python3.11/site-packages/starlette/_exception_handler.py", line 44, in wrapped_app
Hoge | await app(scope, receive, sender)
Hoge | File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 746, in __call__
Hoge | await route.handle(scope, receive, send)
Hoge | File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 288, in handle
Hoge | await self.app(scope, receive, send)
Hoge | File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 75, in app
Hoge | await wrap_app_handling_exceptions(app, request)(scope, receive, send)
Hoge | File "/usr/local/lib/python3.11/site-packages/starlette/_exception_handler.py", line 55, in wrapped_app
Hoge | raise exc
Hoge | File "/usr/local/lib/python3.11/site-packages/starlette/_exception_handler.py", line 44, in wrapped_app
Hoge | await app(scope, receive, sender)
Hoge | File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 70, in app
Hoge | response = await func(request)
Hoge | ^^^^^^^^^^^^^^^^^^^
Hoge | File "/usr/local/lib/python3.11/site-packages/solara/server/starlette.py", line 243, in root
Hoge | content = server.read_root(request_path, root_path)
Hoge | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Hoge | File "/usr/local/lib/python3.11/site-packages/solara/server/server.py", line 220, in read_root
Hoge | nbextensions = get_nbextensions()
Hoge | ^^^^^^^^^^^^^^^^^^
Hoge | File "/usr/local/lib/python3.11/site-packages/solara/cache.py", line 95, in __call__
Hoge | value = self.function(*args, **kwargs)
Hoge | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Hoge | File "/usr/local/lib/python3.11/site-packages/solara/server/server.py", line 300, in get_nbextensions
Hoge | load_extensions = jupytertools.get_config(paths, "notebook")["load_extensions"]
Hoge | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^
Hoge | KeyError: 'load_extensions'
``` | closed | 2023-08-14T13:56:35Z | 2023-08-14T14:22:31Z | https://github.com/widgetti/solara/issues/243 | [] | Sanuki-073 | 1 |
junyanz/pytorch-CycleGAN-and-pix2pix | computer-vision | 737 | Is it possible to use a direct image comparison with realB in pix2pix | Hi, I'm trying to do something like this: to have zebras and horses in the same picture and switch each one into the other kind.
With CycleGAN you can get very good models that, for instance, take a picture, ignore the zebras and turn the horses into zebras, and vice versa. I've managed to do that, and to get pictures where the horses are kept and the zebras are removed from the picture. Given the current results, I could in theory easily do both things.
My personal challenge now, as I mentioned, is to get the same model to do the exchange in both directions at the same time. Thanks to my previous experiments I have a pretty good dataset of very parallel image pairs, so I could use pix2pix for that, but the results are not as expected: the discriminator simply can't tell realA apart from realB, so fakeB ends up being just realA, untouched, after a few epochs.
I think a pixel loss would do a good job here, but the pixel model doesn't work like that. I understand from the annotations that it doesn't care about spatial position, so it turns horses into a black-and-white checkerboard, because the average of the zebra version looks like that. I gather the pixel average is the same in realA and realB, and that just pushes the generator that way.
Is there a way to apply a simple 256x256 image-to-image comparison in net_D?
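To be concrete about what I mean by a direct comparison, here is a minimal numpy sketch of a position-aware pixel loss (just to illustrate the idea, not a proposed implementation):

```python
import numpy as np

def pixelwise_l1(fake_b, real_b):
    # position-aware: each pixel is compared to the pixel at the same location
    return float(np.abs(fake_b - real_b).mean())

fake = np.zeros((256, 256, 3))  # stand-in for a generated fakeB
real = np.ones((256, 256, 3))   # stand-in for the paired realB
loss = pixelwise_l1(fake, real)
```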
Thanks.
Also thanks for this wonderful repository and your attention to the issues board. | open | 2019-08-20T14:31:28Z | 2019-08-20T16:59:12Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/737 | [] | thehardmenpath | 1 |
ultrafunkamsterdam/undetected-chromedriver | automation | 967 | TypeError: Can't instantiate abstract class Service with abstract method command_line_args | ```python
service = selenium.webdriver.common.service.Service(
    patcher.executable_path, port, service_args, service_log_path
)
```
After updating, this line raises the error in the title. | open | 2022-12-31T09:57:36Z | 2023-02-21T08:21:08Z | https://github.com/ultrafunkamsterdam/undetected-chromedriver/issues/967 | [] | anovob | 2
gevent/gevent | asyncio | 1,901 | select.epoll support on gpio | So I was working with OPi.GPIO python library and having trouble the edge detection, I encountered No module found epoll, since edge detection uses epoll but epoll currently not supported. Is it ok to disable select? select=False in monkey patch all? | closed | 2022-08-26T22:33:31Z | 2022-08-27T11:02:12Z | https://github.com/gevent/gevent/issues/1901 | [] | pikonek | 0 |
inducer/pudb | pytest | 446 | No output displayed after pressing o | I ran a simple script with following command:
**python -m pudb test.py**
Press 'o' when stopping at Line 4, nothing but hints "Hit Enter to return:" displayed, as shown in image below:

Anything wrong with what I did? Thanks!
| closed | 2021-04-22T03:50:14Z | 2021-07-13T14:04:37Z | https://github.com/inducer/pudb/issues/446 | [] | dehiker | 5 |
hbldh/bleak | asyncio | 1,715 | leaking an uninitialized object of type CBCentralManager | Mac OS 10.11
Python 3.11
bleak 0.22.3
pyobjc 10.3.2
When calling
`await BleakScanner.discover(5.0, return_adv=True)`
I get the following error
```
/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/bleak/backends/corebluetooth/CentralManagerDelegate.py:76: UninitializedDeallocWarning: leaking an uninitialized object of type CBCentralManager
self.central_manager = CBCentralManager.alloc().initWithDelegate_queue_(
Traceback (most recent call last):
[...]
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/bleak/backends/corebluetooth/CentralManagerDelegate.py", line 76, in init
self.central_manager = CBCentralManager.alloc().initWithDelegate_queue_(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ValueError: depythonifying 'pointer', got 'OS_dispatch_queue'
``` | open | 2025-01-22T21:53:00Z | 2025-01-23T00:06:23Z | https://github.com/hbldh/bleak/issues/1715 | [
"3rd party issue",
"Backend: Core Bluetooth"
] | paorin | 1 |
deezer/spleeter | tensorflow | 202 | [Discussion] What are the recommended CPU and memory? | What are the recommended CPU and memory?
Conventional processing of music files. Example: filename → MP3, 7bm | closed | 2019-12-27T04:24:06Z | 2020-04-05T12:38:15Z | https://github.com/deezer/spleeter/issues/202 | [
"question"
] | yoorxee | 2 |
apify/crawlee-python | web-scraping | 85 | Refactor initialization of storages | ### Description
- Currently, if you want to initialize Dataset/KVS/RQ you should use `open()` constructor. And it goes like the following:
- `dataset.open()`
- `base_storage.open()`
- `dataset.__init__()`
- `base_storage.__init__()`
- In the `base_storage.open()` a specific client is selected (local - `MemoryStorageClient` or cloud - `ApifyClient`) using `StorageClientManager`.
- Refactor initialization of memory storage resource clients as well.
### Desired state
- Make it more readable, less error-prone (e.g. user uses a wrong constructor), and extensible by supporting other clients. | closed | 2024-04-03T12:19:43Z | 2024-05-10T16:06:16Z | https://github.com/apify/crawlee-python/issues/85 | [
"t-tooling",
"debt"
] | vdusek | 11 |
pyeve/eve | flask | 1,023 | settings.py search sequence unintuitive and fragile | My first time out with Eve I ran:
from eve import Eve
app = Eve()
And got the exception:
```
Traceback (most recent call last):
File "/Users/daphtdazz/.virtualenvs/py3/lib/python3.5/site-packages/eve/flaskapp.py", line 272, in validate_domain_struct
domain = self.config['DOMAIN']
KeyError: 'DOMAIN'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/daphtdazz/.virtualenvs/py3/lib/python3.5/site-packages/eve/flaskapp.py", line 140, in __init__
self.validate_domain_struct()
File "/Users/daphtdazz/.virtualenvs/py3/lib/python3.5/site-packages/eve/flaskapp.py", line 274, in validate_domain_struct
raise ConfigException('DOMAIN dictionary missing or wrong.')
eve.exceptions.ConfigException: DOMAIN dictionary missing or wrong.
```
This turned out to be because I had django also installed in that virtualenv, which has a `'settings.py'` file in one of its directories, which Eve was trying to use. So I had a look at how Eve finds the `'settings.py'` file, in [flaskapp.py](https://github.com/pyeve/eve/blob/master/eve/flaskapp.py#L229), and the logic seems to be:
1. If we were passed (either via environment, or the call to `Eve()`) an absolute path use that.
2. If we were passed a relative path, first look for it in the directory of the application (`sys.argv[0]`).
3. Then look for it recursively in each directory in sys.path.
This seems unintuitive and fragile to me. Unintuitive because I would expect it first to look in the current directory, but it doesn't at all, nor does it make sense to me to look in the application's directory, and it makes no sense to recurse through all the system paths. Fragile because any module installed that happens to have a settings.py file in it is going to conflict.
So I think there are the following sub-issues:
1. The exception thrown should be improved. Eve should at least say which settings.py it was using in the exception so that you immediately understand it's looking in the wrong file.
2. [The docs](http://eve.readthedocs.io/en/latest/config.html) should be improved, currently they do say that it will look through `sys.path`, but they are I think at worst wrong and best confusing, for example they say:
> Eve will give precedence to dictionary-based settings first, then it will try to locate a file
> passed in EVE_SETTINGS environmental variable (if set) and finally it will try to locate
> settings.py or a file with filename passed to settings flag in constructor.
Whereas actually judging by the code [if an absolute path is passed in](https://github.com/pyeve/eve/blob/master/eve/flaskapp.py#L227) via keyword argument that is prioritised over the environment.
3. I think either the resolution order could be rethought, and I'd suggest:
1. Use absolute path or dictionary passed in to `Eve()`
2. Use environment (relative or absolute).
3. Look in current directory.
2. Deprecate looking in application directory.
3. Deprecate looking in `sys.path[]`.
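For what it's worth, the resolution order suggested in point 3 could be sketched like this (hypothetical helper, not Eve's actual code; dictionary settings omitted for brevity):

```python
import os

def resolve_settings(kwarg=None, env=None, cwd="."):
    """Suggested order: absolute path passed to Eve(), then environment, then current dir."""
    if kwarg and os.path.isabs(kwarg):
        return kwarg
    if env:
        return env if os.path.isabs(env) else os.path.join(cwd, env)
    candidate = os.path.join(cwd, kwarg or "settings.py")
    return candidate if os.path.exists(candidate) else None
```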
Obviously, these are just suggestions! I'll probably make a patch for 1 at least as it's not likely to be controversial, and the others can be discussed.
([This question](https://stackoverflow.com/questions/31396208/where-is-settings-py-supposed-to-be-with-eve/44204533) on stackoverflow suggests I'm not the only one who's done this.) | closed | 2017-05-26T15:40:07Z | 2018-05-18T17:19:54Z | https://github.com/pyeve/eve/issues/1023 | [
"enhancement",
"stale"
] | daphtdazz | 8 |
davidsandberg/facenet | computer-vision | 981 | How can I run compare.py on Windows | open | 2019-02-24T19:22:52Z | 2019-04-03T22:16:27Z | https://github.com/davidsandberg/facenet/issues/981 | [] | mohammedSamirMady | 1 |
sktime/sktime | data-science | 7,407 | [ENH] Create special case of EnsembleForecaster that consists of N copies of identical forecaster to facilitate hyperparameter tuning | Convergence of ML models such as Neural Nets is affected by the initial (random) weights. This effect is often mitigated by creating an ensemble of N instances of the ML model, where each is fitted using different initial weights. This creates a problem when trying to tune hyperparameters of the underlying ML model (e.g. the number of hidden layers in the Neural Net). There is no easy way to require that the same parameter value (e.g. the number of hidden layers) is used for each instance within the ensemble.
An ideal solution, possibly, would be the ability to set a flag that the Ensemble consists of N copies of the same forecaster. When this flag is set, parameters of each instance would be set identically. Hyperparameter tuning, e.g. via grid search, should then be easy to formulate. As importantly, hyperparameter tuning would be efficient, in the sense that the search is restricted to cases where all instances get the same parameter value.
| closed | 2024-11-19T07:12:01Z | 2024-11-23T14:55:57Z | https://github.com/sktime/sktime/issues/7407 | [
"module:forecasting",
"enhancement"
] | ericjb | 1 |
saulpw/visidata | pandas | 1,959 | Current HEAD zsh-completion.py needs option_aliases update | **Small description**
`option_aliases` was removed in ce497f444db6d2f3fc0b8309f5ca839196c33c8b but is still referred to in the zsh completion code.
https://github.com/saulpw/visidata/blob/34808745232e798b0f25e893bb444fc9f3c034eb/dev/zsh-completion.py#L11C41-L11C41
I think the script needs a slight rejig to use the (present) `vd` import instead.
I wonder whether this can be included in future CI?
**Expected result**
The command succeeds.
**Actual result**
```
> /build/visidata-src
> Traceback (most recent call last):
> File "/build/visidata-src/dev/zsh-completion.py", line 11, in <module>
> from visidata.main import option_aliases
> ImportError: cannot import name 'option_aliases' from 'visidata.main' (/build/visidata-src/visidata/main.py)
```
**Steps to reproduce**
```
python dev/zsh-completion.py
```
**Additional context**
~~Please include the version of VisiData and Python.~~
https://github.com/saulpw/visidata/tree/34808745232e798b0f25e893bb444fc9f3c034eb but I listed the commit above that causes the breakage — I suspect this is a two minute fix for somebody familiar with the codebase, though not me. I can help with extending CI, though it might just be a case of adding
```yaml
- name: Ensure VisiData can create completions
run: python dev/zsh-completion.py
```
(I guess you might want to run a linter, instead.) | closed | 2023-07-15T00:32:42Z | 2023-08-16T16:27:27Z | https://github.com/saulpw/visidata/issues/1959 | [
"bug",
"fixed"
] | dbaynard | 8 |
marshmallow-code/apispec | rest-api | 348 | nested load_only fields appears in response | Hi.
I'm using apispec 1.0.0b6 and I love the Swagger response control via load_only/dump_only fields.
But when a schema is nested, load_only fields appear in the response's example value.
related #303 #119
Thanks | closed | 2018-12-20T11:04:28Z | 2019-03-04T09:51:03Z | https://github.com/marshmallow-code/apispec/issues/348 | [] | zeakd | 3 |
microsoft/MMdnn | tensorflow | 791 | Keras model loading broken | Platform (like ubuntu 16.04/win10): Redhat
Python version: 3.6.2
Source framework with version (like Tensorflow 1.4.1 with GPU): Keras 2.2.4
Destination framework with version (like CNTK 2.3 with GPU): Pytorch 1.2.0
Pre-trained model path (webpath or webdisk path): Can't provide (sorry)
Running scripts: `mmconvert -sf keras -df pytorch -iw lstm_lm.hdf5 -in model2.json -om lstm_torch_lm.pt`
This [line](https://github.com/microsoft/MMdnn/blob/master/mmdnn/conversion/keras/keras2_parser.py#L59) appears to be out of date. It should be `from tensorflow.keras.models import model_from_json`
See
https://stackoverflow.com/questions/54897851/tensorflow-cudnnlstm-keras-error-typeerror-keyword-argument-not-understood
| open | 2020-02-21T20:52:17Z | 2020-03-01T04:17:42Z | https://github.com/microsoft/MMdnn/issues/791 | [] | mortonjt | 1 |
pytest-dev/pytest-html | pytest | 395 | Fix flaky test_rerun test on Windows | [test_rerun](https://github.com/pytest-dev/pytest-html/blob/master/testing/test_pytest_html.py#L189) is flaky only for windows environments. The root cause should be identified and fixed.
I have access to a windows machine, so I'll try and take a look soon. In the meantime, please rerun pipelines if this test fails for windows environments **only**, as it's most likely due to this issue and not a real problem
FYI: @BeyondEvil @ssbarnea | open | 2020-12-01T23:52:48Z | 2020-12-13T23:04:17Z | https://github.com/pytest-dev/pytest-html/issues/395 | [
"skip-changelog",
"test",
"windows"
] | gnikonorov | 7 |
graphql-python/gql | graphql | 189 | gql 3.x.y not available as python-poetry dependency | Versions 2.x.y are available as dependencies for python-poetry, but no version 3.x.y is.
Extremely annoying for users of python-poetry, especially since docs for versions 2.x.y are apparently nowhere to be found. | closed | 2021-01-26T15:55:50Z | 2021-02-10T09:58:14Z | https://github.com/graphql-python/gql/issues/189 | [
"type: question or discussion"
] | deedf | 2 |
xlwings/xlwings | automation | 2,594 | Run-time error '13': Type mismatch (French version of Excel) | The following solved it:
```
The xlwings.conf:
I changed the attribute values “Faux” for “False”
``` | open | 2025-03-19T08:17:00Z | 2025-03-19T08:17:31Z | https://github.com/xlwings/xlwings/issues/2594 | [
"bug"
] | fzumstein | 0 |
ultralytics/yolov5 | machine-learning | 13,245 | more details about training procedure | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
Hi,
I have a question about the YOLOv5 training procedure. Specifically, I was wondering whether adaptive training is applied and whether the validation loss plays a role in it; I need to understand whether the validation set is used only to verify the generalization ability of the network, or whether it is also involved in optimizing the training process, for example by changing the learning rate or other hyperparameters.
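For illustration only, this generic reduce-on-plateau sketch shows what it would look like if the validation loss were allowed to drive a hyperparameter such as the learning rate (this is not YOLOv5's actual scheduler; names and numbers are made up):

```python
def reduce_lr_on_plateau(val_losses, lr=1.0, factor=0.5, patience=3):
    """Halve lr whenever val loss fails to improve for `patience` epochs."""
    best, wait = float("inf"), 0
    history = []
    for loss in val_losses:
        if loss < best:
            # Validation loss improved: reset the patience counter.
            best, wait = loss, 0
        else:
            wait += 1
            if wait >= patience:
                # Plateau detected: shrink the learning rate.
                lr *= factor
                wait = 0
        history.append(lr)
    return history


print(reduce_lr_on_plateau([1.0, 0.8, 0.8, 0.8, 0.8, 0.7]))
# [1.0, 1.0, 1.0, 1.0, 0.5, 0.5]
```

If the trainer only uses the validation set for metric reporting, checkpoint selection, or early stopping, no such feedback loop into the optimizer exists.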
Thank you in advance!
Noemi
### Additional
_No response_ | open | 2024-08-06T10:10:48Z | 2024-10-20T19:51:25Z | https://github.com/ultralytics/yolov5/issues/13245 | [
"question"
] | NGtesig | 4 |
keras-team/keras | deep-learning | 20,952 | implement of muon optimizer | [Moun optimizer](https://github.com/KellerJordan/Muon) is an optimizer proposed by OpenAI that is stronger than AdamW. And it has been verified on the [Moonlight model](https://hf-mirror.com/moonshotai/Moonlight-16B-A3B-Instruct).
Has the Keras team implemented it yet? If not, I can submit a relevant PR.
If I were to provide the relevant PR, what should I pay attention to? | open | 2025-02-24T08:45:47Z | 2025-03-04T18:49:38Z | https://github.com/keras-team/keras/issues/20952 | [
"type:feature"
] | pass-lin | 8 |
Evil0ctal/Douyin_TikTok_Download_API | fastapi | 541 | [BUG] API not working? Is it related to TikTok going offline? | INFO If you don't need the TikTok-related APIs, please ignore this message.
INFO: Will watch for changes in these directories: ['/www/wwwroot/Douyin_TikTok_Download_API-main']
INFO: Uvicorn running on http://0.0.0.0:4335 (Press CTRL+C to quit)
INFO: Started reloader process [14167] using StatReload
ERROR Failed to generate TikTok msToken via the API: timed out
INFO The current network cannot reach the TikTok servers normally; a fake msToken has been used so the service can keep running.
INFO The TikTok-related APIs are therefore very likely unusable; please update the proxy in (/tiktok/web/config.yaml). | closed | 2025-01-19T07:30:02Z | 2025-01-21T04:29:23Z | https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/541 | [
"BUG"
] | shuntan | 1 |
plotly/dash | flask | 2,979 | DatePickerRange ignoring stay_open_on_select option | **Describe your context**
Please provide us your environment, so we can easily reproduce the issue.
- replace the result of `pip list | grep dash` below
```
dash 2.17.1
dash-bootstrap-components 1.6.0
dash-extensions 1.0.18
dash-table 5.0.0
```
- if frontend related, tell us your Browser, Version and OS
- OS: Mac OS 14.5
- Browser Chrome
- Version 128.0.6613.85
**Describe the bug**
The DatePickerRange has an option 'stay_open_on_select' option:
stay_open_on_select (boolean; default False): If True the calendar will not close when the user has selected a value and will wait until the user clicks off the calendar.
However, the DatePickerRange always stays open until two dates are selected regardless of whether this setting is True or False. A minimal code example that demonstrates this is:
```
from dash import Dash, html, dcc, Input, Output, callback

app = Dash(__name__)

app.layout = html.Div([
    dcc.DatePickerRange(
        id='date-range-picker',
        stay_open_on_select=False
    ),
    html.Div(id='output-container')
])

@callback(
    Output('output-container', 'children'),
    Input('date-range-picker', 'start_date'),
    Input('date-range-picker', 'end_date'))
def update_output(start_date, end_date):
    return f'{start_date} to {end_date}'

if __name__ == '__main__':
    app.run(debug=True)
```
**Expected behavior**
When `stay_open_on_select` is False, the calendar should close as soon as the user has selected a value, instead of staying open until both dates are selected.
**Screenshots**
If applicable, add screenshots or screen recording to help explain your problem.
| open | 2024-09-04T00:59:08Z | 2024-09-04T13:23:14Z | https://github.com/plotly/dash/issues/2979 | [
"bug",
"P2"
] | brett-matson | 0 |
widgetti/solara | fastapi | 615 | More detail to how Solara works without a jupyter kernel | We have a Voila workflow that we are looking to replace with regular React/API server backends so we can have a lighter-weight, more scalable backend.
From this page
https://solara.dev/documentation/advanced/understanding/voila
it looks like Solara could be a good option. Can you add more detail to that documentation page explaining how Solara converts regular ipywidgets (or ipywidgets wrapped with reacton) into a paradigm that works with a traditional server. It would help to explain the advantages of a Solara solution internally. | open | 2024-04-22T14:22:14Z | 2024-04-23T10:38:06Z | https://github.com/widgetti/solara/issues/615 | [] | paddymul | 1 |
jupyter-incubator/sparkmagic | jupyter | 525 | Can not connect to Sparkmagic Kernel in Docker | Hi,
I am unable to connect to Spark Kernel in Docker Spawner. I am installing SparkMagic in my image and tested the functionality using ipython kernel and it works fine.
But when I am starting Spark Kernel it gives me dead kernel error.
Error Message:
```The kernel has died, and the automatic restart has failed. It is possible the kernel cannot be restarted. If you are not able to restart the kernel, you will still be able to save the notebook, but running code will no longer work until the notebook is reopened.```
From the Docker logs, I see that there is a port error, as it fails to bind a port:
```[I 2019-04-11 21:44:50.384 SingleUserNotebookApp restarter:110] KernelRestarter: restarting kernel (4/5), keep random ports
kernel 534554be-1634-4986-81a7-d2511f7ced16 restarted
Traceback (most recent call last):
File "/opt/conda/lib/python3.5/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/opt/conda/lib/python3.5/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/opt/conda/pp_notebooks_pkgs/sparkmagic/sparkmagic/sparkmagic/kernels/sparkkernel/sparkkernel.py", line 25, in <module>
IPKernelApp.launch_instance(kernel_class=SparkKernel)
File "/opt/conda/lib/python3.5/site-packages/traitlets/config/application.py", line 657, in launch_instance
app.initialize(argv)
File "<decorator-gen-159>", line 2, in initialize
File "/opt/conda/lib/python3.5/site-packages/traitlets/config/application.py", line 87, in catch_config_error
return method(app, *args, **kwargs)
File "/opt/conda/lib/python3.5/site-packages/ipykernel/kernelapp.py", line 469, in initialize
self.init_sockets()
File "/opt/conda/lib/python3.5/site-packages/ipykernel/kernelapp.py", line 260, in init_sockets
self.init_iopub(context)
File "/opt/conda/lib/python3.5/site-packages/ipykernel/kernelapp.py", line 268, in init_iopub
self.iopub_thread = IOPubThread(self.iopub_socket, pipe=True)
File "/opt/conda/lib/python3.5/site-packages/ipykernel/iostream.py", line 68, in __init__
self._setup_pipe_in()
File "/opt/conda/lib/python3.5/site-packages/ipykernel/iostream.py", line 133, in _setup_pipe_in
self._pipe_port = pipe_in.bind_to_random_port("tcp://127.0.0.1")
File "/opt/conda/lib/python3.5/site-packages/zmq/sugar/socket.py", line 260, in bind_to_random_port
return int(port_s)
ValueError: invalid literal for int() with base 10: ''
```
Would appreciate any help on this.
The problem is only with the Spark kernel, as I am able to start the Python kernel and other custom kernels that I have developed successfully. | closed | 2019-04-11T22:18:06Z | 2022-04-27T19:19:42Z | https://github.com/jupyter-incubator/sparkmagic/issues/525 | [] | ayushiagarwal | 0
xuebinqin/U-2-Net | computer-vision | 363 | How can I input video or webcam in the test.py script? | I want to get video input, how should I modify the script? | open | 2023-08-28T02:45:45Z | 2023-08-28T02:45:45Z | https://github.com/xuebinqin/U-2-Net/issues/363 | [] | Hogushake | 0 |
akfamily/akshare | data-science | 5,542 | stock_sse_deal_daily: erroneous Shanghai Stock Exchange daily overview data | The following concerns data errors in rows of the df returned by stock_sse_deal_daily whose "单日情况" (single-day item) column equals "成交金额" (turnover):
1. The official site has the data but the query fails: for 20060712 and 20070430, the data can be found on the Shanghai Stock Exchange website, but stock_sse_deal_daily raises an error when these two dates are queried.
2. Misaligned data columns, e.g. 20211224: compared with the SSE website, the "股票回购" (share buyback) column is correct, but the other columns are shifted.
单日情况 股票 主板A 主板B 科创板 股票回购
成交金额 441.931463 1.320805 4342.353401 4789.323726 3.718057
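The misaligned sample above can be flagged mechanically. A hedged sketch in plain Python (the rows are hand-built dicts shaped like the sample; the contrast row is made up, and real code would iterate over the DataFrames returned by stock_sse_deal_daily):

```python
# Each row mimics one "成交金额" (turnover) record, keyed by date.
rows = {
    "20211224": {"股票": 441.931463, "主板A": 1.320805, "主板B": 4342.353401,
                 "科创板": 4789.323726, "股票回购": 3.718057},
    # A plausible well-formed day for contrast (values are invented).
    "20211223": {"股票": 5000.0, "主板A": 4500.0, "主板B": 30.0,
                 "科创板": 470.0, "股票回购": 0.5},
}

# Main Board B turnover dwarfing Main Board A strongly suggests the
# columns are shifted, since B-share turnover is normally tiny.
suspect_dates = [d for d, r in rows.items() if r["主板B"] > r["主板A"]]
print(suspect_dates)  # ['20211224']
```

This is the same heuristic as the 3179-day filter described below the sample, just written out explicitly.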
!!! This happens frequently: after downloading all the data from 1990 to the present and filtering for records where Main Board B (主板B) turnover is higher than Main Board A (主板A), I found 3179 such days. My filtering logic is not exhaustive, so the actual amount of misaligned data is even larger. | closed | 2025-01-20T05:13:08Z | 2025-02-21T10:16:33Z | https://github.com/akfamily/akshare/issues/5542 | [] | LiuTaolang | 2
PokeAPI/pokeapi | graphql | 425 | Missing Aegislash | Aegislash is not in the pokemon database | closed | 2019-04-26T14:51:05Z | 2024-05-01T09:12:34Z | https://github.com/PokeAPI/pokeapi/issues/425 | [] | 2sodiumsandwich | 5 |