repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
huggingface/datasets | deep-learning | 7,392 | push_to_hub payload too large error when using large ClassLabel feature | ### Describe the bug
When using `datasets.DatasetDict.push_to_hub` an `HfHubHTTPError: 413 Client Error: Payload Too Large for url` is raised if the dataset contains a large `ClassLabel` feature. Even if the total size of the dataset is small.
### Steps to reproduce the bug
``` python
import random
import sys

import datasets

random.seed(42)


def random_str(sz):
    return "".join(chr(random.randint(ord("a"), ord("z"))) for _ in range(sz))


data = datasets.DatasetDict(
    {
        str(i): datasets.Dataset.from_dict(
            {
                "label": [list(range(3)) for _ in range(10)],
                "abstract": [random_str(10_000) for _ in range(10)],
            },
        )
        for i in range(3)
    }
)

features = data["1"].features.copy()
features["label"] = datasets.Sequence(
    datasets.ClassLabel(names=[str(i) for i in range(50_000)])
)
data = data.map(lambda examples: {}, features=features)

feat_size = sys.getsizeof(data["1"].features["label"].feature.names)
print(f"Size of ClassLabel names: {feat_size}")
# Size of ClassLabel names: 444376

data.push_to_hub("dconnell/pubtator3_test")
```
Note that this succeeds if `ClassLabel` has fewer names or if `ClassLabel` is replaced with `Value("int64")`.
### Expected behavior
Should push the dataset to hub.
### Environment info
- `datasets` version: 3.2.0
- Platform: Linux-5.15.0-126-generic-x86_64-with-glibc2.35
- Python version: 3.12.8
- `huggingface_hub` version: 0.28.1
- PyArrow version: 19.0.0
- Pandas version: 2.2.3
- `fsspec` version: 2024.9.0
| open | 2025-02-11T17:51:34Z | 2025-02-11T18:01:31Z | https://github.com/huggingface/datasets/issues/7392 | [] | DavidRConnell | 1 |
deepspeedai/DeepSpeed | pytorch | 5,641 | [BUG] tortoise_tts.py fails on deepspeed/pydantic error | **Describe the bug**
When running `./scripts/tortoise_tts.py` it fails in `deepspeed/runtime/config_utils.py` with what looks like a pydantic conflict.
**To Reproduce**
Install as per instructions
run `./scripts/tortoise_tts.py`
**Expected behavior**
non-fatal output
**ds_report output**
```
ds_report

/home/jw/miniforge3/envs/TTSF/lib/python3.10/site-packages/pydantic/_internal/_config.py:334: UserWarning: Valid config keys have changed in V2:
* 'allow_population_by_field_name' has been renamed to 'populate_by_name'
* 'validate_all' has been renamed to 'validate_default'
  warnings.warn(message, UserWarning)
/home/jw/miniforge3/envs/TTSF/lib/python3.10/site-packages/pydantic/_internal/_fields.py:160: UserWarning: Field "model_persistence_threshold" has conflict with protected namespace "model_".
You may be able to resolve this warning by setting `model_config['protected_namespaces'] = ()`.
  warnings.warn(
Traceback (most recent call last):
  File "/home/jw/miniforge3/envs/TTSF/bin/ds_report", line 3, in <module>
    from deepspeed.env_report import cli_main
  File "/home/jw/miniforge3/envs/TTSF/lib/python3.10/site-packages/deepspeed/__init__.py", line 16, in <module>
    from . import module_inject
  File "/home/jw/miniforge3/envs/TTSF/lib/python3.10/site-packages/deepspeed/module_inject/__init__.py", line 6, in <module>
    from .replace_module import replace_transformer_layer, revert_transformer_layer, ReplaceWithTensorSlicing, GroupQuantizer, generic_injection
  File "/home/jw/miniforge3/envs/TTSF/lib/python3.10/site-packages/deepspeed/module_inject/replace_module.py", line 732, in <module>
    from ..pipe import PipelineModule
  File "/home/jw/miniforge3/envs/TTSF/lib/python3.10/site-packages/deepspeed/pipe/__init__.py", line 6, in <module>
    from ..runtime.pipe import PipelineModule, LayerSpec, TiedLayerSpec
  File "/home/jw/miniforge3/envs/TTSF/lib/python3.10/site-packages/deepspeed/runtime/pipe/__init__.py", line 6, in <module>
    from .module import PipelineModule, LayerSpec, TiedLayerSpec
  File "/home/jw/miniforge3/envs/TTSF/lib/python3.10/site-packages/deepspeed/runtime/pipe/module.py", line 19, in <module>
    from ..activation_checkpointing import checkpointing
  File "/home/jw/miniforge3/envs/TTSF/lib/python3.10/site-packages/deepspeed/runtime/activation_checkpointing/checkpointing.py", line 25, in <module>
    from deepspeed.runtime.config import DeepSpeedConfig
  File "/home/jw/miniforge3/envs/TTSF/lib/python3.10/site-packages/deepspeed/runtime/config.py", line 28, in <module>
    from .zero.config import get_zero_config, ZeroStageEnum
  File "/home/jw/miniforge3/envs/TTSF/lib/python3.10/site-packages/deepspeed/runtime/zero/__init__.py", line 6, in <module>
    from .partition_parameters import ZeroParamType
  File "/home/jw/miniforge3/envs/TTSF/lib/python3.10/site-packages/deepspeed/runtime/zero/partition_parameters.py", line 569, in <module>
    class Init(InsertPostInitMethodToModuleSubClasses):
  File "/home/jw/miniforge3/envs/TTSF/lib/python3.10/site-packages/deepspeed/runtime/zero/partition_parameters.py", line 571, in Init
    param_persistence_threshold = get_config_default(DeepSpeedZeroConfig, "param_persistence_threshold")
  File "/home/jw/miniforge3/envs/TTSF/lib/python3.10/site-packages/deepspeed/runtime/config_utils.py", line 116, in get_config_default
    field_name).required, f"'{field_name}' is a required field and does not have a default value"
AttributeError: 'FieldInfo' object has no attribute 'required'. Did you mean: 'is_required'?
```
**Screenshots**
If applicable, add screenshots to help explain your problem.
**System info (please complete the following information):**
- OS: Debian GNU/Linux 12
- GPU 1 Nvidia GeForce RTX 4090Ti
- Python 3.10.14
- Running in a conda env dedicated to tortoise-tts-fastest
This was a fresh install into a new environment.
I have seen related issues (https://github.com/microsoft/DeepSpeed/issues/4105), but the solution of downgrading pydantic to <2.0.0 does not work, as it creates a slew of other errors.
Currently running
pydantic 2.7.3
pydantic_core 2.18.4
deepspeed 0.9.0
| closed | 2024-06-11T20:27:51Z | 2024-08-22T23:40:46Z | https://github.com/deepspeedai/DeepSpeed/issues/5641 | [
"bug",
"inference"
] | tholonia | 2 |
davidsandberg/facenet | tensorflow | 610 | ResourceExhaustedError (see above for traceback): OOM when allocating tensor with shape[90,17,17,32] | When I train, it shows: ResourceExhaustedError (see above for traceback): OOM when allocating tensor with shape[90,17,17,32]. How do I solve this? | closed | 2018-01-09T11:31:40Z | 2018-04-01T21:29:56Z | https://github.com/davidsandberg/facenet/issues/610 | [] | shikongy | 1 |
plotly/dash-bio | dash | 55 | Test Manhattanplot | List all stylistic or functional issues here.
When done, change issue assignment to the component creator for them to fix the issues. | closed | 2018-11-29T16:46:38Z | 2018-12-06T08:28:42Z | https://github.com/plotly/dash-bio/issues/55 | [] | VeraZab | 7 |
fastapi/sqlmodel | sqlalchemy | 890 | Field cannot autocompletion when its a SQLModel | ### Privileged issue
- [ ] I'm @tiangolo or he asked me directly to create an issue here.
### Issue Content
`Sorry for creating this issue without permission, but this question has been in the discussion area for a week and no one has paid attention to it.`
### First Check
- [X] I added a very descriptive title here.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I searched the SQLModel documentation, with the integrated search.
- [X] I already searched in Google "How to X in SQLModel" and didn't find any information.
- [X] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to SQLModel but to [Pydantic](https://github.com/samuelcolvin/pydantic).
- [X] I already checked if it is not related to SQLModel but to [SQLAlchemy](https://github.com/sqlalchemy/sqlalchemy).
### Commit to Help
- [X] I commit to help with one of those options 👆
### Example Code
```python
from sqlmodel import SQLModel
from typing import Optional


class ModelNode(SQLModel):
    id: int
    name: str


class Node:
    id: int
    name: str


class Cluster(SQLModel):
    model_node: Optional[ModelNode] = None
    node: Optional[Node] = None
```
### Description
`Cluster.node.` triggers autocompletion, but `Cluster.model_node.` does not.
### Operating System
macOS
### Operating System Details
IDE: vscode
### SQLModel Version
0.0.16
### Python Version
3.10.5
### Additional Context


| closed | 2024-04-09T04:37:11Z | 2024-12-07T03:08:59Z | https://github.com/fastapi/sqlmodel/issues/890 | [] | zhaowcheng | 8 |
flairNLP/flair | pytorch | 3,042 | does flairNLP support mypy | I am using pycharm IDE and as such don't have tons of experience with python at present.
For this line:
```
from flair.data import Sentence
```
I get the following warning:
Mypy: Skipping analyzing "flair.data": module is installed, but missing library stubs or py.typed marker
Is there a way to fix this warning? Or does flairNLP not support mypy yet?
"question"
] | sillyquestion | 1 |
docarray/docarray | pydantic | 1,590 | HNSWIndex bug | Why is this not working?
```python
from docarray.index import HnswDocumentIndex
from docarray import DocList, BaseDoc
from docarray.typing import NdArray
import numpy as np


class MyDoc(BaseDoc):
    text: str
    embedding: NdArray[128]


docs = [MyDoc(text='hey', embedding=np.random.rand(128)) for i in range(200)]

a = HnswDocumentIndex[MyDoc](work_dir='./tmp', index_name='index')
a.index(docs=DocList[MyDoc](docs))

resp = a.find_batched(queries=DocList[MyDoc](docs[0:3]), search_field='embedding')
print(f' resp {resp}')
```
It gives this:
```TypeError: ModelMetaclass object argument after must be a mapping, not MyDoc``` | closed | 2023-05-30T16:25:47Z | 2023-05-31T09:39:52Z | https://github.com/docarray/docarray/issues/1590 | [] | JoanFM | 0 |
JaidedAI/EasyOCR | machine-learning | 721 | Output format of labels | open | 2022-05-07T04:09:01Z | 2022-05-07T04:09:26Z | https://github.com/JaidedAI/EasyOCR/issues/721 | [] | abhifanclash | 0 | |
dynaconf/dynaconf | fastapi | 845 | [RFC] Vault Approle authentication with SSH/https/tls disabled | **Is your feature request related to a problem? Please describe.**
Vault authentication via AppRole does not offer a way to skip SSL verification.
**Describe the solution you'd like**
The `vault_loader.py` file (line 33) had to be modified by adding `verify=False` to skip HTTPS verification.
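For illustration only (this is not dynaconf's actual API): the loader builds an hvac client under the hood, and hvac's `Client` accepts a `verify` argument that is forwarded to the underlying HTTP session, so the change amounts to threading a user-facing setting through to that kwarg. A minimal sketch with hypothetical names:

```python
# Hypothetical sketch: a user-facing setting that disables TLS
# verification when building the kwargs for hvac.Client.
# hvac's Client accepts `verify`, which it forwards to requests.
def build_vault_client_kwargs(url, token, verify_ssl=True):
    # verify_ssl=False skips certificate checks, e.g. for plain-HTTP
    # in-cluster access to the Vault service
    return {"url": url, "token": token, "verify": verify_ssl}


kwargs = build_vault_client_kwargs("http://vault:8200", "s.example-token", verify_ssl=False)
```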
**Describe alternatives you've considered**
None
**Additional context**
Our Vault sits in another network behind HTTPS. Our services, however, reside adjacent to Vault in the same Kubernetes environment. We do not need or want to use the HTTPS address, since it would cause hairpinning of the traffic (traffic goes to the nginx ingress, DNS, then the nginx ingress again, and then to the Vault client). We access Vault directly using its Kubernetes service name, and even if that were not the case, providing an option to skip HTTPS verification would be great. I could not find this in your documentation.
https://www.dynaconf.com/secrets/?h=vault#using-vault-server
| open | 2022-12-26T16:07:45Z | 2023-08-21T19:47:45Z | https://github.com/dynaconf/dynaconf/issues/845 | [
"Not a Bug",
"RFC"
] | MrAmbiG | 0 |
mirumee/ariadne | api | 362 | No extensions support in ariadne.contrib.django.views.GraphQLView | As far as I can tell, in order to enable an extension using the Django integration, I would pass an extensions argument to `GraphQLView.as_view()`. However, that results in a Django error:
```
GraphQLView() received an invalid keyword 'extensions'. as_view only accepts arguments that are already attributes of the class.
```
Is this just a missing feature of GraphQLView at the moment? | closed | 2020-04-20T17:15:42Z | 2020-04-22T08:50:18Z | https://github.com/mirumee/ariadne/issues/362 | [] | markedwards | 3 |
inducer/pudb | pytest | 86 | Update docs about IPython stuff | The stuff in `pudb.ipython`. See also https://github.com/inducer/pudb/pull/83
| open | 2013-08-13T04:39:57Z | 2013-08-13T04:39:57Z | https://github.com/inducer/pudb/issues/86 | [] | asmeurer | 0 |
slackapi/python-slack-sdk | asyncio | 971 | 3.4.1 compatibility break with serialized attachments parameter in chat.* method calls | Broken compatibility warning
### Reproducible in:
#### The Slack SDK version
slack-sdk==3.4.1
#### Python runtime version
Python 3.9.2
#### OS info
ProductName: Mac OS X
ProductVersion: 10.15.7
BuildVersion: 19H114
Darwin Kernel Version 19.6.0: Tue Nov 10 00:10:30 PST 2020; root:xnu-6153.141.10~1/RELEASE_X86_64
#### Steps to reproduce:
```
response = client.chat_postMessage(
channel=channel,
ts=messageTs,
text=text,
blocks=json.dumps(blocks),
attachments=json.dumps(attachments)
)
```
### Expected result:
This was working just fine until I updated the SDK from 3.4.0 to 3.4.1. I realize the json.dumps should not be required (and indeed, I prefer the version of the code without it), so it is an easy fix for me and a good lesson learned, but it might catch others by surprise since this is a patch version bump, not a minor one.
### Actual result:
```
Traceback (most recent call last):
  ...SNIP...
  File "/usr/local/lib/python3.9/site-packages/slack_sdk/web/client.py", line 1185, in chat_update
    _warn_if_text_is_missing("chat.update", kwargs)
  File "/usr/local/lib/python3.9/site-packages/slack_sdk/web/internal_utils.py", line 237, in _warn_if_text_is_missing
    [
  File "/usr/local/lib/python3.9/site-packages/slack_sdk/web/internal_utils.py", line 238, in <listcomp>
    attachment.get("fallback")
AttributeError: 'str' object has no attribute 'get'
```
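For what it's worth, the traceback is consistent with plain string iteration: once the attachments list is JSON-encoded, iterating it yields single characters rather than dicts, so `.get()` fails. A minimal stdlib illustration (not the SDK's actual code):

```python
import json

attachments = [{"fallback": "plain text", "text": "hello"}]

# what the pre-3.4.1 call sites could get away with: a JSON string
payload = json.dumps(attachments)

# iterating a str yields single characters, which have no .get()
first = next(iter(payload))
assert isinstance(first, str) and not hasattr(first, "get")

# passing the list directly keeps the dicts, so the fallback check works
assert attachments[0].get("fallback") == "plain text"
```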
### Requirements
I'm not sure if the acceptance of the JSON-encoded parameter before was expected, accidental, or required, but in any case, it worked and now it doesn't. Rolling back to 3.4.0 made it work again. Once I removed the json.dumps for those two parameters, it began working in 3.4.1, but with a warning message about the missing "fallback" attribute. The issue seems to be caused by changes from 57716b74c3acc9be1ce7bdb1f42a8485c53ab3ab. I'm happy enough with my fixes, but figured someone else may run into this, so I'm hoping that this report can help them troubleshoot more quickly. :) | closed | 2021-03-04T20:34:20Z | 2021-03-05T05:27:10Z | https://github.com/slackapi/python-slack-sdk/issues/971 | [
"bug",
"web-client",
"Version: 3x"
] | pmarkert | 4 |
xlwings/xlwings | automation | 1,897 | PivotTable.PivotSelect in xlwings | #### OS Windows 10
#### Versions of xlwings, Excel and Python Office 365, Python 3.9)
#### Good afternoon, I can not implement the selection of values in the pivot table
I use the following code
Please tell me where is the error and how to solve it. Thanks
```python
wb.sheets['Sheet_name'].select()
wb.api.ActiveSheet.PivotTables('Table_name').PivotSelect("'date'[All]", xlLabelOnly, True)
# or
wb.api.sheets('Sheet_name').PivotTables('Table_name').PivotSelection = "week[7]"
``` | open | 2022-04-15T17:00:07Z | 2022-04-15T17:00:07Z | https://github.com/xlwings/xlwings/issues/1897 | [] | Kasid82 | 0 |
explosion/spaCy | deep-learning | 13,769 | Bug in Span.sents | <!-- NOTE: For questions or install related issues, please open a Discussion instead. -->
When a `Doc`'s entity is in the second to the last sentence, and the last sentence consists only of one token, `entity.sents` includes that last 1-token sentence (even though the entity is fully contained by the previous sentence.
## How to reproduce the behaviour
<!-- Include a code example or the steps that led to the problem. Please try to be as specific as possible. -->
```
text = "This is a sentence. This is another sentence. Third"
doc = nlp.tokenizer(text)
doc[0].is_sent_start = True
doc[5].is_sent_start = True
doc[10].is_sent_start = True
doc.ents = [('ENTITY', 7, 9)] # "another sentence" phrase in the second sentence
entity = doc.ents[0]
print(f"Entity: {entity}. Sentence: {entity.sent} Sentences: {list(entity.sents)}")
```
Output:
```
Entity: another sentence. Sentence: This is another sentence. Sentences: [This is another sentence., Third]
```
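The expected behaviour — report only the sentences whose token ranges actually overlap the span — can be sketched with plain offsets (a stand-in for illustration, not spaCy's actual internals):

```python
# Hypothetical stand-in: a sentence (start, end) should be reported
# for a span only when the two half-open token ranges overlap.
def overlapping_sentences(span, sentences):
    start, end = span
    return [s for s in sentences if s[0] < end and start < s[1]]


# the entity "another sentence" spans tokens 7..9; the one-token
# third sentence (token 10) does not overlap it
sents = overlapping_sentences((7, 9), [(0, 5), (5, 10), (10, 11)])
# sents == [(5, 10)]
```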
## Your Environment
<!-- Include details of your environment. You can also type `python -m spacy info --markdown` and copy-paste the result here.-->
* Operating System:
* Python Version Used:
* spaCy Version Used:
* Environment Information:
| open | 2025-03-12T18:03:13Z | 2025-03-12T18:03:13Z | https://github.com/explosion/spaCy/issues/13769 | [] | nrodnova | 0 |
apache/airflow | machine-learning | 47,386 | Opt out of using versioned bundle | ### Description
We need a way for both individual dags and a whole instance to opt out of using versioned bundles.
I think the most bang for the buck will be this approach:
- config options, something like `use_bundle_versioning`
- DAG kwarg that uses that config option as the default
- [conditionally add bundle_version to the dagrun](https://github.com/apache/airflow/blob/e49d7964de28f06c4ca8e28719603bb644501fad/airflow/models/dag.py#L277)
Basically, very similar interface-wise to how we handle `max_active_tasks`.
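To illustrate the proposed interface (all names here are hypothetical stand-ins, not Airflow's actual API): a global config default that a per-DAG kwarg can override, with the dag run only recording a bundle version when versioning is enabled:

```python
# Hypothetical sketch of the proposed opt-out, not Airflow's real code.
CONFIG = {"use_bundle_versioning": True}  # stand-in for an airflow.cfg option


class Dag:
    def __init__(self, dag_id, use_bundle_versioning=None):
        self.dag_id = dag_id
        # fall back to the instance-wide config when not set per-DAG,
        # mirroring how max_active_tasks defaults are handled
        if use_bundle_versioning is None:
            use_bundle_versioning = CONFIG["use_bundle_versioning"]
        self.use_bundle_versioning = use_bundle_versioning

    def create_dagrun(self, bundle_version):
        # conditionally attach the bundle version to the dag run
        return {
            "bundle_version": bundle_version if self.use_bundle_versioning else None
        }
```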
### Use case/motivation
This allows folks to opt in to running on the latest bundle version for each task - basically how Airflow 2 operates, even if they have a versioned bundle under the hood.
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [x] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| closed | 2025-03-05T14:21:22Z | 2025-03-11T19:19:18Z | https://github.com/apache/airflow/issues/47386 | [
"kind:feature",
"area:core",
"AIP-66: DAG Bundle/Manifest"
] | jedcunningham | 0 |
PaddlePaddle/models | nlp | 5,197 | Question about TensorRT in the Paddle installation | My system is Ubuntu 18.04, the GPU is a 1660 Ti, and I installed CUDA 10.0 + cuDNN 7. The installed TensorRT is 6.0.1.5, and the Paddle version is 1.8.5.
But after installing, I get the following message (even though I have already installed TensorRT and added its library directory to the path):
Suggestions:
1. Check if TensorRT is installed correctly and its version is matched with paddlepaddle you installed.
2. Configure TensorRT dynamic library environment variables as follows:
- Linux: set LD_LIBRARY_PATH by `export LD_LIBRARY_PATH=...`
- Windows: set PATH by `set PATH=XXX;`
>>> fluid.install_check.run_check()
| open | 2021-01-13T02:48:36Z | 2024-02-26T05:09:26Z | https://github.com/PaddlePaddle/models/issues/5197 | [] | mocaibupt | 1 |
ultralytics/yolov5 | deep-learning | 12,496 | Results of the YOLOv5x-seg model | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
Hello, I have a question about the results of the YOLOv5x-seg model.
I trained this model three times using the same dataset and got identical results of precision, recall, mAP, mask mAP, etc.
The command I used to train the model is below.
python segment/train.py --data data/***.yaml --cfg models/segment/yolov5x-seg.yaml --weight yolov5x-seg.pt --img 640 --batch 16 --epochs 1000
Do you know if this is normal?
### Additional
_No response_ | closed | 2023-12-13T03:10:38Z | 2024-10-20T19:34:05Z | https://github.com/ultralytics/yolov5/issues/12496 | [
"question",
"Stale"
] | kim2429 | 5 |
statsmodels/statsmodels | data-science | 8,587 | X13ARIMA & statsmodels: X13NotFoundError: x12a and x13as not found on path | #### Describe the bug
I am on Windows. I downloaded the Windows version of the X13 software from https://www.census.gov/data/software/x13as.X-13ARIMA-SEATS.html#list-tab-635278563.
I would like to use it with my Python code as below.
But I get error:
X13NotFoundError: x12a and x13as not found on path. Give the path, put them on PATH, or set the X12PATH or X13PATH environmental variable.
#### Code Sample, a copy-pastable example if possible
```python
import pandas as pd
from pandas import Timestamp

s = pd.Series(
    {Timestamp('2013-03-01 00:00:00'): 838.2,
     Timestamp('2013-04-01 00:00:00'): 865.17,
     Timestamp('2013-05-01 00:00:00'): 763.0})

import os
os.chdir(r'C:\Users\user-name\Downloads\x13as_ascii-v1-1-b59\x13as')

import statsmodels.api as sm
sm.tsa.x13_arima_analysis(s)
```
#### Expected Output
#### Output of ``import statsmodels.api as sm; sm.show_versions()``
| open | 2022-12-21T05:51:05Z | 2022-12-25T15:19:28Z | https://github.com/statsmodels/statsmodels/issues/8587 | [] | PL450 | 5 |
jina-ai/clip-as-service | pytorch | 455 | Not able to pass value to boolean arguments | **Prerequisites**
> Please fill in by replacing `[ ]` with `[x]`.
* [x] Are you running the latest `bert-as-service`?
My version: 1.9.6
* [x] Did you follow [the installation](https://github.com/hanxiao/bert-as-service#install) and [the usage](https://github.com/hanxiao/bert-as-service#usage) instructions in `README.md`?
* [x] Did you check the [FAQ list in `README.md`](https://github.com/hanxiao/bert-as-service#speech_balloon-faq)?
* [x] Did you perform [a cursory search on existing issues](https://github.com/hanxiao/bert-as-service/issues)?
---
### Description
> Please replace `YOUR_SERVER_ARGS` and `YOUR_CLIENT_ARGS` accordingly. You can also write your own description for reproducing the issue.
I'm using this command to start the server:
```bash
bert-serving-start -model_dir ./BERT/uncased_L-24_H-1024_A-16/ -num_worker=4 -max_seq_len=30 -pooling_strategy=NONE -show_tokens_to_client=True
```
and calling the server via:
```python
bc = BertClient(YOUR_CLIENT_ARGS)
bc.encode(['hello world!', 'this is it'], show_tokens=True)
```
Then this issue shows up on server side:
**bert-serving-start: error: argument -show_tokens_to_client: ignored explicit argument 'True'**
...
I tried different ways to set the argument and none of them worked:
- -show_tokens_to_client=True
- -show_tokens_to_client=true
- -show_tokens_to_client='True'
- -show_tokens_to_client='true'
- -show_tokens_to_client True
- -show_tokens_to_client true
- -no-show_tokens_to_client
**Besides this parameter (-show_tokens_to_client), I have the same issue with all other boolean parameters:**
-cpu
-xla
-fp16
I checked the source code and googled for this issue, but still have no idea of how to fix it.
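For context (my own guess at the cause, not verified against bert-as-service's source): the error message matches how argparse treats `store_true` flags — they take no explicit value, so the flag is either present (True) or absent (False), and any `=True` suffix is rejected. A stdlib sketch:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("-show_tokens_to_client", action="store_true")

# omitting the flag leaves it False; the bare flag sets it True
assert parser.parse_args([]).show_tokens_to_client is False
assert parser.parse_args(["-show_tokens_to_client"]).show_tokens_to_client is True

# "-show_tokens_to_client=True" is rejected ("ignored explicit argument"),
# because store_true actions accept no value; argparse exits on error
try:
    parser.parse_args(["-show_tokens_to_client=True"])
except SystemExit:
    pass
```

If this is indeed the cause, the bare flag form (`-show_tokens_to_client` with no value) would be the one that works.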

Does anybody encounter the same problem? | closed | 2019-09-29T04:21:29Z | 2019-09-29T05:05:55Z | https://github.com/jina-ai/clip-as-service/issues/455 | [] | ZhaoWang-IIT | 2 |
huggingface/datasets | pytorch | 6,597 | Dataset.push_to_hub of a canonical dataset creates an additional dataset under the user namespace | While using `Dataset.push_to_hub` of a canonical dataset, an additional dataset was created under my user namespace.
## Steps to reproduce the bug
The command:
```python
commit_info = ds.push_to_hub(
"caner",
config_name="default",
commit_message="Convert dataset to Parquet",
commit_description="Convert dataset to Parquet.",
create_pr=True,
token=token,
)
```
creates the additional dataset `albertvillanova/caner`. | closed | 2024-01-16T11:27:07Z | 2024-02-05T12:29:37Z | https://github.com/huggingface/datasets/issues/6597 | [
"bug"
] | albertvillanova | 6 |
Avaiga/taipy | automation | 1,513 | Remove sql repository | Because the sql repository type does not bring much value to users, and because it is painful to maintain, we want to remove the feature.
- [ ] Remove the sql repository type from Taipy Community
- [ ] Remove the related tests
- [ ] Update the integration testing repository
- [ ] Update the documentation (directly in the `feature/709-user-man` branch or wait for it to be merged)
- [ ] Update the release notes (directly in the `feature/709-user-man` branch or wait for it to be merged) | closed | 2024-07-15T07:47:45Z | 2024-07-29T09:00:35Z | https://github.com/Avaiga/taipy/issues/1513 | [
"Core",
"📈 Improvement",
"📄 Documentation",
"⚙️Configuration",
"🟧 Priority: High",
"📝Release Notes",
"🔒 Staff only",
"Core: Repository"
] | jrobinAV | 0 |
matplotlib/mplfinance | matplotlib | 221 | how to control the lines size when add Own Technical and add legend? | hi, Daniel:
I use mpf.make_addplot to add my own technical indicators (MA55, MA200, ...), but the lines are too thick and I need to shrink them.
At the moment, I also need to add a legend, using a different color for each line.
Can you give me some advice?
thanks
| closed | 2020-07-19T09:17:30Z | 2023-06-15T14:17:13Z | https://github.com/matplotlib/mplfinance/issues/221 | [
"question"
] | cvJie | 3 |
harry0703/MoneyPrinterTurbo | automation | 390 | 您好,这个项目是不是不能用GPU跑呀,我在服务器上跑,看并没有使用到GPU,求指教 | 您好,这个项目是不是不能用GPU跑呀,我在服务器上跑,看并没有使用到GPU,求指教 | closed | 2024-05-28T06:42:21Z | 2024-05-31T02:05:40Z | https://github.com/harry0703/MoneyPrinterTurbo/issues/390 | [] | ck1123456 | 1 |
ScrapeGraphAI/Scrapegraph-ai | machine-learning | 857 | Browserbase.__init__() got an unexpected keyword argument 'project_id' | **Describe the bug**
Looks like ScrapeGraphAI's implementation of the BrowserBase SDK does not match its current SDK API.
See how its currently implemented: https://github.com/ScrapeGraphAI/Scrapegraph-ai/blob/fe89ae29e6dc5f4322c25c693e2c9f6ce958d6e2/scrapegraphai/docloaders/browser_base.py#L60
See how it should be implemented: https://github.com/browserbase/sdk-python/tree/v1.0.5
```
bb = Browserbase(
# This is the default and can be omitted
api_key=BROWSERBASE_API_KEY,
)
session = client.sessions.create(
project_id=BROWSERBASE_PROJECT_ID,
)
```
Passing `project_id` like this leads to the error in the title.
| closed | 2024-12-31T02:39:16Z | 2025-01-12T21:44:51Z | https://github.com/ScrapeGraphAI/Scrapegraph-ai/issues/857 | [
"bug"
] | Kilowhisky | 2 |
ckan/ckan | api | 7,654 | Wrong facet function called when defining a custom group | ## CKAN version
2.9, 2.10, master
## Describe the bug
When a developer defines a custom group with `group_type` set to something other than `group`, `group_facets` is not called; instead CKAN calls `organization_facets`:
https://github.com/ckan/ckan/blob/71d5d1be495ec9662323eb69d6f71b2ccbb894f2/ckan/views/group.py#L378-L387
### Steps to reproduce
Create a custom group with `group_type` set to something other than `group`. Try to update the facets of that group.
### Expected behavior
`group_facets` should be called when it is not an organization.
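The intended dispatch can be sketched as (a hypothetical helper for illustration, not CKAN's actual code):

```python
# Hypothetical stand-in: pick the facet hook from whether the group
# is an organization, not from whether group_type equals "group"
def select_facet_hook(is_organization, group_facets, organization_facets):
    return organization_facets if is_organization else group_facets


# a custom group type that is not an organization should still get
# the group facet function
hook = select_facet_hook(False, "group_facets", "organization_facets")
# hook == "group_facets"
```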
| open | 2023-06-15T09:55:35Z | 2023-12-20T07:24:00Z | https://github.com/ckan/ckan/issues/7654 | [
"Good for Contribution",
"Beginner Friendly"
] | Zharktas | 1 |
marshmallow-code/apispec | rest-api | 631 | TimeDelta fields not generating useful documentation | Documentation for `marshmallow.fields.TimeDelta` is not capturing anything about the field type except the description. Under the hood, Marshmallow treats it as an `int` that gets converted to a `datetime.timedelta` on deserialization.
Here is the line from the schema declaration:
```py
uptime = fields.TimeDelta(precision="minutes", required=False, metadata={"description":"running time"})
```
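For reference, the int-to-timedelta round trip that happens under the hood can be sketched with the standard library alone (values are illustrative):

```python
from datetime import timedelta

# serialized form: an integer count of whole units (precision="minutes")
serialized = 90

# deserialization: the integer becomes a datetime.timedelta
value = timedelta(minutes=serialized)

# serializing back divides total seconds by the unit size
assert int(value.total_seconds() // 60) == serialized
```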
Example output of "uptime" field (type TimeDelta) from openapi.json generated by flask-smorest:
```json
{
"uptime": {
"description": "running time"
}
}
```
And here is a screen grab of the Swagger UI:

I think it should be treated as a number, with the precision string spelled out in the documentation. Is there a way to add this? | closed | 2021-01-25T22:45:51Z | 2021-06-14T09:03:14Z | https://github.com/marshmallow-code/apispec/issues/631 | [] | camercu | 1 |
tflearn/tflearn | tensorflow | 422 | feed_dict_builder does not work with traditional feed dict | In https://github.com/tflearn/tflearn/blob/master/tflearn/utils.py, in `feed_dict_builder`, if the input feed dict contains a mapping of tf.Tensor to data, the entry is not added to the resulting feed dict.
Lines 294 and 331 continue the iteration through the input feed_dict but never update the output feed dict.
As a result, when trying to predict by passing a tensor -> value feed dict, prediction fails.
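A simplified pure-Python stand-in for the reported pattern (not tflearn's actual code): the loop hits `continue` for tensor-keyed entries and the output dict is never updated, so those entries are silently dropped:

```python
# Non-string keys play the role of tf.Tensor keys in this sketch.
def feed_dict_builder(input_feed):
    out = {}
    for key, value in input_feed.items():
        if not isinstance(key, str):
            continue  # bug: the entry is skipped and never copied to `out`
        out[key] = value
    return out


result = feed_dict_builder({object(): [1.0], "input_layer": [2.0]})
# result == {"input_layer": [2.0]} -- the tensor-keyed entry was dropped
```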
| open | 2016-10-29T20:32:15Z | 2016-10-30T19:37:04Z | https://github.com/tflearn/tflearn/issues/422 | [] | blake-varden | 3 |
jwkvam/bowtie | jupyter | 199 | cache plotly layout data | plotly_relayout doesn't always output both xaxis and yaxis info. Therefore, this info needs to be cached so it can be presented to the user. Waiting to hear from plotly if this is intentional, but I would guess this will need to be handled in bowtie.
https://github.com/plotly/plotly.js/issues/2330 | open | 2018-02-04T07:04:42Z | 2018-07-24T01:42:38Z | https://github.com/jwkvam/bowtie/issues/199 | [] | jwkvam | 0 |
2noise/ChatTTS | python | 263 | Error installing and running on apple silicon - M1 chip | I get this error:
```bash
Traceback (most recent call last):
File "/Users/🤓/demo-tts/main.py", line 1, in <module>
import ChatTTS
File "/Users/🤓/anaconda3/envs/ml-env/lib/python3.10/site-packages/ChatTTS/__init__.py", line 1, in <module>
from .core import Chat
File "/Users/🤓/anaconda3/envs/ml-env/lib/python3.10/site-packages/ChatTTS/core.py", line 6, in <module>
from chattts.model.dvae import DVAE
ModuleNotFoundError: No module named 'chattts'
```
when i try to run this code:
```python
import torch
import torchaudio

import ChatTTS
from IPython.display import Audio
chat = ChatTTS.Chat()
chat.load_models(compile=False) # Set to True for better performance
texts = """
chat T T S is a text to speech model designed for dialogue applications.
[uv_break]it supports mixed language input [uv_break]and offers multi speaker
capabilities with precise control over prosodic elements [laugh]like like
[uv_break]laughter[laugh], [uv_break]pauses, [uv_break]and intonation.
[uv_break]it delivers natural and expressive speech,[uv_break]so please
[uv_break] use the project responsibly at your own risk.[uv_break]
""".replace('\n', '') # English is still experimental.
wavs = chat.infer(texts, )
torchaudio.save("output1.wav", torch.from_numpy(wavs[0]), 24000)
```
OS: macOS, M1 chip | closed | 2024-06-05T09:21:53Z | 2025-01-26T07:52:46Z | https://github.com/2noise/ChatTTS/issues/263 | [
"stale"
] | FotieMConstant | 5 |
ets-labs/python-dependency-injector | asyncio | 615 | Lazy-load pydantic settings in configuration | Hi,
Thank you for this package!
I am using a container that loads configuration from a Pydantic setting:
```
config = providers.Configuration(pydantic_settings=[Settings()])
```
The problem I am running into is that the Settings are instantiated at import time, rather than at the time the container is created.
This is an issue especially in unit tests, because the unit tests will set up some environment variables prior to instantiating the services. If the Settings class declares a mandatory setting, and it is not available until the unit test runs and injects it into the environment, the code crashes at import time. Likewise if the test, or the application code, overrides an environment variable with a custom value, it is not taken into account since the Pydantic settings have already been read.
I could work around it by removing the `pydantic_settings` parameter and calling `container.config.from_pydantic`, but that means the application code needs to have a reference to Settings, and it is also complicated by the fact that there are nested containers so I would need to pass the correct Settings to each nested container. I thought too of making the setting non-mandatory in the Pydantic declaration and then overriding the container configuration from the tests, but it would be ideal to keep the settings declaration as expressive as possible, including whether a setting is required or not.
Is it possible to defer the initialization for the container? The closest I found was using a `Resource` on the container, and it seems to work fine, but it adds a bit of boilerplate code:
```python
class MyContainer(DeclarativeContainer):
__self__ = providers.Self()
config = providers.Configuration()
_load_config = providers.Resource(lambda c: c.config.from_pydantic(Settings()), c=__self__)
``` | open | 2022-08-17T06:12:45Z | 2022-08-17T06:12:45Z | https://github.com/ets-labs/python-dependency-injector/issues/615 | [] | nicocrm | 0 |
gradio-app/gradio | data-visualization | 10,252 | Browser get Out of Memory when using Plotly for plotting. | ### Describe the bug
I used Gradio to create a page for monitoring an image that needs to be refreshed continuously. When I used Plotly for plotting and set the refresh rate to 10 Hz, the browser showed an "**Out of Memory**" error after running for less than 10 minutes.
I found that the issue is caused by the `Plot.svelte` file generating new CSS, which is then continuously duplicated by `PlotlyPlot.svelte`.
This problem can be resolved by making the following change in the `Plot.svelte` file:
Replace:
```javascript
key += 1;
let type = value?.type;
```
With:
```javascript
let type = value?.type;
if (type !== "plotly") {
key += 1;
}
```
In other words, if the plot type is `plotly`, no new CSS will be generated.
Finally, I’m new to both Svelte and TypeScript, so some of my descriptions might not be entirely accurate, but this method does resolve the issue.
### Have you searched existing issues? 🔎
- [X] I have searched and found no existing issues
### Reproduction
```python
import gradio as gr
import numpy as np
from datetime import datetime
import plotly.express as px
def get_image(shape):
# data = caget(pv)
x = np.arange(0, shape[1])
y = np.arange(0, shape[0])
X, Y = np.meshgrid(x, y)
xc = np.random.randint(0, shape[1])
yc = np.random.randint(0, shape[0])
data = np.exp(-((X - xc) ** 2 + (Y - yc) ** 2) / (2 * 100**2)) * 1000
data = data.reshape(shape)
return data
fig: None = px.imshow(
get_image((1200, 1920)),
color_continuous_scale="jet",
)
fig["layout"]["uirevision"] = 'constant'
# fig["config"]["plotGlPixelRatio"] = 1
# fig.update_traces(hovertemplate="x: %{x} <br> y: %{y} <br> z: %{z} <br> color: %{color}")
# fig.update_layout(coloraxis_showscale=False)
fig["layout"]['hovermode']=False
fig["layout"]["annotations"]=None
def make_plot(width, height):
shape = (int(height), int(width))
img = get_image(shape)
## image plot
fig["data"][0].update(z=img)
return fig
with gr.Blocks(delete_cache=(120, 180)) as demo:
timer = gr.Timer(0.5, active=False)
with gr.Row():
with gr.Column(scale=1) as Column1:
with gr.Row():
shape_x = gr.Number(value=480, label="Width")
shape_y = gr.Number(value=320, label="Height")
with gr.Row():
start_btn = gr.Button(value="Start")
stop_btn = gr.Button(value="Stop")
with gr.Column(scale=2):
plot = gr.Plot(value=fig, label="Plot")
timer.tick(make_plot, inputs=[shape_x, shape_y], outputs=[plot])
stop_btn.click(
lambda: gr.Timer(active=False),
inputs=None,
outputs=[timer],
)
start_btn.click(
lambda: gr.Timer(0.1, active=True),
inputs=None,
outputs=[timer],
)
if __name__ == "__main__":
demo.queue(max_size=10, default_concurrency_limit=10)
demo.launch(server_name="0.0.0.0", server_port=8080, share=False, max_threads=30)
```
### Screenshot

### Logs
_No response_
### System Info
```shell
$ gradio environment
Gradio Environment Information:
------------------------------
Operating System: Linux
gradio version: 5.9.1
gradio_client version: 1.5.2
------------------------------------------------
gradio dependencies in your environment:
aiofiles: 23.2.1
anyio: 4.6.2.post1
audioop-lts is not installed.
fastapi: 0.115.4
ffmpy: 0.4.0
gradio-client==1.5.2 is not installed.
httpx: 0.27.2
huggingface-hub: 0.26.2
jinja2: 3.1.4
markupsafe: 2.1.5
numpy: 2.0.2
orjson: 3.10.11
packaging: 24.2
pandas: 2.2.3
pillow: 11.0.0
pydantic: 2.9.2
pydub: 0.25.1
python-multipart: 0.0.20
pyyaml: 6.0.2
ruff: 0.7.2
safehttpx: 0.1.6
semantic-version: 2.10.0
starlette: 0.41.2
tomlkit: 0.12.0
typer: 0.12.5
typing-extensions: 4.11.0
urllib3: 2.2.3
uvicorn: 0.32.0
authlib; extra == 'oauth' is not installed.
itsdangerous; extra == 'oauth' is not installed.
gradio_client dependencies in your environment:
fsspec: 2024.10.0
httpx: 0.27.2
huggingface-hub: 0.26.2
packaging: 24.2
typing-extensions: 4.11.0
websockets: 12.0
```
### Severity
Blocking usage of gradio | closed | 2024-12-25T05:16:18Z | 2025-02-08T00:56:23Z | https://github.com/gradio-app/gradio/issues/10252 | [
"bug"
] | Reflux00 | 0 |
reloadware/reloadium | django | 52 | Plugin 0.8.6 (with Reloadium 0.9.3) breaks with PyCharm 2022.2.3 | **Describe the bug**
Reloadium breaks.
I had Reloadium installed and upgraded both the PyCharm and Reloadium versions.
After the upgrade, the plugin fails when running. Because the code is obfuscated I cannot see where it breaks, but I have attached the console log.
**Screenshots**

**Desktop (please complete the following information):**
- OS: Windows 11
- Reloadium package version: 0.8.6
- Editor: PyCharm 2022.2.3 (Professional Edition) Build #PY-222.4345.23, built on October 10, 2022
- Run mode: Run
| closed | 2022-10-20T11:58:54Z | 2022-10-24T12:37:28Z | https://github.com/reloadware/reloadium/issues/52 | [] | laurapons | 9 |
serengil/deepface | machine-learning | 689 | Emotion confidence | Hello all,
Is it possible to get the confidence of each emotion? I'm using model_name='Facenet', detector_backend='retinaface'.
I know that each emotion comes with a score value, but it is not the confidence at all.
Thank you. | closed | 2023-03-02T16:06:19Z | 2023-03-02T16:08:58Z | https://github.com/serengil/deepface/issues/689 | [
"question"
] | danigh99 | 1 |
google/trax | numpy | 1,061 | Optimizer tree_init returns a slots list, but tree_update returns a slots tuple | ### Description
`tree_init` and `tree_update` are not consistent. One returns a list for the slots, the other a tuple.
It is a super minor detail but I was trying to conditionally run a `tree_update` with `jax.cond`, and this minor difference made that break, since the PyTreeDefs were different.
Casting the slots list comprehension to a tuple ([here](https://github.com/google/trax/blob/0ca17db895c7d9bb203e66e074f49e9481b87513/trax/optimizers/base.py#L119-L120)) solved this for me, but I'm not sure if you want to go with tuple or list so I raise an issue instead of PR.
### Environment information
```
OS: Ubuntu 18.04
$ pip freeze | grep trax
-e git+git@github.com:google/trax.git@0ca17db895c7d9bb203e66e074f49e9481b87513#egg=trax
(latest commit from Sep 30)
$ pip freeze | grep tensor
tensorflow==2.3.1
$ pip freeze | grep jax
jax==0.2.0
jaxlib @ https://storage.googleapis.com/jax-releases/cuda110/jaxlib-0.1.55-cp36-none-manylinux2010_x86_64.whl
$ python -V
Python 3.6.9
```
### For bugs: reproduction and error logs
You can add the following lines to `optimizers_test.py` and see the behavior.
```python
# Steps to reproduce:
# Show that tree_update returns slots in a tuple not list
old_slots = opt_2.slots
grad_tree = np.zeros_like(weight_tree)
_, new_slots, _ = opt_2.tree_update(1, grad_tree, weight_tree, opt_2.slots, opt_2.opt_params)
self.assertIsInstance(old_slots, list) # PASS
self.assertIsInstance(opt_2.slots, list) # FAIL. it's a tuple
self.assertIsInstance(new_slots, list) # FAIL. it's a tuple
```
```
# Error logs:
TypeError: true_fun and false_fun output must have same type structure, got PyTreeDef(tuple, [PyTreeDef(dict[['dyn']], [PyTreeDef(dict[['ff']], [PyTreeDef(dict[['dense0', 'dense1', 'dense2', 'dense_last']], [PyTreeDef(tuple, [*,*]),PyTreeDef(tuple, [*,*]),PyTreeDef(tuple, [*,*]),PyTreeDef(tuple, [*,*])])])]),PyTreeDef(tuple, [PyTreeDef(tuple, [*,*]),PyTreeDef(tuple, [*,*]),PyTreeDef(tuple, [*,*]),PyTreeDef(tuple, [*,*]),PyTreeDef(tuple, [*,*]),PyTreeDef(tuple, [*,*]),PyTreeDef(tuple, [*,*]),PyTreeDef(tuple, [*,*])]),PyTreeDef(dict[['gradients_l2', 'weights_l2']], [*,*])]) and PyTreeDef(tuple, [PyTreeDef(dict[['dyn']], [PyTreeDef(dict[['ff']], [PyTreeDef(dict[['dense0', 'dense1', 'dense2', 'dense_last']], [PyTreeDef(tuple, [*,*]),PyTreeDef(tuple, [*,*]),PyTreeDef(tuple, [*,*]),PyTreeDef(tuple, [*,*])])])]),PyTreeDef(list, [PyTreeDef(tuple, [*,*]),PyTreeDef(tuple, [*,*]),PyTreeDef(tuple, [*,*]),PyTreeDef(tuple, [*,*]),PyTreeDef(tuple, [*,*]),PyTreeDef(tuple, [*,*]),PyTreeDef(tuple, [*,*]),PyTreeDef(tuple, [*,*])]),PyTreeDef(dict[['gradients_l2', 'weights_l2']], [*,*])]).
```
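The root cause can be illustrated without jax at all — a library-free sketch of the container mismatch (the real check is `jax.lax.cond` comparing the PyTreeDefs of its two branches):

```python
# A list and a tuple holding identical leaves are different container types,
# which is exactly the "same type structure" check that lax.cond enforces.
init_slots = [(0.0, 0.0), (0.0, 0.0)]           # tree_init: built as a list
updated_slots = tuple(s for s in init_slots)    # tree_update: materialized as a tuple
print(type(init_slots) is type(updated_slots))  # False -> differing PyTreeDefs

# The one-line fix described above: cast tree_init's slots to a tuple as well.
print(type(tuple(init_slots)) is type(updated_slots))  # True
```

Either container works on its own; they just have to agree between `tree_init` and `tree_update`.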
| closed | 2020-10-01T03:48:46Z | 2021-01-24T16:34:45Z | https://github.com/google/trax/issues/1061 | [] | matwilso | 2 |
getsentry/sentry | django | 87,305 | Allow release name with a `v` prefix to be considered semver | ### Problem Statement
<img width="354" alt="Image" src="https://github.com/user-attachments/assets/b194362f-de1d-4ff6-b0de-ecb1394d4677" />
### Solution Brainstorm
While technically not semver as per our spec, this is a common practice.
Since it's a matter of stripping out the `v`, this could be something the product treats as semver.
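A minimal sketch of that normalization (a hypothetical helper, not Sentry's actual code):

```python
import re

# Hypothetical helper: treat "v1.2.3" / "V1.2.3" as the semver "1.2.3".
_SEMVER = re.compile(r"^\d+\.\d+\.\d+(?:-[0-9A-Za-z.-]+)?(?:\+[0-9A-Za-z.-]+)?$")

def normalize_release(name):
    stripped = name[1:]
    if name[:1] in ("v", "V") and _SEMVER.match(stripped):
        return stripped
    return name

print(normalize_release("v1.2.3"))    # 1.2.3
print(normalize_release("build-42"))  # build-42 (unchanged)
```

Only names whose remainder parses as semver would be stripped, so non-semver releases like `build-42` pass through untouched.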
### Product Area
Unknown | open | 2025-03-18T18:39:53Z | 2025-03-18T18:40:03Z | https://github.com/getsentry/sentry/issues/87305 | [
"Product Area: Releases"
] | bruno-garcia | 1 |
junyanz/pytorch-CycleGAN-and-pix2pix | pytorch | 1,215 | Colab: training always stuck at Epoch 17 | Hello. I've done this a couple times now, and training always stops at Epoch 17. I don't think it's a problem with the instance, because I can refresh and it's still running, and I have the colab premium with 24 hour instance time. But I have left it to train overnight and it just stays at 17 epochs. | open | 2020-12-21T13:54:19Z | 2021-05-16T08:13:24Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1215 | [] | Tylersuard | 2 |
koxudaxi/datamodel-code-generator | fastapi | 2,178 | Incorrect recognition of Literal value | **Describe the bug**
Often the schema name is used as the `Literal` value instead of the actual enum value.
**To Reproduce**
Example schema:
```
openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/openai-aa9b01fc0c17eb0cbc200533fc20d6a49c5e764ceaf8049e08b294532be6e9ff.yml
```
Used commandline:
```shell
$ datamodel-codegen \
--input openai-aa9b01fc0c17eb0cbc200533fc20d6a49c5e764ceaf8049e08b294532be6e9ff.yml \
--output models.py \
--output-model-type pydantic_v2.BaseModel \
--target-python-version 3.12 \
--enum-field-as-literal all \
--field-constraints \
--use-standard-collections \
--use-union-operator \
--field-include-all-keys \
--use-default-kwarg \
--use-exact-imports \
--use-schema-description \
--treat-dot-as-module \
--use-title-as-name \
```
**Expected behavior**
```yaml
# (part of yml)
OtherChunkingStrategyResponseParam:
type: object
title: Other Chunking Strategy
description: >-
This is returned when the chunking strategy is unknown. Typically, this is because the file was
indexed before the `chunking_strategy` concept was introduced in the API.
additionalProperties: false
properties:
type:
type: string
description: Always `other`.
enum:
- other
required:
- type
```
```python
# (what I expected)
class OtherChunkingStrategy(BaseModel):
"""
This is returned when the chunking strategy is unknown. Typically, this is because the file was indexed before the `chunking_strategy` concept was introduced in the API.
"""
model_config = ConfigDict(
extra='forbid',
)
type: Literal['other'] = Field(
..., description='Always `other`.'
)
```
```python
# (What actually happened)
class OtherChunkingStrategy(BaseModel):
"""
This is returned when the chunking strategy is unknown. Typically, this is because the file was indexed before the `chunking_strategy` concept was introduced in the API.
"""
model_config = ConfigDict(
extra='forbid',
)
type: Literal['OtherChunkingStrategyResponseParam'] = Field(
..., description='Always `other`.'
)
```
Note that `Literal['OtherChunkingStrategyResponseParam']` is generated instead of `Literal['other']`.
**Version:**
- OS: macOS
- Python version: 3.12
- datamodel-code-generator version: 0.26.0
**Additional context**
In this case, schemas whose enum has only a single value occur frequently, e.g.:
```python
class CodeInterpreterImageOutput(BaseModel):
index: int = Field(..., description='The index of the output in the outputs array.')
type: Literal['RunStepDeltaStepDetailsToolCallsCodeOutputImageObject'] = Field(
..., description='Always `image`.'
)
image: Image1 | None = None
class CodeInterpreterLogOutput(BaseModel):
"""
Text output from the Code Interpreter tool call as part of a run step.
"""
index: int = Field(..., description='The index of the output in the outputs array.')
type: Literal['RunStepDeltaStepDetailsToolCallsCodeOutputLogsObject'] = Field(
..., description='Always `logs`.'
)
logs: str | None = Field(
default=None, description='The text output from the Code Interpreter tool call.'
)
```
| open | 2024-11-21T12:27:01Z | 2024-11-21T12:27:01Z | https://github.com/koxudaxi/datamodel-code-generator/issues/2178 | [] | steadyfirmness | 0 |
microsoft/MMdnn | tensorflow | 49 | IndexError: list index out of range - Tensorflow Model Conversion | ```python -m mmdnn.conversion._script.convertToIR -f tensorflow -d vgg19 -n xx.ckpt.meta -w xx.ckpt --dstNodeName Add```
```
Tensorflow model file [xx.ckpt.meta] loaded successfully.
Tensorflow checkpoint file [xx.ckpt] loaded successfully.
[146] variables loaded.
## traceback:
line 472, in rename_RealDiv
if scopes[-2] == 'dropout':
IndexError: list index out of range
```
Tensorflow 1.4
MMdnn 0.1.2
| closed | 2018-01-09T17:41:39Z | 2018-02-26T08:28:19Z | https://github.com/microsoft/MMdnn/issues/49 | [] | backnotprop | 1 |
ResidentMario/geoplot | matplotlib | 49 | subplot problem | I referred to [Choropleth subplots](https://residentmario.github.io/geoplot/examples/nyc-parking-tickets.html)
to create subplots of spatial maps with data.
<br>
Individually, it works pretty well.
```python
ax = gplt.polyplot(AB_base )
def power_scale(minval, maxval):
def scalar(val):
val = val + abs(minval) + 1
return (val/100)**3/150
return scalar
gplt.kdeplot(elevation, ax=ax,linewidth=0,
legend = True,
shade_lowest=False,
cbar = True,
clip=AB_base.geometry, cmap='summer_r',
shade=True, alpha = 0.6)
gplt.pointplot(geo_station, ax=ax,
scale= 'elev(m)', k = None,
limits =(1,30) ,
scale_func= power_scale,
hue=geo_station['elev(m)'].astype(float), cmap='viridis_r',
alpha = 0.8,
legend=True, legend_var='hue',
)
plt.title("~~~")
plt.show()
```

but when I subplot more variables like this:
```python
def plot_state_to_ax(state, ax):
gplt.polyplot(AB_base, ax = ax)
gplt.kdeplot(elevation, linewidth=0.0, ax = ax,
shade_lowest=False,
clip=AB_base.geometry, cmap='summer_r',
shade=True, alpha = 0.6)
gplt.pointplot(geo_station, k = None, ax=ax,
scale= state, limits =(1,30),
hue= state,
cmap='viridis_r',alpha = 0.8,
legend=True, legend_var='hue'
)
# Finally, plot the data.
f, axarr = plt.subplots(2, 2, figsize=(5, 5))
plt.subplots_adjust(top=0.95)
plot_state_to_ax('ANUSPLIN_output', axarr[0][0])
axarr[0][0].set_title('ANUSPLIN (n=6679268)')
plot_state_to_ax('CaPA_output', axarr[0][1])
axarr[0][1].set_title('CaPA (n=854647)')
plot_state_to_ax('NARR_output', axarr[1][0])
axarr[1][0].set_title('NARR(n=215065)')
plot_state_to_ax('TPS_output', axarr[1][1])
axarr[1][1].set_title('TPS (n=126661)')
```

Everything is plotted into one result. I think the method should be good, as the reference is from the official tutorial.
I have tried many ways and have been stuck on this problem for 3 days...
Any advice or solutions?
Thanks
| closed | 2017-11-22T19:38:38Z | 2018-01-01T23:43:16Z | https://github.com/ResidentMario/geoplot/issues/49 | [] | Questionm | 9 |
ets-labs/python-dependency-injector | flask | 606 | Use providers for keys of providers.Dict | I want to use the object given by another provider as the key of a Dict provider.
However, as I have found, only the values of a Dict provider can be other providers, not the keys.
For example, I want to do something like `dict_provider` of `WorkersContainer` below. When I call the `summary` method for `dict_provider`, it gives the `<dependency_injector.providers.Dependency>` class for the key and 3 for the value.
```python
from dependency_injector import containers, providers
class ListWorkers:
def __init__(self, machines: list[tuple[str, int]]):
self._machines = machines
def summary(self):
for m in self._machines:
print(f"[ListWorkers] {m[0]}: {m[1]}")
class DictWorkers:
def __init__(self, machines: dict[str, int]):
self._machines = machines
def summary(self):
for machine, count in self._machines.items():
print(f"[DictWorkers] {machine}: {count}")
class BaseContainer(containers.DeclarativeContainer):
machine = providers.Object("MAC")
count = providers.Object(3)
class WorkersContainer(containers.DeclarativeContainer):
base_container = providers.DependenciesContainer()
list_provider = providers.Singleton(
ListWorkers,
providers.List(providers.List(base_container.machine, base_container.count)),
)
dict_provider = providers.Singleton(
DictWorkers, providers.Dict({base_container.machine: base_container.count})
)
if __name__ == "__main__":
base_container = BaseContainer()
workers_container = WorkersContainer(base_container=base_container)
workers_container.list_provider().summary()
workers_container.dict_provider().summary()
```
How can I inject an object into a Dict provider's key? | open | 2022-07-11T06:37:23Z | 2022-07-11T06:37:23Z | https://github.com/ets-labs/python-dependency-injector/issues/606 | [] | swbliss | 0 |
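One workaround sketch for the question above: defer building the dict until call time, so the key provider's *value* — not the provider object — becomes the dict key. If I understand the dependency-injector API correctly, this corresponds to something like `providers.Callable(lambda m, c: {m: c}, base_container.machine, base_container.count)` (my assumption, untested). The idea, library-free:

```python
# Library-free sketch of the deferred-dict idea: resolve the key provider's
# value first, then build the dict, instead of using the provider as the key.
machine = lambda: "MAC"  # stands in for base_container.machine
count = lambda: 3        # stands in for base_container.count

def build_machines():
    return {machine(): count()}  # key is the resolved value, not a provider

print(build_machines())  # {'MAC': 3}
```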
simple-login/app | flask | 1,298 | Option to disable verification email | Would the maintainers be open to the option of disabling the verification email that gets sent when a new mailbox is created?
The scenario we are looking at, and the rationale for this is as follows:
We are looking at using SimpleLogin as part of our email infrastructure. As part of regular business ops, we have several providers, and several organisations, each with multiple end users. These parties need to communicate with eachother on a semi-regular basis, and often in an unexpected manner. Given that there are various regulations around end user email sharing (eg. GDPR), and providers are located in different jurisdictions, we're looking to avoid any issues with that by masking emails for all parties behind aliases.
As part of user onboarding we would create a mailbox in SimpleLogin for the user, and onboard them with any relevant providers. From a UX perspective, a signup email from us, followed potentially by several automatic onboarding emails from various providers, and a verification email from SimpleLogin, is not optimal. We'd prefer to be able to create a verified mailbox without additional end user input via the API.
There are a few additional features that one would disable, such as the one click unsubscribe link, as the emails are guaranteed to be transactional in nature. Additionally one may choose to disable the user dashboard.
Is this something that could be of interest? | open | 2022-09-20T14:34:14Z | 2022-10-02T16:40:50Z | https://github.com/simple-login/app/issues/1298 | [] | sashahilton00 | 3 |
serengil/deepface | machine-learning | 1,240 | [FEATURE]: filtering based on the threshold in find() function | ### Description
Instead of just giving out the threshold, can we automatically filter the results based on the minimum threshold set in the `deepface.find` function?
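A library-free sketch of the requested post-filtering (the row shape here is hypothetical; `DeepFace.find` itself returns pandas DataFrames):

```python
# Filter candidate matches down to those at or under the chosen threshold.
results = [
    {"identity": "a.jpg", "distance": 0.21},
    {"identity": "b.jpg", "distance": 0.47},
]
threshold = 0.30  # hypothetical threshold for the chosen model/metric
hits = [r for r in results if r["distance"] <= threshold]
print(hits)  # [{'identity': 'a.jpg', 'distance': 0.21}]
```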
### Additional Info
_No response_ | closed | 2024-05-30T07:25:06Z | 2024-05-30T12:57:19Z | https://github.com/serengil/deepface/issues/1240 | [
"enhancement"
] | richardar | 2 |
jmcnamara/XlsxWriter | pandas | 936 | Some notes on autofit() | # Notes on the XlsxWriter implementation of `autofit()`.
Version 3.0.6 of XlsxWriter added a `worksheet.autofit()` function.
The implementation is a technical compromise and I wanted to write some notes about it.
First off let's start with the text that was in the FAQ:
> Q. Is there an "AutoFit" option for columns?
> --------------------------------------------
>
>Unfortunately, there is no way to specify "AutoFit" for a column in the Excel
>file format. This feature is only available at runtime from within Excel. It
>is possible to simulate "AutoFit" in your application by tracking the maximum
>width of the data in the column as you write it and then adjusting the column
>width at the end.
This is still true. There is no "autofit" flag in the Excel XLSX format that will trigger the same autofit that you get from Excel at runtime.
As a workaround I implemented a pixel calculation based on defined widths for all the characters in the ASCII range 32-126. You can see that [here](https://github.com/jmcnamara/XlsxWriter/blob/main/xlsxwriter/utility.py#L14) and [here](https://github.com/jmcnamara/XlsxWriter/blob/main/xlsxwriter/utility.py#L305).
## Fidelity with Excel
One of the main design goals of XlsxWriter is that it creates the exact same file format as Excel for the same input. At least within reason. This is one of the main reasons why I didn't tackle autofit sooner.
Anyway, with the current implementation XlsxWriter matches Excel's autofit for strings up to around 100 pixels (and there are a number of `test_autofit??.py` test cases that compare against Excel). After that, Excel adds an additional pixel for every ~32 pixels of additional length. For example, in Excel the following strings have these pixel widths:
- "N" : 10
- "NNN": 30
- "NNNNN": 50
- "NNNNNNNN": 80
- "NNNNNNNNNN": 101
This may be due to some internal rounding (in Excel) or may be due to a conversion from character units to pixel widths.
Either way this isn't significant. The additional pixels added by Excel appear as extra padding and won't be substantively noticeable to the end user. At the same time the lack of these extra padding pixels from the XlsxWriter autofit shouldn't be noticeable either. I am mainly highlighting to avoid bug reports about pixel difference between Excel and XlsxWriter.
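The pattern in those numbers can be captured by a rough approximation (inferred from the values above, not XlsxWriter's actual algorithm):

```python
# Rough approximation of Excel's pixel widths for runs of "N" (10 px each),
# with one extra pixel per ~32 px once the base width reaches ~100 px.
def approx_excel_px(n_chars, char_px=10):
    base = n_chars * char_px
    if base < 100:
        return base
    return base + (base - 100) // 32 + 1

for n in (1, 3, 5, 8, 10):
    print(n, approx_excel_px(n))  # matches the 10, 30, 50, 80, 101 values above
```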
Here is a visual example from the `autofit.py` XlsxWriter program:

And the same file with an Excel autofit:

The pixel widths for the columns are:
| Program | A | B | C | D |
| ---------- | --- | --- | --- | --- |
| XlsxWriter | 50 | 63 | 113 | 147 |
| Excel | 50 | 63 | 114 | 150 |
## Difference with Excel for macOS
Excel for Windows files appear differently in Excel for macOS. For example here is the same XlsxWriter file as above on the mac:

You will notice that the widths and padding are rendered differently. This is not an XlsxWriter issue. The same happens with any Excel file generated on Windows and rendered on the mac. For example:
| Program | A | B | C | D |
| ---------------- | --- | --- | --- | --- |
| Excel Win | 50 | 63 | 114 | 150 |
| Excel Win on Mac | 43 | 54 | 98 | 129 |
| Excel Mac | 39 | 55 | 83 | 111 |
This is quite a difference and if the Excel Mac autofit file is transferred back to Windows the columns no longer appear fitted.
So if you encounter this issue it isn't due to XlsxWriter.
## Padding
Excel adds a 7 pixel padding to each cell. So a word like "Hello" has a width of 33 pixels but the column width (in Excel) will be 33+7=40 pixels.
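Spelled out as arithmetic (the 33-pixel figure comes from the sentence above):

```python
text_px = 33      # pixel width of the string "Hello"
padding_px = 7    # Excel's per-cell padding
column_px = text_px + padding_px
print(column_px)  # 40
```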
## Known Limitations
- For large files it is an expensive calculation. Although it doesn't have to be called at the end. There may be cases where it is sufficient to call it after an initial amount of data has been added.
- Fonts and font sizes aren't taken into account.
- Non-ASCII characters are given a default width of 8 pixels.
- Dates are assumed to have a width/format of `mm/dd/yyyy`.
- Number formats aren't taken into account (and realistically probably won't be in the future).
- Merged cells aren't handled correctly. (This may get fixed).
- ~~Autofilter dropdowns aren't taken into account.~~ They are from version 3.0.7. However, centered autofilters still require additional padding.
- It is not supported in `constant_memory` mode.
## Future work
Once people have had a chance to use this and to find the limitations I may add additional configurations/options such as:
- Additional padding for user defined rows. This would allow autofilter or header rows to be fitted with some additional padding.
- User defined width for dates. This is actually currently possible via `worksheet.default_date_pixels` but it needs an API.
- User access to the character width calculation table to add in more precise calculation for non-ascii characters. This is probably already accessible. Note, I don't intend to add calculations for all UTF8 chars.
- Option to not reset manually set width. You can do this currently by setting the width **after** calling `autofit()`. And/or have an option to not reset manually set width if it is larger than the autofit width. This would also provide a workaround for the autofilter row issue.
| closed | 2023-01-04T23:37:58Z | 2023-04-27T19:10:24Z | https://github.com/jmcnamara/XlsxWriter/issues/936 | [
"feature request"
] | jmcnamara | 12 |
dask/dask | numpy | 11,595 | Supporting inconsistent schemas in read_json | If you have two (jsonl) files where one contains columns `{"id", "text"}` and the other contains `{"text", "id", "meta"}` and you wish to read the two files using `dd.read_json([file1.jsonl, file2.jsonl], lines=True)` we run into an error
```
Metadata mismatch found in `from_delayed`.
Partition type: `pandas.core.frame.DataFrame`
(or it is Partition type: `cudf.core.dataframe.DataFrame` when backend=='cudf')
+---------+-------+----------+
| Column | Found | Expected |
+---------+-------+----------+
| 'meta1' | - | object |
+---------+-------+----------+
```
For what it's worth this isn't an issue in read_parquet (cpu) and for gpu the fix is in the works https://github.com/rapidsai/cudf/pull/17554/files
## Guessing the root cause
IIUC in both pandas and cudf, we call `read_json_file` ([here](https://github.com/dask/dask/blob/a9396a913c33de1d5966df9cc1901fd70107c99b/dask/dataframe/io/json.py#L315)).
In the pandas case, even if `dtype` is specified, pandas doesn't prune out the non-specified columns, while cudf does (assuming `prune_columns=True`). Therefore the pandas case continues to fail, while the cudf case fails on a column-order vs. metadata-column-order mismatch error (since one file has `id, text` while the other has `text, id`).
One possible hack could be supporting `columns` arg and then performing `engine(.....)[columns]`. Another could be
## MRE
```python
import dask.dataframe as dd
import dask
import tempfile
import pandas as pd
import os
records = [
{"id": 123, "text": "foo"},
{
"text": "bar",
"meta1": [{"field1": "cat"}],
"id": 456,
},
]
columns = ["text", "id"]
with tempfile.TemporaryDirectory() as tmpdir:
file1 = os.path.join(tmpdir, "part.0.jsonl")
file2 = os.path.join(tmpdir, "part.1.jsonl")
pd.DataFrame(records[:1]).to_json(file1, orient="records", lines=True)
pd.DataFrame(records[1:]).to_json(file2, orient="records", lines=True)
for backend in ["pandas", "cudf"]:
read_kwargs = dict()
if backend == "cudf":
read_kwargs["dtype"] = {"id": "str", "text": "str"}
read_kwargs["prune_columns"] = True
print("="*30)
print(f"==== {backend=} ====")
print("="*30)
try:
with dask.config.set({"dataframe.backend": backend}):
df = dd.read_json(
[file1, file2],
lines=True,
**read_kwargs,
)
print(f"{df.columns=}")
print(f"{df.compute().columns=}")
print(f"{type(df.compute())=}")
display((df.compute()))
except Exception as e:
print(f"{backend=} failed due to {e} \n")
```
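The `engine(...)[columns]` pruning hack mentioned above can be sketched library-free (hypothetical `align` helper, not dask's API): after reading each partition, reindex records to the expected meta columns so every partition matches.

```python
# Align each partition's records to the expected meta columns:
# drop extras (e.g. "meta1") and fill missing columns with None.
expected = ["text", "id"]

def align(record):
    return {c: record.get(c) for c in expected}

print(align({"id": 123, "text": "foo"}))
print(align({"text": "bar", "meta1": [{"field1": "cat"}], "id": 456}))
```

Dropping extras and None-filling missing columns is what would make the two files in the MRE yield matching partition schemas.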
cc @rjzamora
| open | 2024-12-10T18:24:48Z | 2025-02-24T02:01:24Z | https://github.com/dask/dask/issues/11595 | [
"dataframe",
"needs attention",
"feature"
] | praateekmahajan | 1 |
oegedijk/explainerdashboard | plotly | 16 | ImportError: cannot import name 'XGBExplainer' from 'explainerdashboard' | ```
ImportError: cannot import name 'XGBExplainer' from 'explainerdashboard' (C:\Users\131416\AppData\Local\Continuum\anaconda3\envs\test_env\lib\site-packages\explainerdashboard\__init__.py)
```
I'll investigate more when I have time. I'm also on windows.
Edit:
This works: `from explainerdashboard.explainers import XGBExplainer` but it seems `RegressionExplainer` can be imported from the top directory: `from explainerdashboard import RegressionExplainer` and I would expect other explainers to be able to be imported from the top directory.
Coming from
https://github.com/oegedijk/explainerdashboard/blob/master/explainerdashboard/__init__.py#L1
Probably just a design thing (no right or wrong way) so closing.
| closed | 2020-11-13T15:27:59Z | 2020-11-13T17:22:37Z | https://github.com/oegedijk/explainerdashboard/issues/16 | [] | raybellwaves | 0 |
miguelgrinberg/flasky | flask | 265 | redundant if statement in User model | Hi, in chapter 9 you're adding a role attribute to the User model:
```python
def __init__(self, **kwargs):
    super(User, self).__init__(**kwargs)
    if self.role is None:
        if self.email == current_app.config['FLASKY_ADMIN']:
            self.role = Role.query.filter_by(permissions=0xff).first()
        if self.role is None:
            self.role = Role.query.filter_by(default=True).first()
```
Your if statement structure looks redundant:
```python
if self.role is None:
    ...
    if self.role is None:
        ...
```
Is there a reason to double check whether `self.role is None`? | closed | 2017-04-28T07:32:46Z | 2017-12-10T20:07:57Z | https://github.com/miguelgrinberg/flasky/issues/265 | [
"question"
] | kulichevskiy | 2 |
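For what it's worth, the two checks in the issue above are not equivalent — the inner second check covers non-admin emails (and a failed admin-role lookup). A runnable sketch of that control flow:

```python
def assign_role(email, admin_email="admin@example.com"):
    role = None
    if role is None:
        if email == admin_email:
            role = "Administrator"
        if role is None:  # still None: non-admin email, or admin-role lookup failed
            role = "User"
    return role

print(assign_role("admin@example.com"))    # Administrator
print(assign_role("someone@example.com"))  # User
```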
dgtlmoon/changedetection.io | web-scraping | 1,770 | "A NEW VERSION IS AVAILABLE" message on 0.45 | **Describe the bug**
Version 0.45, which is the newest, shows the "[A NEW VERSION IS AVAILABLE](https://changedetection.io/)" message on the main page.
**Version**
*Exact version* in the top right area: 0.45
**Expected behavior**
It's the newest stable version, so the app shouldn't show the message.
**Screenshots**
<img width="839" alt="image" src="https://github.com/dgtlmoon/changedetection.io/assets/61624808/91fb0c66-43c5-4357-aab4-6c3540d7a3dc">
**Desktop (please complete the following information):**
- OS: macos 13.5.1 (22G90)
- Browser safari
- Version Version 16.6 (18615.3.12.11.2)
| closed | 2023-09-06T09:16:31Z | 2023-09-06T09:21:58Z | https://github.com/dgtlmoon/changedetection.io/issues/1770 | [
"triage"
] | Constantin1489 | 1 |
tflearn/tflearn | tensorflow | 235 | customized objective function, getting error messages | Hi, I am new to tensorflow and tflearn. I would like to define my own cost function. I have tried both methods mentioned in [issue #72 ](https://github.com/tflearn/tflearn/issues/72) but in both cases I get error messages. In the first case, the error message is:
from .config import _EPSILON, _FLOATX
ValueError: Attempted relative import in non-package
and in the second case:
"./tflearn/layers/estimator.py", line 135, in regression
exit()
NameError: global name 'exit' is not defined
I have spent some time trying to fix this, to no avail. Any help would be highly appreciated.
######
Below is my code for the cost function:
``` python
import tensorflow as tf

def deep_clustering(y_pred, y_true):
    with tf.name_scope("DeepClustering"):
        n1, n2, c = y_true.shape
        N = n1 * n2
        k = int(y_pred.shape[0] / N)
        Yc = tf.reshape(y_true, [N, c])  # target labels
        Vk = tf.reshape(y_pred, [N, k])  # estimated embeddings
        # TF tensors have no .T attribute; use tf.matmul's
        # transpose_a/transpose_b arguments instead
        YtY = tf.matmul(Yc, Yc, transpose_a=True)
        VtV = tf.matmul(Vk, Vk, transpose_a=True)
        VtY = tf.matmul(Vk, Yc, transpose_a=True)
        norm2Y_F = tf.trace(tf.matmul(YtY, YtY, transpose_b=True))   # squared Frobenius norm of YtY
        norm2V_F = tf.trace(tf.matmul(VtV, VtV, transpose_b=True))   # squared Frobenius norm of VtV
        norm2VY_F = tf.trace(tf.matmul(VtY, VtY, transpose_b=True))  # squared Frobenius norm of VtY
        loss = norm2Y_F - 2 * norm2VY_F + norm2V_F
    return loss
```
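Independently of the import errors above, the algebraic expansion the loss relies on can be sanity-checked with plain NumPy (a quick check, not tflearn code): the three-trace form equals the direct squared Frobenius norm `||V Vᵀ − Y Yᵀ||²_F`.

```python
import numpy as np

rng = np.random.default_rng(0)
N, c, k = 12, 3, 4
Y = rng.standard_normal((N, c))  # stand-in for the target labels Yc
V = rng.standard_normal((N, k))  # stand-in for the embeddings Vk

# Direct form of the deep-clustering objective
direct = np.sum((V @ V.T - Y @ Y.T) ** 2)

# Expanded form computed in the loss function
expanded = (np.sum((Y.T @ Y) ** 2)
            - 2 * np.sum((V.T @ Y) ** 2)
            + np.sum((V.T @ V) ** 2))

print(np.allclose(direct, expanded))  # → True
```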
| closed | 2016-07-26T20:25:05Z | 2017-01-18T08:30:29Z | https://github.com/tflearn/tflearn/issues/235 | [] | fpishdadian | 6 |
piskvorky/gensim | machine-learning | 3,213 | summarize the way we update dependencies across the different files / subsystems | Thanks! Shouldn't these versions match `setup.py` though?
I forgot what exactly needs to be updated where, so everything's in sync… @mpenkov could you please summarize the way we update dependencies across the different files / subsystems in [Gensim & Compatibility](https://github.com/RaRe-Technologies/gensim/wiki/Gensim-And-Compatibility).
_Originally posted by @piskvorky in https://github.com/RaRe-Technologies/gensim/issues/3209#issuecomment-891598074_ | open | 2021-08-12T01:16:21Z | 2021-08-12T01:16:40Z | https://github.com/piskvorky/gensim/issues/3213 | [
"documentation"
] | mpenkov | 0 |
pyro-ppl/numpyro | numpy | 1,482 | Using jax.numpy versus numpy in diagnostics | Heya, more of a question versus an issue:
Currently, if I want to assess jax arrays with `diagnostics`, I have been resorting to converting them back to numpy arrays and then running them through the diagnostics. For example, in `effective_sample_size` the item assignment fails because jax arrays are immutable:
https://github.com/pyro-ppl/numpyro/blob/22c45db47ce56266ce2fdfa87efafc7235abd946/numpyro/diagnostics.py#L175
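The failure and the usual workaround can be sketched without jax installed, using a read-only NumPy array as a stand-in for an immutable `jax.numpy` array (jax raises a `TypeError` rather than NumPy's `ValueError`, but the remedy is the same):

```python
import numpy as np

x = np.arange(4.0)
x.setflags(write=False)  # read-only, standing in for an immutable jax array

try:
    x[0] = 1.0  # the kind of item assignment the diagnostics code performs
except ValueError as e:
    print("read-only:", e)

y = np.array(x)  # np.array() always returns a writable copy
y[0] = 1.0
print(y)  # → [1. 1. 2. 3.]
```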
Is there a design choice to prefer numpy over jax for this purpose? | closed | 2022-09-25T23:14:05Z | 2022-09-28T16:38:57Z | https://github.com/pyro-ppl/numpyro/issues/1482 | [] | ayhteo | 2 |
brightmart/text_classification | nlp | 138 | Are you sure the latest code of this project is working properly? | My environment: Python3.6+TensorFlow1.9
When I open the project with Visual Studio Code and run it, the console throws many errors, including syntax errors. | open | 2020-02-11T02:03:36Z | 2020-02-11T08:00:33Z | https://github.com/brightmart/text_classification/issues/138 | [] | liaooo | 2 |
miguelgrinberg/Flask-SocketIO | flask | 1,801 | SocketIO stops after Thread call | Hi, I'm using Flask-SocketIO and recently I've been implementing threading tasks. I'm getting a problem after the threading call: Flask-SocketIO doesn't respond anymore. The threading call doesn't have anything to do with sockets; I'm just using an app_context().
See the following code and how I implement the thread call:
```python
from threading import Thread
from flask import render_template
from flask_mail import Message
from app import create_app, mail
app = create_app()
def send_async_email(app, msg):
with app.app_context():
mail.send(msg)
def send_email(subject, sender, recipients, text_body, html_body):
msg = Message(subject, sender=sender, recipients=recipients)
msg.body = text_body
msg.html = html_body
Thread(target=send_async_email, args=(app, msg)).start()
def send_password_reset_email(tutor):
new_password = tutor.reset_password()
send_email('Recuperação de Senha',
sender=('admin', 'noreply@duvi.com'),
recipients=[tutor.usuario],
text_body=render_template('email/inline/recuperar_senha.txt', tutor=tutor, new_password=new_password),
html_body=render_template('email/inline/recuperar_senha.html', tutor=tutor, new_password=new_password))
```
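For what it's worth, the fire-and-forget pattern itself can be exercised with plain `threading` (a stand-in sketch, not the app's real mail code). Note that the Flask-SocketIO documentation recommends `socketio.start_background_task(...)` over a raw `Thread` when the server runs under eventlet/gevent, since monkey-patched async modes and native threads can interact badly:

```python
import threading

sent = []

def send_async_email(app_ctx, msg):
    # stand-in for: with app.app_context(): mail.send(msg)
    sent.append((app_ctx, msg))

t = threading.Thread(target=send_async_email, args=("app-context", "reset email"))
t.start()
t.join()  # joined here only to make the sketch deterministic
print(sent)  # → [('app-context', 'reset email')]
```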
I don't get any errors in the terminal and I can't figure out how to debug or solve the problem. | closed | 2022-03-09T03:55:10Z | 2022-03-09T09:16:55Z | https://github.com/miguelgrinberg/Flask-SocketIO/issues/1801 | [] | wduandy | 1 |
ufoym/deepo | jupyter | 46 | Reproducible container builds via explicit version annotations | It is my understanding that, in its current form, the container builds are not really reproducible as they use the most recent versions available. This is great in many cases, but undesirable in many others, e.g., if you want to build the exact same container that you would one year ago. Having explicit versions would mitigate this. Have you considered adding explicit version annotations in some form?
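One concrete form this could take (illustrative package versions, not a tested configuration) is pinning exact versions in the generated Dockerfiles and recording the pins per release:

```dockerfile
# Instead of installing whatever happens to be latest at build time:
#   RUN pip install --no-cache-dir tensorflow-gpu torch
# pin exact versions so a rebuild a year later produces the same image:
RUN pip install --no-cache-dir \
    tensorflow-gpu==1.8.0 \
    torch==0.4.0
```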
| closed | 2018-07-19T21:18:01Z | 2020-01-29T02:44:20Z | https://github.com/ufoym/deepo/issues/46 | [] | negrinho | 1 |
matterport/Mask_RCNN | tensorflow | 2,451 | Applying data augmentation on the training dataset. | Hello, I want to apply data augmentation techniques on my training set such as rotation, flipping during training. So, how to do it? where should I do modifications in the code? | open | 2020-12-24T16:36:18Z | 2021-07-20T14:15:54Z | https://github.com/matterport/Mask_RCNN/issues/2451 | [] | vis58 | 2 |
ydataai/ydata-profiling | jupyter | 1,529 | pandas.Series.to_dict() got an unexpected keyword argument 'orient' | ### Current Behaviour
The `ProfileReport._render_json` method tries to call a function with a keyword parameter that is only available on `pd.DataFrame` on a `pd.Series` as well, raising a `TypeError`:
https://github.com/ydataai/ydata-profiling/blob/cdfc17ac7c01a66a2f3bbf6641112149b1d83d90/src/ydata_profiling/profile_report.py#L453
https://pandas.pydata.org/docs/reference/api/pandas.Series.to_dict.html
```
437 return {encode_it(v) for v in o}
438 elif isinstance(o, (pd.DataFrame, pd.Series)):
--> 439 return encode_it(o.to_dict(orient="records"))
440 elif isinstance(o, np.ndarray):
441 return encode_it(o.tolist())
TypeError: to_dict() got an unexpected keyword argument 'orient'
```
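The incompatibility is easy to demonstrate in isolation, and it suggests the shape of a fix; the helper below is a hypothetical sketch, not ydata-profiling's actual code:

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2]})
s = pd.Series([1, 2])

print(df.to_dict(orient="records"))  # → [{'a': 1}, {'a': 2}]
try:
    s.to_dict(orient="records")      # Series.to_dict has no orient parameter
except TypeError as e:
    print("Series:", e)

def encode_frame_or_series(o):
    # Branching on the concrete type avoids passing orient to a Series.
    if isinstance(o, pd.DataFrame):
        return o.to_dict(orient="records")
    return o.to_dict()

print(encode_frame_or_series(s))  # → {0: 1, 1: 2}
```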
### Expected Behaviour
A json format from a comparison report
### Data Description
**previous_dataset**
```python
previous_dataset = pd.DataFrame(data=[(1000, 42), (900, 30), (1500, 40), (1800, 38)], columns=["rent_per_month", "total_area"])
```
**current_dataset**
```python
current_dataset = pd.DataFrame(data=[(5000, 350), (9000, 600), (5000, 400), (3500, 500), (6000, 600)], columns=["rent_per_month", "total_area"])
```
### Code that reproduces the bug
```Python
import pandas as pd
from ydata_profiling import ProfileReport
previous_dataset = pd.DataFrame(data=[(1000, 42), (900, 30), (1500, 40), (1800, 38)], columns=["rent_per_month", "total_area"])
current_dataset = pd.DataFrame(data=[(5000, 350), (9000, 600), (5000, 400), (3500, 500), (6000, 600)], columns=["rent_per_month", "total_area"])
previous_dataset_report = ProfileReport(
previous_dataset, title="Previous dataset report"
)
current_dataset_report = ProfileReport(
current_dataset, title="Current dataset report"
)
comparison_report = previous_dataset_report.compare(current_dataset_report)
comparison_report.to_json()
```
### pandas-profiling version
v4.5.1
### Dependencies
```Text
aiobotocore==1.4.2
aiohttp==3.9.1
aioitertools==0.11.0
aiosignal==1.3.1
appdirs==1.4.4
argon2-cffi==20.1.0
async-generator==1.10
async-timeout==4.0.3
attrs==20.3.0
awscli==1.32.26
backcall==0.2.0
bidict==0.21.4
bleach==3.3.0
boto3==1.17.106
botocore==1.20.106
butterfree==1.2.3
cassandra-driver==3.24.0
certifi==2020.12.5
cffi==1.14.5
chardet==4.0.0
charset-normalizer==2.0.12
click==7.1.2
cmake==3.27.2
colorama==0.4.4
cycler==0.10.0
Cython==0.29.23
dacite==1.8.1
dbus-python==1.2.16
decorator==5.0.6
defusedxml==0.7.1
distlib==0.3.4
distro==1.4.0
distro-info==0.23+ubuntu1.1
docutils==0.16
entrypoints==0.3
facets-overview==1.0.0
filelock==3.6.0
frozenlist==1.4.1
fsspec==2021.8.1
geomet==0.2.1.post1
h3==3.7.6
hierarchical-conf==1.0.2
htmlmin==0.1.12
idna==2.10
ImageHash==4.3.1
ipykernel==5.3.4
ipython==7.22.0
ipython-genutils==0.2.0
ipywidgets==7.6.3
jedi==0.17.2
Jinja2==2.11.3
jmespath==0.10.0
joblib==1.0.1
jsonschema==3.2.0
jupyter-client==6.1.12
jupyter-core==4.7.1
jupyterlab-pygments==0.1.2
jupyterlab-widgets==1.0.0
kiwisolver==1.3.1
koalas==1.8.2
MarkupSafe==2.0.1
matplotlib==3.4.2
mdutils==1.6.0
mistune==0.8.4
multidict==6.0.4
multimethod==1.10
nbclient==0.5.3
nbconvert==6.0.7
nbformat==5.1.3
nest-asyncio==1.5.1
networkx==3.1
notebook==6.3.0
numpy==1.22.4
packaging==23.2
pandas==1.3.5
pandocfilters==1.4.3
parameters-validation==1.2.0
parso==0.7.0
patsy==0.5.6
pexpect==4.8.0
phik==0.12.4
pickleshare==0.7.5
Pillow==8.2.0
pip-resolved==0.3.0
plotly==5.5.0
prometheus-client==0.10.1
prompt-toolkit==3.0.17
protobuf==3.17.2
psycopg2==2.8.5
ptyprocess==0.7.0
py4j==0.10.9
pyarrow==13.0.0
pyarrow-hotfix==0.5
pyasn1==0.5.1
pycparser==2.20
pydantic==1.9.2
pydeequ==0.1.8
Pygments==2.8.1
PyGObject==3.36.0
pyparsing==2.4.7
pyrsistent==0.17.3
pyspark==3.0.2
python-apt==2.0.1+ubuntu0.20.4.1
python-dateutil==2.8.1
python-engineio==4.3.0
python-socketio==5.4.1
pytz==2023.3
PyWavelets==1.4.1
PyYAML==5.4.1
pyzmq==20.0.0
requests==2.26.0
requests-unixsocket==0.2.0
rsa==4.7.2
s3fs==2021.8.1
s3transfer==0.4.2
scikit-learn==0.24.1
scipy==1.10.1
seaborn==0.11.1
Send2Trash==1.5.0
six==1.15.0
ssh-import-id==5.10
statsmodels==0.14.1
tangled-up-in-unicode==0.2.0
tenacity==8.0.1
terminado==0.9.4
testpath==0.4.4
threadpoolctl==2.1.0
tornado==6.1
tqdm==4.66.1
traitlets==5.0.5
typeguard==2.13.3
typer==0.3.2
typing-extensions==4.0.1
unattended-upgrades==0.1
urllib3==1.26.16
virtualenv==20.4.1
visions==0.7.5
wcwidth==0.2.5
webencodings==0.5.1
widgetsnbextension==3.5.1
wordcloud==1.9.2
wrapt==1.16.0
yamale==4.0.2
yarl==1.9.4
ydata-profiling==4.5.1
```
### OS
Ubuntu 20.04.4 LTS
### Checklist
- [X] There is not yet another bug report for this issue in the [issue tracker](https://github.com/ydataai/pandas-profiling/issues)
- [X] The problem is reproducible from this bug report. [This guide](http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports) can help to craft a minimal bug report.
- [X] The issue has not been resolved by the entries listed under [Common Issues](https://pandas-profiling.ydata.ai/docs/master/pages/support_contrib/common_issues.html). | closed | 2024-01-24T19:03:43Z | 2024-02-09T11:41:34Z | https://github.com/ydataai/ydata-profiling/issues/1529 | [
"bug 🐛"
] | michellyrds | 2 |
yihong0618/running_page | data-visualization | 139 | Several suggestions to make this repo more efficient and standardized | Good afternoon guys @yihong0618 @shaonianche ,
It just occurred to me that, the code aside, we may need to do some optimizations to this repo to make it more standardized and efficient.
Here are my ideas in the following:
1. README.md refactoring: Since now we have a homepage, and a comprehensive wiki, we may not need the installation guide in the README so detailed anymore. So last night, I did some research and found some good README.md that we can refer to their structure. I can assign this task to myself and I am sure it can be done quickly when I am free. I will send you emails with the new README as attachment.
2. Release Control: The reason I propose this is mainly that several days ago, just after I released the initial version of the Wiki site, some code in some vital parts changed. I would not have noticed that if I did not watch all activities of this repo. From my perspective, I think making a release after every essential change will be good for both users and developers.
3. Changelog: Maybe we should also add change log to every release and wiki.
4. Projects: I just created one this morning; a kanban can help us manage bugs and feature requests better. But what columns it should have, and how to use it, may need some discussion; after all, it is not my own repo.
5. CONTRIBUTING.md: Maybe we should add some protocols to it? For instance: Standardized the codes by using black before making any PR.
Currently these are the thoughts I have. Any better ideas are welcomed. Just wanna Running Page be better.
Meliora!
---
下午好各位 @yihong0618 @shaonianche :
我突然想到,抛开代码不谈,我们可能需要对这个 repo 做一些优化,使其更加标准化和高效。
以下是我目前的想法:
1. README.md 重构:由于现在我们有一个主页和一个全面的 wiki,我们可能不再需要 README 中的安装指南那么详细了。 所以昨晚查了一下资料,找到了一些不错的README.md,我觉得可以参考他们的结构对其重构。 我可以将这个任务分配给自己,而且我相信当我有空的时候它可以很快完成。 完成之后,我会用新的自述文件作为附件向大家发送电子邮件。
2. 规律的Release:我之所以提出这个建议,主要是因为前几天,我刚刚发布了wiki站点的初始版本之后,一些关键部分的代码发生了变化。 如果我不watch这个 repo 的所有活动,我就不会注意到。 从我的角度来看,我认为在每次重要更改后发布它对用户或开发人员都是有益的。
3. 变更日志:也许我们也应该为每个版本添加变更日志,同步Wiki。
4. Projects:我今天早上刚创建了一个,看板可以帮助我们更好地管理错误和功能请求。 但是它应该有哪些栏目,该怎么用,我们可能需要讨论一下,毕竟它不是我自己的repo。
5. CONTRIBUTING.md:也许我们应该添加一些要求? 例如:在做任何 PR 之前使用black来标准化代码之类的。
这些是我目前的想法。任何更好的想法都欢迎。只是想让 Running Page 更好。
Meliora! | closed | 2021-06-03T07:51:31Z | 2022-07-11T07:45:51Z | https://github.com/yihong0618/running_page/issues/139 | [
"Need Discussion"
] | MFYDev | 2 |
thewhiteh4t/pwnedOrNot | api | 60 | Status 503 : Service unavailable — usually returned by Cloudflare if the underlying service is not available | Created by : thewhiteh4t
[>] Version : 1.3.0.1
[+] API Key Found...
[+] Checking Breach status for myemail@gmail.com
[-] Status 503 : Service unavailable — usually returned by Cloudflare if the underlying service is not available
[+] Completed in 2.0876543521881104 seconds.
| closed | 2022-04-20T02:33:15Z | 2022-04-21T02:12:33Z | https://github.com/thewhiteh4t/pwnedOrNot/issues/60 | [] | Fabiandenise | 0 |
timkpaine/lantern | plotly | 107 | matplotlib put legend to right of plots | closed | 2017-10-25T01:20:15Z | 2017-10-25T01:31:36Z | https://github.com/timkpaine/lantern/issues/107 | [
"feature",
"matplotlib/seaborn"
] | timkpaine | 0 | |
samuelcolvin/watchfiles | asyncio | 282 | _rust_notify.WatchfilesRustInternalError: error in underlying watcher: IO error for operation on <python path>: No such file or directory (os error 2) | ### Description
## Details
This error occurs on `uvicorn` startup in a docker environment (based on python:3.11-slim) when using the uvicorn `--reload` flag.
The path is pointing to the python3 executable of my virtual environment i.e. /backend/app/.ignore/venv/bin/python3.
Full stack trace:
```
Attaching to backend-1
backend-1 | INFO: Will watch for changes in these directories: ['/backend/app/src']
backend-1 | INFO: Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
backend-1 | INFO: Started reloader process [1] using WatchFiles
backend-1 | Traceback (most recent call last):
backend-1 | File "/opt/venv/bin/uvicorn", line 8, in <module>
backend-1 | sys.exit(main())
backend-1 | File "/opt/venv/lib/python3.10/site-packages/click/core.py", line 1157, in __call__
backend-1 | return self.main(*args, **kwargs)
backend-1 | File "/opt/venv/lib/python3.10/site-packages/click/core.py", line 1078, in main
backend-1 | rv = self.invoke(ctx)
backend-1 | File "/opt/venv/lib/python3.10/site-packages/click/core.py", line 1434, in invoke
backend-1 | return ctx.invoke(self.callback, **ctx.params)
backend-1 | File "/opt/venv/lib/python3.10/site-packages/click/core.py", line 783, in invoke
backend-1 | return __callback(*args, **kwargs)
backend-1 | File "/opt/venv/lib/python3.10/site-packages/uvicorn/main.py", line 410, in main
backend-1 | run(
backend-1 | File "/opt/venv/lib/python3.10/site-packages/uvicorn/main.py", line 572, in run
backend-1 | ChangeReload(config, target=server.run, sockets=[sock]).run()
backend-1 | File "/opt/venv/lib/python3.10/site-packages/uvicorn/supervisors/basereload.py", line 52, in run
backend-1 | for changes in self:
backend-1 | File "/opt/venv/lib/python3.10/site-packages/uvicorn/supervisors/basereload.py", line 71, in __next__
backend-1 | return self.should_restart()
backend-1 | File "/opt/venv/lib/python3.10/site-packages/uvicorn/supervisors/watchfilesreload.py", line 84, in should_restart
backend-1 | changes = next(self.watcher)
backend-1 | File "/opt/venv/lib/python3.10/site-packages/watchfiles/main.py", line 121, in watch
backend-1 | raw_changes = watcher.watch(debounce, step, rust_timeout, stop_event)
backend-1 | _rust_notify.WatchfilesRustInternalError: error in underlying watcher: IO error for operation on /backend/app/.ignore/venv/bin/python3: No such file or directory (os error 2)
```
## Workaround
Specifically adding `watchfiles==0.21.0` as a dependency to my project resolves the issue.
### Example Code
_No response_
### Watchfiles Output
_No response_
### Operating System & Architecture
docker on WSL2
### Environment
docker on WSL2
### Python & Watchfiles Version
> 0.21.0
### Rust & Cargo Version
_No response_ | closed | 2024-06-10T09:30:47Z | 2025-01-31T04:31:17Z | https://github.com/samuelcolvin/watchfiles/issues/282 | [
"bug"
] | pavdwest | 24 |
vllm-project/vllm | pytorch | 14,438 | [RFC]: Configurable multi-modal data for profiling | ### Motivation.
We can control the data used in profiling multi-modal models using `limit_mm_per_prompt`. However, this is insufficient for the following use-cases:
- Restrict models that accept multiple modalities to only accept single modality inputs to avoid unnecessary memory allocation, e.g.:
  - Make Qwen2-VL only accept 10 images *or* 1 video, but not 10 images *and* 1 video per prompt
- Limit the duration of multi-modal data items with temporal components to save memory, e.g.:
  - Make Whisper accept only 20s of audio instead of 30s
  - Make Qwen2-VL accept only 10 frames of video instead of 16
To enable them, this RFC proposes a new engine argument: `mm_profiling_configs`, which lets users configure the multi-modal data used for profiling in more detail.
### Proposed Change.
This RFC proposes a new engine argument `mm_profiling_configs`, which accepts a list of config objects in JSON form. At a minimum, each config object specifies the maximum number of multi-modal items per prompt. This results in the following schema:
```py
class MultiModalProfilingConfig:
limit_mm_per_prompt: dict[str, int]
def get_limit_per_prompt(self, modality: str) -> int:
        return self.limit_mm_per_prompt.get(modality, 1)
class MultiModalConfig: # Add the following fields:
profiling_configs: list[MultiModalProfilingConfig]
class EngineArgs: # Add the following fields:
mm_profiling_configs: list[MultiModalProfilingConfig]
```
#### Multiple profiling runs
Each config corresponds to one profile run, during which the config is passed to the multimodal profiler. After profiling each config, we will allocate memory based on the config that results in the most memory usage.
```py
class MultiModalProfiler: # Update the following methods:
def get_dummy_encoder_data(
self,
seq_len: int,
profiling_config: MultiModalProfilingConfig,
) -> DummyData:
...
def get_dummy_decoder_data(
self,
seq_len: int,
profiling_config: MultiModalProfilingConfig,
) -> DummyData:
...
def get_dummy_processor_inputs(
self,
seq_len: int,
profiling_config: MultiModalProfilingConfig,
) -> ProcessorInputs:
...
```
#### Input validation
To prevent malicious users from crashing the server during inference time by sending too many multi-modal data items in a single prompt (causing OOM), we continue to limit the number of data items per prompt, but in a different way.
Before processing the multi-modal data, we iterate through each profiling config and accept the input as long as it fits within the bounds of at least one of the configs. By default, this check is performed by looking at the number of multi-modal data items only. This can be overridden per model, allowing the check to be based on other fields in the config.
```py
class MultiModalProcessor: # Add the following methods:
def _validate_mm_items_profiling(
self,
mm_items: MultiModalDataItems,
profiling_config: MultiModalProfilingConfig,
) -> None:
for modality, items in mm_items.items():
limit = profiling_config.get_limit_per_prompt(modality)
if len(items) > limit:
raise ValueError(
f"You set {modality}={limit} (or defaulted to 1) in "
f"`--limit-mm-per-prompt`, but passed {len(items)} "
f"{modality} items in the same prompt.")
def _validate_mm_items(self, mm_items: MultiModalDataItems) -> None:
mm_config = self.info.ctx.get_mm_config()
failures = list[Exception]()
for profiling_config in mm_config.profiling_configs:
try:
self._validate_mm_items_profiling(mm_items, profiling_config)
except Exception as e:
failures.append(e)
else:
return
if failures:
failures_str = "\n".join(str(e) for e in failures)
raise RuntimeError(f"Inputs failed to satisfy profiling requirements: {failures_str}")
```
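The "accept the input if it fits at least one profiling config" rule can be sketched independently of vLLM (the names below are illustrative, not the actual vLLM API):

```python
from dataclasses import dataclass, field

@dataclass
class ProfilingConfig:
    limit_mm_per_prompt: dict[str, int] = field(default_factory=dict)

    def get_limit_per_prompt(self, modality: str) -> int:
        return self.limit_mm_per_prompt.get(modality, 1)

def validate(counts: dict[str, int], configs: list[ProfilingConfig]) -> None:
    """Accept the prompt if it fits within at least one profiling config."""
    failures = []
    for cfg in configs:
        over = [m for m, n in counts.items() if n > cfg.get_limit_per_prompt(m)]
        if over:
            failures.append(f"exceeds limits for {over}")
        else:
            return  # fits this config -> accept
    raise ValueError("; ".join(failures))

# "10 images OR 1 video" expressed as two configs, per the Qwen2-VL example:
configs = [ProfilingConfig({"image": 10}), ProfilingConfig({"video": 1})]
validate({"image": 10}, configs)  # accepted: fits the first config
validate({"video": 1}, configs)   # accepted: fits a config
try:
    validate({"image": 11, "video": 2}, configs)  # fits neither config
except ValueError as e:
    print("rejected:", e)
```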
To maintain compatibility, `limit_mm_per_prompt` will remain as a shorthand for specifying a single profiling config with the given maximum number of multi-modal items per prompt. That is:
```py
EngineArgs(limit_mm_per_prompt=...) == EngineArgs(mm_profiling_configs=[MultiModalProfilingConfig(limit_mm_per_prompt=...)])
```
### Feedback Period.
1 week
### CC List.
@NickLucche @ywang96 @Isotr0py @jeejeelee for multi-modality
@youkaichao @WoosukKwon @ywang96 for profiling
### Any Other Things.
@NickLucche mentioned that the naming of `mm_profiling_configs` is not ideal since it can also affect model inference. However, `mm_config` is already taken by `MultiModalConfig`. Any other suggestions?
@ywang96 reminded me that we don't need to limit the number of multi-modal items per prompt in V1 anymore.
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | open | 2025-03-07T13:55:02Z | 2025-03-15T16:42:52Z | https://github.com/vllm-project/vllm/issues/14438 | [
"RFC",
"multi-modality"
] | DarkLight1337 | 1 |
python-gino/gino | sqlalchemy | 699 | Alter in create_all | * GINO version: 1.0.0
* Python version: 3.8
* asyncpg version:0.20.1
* PostgreSQL version:10
We are using db.gino.create_all() for creation of database from models.py file.
However, when we make changes to any specific table (class) in the model file, we need to drop that table before the changes are reflected. Is there any way for the alteration of existing tables to happen during create_all()?
| closed | 2020-06-10T16:40:33Z | 2020-06-21T03:40:18Z | https://github.com/python-gino/gino/issues/699 | [
"question"
] | nikhilpatil02 | 2 |
plotly/dash-table | dash | 32 | Paste from excel - cross browser reliability | Currently, paste from Excel into the table works well in Firefox. However, in Chrome, it looks like the clipboard data doesn't have any newlines in it and the paste event doesn't work for multiple rows.
We might need to rethink how we are doing pasting to solve this one.
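The row-splitting part is language-agnostic; a Python sketch of the normalization idea (not dash-table's actual front-end code) shows why splitting only on `\n` sees a single row when the clipboard uses `\r` or `\r\n` separators:

```python
# Excel clipboard text can arrive with \r\n, \r, or \n row separators
# depending on browser/OS; splitlines() handles all three, while a naive
# split("\n") sees one big row when only \r is present.
text_from_chrome = "a\tb\rc\td\r"
naive = text_from_chrome.split("\n")
rows = [line.split("\t") for line in text_from_chrome.splitlines() if line]
print(len(naive))  # → 1
print(rows)        # → [['a', 'b'], ['c', 'd']]
```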
Here's how it works (correctly) in firefox:

| closed | 2018-07-27T23:46:10Z | 2018-08-08T20:51:40Z | https://github.com/plotly/dash-table/issues/32 | [] | chriddyp | 2 |
globaleaks/globaleaks-whistleblowing-software | sqlalchemy | 3,224 | Make possible to enable and disable Tor Onion Services per tenant | **Describe the solution you'd like**
To avoid having the system maintain an unused Tor cache, this request is about introducing a flag to toggle Tor for every tenant in the Tor backend panel. | open | 2022-05-18T10:01:56Z | 2024-02-14T23:33:13Z | https://github.com/globaleaks/globaleaks-whistleblowing-software/issues/3224 | [
"T: Enhancement",
"C: Client",
"C: Backend"
] | larrykind | 2 |
NullArray/AutoSploit | automation | 672 | Unhandled Exception (e86239bdf) | Autosploit version: `3.1`
OS information: `Darwin-17.4.0-x86_64-i386-64bit`
Running context: `autosploit.py -H 0.3 -C ******* 127.0.0.1 9076 -e -f etc/json/default_modules.json`
Error message: `this is a test exception`
Error traceback:
```
Traceback (most recent call):
File "/Users/admin/bin/tools/autosploit/autosploit/main.py", line 109, in main
AutoSploitParser().single_run_args(opts, loaded_tokens, loaded_exploits)
File "/Users/admin/bin/tools/autosploit/lib/cmdline/cmd.py", line 236, in single_run_args
compare_honey=opt.checkIfHoneypot
File "/Users/admin/bin/tools/autosploit/lib/exploitation/exploiter.py", line 75, in start_exploit
raise Exception("this is a test exception")
Exception: this is a test exception
```
Metasploit launched: `False`
| closed | 2019-04-18T16:40:45Z | 2019-04-18T16:42:55Z | https://github.com/NullArray/AutoSploit/issues/672 | [] | AutosploitReporter | 0 |
ets-labs/python-dependency-injector | asyncio | 368 | Async providers with async dependencies | It turns out that async providers currently cannot have async dependencies.
Example: in this container, both are `async` functions:
```python
class Container(containers.DeclarativeContainer):
db = providers.Factory(async_db_provider)
service = providers.Singleton(async_service, db=db)
```
Now, when in my code I request an instance of `service`:
```python
service = await container.service()
```
the expected result would be an instance of service. The actual result is its unawaited coroutine:
> <coroutine object async_service at 0x7faea530fc40>
This behavior persists with Resource, Coroutine, and other providers.
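The mechanism can be reproduced with plain asyncio, with no dependency-injector involved: calling an async function without awaiting it yields a coroutine object, and that object is what ends up forwarded as the `db` argument:

```python
import asyncio

async def async_db_provider():
    return {"db": "ok"}

async def async_service(db=None):
    return {"service": "ok", "db": db}

async def main():
    db_value = async_db_provider()          # called, not awaited: a coroutine object
    svc = await async_service(db=db_value)  # the coroutine is forwarded as-is
    print(type(svc["db"]).__name__)         # → coroutine
    svc["db"] = await db_value              # awaiting it yields the real value
    print(svc["db"])                        # → {'db': 'ok'}

asyncio.run(main())
```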
---
Full source code to reproduce:
```python
import asyncio
from inspect import isawaitable

from dependency_injector import containers, providers

# Create two async providers
async def async_db_provider():
return {'db': 'ok'} # just some sample object
async def async_service(db = None):
return {'service': 'ok', 'db': db}
class Container(containers.DeclarativeContainer):
# Second provider, a singleton, depends on the first one
db = providers.Factory(async_db_provider)
service = providers.Singleton(async_service, db=db)
if __name__ == '__main__':
# Create the container
container = Container()
async def main():
try:
# Request the service
service = await container.service()
print(service) # <--- expected: instance of service
finally:
# Shutdown resources
shutdown_resources_awaitable = container.shutdown_resources()
if isawaitable(shutdown_resources_awaitable):
await shutdown_resources_awaitable
asyncio.run(main())
``` | closed | 2021-01-21T11:41:07Z | 2021-01-27T16:45:53Z | https://github.com/ets-labs/python-dependency-injector/issues/368 | [
"bug"
] | kolypto | 4 |
lanpa/tensorboardX | numpy | 227 | Asynchronous Events | Hey, thanks for this great library!
Currently, if we try to log a big neural-network weight to be monitored using `add_histogram`, it takes quite a while, around 300 ms or so for certain layers. Do you think it is possible to add an extra feature for doing the computation inside the thread that writes the protobuf?
What I mean is this: currently we have `histogram(tag, values, bins)` on `SummaryWriter` for the `add_histogram` method, and the `histogram` function is called outside of the queue that resides on `EventFileWriter`. In other words, we compute the histogram first and only then pass the final value to the thread that later writes the data. For a big network that may take a while, as I said before. If we could have a separate process (or thread) calculate the summary function without waiting, i.e. doing it asynchronously, that would be a great improvement.
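One shape that could take (a generic producer/consumer sketch with the standard library, not tensorboardX internals) is handing the raw values to the writer thread and computing the histogram there:

```python
import queue
import threading

import numpy as np

# Sketch: hand the raw tensor to a worker thread and return immediately;
# the worker computes the histogram and would then enqueue the protobuf.
tasks = queue.Queue()
results = []

def summary_worker():
    while True:
        item = tasks.get()
        if item is None:  # sentinel: shut down
            break
        tag, values = item
        counts, edges = np.histogram(values, bins=64)
        results.append((tag, int(counts.sum())))
        tasks.task_done()

worker = threading.Thread(target=summary_worker, daemon=True)
worker.start()

tasks.put(("layer1/weight", np.random.randn(100_000)))  # returns instantly
tasks.put(None)
worker.join()
print(results)  # → [('layer1/weight', 100000)]
```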
Or do you have some other thought about this? Please let me know, would be happy to discuss it! | open | 2018-09-19T04:56:19Z | 2020-02-03T10:21:00Z | https://github.com/lanpa/tensorboardX/issues/227 | [] | akurniawan | 0 |
jupyter-book/jupyter-book | jupyter | 2,207 | Jinja style conditional blocks | MyST substitution allows variable values to be referenced in content using `{{ myvar }}` syntax.
I am trying to write some generic documentation that can be conditionally customised.
In Jinja I might say something like `{% if use_jupyterlab %} ...jupyterlab docs... {% else %} ...notebook docs... {% endif %}`, but that sort of substitution appears not to be supported? | open | 2024-09-16T14:11:58Z | 2024-09-16T14:12:14Z | https://github.com/jupyter-book/jupyter-book/issues/2207 | [] | psychemedia | 0 |
miguelgrinberg/Flask-Migrate | flask | 131 | Migrate database missing column with type = Text | Hi, I created a model with a column whose type is Text, but the migration file does not include this column.
My Model
```
class Post(db.Model):
id = db.Column(db.BigInteger, primary_key=True)
title= db.Column(db.String(255))
description = db.TEXT()
created_at = db.Column(db.DateTime, nullable=True, default=datetime.datetime.now())
```
Migrate file:
```
op.create_table('post',
sa.Column('id', sa.BigInteger(), nullable=False),
sa.Column('title', sa.String(length=255), nullable=True),
sa.Column('created_at', sa.DateTime(), nullable=True),
sa.PrimaryKeyConstraint('id')
)
```
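A minimal stand-in (plain Python, not SQLAlchemy) illustrates why an attribute assigned a bare type instance is not picked up as a column; in the model above, `description = db.TEXT()` creates a type object rather than a `db.Column`, which matches the column missing from the generated migration:

```python
# Minimal stand-in for SQLAlchemy's declarative scan: only attributes that
# are Column instances become table columns, so an attribute assigned a bare
# type instance (like `db.TEXT()` instead of `db.Column(db.Text)`) is skipped.
class Text:
    pass

class Column:
    def __init__(self, type_):
        self.type_ = type_

class Post:
    id = Column("BigInteger")
    title = Column("String(255)")
    description = Text()          # bare type instance, not a Column
    created_at = Column("DateTime")

columns = [name for name, value in vars(Post).items() if isinstance(value, Column)]
print(columns)  # → ['id', 'title', 'created_at']
```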
| closed | 2016-09-30T04:33:25Z | 2016-09-30T07:14:22Z | https://github.com/miguelgrinberg/Flask-Migrate/issues/131 | [
"question"
] | khaihkd | 2 |
google/trax | numpy | 1,749 | Does the Reformer have more parameters than the baseline? | Regarding Reformer: [paper](https://arxiv.org/pdf/2001.04451.pdf) | [code](https://github.com/google/trax/tree/master/trax/models/reformer)
From paper:
> .. show that it performs the same as the normal Transformer when using the same number of parameters; we achieve this by having both x1 and x2 have size d_model.
I see how the parameters of Attention and MLP do not increase. But what about
(1) the embedding layer and
(2) the final projection layer?
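As a back-of-the-envelope check (toy numbers, not Reformer's actual configuration), the input embedding table has `vocab × d_model` parameters, so it scales linearly with `d_model`:

```python
vocab = 32000  # illustrative vocabulary size
counts = {d_model: vocab * d_model for d_model in (512, 1024)}
print(counts)  # → {512: 16384000, 1024: 32768000}
```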
**Question 0.** Why do the parameters of the initial embedding layer not increase if we double d_model? | open | 2022-06-06T14:56:17Z | 2022-06-06T14:56:17Z | https://github.com/google/trax/issues/1749 | [] | alexm-gc | 0 |
ageitgey/face_recognition | machine-learning | 772 | How get one face from 5 photos? | Hello!
I have 5 photos of me and my friends. Each photo has 1-4 faces.
But every photo has my face.
How can I get my face encoding from these 5 photos without any input data?
For example:
I say, "Hey script, give me my face encoding".
The script analyzes the 5 photos, finds the face that is present in most of the photos, and gives it to me.
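A rough sketch of that idea, with toy 2-d vectors standing in for the 128-d encodings `face_recognition` produces (the 0.6 tolerance mirrors the library's default matching threshold):

```python
import numpy as np

# Each photo yields one encoding per detected face; the recurring person is a
# candidate from the first photo that matches some face in every other photo.
photos = [
    [np.array([0.0, 0.0]), np.array([5.0, 5.0])],  # me + friend A
    [np.array([0.1, 0.0]), np.array([9.0, 1.0])],  # me + friend B
    [np.array([0.0, 0.1])],                        # me alone
]
tolerance = 0.6

recurring = None
for candidate in photos[0]:
    if all(any(np.linalg.norm(candidate - face) < tolerance for face in photo)
           for photo in photos[1:]):
        recurring = candidate
        break
print(recurring)  # → [0. 0.]
```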
This is possible? Thank you | open | 2019-03-13T08:37:09Z | 2019-03-17T06:17:44Z | https://github.com/ageitgey/face_recognition/issues/772 | [] | arpsyapathy | 3 |
awesto/django-shop | django | 430 | Tutorial instructions don't work | http://django-shop.readthedocs.io/en/latest/tutorial/intro.html
The problem here is an outdated pip / setuptools.
Proposed fix: document that user should update pip and setuptools.
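Concretely, the tutorial could gain a step like this before the editable install (a sketch of the proposed documentation change, not tested against every environment):

```shell
# inside the freshly created virtualenv
pip install --upgrade pip setuptools
pip install -e .
```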
```
rene@rene /tmp $ virtualenv -p $(which python3.5) pshoptutorial
Running virtualenv with interpreter /home/rene/local/bin/python3.5
Using base prefix '/home/rene/local'
New python executable in pshoptutorial/bin/python3.5
Also creating executable in pshoptutorial/bin/python
Installing setuptools, pip...done.
rene@rene /tmp $ source shoptutorial/bin/activate
source: no such file or directory: shoptutorial/bin/activate
rene@rene /tmp $
rene@rene /tmp $
Script done, file is typescript
rene@rene /tmp $ rm typescript
rene@rene /tmp $ script 1
Script started, file is 1
* keychain 2.7.1 ~ http://www.funtoo.org
* Found existing ssh-agent: 4602
* Found existing gpg-agent: 4627
* Known ssh key: /home/rene/.ssh/id_rsa
rene@rene /tmp $ virtualenv -p $(which python3.5) shoptutorial
Running virtualenv with interpreter /home/rene/local/bin/python3.5
Using base prefix '/home/rene/local'
New python executable in shoptutorial/bin/python3.5
Also creating executable in shoptutorial/bin/python
Installing setuptools, pip...done.
rene@rene /tmp $ source shoptutorial/bin/activate
(shoptutorial)rene@rene /tmp $ mkdir Tutorial; cd Tutorial
(shoptutorial)rene@rene /tmp/Tutorial $ git clone --depth 1 https://github.com/awesto/django-shop
Cloning into 'django-shop'...
remote: Counting objects: 521, done.
remote: Compressing objects: 100% (461/461), done.
remote: Total 521 (delta 52), reused 208 (delta 25), pack-reused 0
Receiving objects: 100% (521/521), 1.82 MiB | 1.16 MiB/s, done.
Resolving deltas: 100% (52/52), done.
Checking connectivity... done.
(shoptutorial)rene@rene /tmp/Tutorial $ cd django-shop
(shoptutorial)rene@rene /tmp/Tutorial/django-shop (master) $ pip install -e .
Obtaining file:///tmp/Tutorial/django-shop
Running setup.py (path:/tmp/Tutorial/django-shop/setup.py) egg_info for package from file:///tmp/Tutorial/django-shop
error in django-shop setup command: Invalid environment marker: python_version<"3.4"
Complete output from command python setup.py egg_info:
error in django-shop setup command: Invalid environment marker: python_version<"3.4"
----------------------------------------
Cleaning up...
Command python setup.py egg_info failed with error code 1 in /tmp/Tutorial/django-shop
Storing debug log for failure in /home/rene/.pip/pip.log
```
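The proposed fix above (upgrade pip and setuptools inside the virtualenv with `pip install --upgrade pip setuptools`) works because the `Invalid environment marker` failure comes from a setuptools too old to parse PEP 508 markers such as `python_version<"3.4"`. A quick version check can be sketched like this; the 20.5 cutoff is an assumption based on the setuptools changelog:

```python
# Hedged sketch: returns True when the installed setuptools version should
# be able to parse PEP 508 environment markers (cutoff 20.5 is an assumption).
def supports_pep508_markers(version: str) -> bool:
    parts = []
    for piece in version.split(".")[:2]:
        digits = "".join(ch for ch in piece if ch.isdigit())
        parts.append(int(digits or 0))
    return tuple(parts) >= (20, 5)

print(supports_pep508_markers("18.0"))  # → False
print(supports_pep508_markers("20.5"))  # → True
```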
| open | 2016-09-30T13:42:45Z | 2016-11-08T23:52:21Z | https://github.com/awesto/django-shop/issues/430 | [
"blocker",
"bug",
"accepted",
"documentation"
] | rfleschenberg | 1 |
graphql-python/graphene | graphql | 956 | Cannot create an enum with a deprecation reason supplied | ## How to reproduce
```python
options = {
'description': 'This my enum',
'deprecation_reason': 'For the funs'}
graphene.Enum('MyEnum', [('some', 'data')], **options)
```
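A hedged workaround sketch: strip the unsupported kwarg before constructing the enum. The list of forwarded kwargs is an assumption taken from the traceback below, where `Enum.__call__` only passes `description` through:

```python
def supported_enum_kwargs(options, allowed=("description",)):
    """Drop kwargs that graphene.Enum.__call__ does not forward.

    The `allowed` default is an assumption based on the traceback below:
    only `description` reaches Enum.from_enum.
    """
    return {k: v for k, v in options.items() if k in allowed}

options = {
    "description": "This my enum",
    "deprecation_reason": "For the funs",
}
print(supported_enum_kwargs(options))  # → {'description': 'This my enum'}
# graphene.Enum("MyEnum", [("some", "data")], **supported_enum_kwargs(options))
```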
## What happened
```
File "/Users/Development/saleor/saleor/graphql/core/enums.py", line 35, in to_enum
return graphene.Enum(type_name, enum_data, **options)
File "/Users/Development/saleor-venv/lib/python3.7/site-packages/graphene/types/enum.py", line 49, in __call__
return cls.from_enum(PyEnum(*args, **kwargs), description=description)
TypeError: __call__() got an unexpected keyword argument 'deprecation_reason'
``` | closed | 2019-05-02T13:09:58Z | 2019-05-06T17:18:02Z | https://github.com/graphql-python/graphene/issues/956 | [
"good first issue"
] | NyanKiyoshi | 1 |
QuivrHQ/quivr | api | 3,079 | BUG: Knowledge Syncs | * Set knowledge status to error
* Use notifier for syncs?
* Write test for sync | closed | 2024-08-23T15:22:18Z | 2024-08-30T07:24:36Z | https://github.com/QuivrHQ/quivr/issues/3079 | [
"bug"
] | linear[bot] | 1 |
seleniumbase/SeleniumBase | pytest | 3,271 | (UC Mode + Windows): The `driver` might stay open after the test completes in the `SB()` format | ### (UC Mode + Windows): The `driver` might stay open after the test completes in the `SB()` format
----
I already figured out the cause: I was using `/` instead of `os.sep` for something.
On Windows, the standard path separator is `\\`. (`os.sep` knows the difference.)
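The fix described above can be sketched as follows; the `profile_path` helper is hypothetical and just illustrates replacing a hard-coded `/` with separator-aware joining:

```python
import os

# Broken on Windows: a hard-coded "/" yields mixed separators there.
def profile_path_broken(base, name):
    return base + "/" + name

# Portable: os.path.join uses os.sep ("\\" on Windows, "/" elsewhere).
def profile_path(base, name):
    return os.path.join(base, name)
```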
| closed | 2024-11-15T05:28:15Z | 2024-11-15T06:31:29Z | https://github.com/seleniumbase/SeleniumBase/issues/3271 | [
"bug",
"UC Mode / CDP Mode"
] | mdmintz | 1 |
FlareSolverr/FlareSolverr | api | 615 | FlareSolverr not by-passing CloudFlare | BUG / ERROR
* **FlareSolverr version**: 2.2.10
* **Last working FlareSolverr version**: I'm new, so 2.2.10
* **Operating system**: ubuntu / ubuntu server
* **Are you using Docker**: yes
* **FlareSolverr User-Agent (see log traces or / endpoint)**: unknown; see logs
* **Are you using a proxy or VPN?** no
* **Are you using Captcha Solver:** no
* **If using captcha solver, which one:**
* **URL to test this issue:** https://cpasbiens3.fr/
### Description
I just want to add FlareSolverr to Prowlarr so it can access the website above, but it does not bypass Cloudflare as intended.
### Logged Error Messages
Prowlarr error message when I want to add the website to the indexers: Unable to access cpasbiens3.fr, blocked by CloudFlare Protection.
[prowlarr (1).txt](https://github.com/FlareSolverr/FlareSolverr/files/10202783/prowlarr.1.txt)
[flaresolverlogs.txt](https://github.com/FlareSolverr/FlareSolverr/files/10202790/flaresolverlogs.txt)
### Screenshots

<img width="536" alt="image" src="https://user-images.githubusercontent.com/86328249/206918923-767cf91c-4ce0-45ca-861a-6e113f7e9b4b.png">
Everything is set up correctly; I have even reinstalled some things. Please help.
"more information needed"
] | S0ly | 8 |
plotly/dash | data-science | 2,229 | [BUG] console log gets spammed | The JS console gets flooded with logs from `callbacks.ts:460`:
`console.log(cb.callback.output, getState().callbackJobs);`
Is this intended?
```
dash 2.6.0
dash-core-components 2.0.0
dash-html-components 2.0.0
dash-table 5.0.0
plotly 5.10.0
```
| closed | 2022-09-13T10:49:02Z | 2022-09-13T13:09:00Z | https://github.com/plotly/dash/issues/2229 | [] | luggie | 1 |
SYSTRAN/faster-whisper | deep-learning | 347 | timestamps by minute | Is there any way to get the timestamps by minutes instead of seconds? For example, one of my output segments has a timestamp of [66s - 80s] but I would like it to be [1:06 - 1:20] instead. | closed | 2023-07-10T18:47:18Z | 2023-07-11T00:18:14Z | https://github.com/SYSTRAN/faster-whisper/issues/347 | [] | SubinPradeep | 1
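The conversion asked about above is plain arithmetic on the per-segment `start`/`end` seconds, for example:

```python
def mmss(seconds: float) -> str:
    """Format a second count as M:SS, e.g. 66 -> '1:06'."""
    minutes, secs = divmod(int(seconds), 60)
    return f"{minutes}:{secs:02d}"

print(f"[{mmss(66)} - {mmss(80)}]")  # → [1:06 - 1:20]
```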
graphql-python/gql | graphql | 223 | Fatal Python error: Segmentation fault" from a Raspberry Pi Python3.9.3 32-bit | See the trace below after adding import faulthandler; faulthandler.enable() according to https://blog.richard.do/2018/03/18/how-to-debug-segmentation-fault-in-python/
```
pi@raspberrypi:~/dev/EnergyMeterTelemetry $ /usr/local/opt/python-3.9.3/bin/python3.9 /home/pi/dev/EnergyMeterTelemetry/TibberClient.py
Fatal Python error: Segmentation fault
Thread 0xb57be460 (most recent call first):
File "/usr/local/opt/python-3.9.3/lib/python3.9/concurrent/futures/thread.py", line 75 in _worker
File "/usr/local/opt/python-3.9.3/lib/python3.9/threading.py", line 892 in run
File "/usr/local/opt/python-3.9.3/lib/python3.9/threading.py", line 954 in _bootstrap_inner
File "/usr/local/opt/python-3.9.3/lib/python3.9/threading.py", line 912 in _bootstrap
Current thread 0xb6f9a010 (most recent call first):
File "/home/pi/.local/lib/python3.9/site-packages/aiohttp/client_proto.py", line 213 in data_received
File "/usr/local/opt/python-3.9.3/lib/python3.9/asyncio/sslproto.py", line 545 in data_received
File "/usr/local/opt/python-3.9.3/lib/python3.9/asyncio/selector_events.py", line 870 in _read_ready__data_received
File "/usr/local/opt/python-3.9.3/lib/python3.9/asyncio/selector_events.py", line 813 in _read_ready
File "/usr/local/opt/python-3.9.3/lib/python3.9/asyncio/events.py", line 80 in _run
File "/usr/local/opt/python-3.9.3/lib/python3.9/asyncio/base_events.py", line 1890 in _run_once
File "/usr/local/opt/python-3.9.3/lib/python3.9/asyncio/base_events.py", line 596 in run_forever
File "/usr/local/opt/python-3.9.3/lib/python3.9/asyncio/base_events.py", line 629 in run_until_complete
File "/home/pi/.local/lib/python3.9/site-packages/gql/client.py", line 167 in execute
File "/home/pi/dev/EnergyMeterTelemetry/TibberClient.py", line 37 in <module>
Segmentation fault
```
Example code:
``` python
import faulthandler; faulthandler.enable()
from gql import Client, gql
from gql.transport.aiohttp import AIOHTTPTransport
import json
# Select your transport with a defined url endpoint
transport = AIOHTTPTransport(url="https://api.tibber.com/v1-beta/gql", headers={'Authorization': '476c477d8a039529478ebd690d35ddd80e3308ffc49b59c65b142321aee963a4',
"Content-Type" : "application/json"}) # demo authorzation token
# Create a GraphQL client using the defined transport
client = Client(transport=transport, fetch_schema_from_transport=True)
# Provide a GraphQL query
query = gql(
"""
{
viewer {
homes {
currentSubscription {
priceInfo {
today {
total
energy
tax
startsAt
}
}
}
}
}
}
"""
)
# Execute the query on the transport
data = client.execute(query)
print(data)
```
Same problem with asyncio.
Demo code works on my Ubuntu Python 3.8.10 64-bit. | closed | 2021-08-03T20:17:41Z | 2021-08-20T13:29:09Z | https://github.com/graphql-python/gql/issues/223 | [
"status: needs investigation"
] | troelde | 6 |
HIT-SCIR/ltp | nlp | 113 | Too many problems when the variable 'interval' is 0 | [1](https://github.com/HIT-SCIR/ltp/blob/master/src%2Fsegmentor%2Fsegmentor_frontend.cpp#L178-L184)
[2](https://github.com/HIT-SCIR/ltp/blob/master/src%2Fsegmentor%2Fsegmentor_frontend.cpp#L263-L284)
| closed | 2015-06-02T06:35:11Z | 2015-06-03T03:30:29Z | https://github.com/HIT-SCIR/ltp/issues/113 | [
"bug"
] | pynixwang | 1 |
globaleaks/globaleaks-whistleblowing-software | sqlalchemy | 3,093 | Context requires double click to open | Pertaining to GL v. 4.4.5
We have encountered an issue with context pages which, on click, open and shut immediately thereafter. A second click will open the page.
The error occurs randomly, and no pattern has been identified yet.
/soren | closed | 2021-11-09T15:14:51Z | 2021-11-11T14:58:39Z | https://github.com/globaleaks/globaleaks-whistleblowing-software/issues/3093 | [
"T: Bug",
"C: Client"
] | schris-dk | 2 |
junyanz/pytorch-CycleGAN-and-pix2pix | deep-learning | 1,660 | how to train for two input image and one output image | Hello,
Thank you for the work.
I have a case where I have two input images which, combined, give me one output image. How can I do this? What part should be modified?
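One common approach (an assumption here, not the repo's documented method) is to concatenate the two inputs along the channel axis and widen the generator's first layer, e.g. with `--input_nc 6`. The shape bookkeeping, sketched with NumPy in place of the repo's `torch.cat`:

```python
import numpy as np

# Two 3-channel inputs stacked along the channel axis become one
# 6-channel input; the generator then needs input_nc=6.
a = np.zeros((1, 3, 256, 256))  # first input image, (N, C, H, W)
b = np.zeros((1, 3, 256, 256))  # second input image
x = np.concatenate([a, b], axis=1)
print(x.shape)  # → (1, 6, 256, 256)
```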
| closed | 2024-05-31T08:54:41Z | 2024-06-03T05:51:37Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1660 | [] | roshan2024nar | 2 |
axnsan12/drf-yasg | django | 271 | Provide a way to statically add references to manually declared objects | I am trying to document an API whose response has a $ref as its schema and also has headers,
something like the below:
```
200:
description: Success
headers:
X-RateLimit-Limit:
type: integer
description: Request limit per hour.
schema:
$ref: "#/definitions/User"
```
I know how to construct `openapi.Response(schema=SomeSchema, headers={...})`, but it seems there is no way to specify a SchemaRef response with headers currently.
Inside openapi._Ref
```python
def __setitem__(self, key, value):
if key == "$ref":
return super(_Ref, self).__setitem__(key, value)
raise NotImplementedError("Only $ref can be set on Reference objects (not %s)" % key)
```
Is there any workaround? Or should we loosen the `__setitem__` restriction on `_Ref` objects a little bit so headers can be set directly on a `_Ref` object?
```python
ref: SchemaRef = ...
ref["headers"] = {
"X-RateLimit-Limit": {
"decsription": "...",
"type": "string"
}
}
``` | closed | 2018-12-13T03:58:38Z | 2022-07-17T17:56:00Z | https://github.com/axnsan12/drf-yasg/issues/271 | [] | mpwang | 2 |
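The loosening asked about above could look like this; the class is an illustrative stand-in, not drf-yasg's real `_Ref`:

```python
class RefWithHeaders(dict):
    """Illustrative stand-in for drf-yasg's _Ref (not the real class),
    loosened so `headers` is allowed alongside `$ref`."""

    _allowed = ("$ref", "headers")

    def __setitem__(self, key, value):
        if key in self._allowed:
            return super().__setitem__(key, value)
        raise NotImplementedError(
            "Only %s can be set on Reference objects (not %s)"
            % (self._allowed, key)
        )

ref = RefWithHeaders()
ref["$ref"] = "#/definitions/User"
ref["headers"] = {"X-RateLimit-Limit": {"type": "integer"}}
```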
holoviz/panel | jupyter | 7,384 | OAuth guest endpoints not working |
#### ALL software version info
panel: 1.4.5
python: 3.9.13
#### Description of expected behavior and the observed behavior
I was following this documentation https://holoviz-dev.github.io/panel/how_to/authentication/guest_users.html#guest-endpoints to add an endpoint (`/health`) without OAuth authentication in a Panel app, but it doesn't seem to work.
#### Complete, minimal, self-contained example code that reproduces the issue
```python
# minimal.py
import panel as pn
pn.extension()
def health():
return {"message": "ok"}
def index_page():
return pn.pane.Markdown("# Hello, World!")
if __name__ == "__main__":
all_pages = {
"/": index_page,
"/health": health,
}
pn.serve(
all_pages,
port=5006,
show=False,
autoreload=True,
allow_websocket_origin=[
"localhost:5006",
],
oauth_provider="auth0",
oauth_key="random.PANEL_OAUTH_KEY",
oauth_secret="random.PANEL_OAUTH_SECRET",
oauth_guest_endpoints=[
# This allows this endpoint to be accessed without authentication
"/health"
],
cookie_secret="randomsecret",
)
```
Command: `python minimal.py`
## Expectation:
- OAuth redirect on http://localhost:5006/
- No OAuth redirect on http://localhost:5006/health
as per documentation: https://holoviz-dev.github.io/panel/how_to/authentication/guest_users.html#guest-endpoints
## Current Behaviour
- OAuth redirect on http://localhost:5006/health as well.
cc @philippjfr @tupui
- [ ] I may be interested in making a pull request to address this
| closed | 2024-10-09T15:44:50Z | 2024-10-10T08:18:02Z | https://github.com/holoviz/panel/issues/7384 | [] | aktech | 2 |
newpanjing/simpleui | django | 76 | Login page styles load incompletely; username and password cannot be entered | **Bug description**
A brief description of the bug encountered:
Opening the admin login page is very slow, and the styles are incomplete: only the logo and the login button appear; the username/password input boxes are gone. I hope someone can reply to me after seeing this.
**Steps to reproduce**
debug = True

**Environment**
1. Operating system: Win7, Win10
2. Python version: 3.6
3. Django version: 2.1
4. simpleui version: 2.1
**Other notes**
| closed | 2019-06-05T09:14:23Z | 2019-06-10T08:04:51Z | https://github.com/newpanjing/simpleui/issues/76 | [
"bug"
] | pokededan | 3 |
ipython/ipython | data-science | 14,311 | Move backend mapping to Matplotlib | I wanted to draw your attention to matplotlib/matplotlib#27663, about moving the Matplotlib backend mappings out of IPython and into Matplotlib.
The primary use case it so support Matplotlib widgets (`ipympl` and `matplotlib-inline`) registering themselves as Matplotlib backends without requiring additional code in IPython and/or Matplotlib. The secondary use case is to support backends in IPython using Matplotlib's `module://name.of.the.backend` syntax, e.g.
```
%matplotlib module://mplcairo.backend
```
which one can already do using `matplotlib.use(...)` but not directly via the `%matplotlib` magic.
Whilst doing this it seems sensible to bring all of the backend registering and mapping together in one place, and that should be Matplotlib rather than IPython. I am not sure how easy (or even possible!) it will be to remove all the related hard-coded stuff in IPython, but I am willing to start and see how it goes. | closed | 2024-01-30T12:05:56Z | 2024-04-12T12:39:34Z | https://github.com/ipython/ipython/issues/14311 | [
"matplotlib",
"magics"
] | ianthomas23 | 4 |
christabor/flask_jsondash | flask | 126 | Add embeddable mode | ### Use case:
As a user I need to be able to create more complex dashboards and pages than what is supported in the schema/tool. But I don't want the tool to be so complex that the schema and language effectively become a DSL and are as complex (or more so) than just doing it all myself.
To that end, it would be easiest to embed using a traditional iframe. This allows more complex dashboards to wrap this one.
### Tradeoffs:
* Any con associated with using iframes in general.
### Implementation:
* This would be used as an iframe.
* An `embedded=true` query parameter will be inserted into the url which is then used within the flask blueprint to toggle features off. Exactly the same implementation method as the existing `jsondash_demo_mode` option.
* No way to avoid duplicate asset loading/sharing of assets (e.g., I use d3 in my site and also use it within here).
### Requirements:
* Should have a transparent bg so it fits well with other wrapped page designs
* Should hide all titles and editable elements
* Should hide all dragging/dropping/resizing ability
* Should hide all large titles and buttons, only showing the charts and their refresh buttons.
### Testing
Unit tests will suffice
### Examples
Config should be provided that creates a single full width chart that contains the embedded version as an iframe. This gives people an idea and provides fixtures for testing/demoing.
| closed | 2017-06-14T17:38:15Z | 2017-06-15T19:20:06Z | https://github.com/christabor/flask_jsondash/issues/126 | [
"enhancement",
"new chart"
] | christabor | 0 |
graphql-python/graphene | graphql | 1,348 | Is it necessary to prevent a datasclass field name from being a Python keyword? | I ran into this error when dynamically creating an dataclass:
"Field names must not be keywords: {name!r}"
https://github.com/graphql-python/graphene/blob/master/graphene/pyutils/dataclasses.py#L1196-L1197
The field names come from a defined external dataset, and one field name happens to be a Python keyword, but I don't think that causes any harm.
| open | 2021-07-14T12:16:36Z | 2021-11-17T17:31:49Z | https://github.com/graphql-python/graphene/issues/1348 | [
"✨ enhancement"
] | paulfelix | 1 |
CorentinJ/Real-Time-Voice-Cloning | deep-learning | 869 | How long should the dataset entries be for the encoder? | closed | 2021-10-07T15:46:37Z | 2021-10-07T15:48:35Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/869 | [] | fancat-programer | 0 | |
databricks/koalas | pandas | 2,188 | DataFrame.pivot does not accept list as index parameter | The following example does not work in Databricks Runtime 8.4:
```python
kdf = ks.DataFrame({"ui": ['C', 'D', 'D', 'C'],
"foo": ['one', 'one', 'two', 'two'],
"bar": ['A', 'A', 'B', 'C'],
"ar": [1, 2, 2, 2],
"baz": [1, 2, 3, 4]}, columns=['ui', 'foo', 'bar', 'baz', 'ar'])
kdf.pivot(index=['ui', 'foo'], columns='bar', values=['baz', 'ar'])
```
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<command-4107535394601473> in <module>
----> 1 df.pivot(index=['ui','foo'] , columns='bar', values=['baz', 'ar'])
/databricks/python/lib/python3.8/site-packages/databricks/koalas/usage_logging/__init__.py in wrapper(*args, **kwargs)
193 start = time.perf_counter()
194 try:
--> 195 res = func(*args, **kwargs)
196 logger.log_success(
197 class_name, function_name, time.perf_counter() - start, signature
/databricks/python/lib/python3.8/site-packages/databricks/koalas/frame.py in pivot(self, index, columns, values)
6274 index = df._internal.column_labels[: self._internal.index_level]
6275
-> 6276 df = df.pivot_table(index=index, columns=columns, values=values, aggfunc="first")
6277
6278 if should_use_existing_index:
/databricks/python/lib/python3.8/site-packages/databricks/koalas/usage_logging/__init__.py in wrapper(*args, **kwargs)
188 if hasattr(_local, "logging") and _local.logging:
189 # no need to log since this should be internal call.
--> 190 return func(*args, **kwargs)
191 _local.logging = True
192 try:
/databricks/python/lib/python3.8/site-packages/databricks/koalas/frame.py in pivot_table(self, values, index, columns, aggfunc, fill_value)
6048 index = [label if is_name_like_tuple(label) else (label,) for label in index]
6049 sdf = (
-> 6050 sdf.groupBy([self._internal.spark_column_name_for(label) for label in index])
6051 .pivot(pivot_col=self._internal.spark_column_name_for(columns))
6052 .agg(*agg_cols)
/databricks/python/lib/python3.8/site-packages/databricks/koalas/frame.py in <listcomp>(.0)
6048 index = [label if is_name_like_tuple(label) else (label,) for label in index]
6049 sdf = (
-> 6050 sdf.groupBy([self._internal.spark_column_name_for(label) for label in index])
6051 .pivot(pivot_col=self._internal.spark_column_name_for(columns))
6052 .agg(*agg_cols)
/databricks/python/lib/python3.8/site-packages/databricks/koalas/internal.py in spark_column_name_for(self, label_or_scol)
813 scol = label_or_scol
814 else:
--> 815 scol = self.spark_column_for(label_or_scol)
816 return self.spark_frame.select(scol).columns[0]
817
/databricks/python/lib/python3.8/site-packages/databricks/koalas/internal.py in spark_column_for(self, label)
803 """ Return Spark Column for the given column label. """
804 column_labels_to_scol = dict(zip(self.column_labels, self.data_spark_columns))
--> 805 if label in column_labels_to_scol:
806 return column_labels_to_scol[label]
807 else:
TypeError: unhashable type: 'list'
```
I am using
```python
kdf.pivot_table(index=['ui','foo'] , columns='bar', values=['baz', 'ar'], aggfunc='first')
```
to solve my problem, but I think that `pivot` should work with a MultiIndex.
| open | 2021-08-16T15:03:59Z | 2021-08-26T23:44:54Z | https://github.com/databricks/koalas/issues/2188 | [
"bug"
] | crucis | 0 |
quokkaproject/quokka | flask | 19 | python manage.py createsuperuser | Hi, I run the command:
python manage.py createsuperuser
error information:
Traceback (most recent call last):
File "manage.py", line 52, in <module>
load_blueprint_commands(manager)
File "D:\python_pro\flask_pro\quokka-env\quokka\quokka\ext\blueprints.py", lin
e 94, in load_blueprint_commands
mod = imp.load_module(fname, f, filename, descr)
File "D:\python_pro\flask_pro\quokka-env\quokka\quokka\modules\posts\commands.
py", line 4, in <module>
from .models import Post
ValueError: Attempted relative import in non-package
I found that models.py (quokka/quokka/modules/posts) fails to import.
| closed | 2013-08-14T01:11:09Z | 2015-07-16T02:56:57Z | https://github.com/quokkaproject/quokka/issues/19 | [] | javalurker | 4 |
gradio-app/gradio | machine-learning | 10,157 | HighlightedText select event payload not passed | ### Describe the bug
Tried to attach select listener to HighlightedText and access the selected segment in the listener.
As per documentation, https://www.gradio.app/docs/gradio/highlightedtext#event-listeners
`Event listener for when the user selects or deselects the HighlightedText. Uses event data gradio.SelectData to carry value referring to the label of the HighlightedText, and selected to refer to state of the HighlightedText`
I tried to print event payload but it appears to be None
There's a warning in the terminal suggesting that the select listener accepts no parameters:
```
UserWarning: Unexpected argument. Filling with None.
warnings.warn("Unexpected argument. Filling with None.")
```
The terminal logs come from my print statement and confirm that the listener parameter is None:
`on_select <class 'NoneType'> None`
### Have you searched existing issues? 🔎
- [X] I have searched and found no existing issues
### Reproduction
```python
import gradio as gr
def on_select(param):
print('on_select', type(param), param)
return param
with gr.Blocks() as demo:
h1 = gr.HighlightedText([("Hello my name is ", None), ("Abubakar", "PER"), (" and I live in ", None), ("Palo Alto", "LOC")], interactive=False)
t1 = gr.Text(interactive=False, label='Selected')
h1.select(fn=on_select, outputs=[t1])
demo.launch()
```
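For context: as far as I know, gradio injects the event payload only into parameters annotated with `gr.SelectData`; an unannotated positional parameter is treated as a missing input and filled with None, matching the warning above. A generic sketch of that annotation-based dispatch (stand-in class and helper, not gradio's actual source):

```python
import inspect

class SelectData:  # stand-in for gr.SelectData (illustration only)
    pass

def wants_event_data(fn) -> bool:
    """Detect whether any parameter is annotated with the event-data type.

    This sketches the dispatch rule as an assumption about gradio's
    behavior, not its implementation.
    """
    return any(
        p.annotation is SelectData
        for p in inspect.signature(fn).parameters.values()
    )

def on_select(evt: SelectData):
    return evt

def on_select_positional(param):
    return param

print(wants_event_data(on_select))             # → True
print(wants_event_data(on_select_positional))  # → False
```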
### Screenshot

### Logs
_No response_
### System Info
```shell
Operating System: Windows
gradio version: 5.8.0
gradio_client version: 1.5.1
------------------------------------------------
gradio dependencies in your environment:
aiofiles: 22.1.0
anyio: 3.6.2
audioop-lts is not installed.
fastapi: 0.115.6
ffmpy: 0.3.2
gradio-client==1.5.1 is not installed.
httpx: 0.26.0
huggingface-hub: 0.25.2
jinja2: 3.1.2
markupsafe: 2.1.2
numpy: 1.23.5
orjson: 3.9.14
packaging: 22.0
pandas: 2.2.1
pillow: 10.2.0
pydantic: 2.6.1
pydub: 0.25.1
python-multipart: 0.0.19
pyyaml: 6.0.1
ruff: 0.2.2
safehttpx: 0.1.6
semantic-version: 2.10.0
starlette: 0.41.3
tomlkit: 0.12.0
typer: 0.12.3
typing-extensions: 4.9.0
urllib3: 2.2.0
uvicorn: 0.27.1
authlib; extra == 'oauth' is not installed.
itsdangerous; extra == 'oauth' is not installed.
gradio_client dependencies in your environment:
fsspec: 2023.10.0
httpx: 0.26.0
huggingface-hub: 0.25.2
packaging: 22.0
typing-extensions: 4.9.0
websockets: 12.0
```
### Severity
Blocking usage of gradio | closed | 2024-12-09T07:03:28Z | 2024-12-09T07:49:04Z | https://github.com/gradio-app/gradio/issues/10157 | [
"bug"
] | jsaluja | 1 |
RayVentura/ShortGPT | automation | 122 | ✨ [Feature Request / Suggestion]: Implement the api from text-generation-webui project | ### Suggestion / Feature Request
Integrate the OpenAI-compatible API from the https://github.com/oobabooga/text-generation-webui project to make ShortGPT free to run and independent of the official OpenAI API.
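One way this integration usually works, sketched below: point the OpenAI client at the local server instead of api.openai.com, so ShortGPT's existing OpenAI calls hit text-generation-webui. The port, path, and environment-variable names are assumptions based on the linked wiki page; a local server typically ignores the API key.

```python
import os

# Hedged sketch: redirect the OpenAI client to a local OpenAI-compatible
# server. Values below are assumptions, not documented ShortGPT settings.
os.environ["OPENAI_API_BASE"] = "http://localhost:5000/v1"
os.environ["OPENAI_API_KEY"] = "sk-dummy"  # placeholder; local server ignores it
print(os.environ["OPENAI_API_BASE"])  # → http://localhost:5000/v1
```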
### Why would this be useful?
This can help the many users who can't use ShortGPT because of the official API.
### Screenshots/Assets/Relevant links
https://github.com/oobabooga/text-generation-webui
https://github.com/oobabooga/text-generation-webui/wiki/12-%E2%80%90-OpenAI-API | open | 2023-12-16T12:10:09Z | 2023-12-16T12:10:09Z | https://github.com/RayVentura/ShortGPT/issues/122 | [] | YotomiY | 0 |
thewhiteh4t/pwnedOrNot | api | 21 | Killed | I keep getting a "Killed" message when running a dump search on my email address. I can't find what the issue may be in the .py code; is it rate limiting by haveibeenpwned.com? | closed | 2019-04-21T12:52:54Z | 2019-04-22T16:40:36Z | https://github.com/thewhiteh4t/pwnedOrNot/issues/21 | [] | JaySmith502 | 6
d2l-ai/d2l-en | pytorch | 1,778 | Unify hyperparameters of all frameworks in DCGAN | https://github.com/d2l-ai/d2l-en/blob/master/chapter_generative-adversarial-networks/dcgan.md
Currently the TF implementation (https://github.com/d2l-ai/d2l-en/pull/1760/files) uses a different set of hyperparameters:
```python
#@tab mxnet, pytorch
latent_dim, lr, num_epochs = 100, 0.005, 20
train(net_D, net_G, data_iter, num_epochs, lr, latent_dim)

#@tab tensorflow
latent_dim, lr, num_epochs = 100, 0.0005, 40
train(net_D, net_G, data_iter, num_epochs, lr, latent_dim)
```
Increasing `num_epochs` to 40 doubles the execution time in TF. Let's unify hyperparameters across all the frameworks. | open | 2021-06-08T00:35:07Z | 2023-10-31T14:20:55Z | https://github.com/d2l-ai/d2l-en/issues/1778 | [
"tensorflow-adapt-track"
] | astonzhang | 3 |
WZMIAOMIAO/deep-learning-for-image-processing | pytorch | 667 | positive = torch.where(torch.ge(matched_idxs_per_image, 1))[0] | **System information**
* Have I written custom code:
* OS Platform (e.g., Linux Ubuntu 18.04): Ubuntu 18.04
* Python version: 3.7
* Deep learning framework and version: PyTorch 1.8.1
* Use GPU or not: GPU
* CUDA/cuDNN version (if you use GPU): CUDA 11.1
* The network you trained (e.g., ResNet-34 network): fasterrcnn-resnet50_fpn
**Describe the current behavior**
Hi, when I train on VOC2007, the error is as below:
positive = torch.where(torch.ge(matched_idxs_per_image, 1))[0]
RuntimeError: CUDA error: device-side assert triggered
**Error info / logs**
/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:142: operator(): block: [0,0,0], thread: [32,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:142: operator(): block: [0,0,0], thread: [33,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
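This device-side assert typically means a target class index falls outside [0, num_classes - 1]; with VOC, num_classes is usually 21 counting background, which is an assumption about this repo's convention. A CPU-side pre-flight check over the dataset surfaces the offending labels with a readable error:

```python
def out_of_range_labels(labels, num_classes):
    """Return label values that would trip the CUDA gather assert.

    `labels` is a flat iterable of integer class ids; `num_classes`
    counts the background class (21 for VOC in most Faster R-CNN setups).
    """
    return sorted({l for l in labels if not 0 <= l < num_classes})

print(out_of_range_labels([0, 5, 20, 21, -1], num_classes=21))  # → [-1, 21]
```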
| closed | 2022-10-24T06:56:01Z | 2022-11-06T03:14:38Z | https://github.com/WZMIAOMIAO/deep-learning-for-image-processing/issues/667 | [] | EudicL | 1 |
babysor/MockingBird | deep-learning | 156 | Create SECURITY.md | Hey there!
I belong to an open source security research community, and a member (@0xab3l) has found an issue, but doesn’t know the best way to disclose it.
If not a hassle, might you kindly add a `SECURITY.md` file with an email, or another contact method? GitHub [recommends](https://docs.github.com/en/code-security/getting-started/adding-a-security-policy-to-your-repository) this best practice to ensure security issues are responsibly disclosed, and it would serve as a simple instruction for security researchers in the future.
Thank you for your consideration, and I look forward to hearing from you!
(cc @huntr-helper) | open | 2021-10-17T14:11:43Z | 2021-10-19T15:33:22Z | https://github.com/babysor/MockingBird/issues/156 | [] | zidingz | 1 |
ageitgey/face_recognition | python | 976 | How to get confidence score using cnn detection in batch of images? | * face_recognition version: latest
* Python version: 3.6
* Operating System: JetPack 4.2.1
Hello everyone,
I'm trying to run CNN detection on a batch of images. Calling the batch detection function works, but the result doesn't include confidence scores. I took a look at the API and found that the `dlib.mmod_rectangles` class does not expose a confidence attribute the way `dlib.mmod_rectangle` does. So how can I get the confidence score for each detection if I have to run detection on batches rather than single images? Sorry for the newbie question; I'm new to dlib and really need help.
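For what it's worth, `dlib.mmod_rectangles` is a sequence of `dlib.mmod_rectangle` objects, so iterating it should reach the per-detection `confidence` field. A sketch under that assumption, not tested against a specific dlib version:

```python
def batch_confidences(batch_detections):
    """Collect per-detection confidences from batched CNN output.

    Assumes each per-image result is a sequence of detection objects
    exposing `.confidence`, as dlib's mmod_rectangle does.
    """
    return [[det.confidence for det in dets] for dets in batch_detections]
```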
Thank you a lot!!! | closed | 2019-11-13T14:29:16Z | 2019-11-14T08:26:16Z | https://github.com/ageitgey/face_recognition/issues/976 | [] | congphase | 1 |
davidsandberg/facenet | tensorflow | 802 | other pre-trained models as feature extractor | Hi there,
I'm interested in testing the performance of other models pre-trained on ImageNet (in this case VGG) for extracting embeddings, but only the .ckpt file is available; the .pb file and metagraph file are not. Is it possible to use classifier.py in TRAIN mode with vgg.ckpt, and how?
Or how can I export a metagraph or protobuf file?
ultrafunkamsterdam/undetected-chromedriver | automation | 1,216 | The handle is invalid | This is the error I get; I don't know why. | open | 2023-04-23T12:37:07Z | 2023-04-23T12:37:07Z | https://github.com/ultrafunkamsterdam/undetected-chromedriver/issues/1216 | [] | snehccurry | 0
ultralytics/ultralytics | pytorch | 19,807 | How to completely disable Albumentations-based augmentations in YOLOv11 (e.g., Blur, MedianBlur etc..)? | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
I tried setting augment=False, but it seems this specific augmentation is still being applied. I couldn’t find clear instructions on how to fully disable it.
`DDP: debug command /user77/miniforge3/envs/user/bin/python -m torch.distributed.run --nproc_per_node 2 --master_port 40121 /user77/.config/Ultralytics/DDP/_temp_073wsljo23452078238096.py
Ultralytics 8.3.86 🚀 Python-3.13.2 torch-2.6.0+cu124 CUDA:0 (NVIDIA A100 80GB PCIe, 81229MiB)
CUDA:1 (NVIDIA A100 80GB PCIe, 81229MiB)
Overriding model.yaml nc=80 with nc=2
Freezing layer 'model.23.dfl.conv.weight'
AMP: running Automatic Mixed Precision (AMP) checks...
AMP: checks passed ✅
train: Scanning /path/2.F
train: Caching images (25.5GB Disk): 100%|██████████| 997/997 [00:00<00:00, 704
albumentations: Blur(p=0.01, blur_limit=(3, 7)), MedianBlur(p=0.01, blur_limit=(3, 7)), ToGray(p=0.01, num_output_channels=3, method='weighted_average'), CLAHE(p=0.01, clip_limit=(1.0, 4.0), tile_grid_size=(8, 8))`
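To my knowledge there is no documented switch for these transforms; a common workaround is `pip uninstall albumentations`, since the wrapper only activates when that package imports. Monkeypatching the wrapper is another option, sketched here on a stand-in class so the shape is clear; the real class living at `ultralytics.data.augment.Albumentations` is an assumption about the 8.x layout:

```python
import random

class Albumentations:
    """Stand-in mirroring the shape of ultralytics' wrapper (assumption
    from the 8.x source: __call__ is a no-op when transform is None)."""

    def __init__(self, p=1.0):
        self.p = p
        self.transform = object()  # pretend albumentations pipeline

    def __call__(self, labels):
        if self.transform is None or random.random() > self.p:
            return labels
        return {"augmented": True, **labels}

class NoAlbumentations(Albumentations):
    """Drop-in subclass that never augments."""

    def __init__(self, p=1.0):
        self.p = 0.0
        self.transform = None

# In a real run one would patch before building the trainer (an
# assumption, not an official API):
#   import ultralytics.data.augment as A
#   A.Albumentations = NoAlbumentations
labels = {"img": "sample"}
print(NoAlbumentations()(labels) == labels)  # → True
```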
### Additional
_No response_ | open | 2025-03-21T02:04:05Z | 2025-03-21T10:24:02Z | https://github.com/ultralytics/ultralytics/issues/19807 | [
"question",
"detect"
] | hillsonghimire | 5 |