| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
StackStorm/st2 | automation | 6,128 | Eventlet retirement | ## SUMMARY
As reported by @cognifloyd in slack: the eventlet team has announced their intention to retire the project. https://github.com/eventlet/eventlet?tab=readme-ov-file#warning This will have significant implications for the StackStorm project as the code base is architected and designed around eventlet and the synchronous programming model. | open | 2024-02-07T11:26:11Z | 2024-08-14T07:55:00Z | https://github.com/StackStorm/st2/issues/6128 | [] | nzlosh | 14 |
Textualize/rich | python | 3,586 | Expired certificate on willmcgugan.com domain linked from https://pypi.org/project/rich/ | Is it just me? I see an expired certificate on https://www.willmcgugan.com/blog/tech/
```shell
openssl s_client -servername willmcgugan.com -connect willmcgugan.com:443 2>/dev/null </dev/null | openssl x509 -noout -enddate
notAfter=Dec 7 11:41:55 2024 GMT
```
| open | 2024-12-13T17:12:49Z | 2024-12-13T17:13:10Z | https://github.com/Textualize/rich/issues/3586 | [] | zed | 1 |
OpenBB-finance/OpenBB | machine-learning | 6,795 | [🕹️]Starry-eyed Supporter | ### What side quest or challenge are you solving?
five friends to star our repository.
### Points
150
### Description
my five friends to star our repository.
### Provide proof that you've completed the task





| closed | 2024-10-17T05:49:34Z | 2024-10-20T18:11:17Z | https://github.com/OpenBB-finance/OpenBB/issues/6795 | [] | sohelhussain | 8 |
NullArray/AutoSploit | automation | 739 | Divided by zero exception9 | Error: Attempted to divide by zero.9 | closed | 2019-04-19T16:00:05Z | 2019-04-19T16:38:21Z | https://github.com/NullArray/AutoSploit/issues/739 | [] | AutosploitReporter | 0 |
aimhubio/aim | data-visualization | 2,503 | Add the ability to easily copy run hash with a single click | ## 🚀 Feature
As a user, it would be very helpful to have the ability to easily copy the run hash with a single click. Currently, the process of copying the run hash requires multiple steps, such as right-clicking and selecting 'Copy' or manually highlighting and copying the text. This can be time-consuming and frustrating, especially when working with a large number of experiments.
### Motivation
Adding a single-click option to copy the run hash would significantly improve the user experience and make it easier to track and reference specific runs. It would also save time, as users would not have to manually copy the run hash each time they want to reference it. This feature would be particularly useful for users who frequently need to share run hashes with others, such as in a team or collaboration setting.
| closed | 2023-01-26T12:00:21Z | 2023-02-10T12:56:15Z | https://github.com/aimhubio/aim/issues/2503 | [
"type / enhancement",
"area / Web-UI",
"phase / shipped"
] | VkoHov | 0 |
sqlalchemy/alembic | sqlalchemy | 1,146 | overload stubs missing for get_x_argument | **Describe the bug**
<!-- A clear and concise description of what the bug is. -->
The `context.pyi` does not have stubs for the function overloads specified in `environment.py`
```python
@overload
def get_x_argument( # type:ignore[misc]
self, as_dictionary: Literal[False] = ...
) -> List[str]:
...
@overload
def get_x_argument( # type:ignore[misc]
self, as_dictionary: Literal[True] = ...
) -> Dict[str, str]:
...
def get_x_argument(
self, as_dictionary: bool = False
) -> Union[List[str], Dict[str, str]]:
```
This will cause typing errors if trying to use:
```python
cmd_line_path = context.get_x_argument(as_dictionary=True).get("dbpath", None)
```
```
migrations/env.py:71:21: error: Item "List[str]" of "Union[List[str], Dict[str, str]]" has no attribute "get" [union-attr]
```
This is because the default typing for the function allows it to return `List[str]`, even though specifying `as_dictionary` should narrow the return type.
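For reference, a runnable toy showing the overload shape the `.pyi` stub would need to mirror (the class and return values here are illustrative; only the `@overload` signatures with `...` bodies would go into the stub):

```python
from typing import Dict, List, Literal, Union, overload

class DemoContext:
    """Toy stand-in for EnvironmentContext, showing the stub shape."""

    @overload
    def get_x_argument(self, as_dictionary: Literal[False] = ...) -> List[str]: ...
    @overload
    def get_x_argument(self, as_dictionary: Literal[True] = ...) -> Dict[str, str]: ...
    def get_x_argument(self, as_dictionary: bool = False) -> Union[List[str], Dict[str, str]]:
        # The real implementation reads -x options; this just returns fixed data.
        return {"dbpath": "demo.db"} if as_dictionary else ["dbpath=demo.db"]

dbpath = DemoContext().get_x_argument(as_dictionary=True).get("dbpath")
print(dbpath)  # -> demo.db
```

With these overloads in place, a type checker narrows `as_dictionary=True` calls to `Dict[str, str]`, so `.get(...)` passes.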
**Expected behavior**
<!-- A clear and concise description of what you expected to happen. -->
There should be no typing error once `@overload` stubs are added
**To Reproduce**
Please try to provide a [Minimal, Complete, and Verifiable](http://stackoverflow.com/help/mcve) example, with the migration script and/or the SQLAlchemy tables or models involved.
See also [Reporting Bugs](https://www.sqlalchemy.org/participate.html#bugs) on the website.
```py
# Insert code here
```
**Error**
```
# Copy error here. Please include the full stack trace.
```
**Versions.**
- OS:
- Python:
- Alembic:
- SQLAlchemy:
- Database:
- DBAPI:
**Additional context**
<!-- Add any other context about the problem here. -->
**Have a nice day!**
| closed | 2022-12-29T19:20:35Z | 2023-01-03T18:00:53Z | https://github.com/sqlalchemy/alembic/issues/1146 | [
"bug",
"pep 484"
] | vfazio | 1 |
ymcui/Chinese-LLaMA-Alpaca | nlp | 736 | Inference with Transformers: the latest Transformers no longer includes the scripts/inference_hf.py script | ### Required checks before submitting
- [X] Make sure you are using the latest code from the repository (git pull); some issues have already been resolved and fixed.
- [X] Since the related dependencies are updated frequently, make sure you follow the relevant steps in the [Wiki](https://github.com/ymcui/Chinese-LLaMA-Alpaca/wiki)
- [X] I have read the [FAQ section](https://github.com/ymcui/Chinese-LLaMA-Alpaca/wiki/常见问题) and searched the existing issues; I did not find a similar issue or solution
- [X] Third-party plugin issues: e.g. [llama.cpp](https://github.com/ggerganov/llama.cpp), [text-generation-webui](https://github.com/oobabooga/text-generation-webui), [LlamaChat](https://github.com/alexrozanski/LlamaChat), etc.; it is also recommended to look for solutions in the corresponding projects
- [X] Model correctness check: be sure to check the model's [SHA256.md](https://github.com/ymcui/Chinese-LLaMA-Alpaca/blob/main/SHA256.md); with an incorrect model, results and normal operation cannot be guaranteed
### Issue type
Model quantization and deployment
### Base model
LLaMA-33B
### Operating system
Linux
### Detailed description of the issue
The script was still there last month; Transformers has been updated recently.
### Dependencies (required for code-related issues)
```
# Paste dependency information here
```
### Run logs or screenshots
```
# Paste run logs here
``` | closed | 2023-07-11T06:55:25Z | 2023-07-18T22:40:37Z | https://github.com/ymcui/Chinese-LLaMA-Alpaca/issues/736 | [
"stale"
] | aawuj | 3 |
DistrictDataLabs/yellowbrick | matplotlib | 918 | Advanced dependency test matrix on release | We've recently been having [some issues](https://github.com/DistrictDataLabs/yellowbrick/issues/902#issuecomment-508462537) ensuring that all of our dependencies are passing tests since our CI only tests the latest version. Currently, our tests have good coverage but do take a while to run, which is slowing down the PR process. Instead of updating our tests with a larger test matrix, we are proposing an advanced matrix that only runs if the version has been bumped, e.g. on a release. This matrix would include all major versions of our dependencies and potentially could expand the number of python versions and implementations (e.g. miniconda), reducing these from our day-to-day test requirements.
Also for discussion, @jklymak suggests that we [test matplotlib from master](https://github.com/DistrictDataLabs/yellowbrick/pull/914#issuecomment-509009169) so that we can better catch changes in matplotlib dependencies before a release requires a hotfix. | open | 2019-07-07T17:25:41Z | 2019-08-28T23:43:40Z | https://github.com/DistrictDataLabs/yellowbrick/issues/918 | [
"priority: low",
"type: technical debt"
] | bbengfort | 0 |
Miserlou/Zappa | flask | 2,127 | botocore.exceptions.SSLError: SSL validation failed for <s3 file> [Errno 2] No such file or directory | <!--- Provide a general summary of the issue in the Title above -->
Getting the below error while trying to access remote_env from an s3 bucket
```
[1592935276008] [DEBUG] 2020-06-23T18:01:16.8Z b8374974-f820-484a-bcc3-64a530712769 Exception received when sending HTTP request.
Traceback (most recent call last):
File "/var/task/urllib3/util/ssl_.py", line 336, in ssl_wrap_socket
context.load_verify_locations(ca_certs, ca_cert_dir)
FileNotFoundError: [Errno 2] No such file or directory
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/var/runtime/botocore/httpsession.py", line 254, in send
urllib_response = conn.urlopen(
File "/var/task/urllib3/connectionpool.py", line 719, in urlopen
retries = retries.increment(
File "/var/task/urllib3/util/retry.py", line 376, in increment
raise six.reraise(type(error), error, _stacktrace)
File "/var/task/six.py", line 703, in reraise
raise value
File "/var/task/urllib3/connectionpool.py", line 665, in urlopen
httplib_response = self._make_request(
File "/var/task/urllib3/connectionpool.py", line 376, in _make_request
self._validate_conn(conn)
File "/var/task/urllib3/connectionpool.py", line 996, in _validate_conn
conn.connect()
File "/var/task/urllib3/connection.py", line 352, in connect
self.sock = ssl_wrap_socket(
File "/var/task/urllib3/util/ssl_.py", line 338, in ssl_wrap_socket
raise SSLError(e)
urllib3.exceptions.SSLError: [Errno 2] No such file or directory
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/var/runtime/botocore/endpoint.py", line 200, in _do_get_response
http_response = self._send(request)
File "/var/runtime/botocore/endpoint.py", line 244, in _send
return self.http_session.send(request)
File "/var/runtime/botocore/httpsession.py", line 281, in send
raise SSLError(endpoint_url=request.url, error=e)
botocore.exceptions.SSLError: SSL validation failed for ....... [Errno 2] No such file or directory
```
## My Environment
Zappa version used: 0.51.0
Operating System and Python version: Ubuntu , Python 3.8
Output of pip freeze
```
appdirs==1.4.3
argcomplete==1.11.1
boto3==1.14.8
botocore==1.17.8
CacheControl==0.12.6
certifi==2019.11.28
cffi==1.14.0
cfn-flip==1.2.3
chardet==3.0.4
click==7.1.2
colorama==0.4.3
contextlib2==0.6.0
cryptography==2.9.2
distlib==0.3.0
distro==1.4.0
docutils==0.15.2
durationpy==0.5
Flask==1.1.2
Flask-Cors==3.0.8
future==0.18.2
h11==0.9.0
hjson==3.0.1
html5lib==1.0.1
httptools==0.1.1
idna==2.8
ipaddr==2.2.0
itsdangerous==1.1.0
Jinja2==2.11.2
jmespath==0.10.0
kappa==0.6.0
lockfile==0.12.2
mangum==0.9.2
MarkupSafe==1.1.1
msgpack==0.6.2
packaging==20.3
pep517==0.8.2
pip-tools==5.2.1
placebo==0.9.0
progress==1.5
pycparser==2.20
pydantic==1.5.1
PyMySQL==0.9.3
pyOpenSSL==19.1.0
pyparsing==2.4.6
python-dateutil==2.6.1
python-slugify==4.0.0
pytoml==0.1.21
PyYAML==5.3.1
requests==2.22.0
retrying==1.3.3
s3transfer==0.3.3
six==1.14.0
starlette==0.13.4
text-unidecode==1.3
toml==0.10.1
tqdm==4.46.1
troposphere==2.6.1
typing-extensions==3.7.4.2
urllib3==1.25.8
uvloop==0.14.0
webencodings==0.5.1
websockets==8.1
Werkzeug==0.16.1
wsgi-request-logger==0.4.6
zappa==0.51.0
```
Your `zappa_settings.json`:
----------------------------
```
{
"dev": {
"app_function": "main.app",
"aws_region": "us-west-2",
"profile_name": "default",
"project_name": "d3c",
"runtime": "python3.8",
"keep_warm":false,
"cors": true,
"s3_bucket": "rnd-lambda-deployables",
"remote_env":"<my remote s3 file>"
}
}
```
I have confirmed that my S3 file is accessible from my local Ubuntu machine; however, it does not work on AWS.
| open | 2020-06-23T18:13:21Z | 2021-02-05T14:35:48Z | https://github.com/Miserlou/Zappa/issues/2127 | [] | maheshmadhusudanan | 12 |
mouredev/Hello-Python | fastapi | 558 | What should I do if an online gambling site scammed me? | Fund-recovery consulting + WeChat: zdn200, Telegram @lc15688
If you run into any of the following situations, it means you have already been scammed: ↓ ↓
[What to do when scammed by online gambling] [Won but the platform won't pay out] [System update] [Payment failed] [Abnormal bet record] [Online transfer] [Submission failed]
[Bet not returned] [Bet not updated] [Withdrawal channel under maintenance] [Wager double the turnover] [Deposit an equal amount]
The latest ways to deal with online gambling platforms that use all kinds of excuses to withhold payouts after you win
Remember: once you have won money, any excuse that keeps you from withdrawing basically means you have already been scammed.
 | closed | 2025-03-20T12:50:00Z | 2025-03-21T08:03:54Z | https://github.com/mouredev/Hello-Python/issues/558 | [] | xiaolu460570 | 0 |
JaidedAI/EasyOCR | machine-learning | 1,386 | CRAFT training on non ocr images | Hi,
This is regarding training the CRAFT model (the detection segment of EasyOCR). Apart from images containing text as part of the dataset, I also have images with no text, and I want the model to be trained on both types. While label files are provided for images containing text, I am unsure how to create labels for images without text.
Do I need to create label files for such images, or should the label file be created but left blank? | open | 2025-03-12T07:06:33Z | 2025-03-12T07:06:33Z | https://github.com/JaidedAI/EasyOCR/issues/1386 | [] | gupta9ankit5 | 0 |
pallets-eco/flask-sqlalchemy | flask | 796 | Pass installation extras to SQLAlchemy | ### Expected Behavior
When passing extras during installation, Flask-SQLAlchemy should pass them to SQLAlchemy.
So if I `pip install flask-sqlalchemy[postgresql]`
or specify `flask-sqlalchemy = {version = "*", extras = ["postgresql"]}` in Pipfile,
the module `psycopg2` (as specified in [SQLAlchemy setup.py](https://github.com/sqlalchemy/sqlalchemy/blob/926952c4afe0b2e16c4a74f05958bded7b932760/setup.py#L168)) should get installed.
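For illustration, a hypothetical sketch of what the pass-through could look like in Flask-SQLAlchemy's `setup.py` (this is not the real setup file; the extra names simply mirror SQLAlchemy's own extras):

```python
# Hypothetical extras_require for Flask-SQLAlchemy's setup() call.
# "pip install flask-sqlalchemy[postgresql]" would then pull psycopg2
# transitively via SQLAlchemy's extra of the same name.
extras_require = {
    "postgresql": ["sqlalchemy[postgresql]"],
    "mysql": ["sqlalchemy[mysql]"],
}
print(sorted(extras_require))  # -> ['mysql', 'postgresql']
```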
### Actual Behavior
Nothing happens, I have to specify dependency `psycopg2` manually
### Environment (imo irrelevant...)
* Operating system: Docker container, Debian 10 buster
* Python version: 3.7.4
* Flask-SQLAlchemy version: 2.4.1
* SQLAlchemy version: 1.3.11
| closed | 2019-12-11T20:08:07Z | 2020-12-05T20:21:39Z | https://github.com/pallets-eco/flask-sqlalchemy/issues/796 | [] | mvolfik | 1 |
python-visualization/folium | data-visualization | 1,218 | Possible to have different GeoJson layer level ? | #### Problem description
I would like to make a map that displays different layer levels in the area we want.
[This map](https://france-geojson.gregoiredavid.fr/) is a perfect example of the result I'm looking for.
For the moment I only have the first step: I display all regions of France, and I'm stuck here because I don't know how to capture the mouse click with Folium (or, if that's not possible, how to use the highlight_function to display another GeoJson file in the area the mouse is over).
| closed | 2019-10-30T13:56:38Z | 2022-11-25T11:48:59Z | https://github.com/python-visualization/folium/issues/1218 | [
"question"
] | shiron263 | 1 |
miguelgrinberg/Flask-SocketIO | flask | 1,180 | Sending image to client without receiving the image from client. | Hi Sir,
I am developing a system that captures an image of each detected object. The image is stored in a folder on the Flask server. I now want to display the captured image on the web page and have it update in real time; real time here means the page will show the latest image whenever a new object is detected. In this scenario, is it possible to use Flask-SocketIO to push the image so it is displayed on the page, given that the image is captured by the algorithm rather than received from a client? Hope you can explain this issue.
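What I have in mind is something like the sketch below: a helper that grabs the newest capture from the folder and base64-encodes it, which the detection code would then push to the browser. The `socketio.emit` call and the browser side are my assumption of how the push would work:

```python
import base64
import glob
import os

def latest_image_payload(folder: str) -> dict:
    """Pick the newest capture in the folder and base64-encode it for a socket event."""
    paths = glob.glob(os.path.join(folder, "*.jpg"))
    newest = max(paths, key=os.path.getmtime)
    with open(newest, "rb") as fh:
        data = base64.b64encode(fh.read()).decode("ascii")
    return {"name": os.path.basename(newest), "data": data}

# In the Flask-SocketIO app, the detection callback would then do e.g.:
#   socketio.emit("new_capture", latest_image_payload(CAPTURE_DIR))
# and the browser would set: img.src = "data:image/jpeg;base64," + msg.data
```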
| closed | 2020-02-10T09:21:42Z | 2020-06-30T23:00:22Z | https://github.com/miguelgrinberg/Flask-SocketIO/issues/1180 | [
"question"
] | iszzul | 11 |
ultralytics/ultralytics | computer-vision | 19,191 | How to align image preprocessing between model.val() and model.predict()? | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
I am getting different mAP results from these two scenarios on my custom dataset:
1) `model.val(data="path/to/yolo/dataset.yaml", imgsz=640, conf=0.001, iou=0.6, max_det=100, save_json=True)`
- This saves a COCO-style json results file.
- Use pycocotools API to load and compute AP/AR.
- Achieves 69.0 mAP.
2) `results = model.predict("path/to/val/images/", imgsz=640, conf=0.001, iou=0.6, max_det=100)`
- From here, I manually change results from YOLO-style to COCO-style and save them in a json results file.
- Use pycocotools API to load and compute AP/AR.
- Achieves 65.5 mAP.
As you can see, I am using the same parameter values (e.g., `conf`, `iou`, `max_det`) in both cases. From other threads, it seems this difference is likely due to different image preprocessing pipelines (padding, rect, etc.). I know that this is not an issue with mAP computation because I am using the exact same script (that uses pycocotools API) to conduct the evaluation in both cases.
Being new to this ultralytics repo, my simple question is: **What do I need to change in the code to make model.predict() preprocess the images the same as model.val()?**
Additional info:
- The exact same validation image directory is used in both cases.
- My raw input images are of variable size. Some are square (256x256, 512x512, 800x800, etc.), some are not (256x480, etc.)
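To make the preprocessing-geometry question concrete, here is my understanding of the stride-aligned letterbox math (an assumption about what rect-style `val()` preprocessing computes, not Ultralytics' actual code):

```python
def letterbox_shape(h: int, w: int, imgsz: int = 640, stride: int = 32):
    """Resized size and extra padding for aspect-preserving letterboxing."""
    r = min(imgsz / h, imgsz / w)              # uniform scale factor
    new_h, new_w = round(h * r), round(w * r)  # resized image size
    pad_h = (-new_h) % stride                  # pad up to a stride multiple
    pad_w = (-new_w) % stride
    return new_h, new_w, pad_h, pad_w

# A 256x480 frame scales to 341x640 and needs only 11 px of vertical padding
# in rect mode, versus padding all the way out to a 640x640 square.
print(letterbox_shape(256, 480))  # -> (341, 640, 11, 0)
```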
### Additional
_No response_ | open | 2025-02-11T20:05:37Z | 2025-02-19T13:08:07Z | https://github.com/ultralytics/ultralytics/issues/19191 | [
"question",
"detect"
] | MatthewInkawhich | 6 |
nolar/kopf | asyncio | 445 | [archival placeholder] | This is a placeholder for later issues/prs archival.
It is needed now to reserve the initial issue numbers before going with actual development (PRs), so that later these placeholders could be populated with actual archived issues & prs with proper intra-repo cross-linking preserved. | closed | 2020-08-18T20:06:48Z | 2020-08-18T20:06:50Z | https://github.com/nolar/kopf/issues/445 | [
"archive"
] | kopf-archiver[bot] | 0 |
sqlalchemy/alembic | sqlalchemy | 654 | False change for unique constraint with case-sensitive names in 1.4.0 | PostgreSQL, with a unique constraint whose name contains uppercase letters. `_compare_indexes_and_uniques` takes the name as `_constraint_sig.name` (just the name, without quotes) for a constraint from the connection, but as `_constraint_sig.md_name_to_sql_name(context)` (which gives the name in double quotes) for a constraint from the metadata. That causes false change detection: autogenerate produces a script that drops and recreates the same constraint.
The following change solves the problem:
```
--- a/alembic/autogenerate/compare.py
+++ b/alembic/autogenerate/compare.py
@@ -541,7 +541,7 @@ def _compare_indexes_and_uniques(
conn_uniques_by_name = dict((c.name, c) for c in conn_unique_constraints)
conn_indexes_by_name = dict((c.name, c) for c in conn_indexes)
conn_names = dict(
- (c.name, c)
+ (c.md_name_to_sql_name(autogen_context), c)
for c in conn_unique_constraints.union(conn_indexes)
if c.name is not None
)
```
| closed | 2020-02-06T09:57:13Z | 2020-02-06T19:00:57Z | https://github.com/sqlalchemy/alembic/issues/654 | [
"bug",
"autogenerate - detection",
"postgresql",
"regression"
] | ods | 6 |
deepinsight/insightface | pytorch | 2,736 | transfer learning or fine tuning insightface on custom (my own) dataset | I'm currently working on transfer learning with InsightFace using the glint360k_cosface_r100_fp16_0.1 model from the ArcFace Torch section. However, I'm facing issues with either overfitting or underfitting on my dataset, and I'm not sure what I'm doing wrong. Here are the problems I'm encountering:
1. My dataset consists of 127 individuals, with only 7 images per person taken from different angles: front view, 3/4 left and right, upper view, lower view, and profile left and right. This results in a total dataset of 889 images.
2. Initially, I split the data 80% for training and 20% for validation at the folder level, meaning each person has 5 training images and 2 validation images. This led to underfitting, likely due to insufficient training data per person.
3. To address this, I performed data augmentation first and then applied the same 80/20 split. However, this resulted in overfitting, as I suspect the model was "cheating" by memorizing patterns from the augmented images rather than generalizing.
Below is the pseudocode representing my approach:
```
BEGIN
# ---- SETUP ENVIRONMENT ----
SET CUDA and OpenCV paths
SET PyTorch memory allocation config
# ---- IMPORT LIBRARIES ----
IMPORT required libraries (Torch, NumPy, OpenCV, InsightFace, etc.)
# ---- DEFINE FaceDataset CLASS ----
CLASS FaceDataset:
INITIALIZE dataset directory, transformations, and cache
IF cache exists:
LOAD dataset from cache
ELSE:
INITIALIZE face detection model (InsightFace)
SCAN dataset directory
FOR each image folder:
FOR each image:
DETECT face
IF face detected:
CROP and RESIZE to (112,112)
STORE in dataset
SAVE dataset to cache
FUNCTION _detect_face(image):
READ image
CONVERT to RGB
DETECT faces using InsightFace
IF face detected:
CROP, RESIZE, RETURN face
ELSE:
RETURN None
FUNCTION __getitem__(index):
RETURN image and label
FUNCTION __len__():
RETURN number of samples
# ---- DEFINE FaceRecognitionModel CLASS ----
CLASS FaceRecognitionModel:
INITIALIZE ResNet50 backbone
FREEZE lower layers, fine-tune upper layers
ADD fully connected classifier with dropout
FUNCTION forward(input):
PASS through backbone
PASS through classifier head
RETURN output
# ---- DEFINE TRAINING FUNCTION ----
FUNCTION train_model(model, train_loader, val_loader, criterion, optimizer, scheduler, num_epochs):
INITIALIZE metrics storage
SET early stopping threshold
FOR epoch in range(num_epochs):
IF warm-up phase:
ADJUST learning rate
# ---- TRAIN PHASE ----
SET model to training mode
FOR batch in train_loader:
LOAD input images and labels
COMPUTE predictions
CALCULATE loss
BACKPROPAGATE and update weights
# ---- VALIDATION PHASE ----
SET model to evaluation mode
FOR batch in val_loader:
COMPUTE predictions
CALCULATE validation loss
UPDATE scheduler with validation loss
CHECK for early stopping condition
RETURN best model
# ---- DEFINE VALIDATION ----
FUNCTION split_data():
EXTRACT person identity from filenames
PERFORM GroupShuffleSplit to avoid identity leakage
RETURN train and validation indices
# ---- DEFINE DATA AUGMENTATION ----
FUNCTION get_transforms():
RETURN image augmentation pipeline (flip, resize, normalize)
# ---- DEFINE ONNX EXPORT FUNCTION ----
FUNCTION export_to_onnx(model, save_path):
CONVERT PyTorch model to ONNX format
VERIFY conversion
RETURN ONNX model
# ---- MAIN FUNCTION ----
FUNCTION main():
SET dataset path, cache directory, and logging path
INITIALIZE dataset with caching enabled
SPLIT dataset ensuring unique individuals in train and validation
APPLY data augmentation
CREATE data loaders for training and validation
# ---- MODEL INITIALIZATION ----
LOAD ResNet50 backbone
INITIALIZE FaceRecognitionModel
SET loss function, optimizer, and scheduler
# ---- TRAIN THE MODEL ----
CALL train_model()
# ---- EXPORT TRAINED MODEL ----
CALL export_to_onnx()
PRINT "Training Complete!"
# ---- RUN MAIN FUNCTION ----
IF __name__ == "__main__":
CALL main()
END
```
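For reference, here is the identity-level split I believe I should be doing (split people first, then augment only the training side afterwards); a pure-Python sketch assuming filenames like `person_view.jpg`:

```python
import random

def identity_split(filenames, val_frac=0.2, seed=0):
    """Hold out whole identities, so no person appears in both splits."""
    people = sorted({name.split("_")[0] for name in filenames})
    rng = random.Random(seed)
    rng.shuffle(people)
    n_val = max(1, int(len(people) * val_frac))
    val_people = set(people[:n_val])
    train = [f for f in filenames if f.split("_")[0] not in val_people]
    val = [f for f in filenames if f.split("_")[0] in val_people]
    return train, val  # augmentation should then touch `train` only

files = [f"p{i:03d}_{view}.jpg" for i in range(10) for view in ("front", "left", "right")]
train, val = identity_split(files)
```

Splitting this way before any augmentation would prevent augmented copies of the same photo from leaking into validation.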
Since I'm new to this field, I would really appreciate detailed feedback on what mistakes I might be making in my approach or code. Thanks in advance! | open | 2025-03-18T06:06:52Z | 2025-03-18T06:07:22Z | https://github.com/deepinsight/insightface/issues/2736 | [] | acel122 | 0 |
mwaskom/seaborn | matplotlib | 3,784 | [Feature Request] style parameter in `displot`, `catplot` and `lmplot` similar to `relplot` | Currently, `relplot` has `style` parameter that provides an additional way to "facet" the data beyond col, row and hue using `linestyle`. It would be nice if this was extended to the other figure plot types. This would also lead to a more consistent API across the different facet grid plots.
- `displot` - kdeplot and ecdfplot would change `linestyle`, histplot would change patch `hatching`.
- `catplot` - stripplot, swarmplot would change `linestyle`; boxplot, violinplot, boxenplot, barplot and countplot would change `hatching`, pointplot would change `linestyle` and `marker`
- `lmplot` - would change `marker` and `linestyle`
References for supporting in underlying matplotlib.
https://matplotlib.org/stable/gallery/lines_bars_and_markers/linestyles.html
https://matplotlib.org/stable/gallery/lines_bars_and_markers/marker_reference.html
https://matplotlib.org/stable/gallery/shapes_and_collections/hatch_style_reference.html | closed | 2024-11-13T21:48:27Z | 2024-11-13T22:56:40Z | https://github.com/mwaskom/seaborn/issues/3784 | [] | hguturu | 4 |
roboflow/supervision | deep-learning | 1,429 | Autodistill or Reparameterize? | ### Search before asking
- [X] I have searched the Supervision [issues](https://github.com/roboflow/supervision/issues) and found no similar feature requests.
### Question
So Roboflow provides a framework with Autodistill to transfer knowledge from larger foundation models into smaller, faster models on custom data: https://roboflow.com/train/yolo-world-and-yolov8. I'm just curious about the differences between this framework and reparameterization of YOLO-World with the same custom dataset to improve efficiency on custom datasets (https://github.com/AILab-CVC/YOLO-World/blob/master/docs/reparameterize.md). From the YOLO-World paper, it does seem that reparameterization, at least for the COCO dataset's vocabulary, performs slightly better than fine-tuned YOLOv8.

Just wondering: what are the merits of each method? Has anybody evaluated either approach, and which would be the recommended one? Thanks!
### Additional
_No response_ | closed | 2024-08-05T11:32:36Z | 2024-08-06T09:37:31Z | https://github.com/roboflow/supervision/issues/1429 | [
"question"
] | adrielkuek | 1 |
ultralytics/ultralytics | computer-vision | 19,754 | How to get the orientation of the bounding box | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
Hello everyone,
I got handed an object detection project that uses RT-DETR and creates bounding boxes around the objects. In the code I see the pixel coordinates x1, y1 and x2, y2, which I assume are the top-left and bottom-right corners of the box. Can I extract the other corners as well, or rather, which properties can I get from the bounding box? My goal is to determine the orientation of the object (relative to the x or y axis of the camera). So values like an angle, width, or length would help, if that is possible.
Thanks in advance!
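For context, this is what I understand can be derived from an axis-aligned `x1, y1, x2, y2` box (a small illustrative helper, not part of the RT-DETR API). Note that such a box carries no rotation angle, so true orientation would need an oriented-box model or keypoints:

```python
def box_geometry(x1, y1, x2, y2):
    """All four corners plus width/height of an axis-aligned box."""
    corners = {
        "top_left": (x1, y1),
        "top_right": (x2, y1),
        "bottom_right": (x2, y2),
        "bottom_left": (x1, y2),
    }
    width, height = x2 - x1, y2 - y1
    return corners, width, height

corners, width, height = box_geometry(10, 20, 50, 80)
print(width, height)  # -> 40 60
```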
### Additional
_No response_ | closed | 2025-03-18T09:02:07Z | 2025-03-18T22:22:05Z | https://github.com/ultralytics/ultralytics/issues/19754 | [
"question",
"detect"
] | Cody-Vu | 4 |
deezer/spleeter | tensorflow | 5 | 'spleeter' is not recognized as an internal or external command | I followed the [Quick Start](https://github.com/deezer/spleeter#quick-start) directions, but I get this error when I try using spleeter. Did I miss something? Do I need to manually add spleeter to my system path? I'm on Windows 10.

| closed | 2019-11-03T01:28:58Z | 2021-05-16T14:26:54Z | https://github.com/deezer/spleeter/issues/5 | [
"bug",
"documentation",
"enhancement",
"distribution",
"windows"
] | EdjeElectronics | 8 |
521xueweihan/HelloGitHub | python | 2,123 | Python | ## Project recommendation
- Project URL: only open-source projects on GitHub are accepted; please provide the GitHub project URL
- Category: please choose from (C, C#, C++, CSS, Go, Java, JS, Kotlin, Objective-C, PHP, Python, Ruby, Swift, Other, Books, Machine Learning)
- Future update plans for the project:
- Project description:
  - Required: what the project is, what it can be used for, and what features it has or what pain points it solves
  - Optional: what scenarios it suits and what beginners can learn from it
  - Description length (not including sample code): 10-256 characters
- Reason for recommending: what makes it stand out? What pain point does it solve?
- Sample code: (optional) length: 1-20 lines
- Screenshots: (optional) gif/png/jpg
## Notes (please delete the following when submitting)
> Click "Preview" above to read the following content more easily.
Ways to increase the chance of your project being accepted:
1. Go to the HelloGitHub homepage: https://hellogithub.com and search for the project URL you want to recommend, to check whether it has already been recommended.
2. Adjust the project according to the [project review criteria](https://github.com/521xueweihan/HelloGitHub/issues/271)
3. If the project you recommend is included in the HelloGitHub monthly, your GitHub account will be shown in the [contributors list](https://github.com/521xueweihan/HelloGitHub/blob/master/content/contributors.md), **and you will be notified in this issue**.
Thanks again for your support of the HelloGitHub project!
| closed | 2022-03-12T11:59:51Z | 2022-03-12T11:59:56Z | https://github.com/521xueweihan/HelloGitHub/issues/2123 | [
"恶意issue"
] | Azathoth-su | 1 |
google-research/bert | nlp | 767 | Limit GPU usage to specific GPU device | What's the best way of limiting `run_pretraining.py` to run on a single specific GPU device?
I can't see anything in configurations, and most docs refer to in-code `tf.device`. Thanks!
Anything better than inserting the following between the `os` and tensorflow imports?
```python
os.environ["CUDA_DEVICE_ORDER"]="PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"]="0" # specify which GPU(s) to be used
``` | open | 2019-07-16T13:58:28Z | 2020-01-17T01:15:08Z | https://github.com/google-research/bert/issues/767 | [] | andehr | 2 |
samuelcolvin/watchfiles | asyncio | 186 | Version 1? | I think watchfiles is pretty stable and we should release version 1.
Unless anyone has any problems with that, or urgent bugs to fix/features to add; I'll release v1.0.0b1 in about a week. | closed | 2022-09-11T14:24:32Z | 2024-11-29T16:30:19Z | https://github.com/samuelcolvin/watchfiles/issues/186 | [] | samuelcolvin | 3 |
mars-project/mars | scikit-learn | 3,026 | Showing current running subtasks on worker web UI | <!--
Thank you for your contribution!
Please review https://github.com/mars-project/mars/blob/master/CONTRIBUTING.rst before opening an issue.
-->
**Is your feature request related to a problem? Please describe.**
Currently, we cannot tell from the web UI what each worker is executing; we need to show the running subtasks there.
| open | 2022-05-12T07:48:16Z | 2022-05-28T09:44:15Z | https://github.com/mars-project/mars/issues/3026 | [
"type: feature",
"mod: web",
"mod: scheduling service"
] | qinxuye | 0 |
Josh-XT/AGiXT | automation | 589 | Vercel deployment | ### Feature/Improvement Description
Vercel deployment button
### Proposed Solution
Vercel deployment button
And
Readme.md with instructions
### Acknowledgements
- [X] I have searched the existing issues to make sure this feature has not been requested yet.
- [X] I have provided enough information for everyone to understand why this feature request is needed in AGiXT. | closed | 2023-06-06T14:30:21Z | 2023-09-27T14:34:29Z | https://github.com/Josh-XT/AGiXT/issues/589 | [
"type | request | enhancement"
] | johnfelipe | 1 |
paperless-ngx/paperless-ngx | machine-learning | 8,913 | [BUG] Duplicate Tags | ### Description
I add tags via workflows. If two workflows assign the same tag, it is then saved twice on the document.
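What I would expect instead is order-preserving de-duplication of the assigned tags, e.g. (illustrative only, not Paperless-ngx code):

```python
tags = ["Invoice", "2025", "Invoice"]  # what two overlapping workflows produced
deduped = list(dict.fromkeys(tags))    # keep first occurrence, preserve order
print(deduped)  # -> ['Invoice', '2025']
```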
### Steps to reproduce
-
### Webserver logs
```bash
-
```
### Browser logs
```bash
```
### Paperless-ngx version
2.14.4
### Host OS
Synology NAS
### Installation method
Docker - official image
### System status
```json
```
### Browser
_No response_
### Configuration changes
_No response_
### Please confirm the following
- [x] I believe this issue is a bug that affects all users of Paperless-ngx, not something specific to my installation.
- [x] This issue is not about the OCR or archive creation of a specific file(s). Otherwise, please see above regarding OCR tools.
- [x] I have already searched for relevant existing issues and discussions before opening this report.
- [x] I have updated the title field above with a concise description. | closed | 2025-01-26T14:56:17Z | 2025-01-26T15:18:16Z | https://github.com/paperless-ngx/paperless-ngx/issues/8913 | [
"not a bug"
] | MarlonGnauck | 0 |
biolab/orange3 | data-visualization | 6,796 | Text annotation : info on how to render as html... visible in the left down corner | **What's your use case?**
Better documentation, stress on some words, text formatting of annotations and settings link to content.
<!-- Is your request related to a problem, or perhaps a frustration? -->
<!-- Tell us the story that led you to write this request. -->
Since realizing in #6641 that we can render text annotations as HTML, I'm still frustrated by how hidden this information is, given how important it is. This is a really nice feature, but I'm pretty sure most users don't know about it.
**What's your proposed solution?**
Write in the lower-left corner of the widget panel (I don't know what this area is called) how to render as HTML, etc.

**Are there any alternative solutions?**
N/A
| closed | 2024-05-04T07:36:23Z | 2024-05-27T09:58:36Z | https://github.com/biolab/orange3/issues/6796 | [] | simonaubertbd | 1 |
open-mmlab/mmdetection | pytorch | 11,106 | swin definition mismatch with weight | Dear author:
I found that the parameter naming of the SwinTransformer class in mmdet/models/backbones/swin.py does not match the weight files in https://github.com/open-mmlab/mmdetection/tree/main/configs/swin. Yet when the config file sets "init_cfg=dict(type='Pretrained', checkpoint=pretrained)", the weights can still be used for initialization. Where is this function implemented? It seems to be in BaseModule, but I can't find it there. Can you help me?
| closed | 2023-10-31T03:23:17Z | 2023-10-31T06:11:11Z | https://github.com/open-mmlab/mmdetection/issues/11106 | [] | TianyuLee | 2 |
zappa/Zappa | django | 584 | [Migrated] Support for generating slimmer packages | Originally from: https://github.com/Miserlou/Zappa/issues/1525 by [figelwump](https://github.com/figelwump)
Currently zappa packaging will include all pip packages installed in the virtualenv. Installing zappa in the venv brings in a ton of dependencies. Depending on the app's actual needs, most/all of these don't actually need to be packaged and shipped to lambda. This unnecessarily increases the size of the package which makes zappa deploy/update much slower than it would otherwise.
As an example, for a simple hello world app, the package is over 8MB. The vast majority of this data is unneeded.
A possible approach here is to have an option to:
- don't package up anything from venv
- use requirements.txt in a way that doesn't slow deploy down
I see #525 and #542 but they don't seem to be resolved yet. Let me know if I'm missing anything! | closed | 2021-02-20T12:23:06Z | 2024-04-13T17:09:45Z | https://github.com/zappa/Zappa/issues/584 | [
"feature-request",
"no-activity",
"auto-closed"
] | jneves | 2 |
nerfstudio-project/nerfstudio | computer-vision | 3,301 | Cannot train train with splatfacto-w | **Describe the bug**
When running splatfacto-w, it fails to run. I cannot work out what is meant by "To train with it, download the train/test tsv file from the bottom of [nerf-w](https://nerf-w.github.io/) and put it under the data folder (or copy them from .\splatfacto-w\dataset_split). For instance, for Brandenburg Gate the path would be splatfacto-w\data\brandenburg_gate\brandenburg.tsv." at https://docs.nerf.studio/nerfology/methods/splatw.html | open | 2024-07-11T12:04:13Z | 2025-01-30T17:48:40Z | https://github.com/nerfstudio-project/nerfstudio/issues/3301 | [] | sion3951 | 3 |
huggingface/datasets | numpy | 7,346 | OSError: Invalid flatbuffers message. | ### Describe the bug
When loading a large 2D data (1000 × 1152) with a large number of (2,000 data in this case) in `load_dataset`, the error message `OSError: Invalid flatbuffers message` is reported.
When only 300 pieces of data of this size (1000 × 1152) are stored, they can be loaded correctly.
When 2,000 2D arrays are stored in each file, about 100 files are generated, each with a file size of about 5-6GB. But when 300 2D arrays are stored in each file, **about 600 files are generated, which is too many files**.
### Steps to reproduce the bug
error:
```python
---------------------------------------------------------------------------
OSError Traceback (most recent call last)
Cell In[2], line 4
1 from datasets import Dataset
2 from datasets import load_dataset
----> 4 real_dataset = load_dataset("arrow", data_files='tensorData/real_ResidueTensor/*', split="train")#.with_format("torch") # , split="train"
5 # sim_dataset = load_dataset("arrow", data_files='tensorData/sim_ResidueTensor/*', split="train").with_format("torch")
6 real_dataset
File ~/miniforge3/envs/esmIne3/lib/python3.12/site-packages/datasets/load.py:2151, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, keep_in_memory, save_infos, revision, token, streaming, num_proc, storage_options, trust_remote_code, **config_kwargs)
2148 return builder_instance.as_streaming_dataset(split=split)
2150 # Download and prepare data
-> 2151 builder_instance.download_and_prepare(
2152 download_config=download_config,
2153 download_mode=download_mode,
2154 verification_mode=verification_mode,
2155 num_proc=num_proc,
2156 storage_options=storage_options,
2157 )
2159 # Build dataset for splits
2160 keep_in_memory = (
2161 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size)
2162 )
File ~/miniforge3/envs/esmIne3/lib/python3.12/site-packages/datasets/builder.py:924, in DatasetBuilder.download_and_prepare(self, output_dir, download_config, download_mode, verification_mode, dl_manager, base_path, file_format, max_shard_size, num_proc, storage_options, **download_and_prepare_kwargs)
922 if num_proc is not None:
923 prepare_split_kwargs["num_proc"] = num_proc
--> 924 self._download_and_prepare(
925 dl_manager=dl_manager,
926 verification_mode=verification_mode,
927 **prepare_split_kwargs,
928 **download_and_prepare_kwargs,
929 )
930 # Sync info
931 self.info.dataset_size = sum(split.num_bytes for split in self.info.splits.values())
File ~/miniforge3/envs/esmIne3/lib/python3.12/site-packages/datasets/builder.py:978, in DatasetBuilder._download_and_prepare(self, dl_manager, verification_mode, **prepare_split_kwargs)
976 split_dict = SplitDict(dataset_name=self.dataset_name)
977 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs)
--> 978 split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
980 # Checksums verification
981 if verification_mode == VerificationMode.ALL_CHECKS and dl_manager.record_checksums:
File ~/miniforge3/envs/esmIne3/lib/python3.12/site-packages/datasets/packaged_modules/arrow/arrow.py:47, in Arrow._split_generators(self, dl_manager)
45 with open(file, "rb") as f:
46 try:
---> 47 reader = pa.ipc.open_stream(f)
48 except pa.lib.ArrowInvalid:
49 reader = pa.ipc.open_file(f)
File ~/miniforge3/envs/esmIne3/lib/python3.12/site-packages/pyarrow/ipc.py:190, in open_stream(source, options, memory_pool)
171 def open_stream(source, *, options=None, memory_pool=None):
172 """
173 Create reader for Arrow streaming format.
174
(...)
188 A reader for the given source
189 """
--> 190 return RecordBatchStreamReader(source, options=options,
191 memory_pool=memory_pool)
File ~/miniforge3/envs/esmIne3/lib/python3.12/site-packages/pyarrow/ipc.py:52, in RecordBatchStreamReader.__init__(self, source, options, memory_pool)
50 def __init__(self, source, *, options=None, memory_pool=None):
51 options = _ensure_default_ipc_read_options(options)
---> 52 self._open(source, options=options, memory_pool=memory_pool)
File ~/miniforge3/envs/esmIne3/lib/python3.12/site-packages/pyarrow/ipc.pxi:1006, in pyarrow.lib._RecordBatchStreamReader._open()
File ~/miniforge3/envs/esmIne3/lib/python3.12/site-packages/pyarrow/error.pxi:155, in pyarrow.lib.pyarrow_internal_check_status()
File ~/miniforge3/envs/esmIne3/lib/python3.12/site-packages/pyarrow/error.pxi:92, in pyarrow.lib.check_status()
OSError: Invalid flatbuffers message.
```
Reproduce: here is just an example; the real 2D matrices are outputs of the ESM large model, and the matrix size is approximate:
```python
import numpy as np
import pyarrow as pa
random_arrays_list = [np.random.rand(1000, 1152) for _ in range(2000)]
table = pa.Table.from_pydict({
'tensor': [tensor.tolist() for tensor in random_arrays_list]
})
import pyarrow.feather as feather
feather.write_feather(table, 'test.arrow')
from datasets import load_dataset
dataset = load_dataset("arrow", data_files='test.arrow', split="train")
```
### Expected behavior
`load_dataset` should load the dataset normally, just as `feather.read_feather` does:
```python
import pyarrow.feather as feather
feather.read_feather('tensorData/real_ResidueTensor/real_tensor_1.arrow')
```
Plus `load_dataset("parquet", data_files='test.arrow', split="train")` works fine
### Environment info
- `datasets` version: 3.2.0
- Platform: Linux-6.8.0-49-generic-x86_64-with-glibc2.39
- Python version: 3.12.3
- `huggingface_hub` version: 0.26.5
- PyArrow version: 18.1.0
- Pandas version: 2.2.3
- `fsspec` version: 2024.9.0
| closed | 2024-12-25T11:38:52Z | 2025-01-09T14:25:29Z | https://github.com/huggingface/datasets/issues/7346 | [] | antecede | 3 |
LibreTranslate/LibreTranslate | api | 743 | This is the most basic translation. | <img width="1265" alt="Image" src="https://github.com/user-attachments/assets/ae9ddfdc-29c5-4a40-861f-851e9068758c" />
<img width="1275" alt="Image" src="https://github.com/user-attachments/assets/05aaf0b0-b8c6-4843-919e-622d356e97a8" />
How to solve this problem? | closed | 2025-02-18T03:59:15Z | 2025-02-18T03:59:29Z | https://github.com/LibreTranslate/LibreTranslate/issues/743 | [] | kingcwt | 1 |
deezer/spleeter | deep-learning | 281 | [Feature] Add a `used by` section in the README | Hi
I've used spleeter to build https://www.edityouraudio.com, a free service to split your song (and also to generate a karaoke out of it) and I've found a lot of services that had the same kind of idea.
I think it could be interesting to have a `used by` section in the readme where people using Spleeter could add their own website.
I can make a PR if you agree with this idea
Thanks again for your work!
| closed | 2020-02-27T04:54:31Z | 2020-02-27T17:31:32Z | https://github.com/deezer/spleeter/issues/281 | [
"enhancement",
"feature"
] | martinratinaud | 2 |
rougier/scientific-visualization-book | matplotlib | 7 | Visualizations | open | 2021-08-11T23:06:12Z | 2021-08-11T23:06:12Z | https://github.com/rougier/scientific-visualization-book/issues/7 | [] | morr-debug | 0 | |
jpadilla/django-rest-framework-jwt | django | 93 | TypeError: verify_signature() got an unexpected keyword argument 'algorithms' | I getting this error using version 1.4.0
``` sh
(jwt-auth) $>pip freeze
Django==1.7.5
djangorestframework==3.0.0
djangorestframework-jwt==1.4.0
PyJWT==0.4.3
```
and I've checked the code; something seems related to this commit by @jpadilla: 57c63a525ac5d99c1fbfdb2cee511b067e333a70 `Set allowed algorithms when decoding`
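The root cause is an older PyJWT whose `verify_signature` predates the `algorithms` keyword, so upgrading PyJWT is the real fix. Purely as an illustration (the helper and the two stand-in functions below are hypothetical, not part of rest_framework_jwt), a caller can feature-detect the keyword before passing it:

```python
import inspect

def decode_compat(decode_fn, token, key, algorithms):
    """Pass `algorithms` only if the installed decode function accepts it."""
    params = inspect.signature(decode_fn).parameters
    accepts = "algorithms" in params or any(
        p.kind is inspect.Parameter.VAR_KEYWORD for p in params.values()
    )
    kwargs = {"algorithms": algorithms} if accepts else {}
    return decode_fn(token, key, **kwargs)

# Stand-ins for an old and a new PyJWT-style decode:
def old_decode(token, key):
    return {"token": token, "key": key}

def new_decode(token, key, algorithms=None):
    return {"token": token, "key": key, "algorithms": algorithms}
```

This mirrors how libraries shim across incompatible dependency versions instead of hard-coding one signature.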
| closed | 2015-04-01T18:05:24Z | 2015-04-04T13:42:36Z | https://github.com/jpadilla/django-rest-framework-jwt/issues/93 | [] | ace-han | 1 |
gradio-app/gradio | machine-learning | 10,422 | gr.File list container with scroll: the upload container event overlays the remove-file "x" | ### Describe the bug
If gr.File() is in multiple or directory mode and the file list overflows with a scroll,
the "X" on each file line is overlaid by the upload/directory/drop-files event container,
so clicking the "X" of a single file triggers the select directory/file event.
### Have you searched existing issues? 🔎
- [x] I have searched and found no existing issues
### Reproduction
```python
import gradio as gr
file_list = gr.File(file_count='directory') # or multiple
```
### Screenshot

### Logs
```shell
```
### System Info
```shell
All
```
### Severity
I can work around it | open | 2025-01-24T00:27:14Z | 2025-01-27T04:02:53Z | https://github.com/gradio-app/gradio/issues/10422 | [
"bug"
] | ROBERT-MCDOWELL | 2 |
openapi-generators/openapi-python-client | rest-api | 483 | Dependency Dashboard | This issue lists Renovate updates and detected dependencies. Read the [Dependency Dashboard](https://docs.renovatebot.com/key-concepts/dashboard/) docs to learn more.<br>[View this repository on the Mend.io Web Portal](https://developer.mend.io/github/openapi-generators/openapi-python-client).
## Config Migration Needed
- [ ] <!-- create-config-migration-pr --> Select this checkbox to let Renovate create an automated Config Migration PR.
## Awaiting Schedule
These updates are awaiting their schedule. Click on a checkbox to get an update now.
- [ ] <!-- unschedule-branch=renovate/lock-file-maintenance -->chore(deps): lock file maintenance
## Ignored or Blocked
These are blocked by an existing closed PR and will not be recreated unless you click a checkbox below.
- [ ] <!-- recreate-branch=renovate/python-3.x -->[chore(deps): update dependency python to v3.13.2](../pull/1217)
## Detected dependencies
<details><summary>github-actions</summary>
<blockquote>
<details><summary>.github/workflows/checks.yml</summary>
- `actions/checkout v4.2.2`
- `actions/setup-python v5.4.0`
- `actions/cache v4`
- `actions/upload-artifact v4.6.2`
- `actions/checkout v4.2.2`
- `actions/setup-python v5.4.0`
- `actions/cache v4`
- `actions/checkout v4.2.2`
- `actions/setup-python v5`
- `actions/download-artifact v4.2.1`
- `actions/upload-artifact v4.6.2`
- `actions/checkout v4.2.2`
- `actions/setup-python v5.4.0`
- `actions/cache v4`
- `actions/cache v4`
- `python 3.9`
- `python 3.12`
- `ghcr.io/openapi-generators/openapi-test-server 0.0.1`
- `python 3.9`
</details>
<details><summary>.github/workflows/release.yml</summary>
- `actions/checkout v4.2.2`
- `pypa/gh-action-pypi-publish v1.12.4`
</details>
</blockquote>
</details>
<details><summary>pep621</summary>
<blockquote>
<details><summary>end_to_end_tests/docstrings-on-attributes-golden-record/pyproject.toml</summary>
- `poetry-core >=1.0.0`
</details>
<details><summary>end_to_end_tests/golden-record/pyproject.toml</summary>
- `poetry-core >=1.0.0`
</details>
<details><summary>end_to_end_tests/literal-enums-golden-record/pyproject.toml</summary>
- `poetry-core >=1.0.0`
</details>
<details><summary>end_to_end_tests/test-3-1-golden-record/pyproject.toml</summary>
- `poetry-core >=1.0.0`
</details>
<details><summary>integration-tests/pyproject.toml</summary>
- `httpx >=0.20.0,<0.29.0`
- `attrs >=22.2.0`
- `python-dateutil >=2.8.0`
- `pytest-asyncio >=0.23.5`
</details>
<details><summary>pyproject.toml</summary>
- `jinja2 >=3.0.0,<4.0.0`
- `typer >0.6,<0.16`
- `colorama >=0.4.3`
- `shellingham >=1.3.2,<2.0.0`
- `pydantic >=2.10,<3.0.0`
- `attrs >=22.2.0`
- `python-dateutil >=2.8.1,<3.0.0`
- `httpx >=0.20.0,<0.29.0`
- `ruamel.yaml >=0.18.6,<0.19.0`
- `ruff >=0.2,<0.12`
- `typing-extensions >=4.8.0,<5.0.0`
- `pytest >8`
- `pytest-mock >3`
- `mypy >=1.13`
- `types-pyyaml <7.0.0,>=6.0.3`
- `types-certifi <2021.10.9,>=2020.0.0`
- `types-python-dateutil <3.0.0,>=2.0.0`
- `syrupy >=4`
</details>
</blockquote>
</details>
<details><summary>pip_setup</summary>
<blockquote>
<details><summary>end_to_end_tests/metadata_snapshots/setup.py</summary>
- `httpx >= 0.20.0, < 0.29.0`
- `attrs >= 22.2.0`
- `python-dateutil >= 2.8.0, < 3`
</details>
</blockquote>
</details>
<details><summary>poetry</summary>
<blockquote>
<details><summary>end_to_end_tests/docstrings-on-attributes-golden-record/pyproject.toml</summary>
- `python ^3.9`
- `httpx >=0.20.0,<0.29.0`
- `attrs >=22.2.0`
- `python-dateutil ^2.8.0`
</details>
<details><summary>end_to_end_tests/golden-record/pyproject.toml</summary>
- `python ^3.9`
- `httpx >=0.20.0,<0.29.0`
- `attrs >=22.2.0`
- `python-dateutil ^2.8.0`
</details>
<details><summary>end_to_end_tests/literal-enums-golden-record/pyproject.toml</summary>
- `python ^3.9`
- `httpx >=0.20.0,<0.29.0`
- `attrs >=22.2.0`
- `python-dateutil ^2.8.0`
</details>
<details><summary>end_to_end_tests/test-3-1-golden-record/pyproject.toml</summary>
- `python ^3.9`
- `httpx >=0.20.0,<0.29.0`
- `attrs >=22.2.0`
- `python-dateutil ^2.8.0`
</details>
</blockquote>
</details>
---
- [ ] <!-- manual job -->Check this box to trigger a request for Renovate to run again on this repository
| open | 2021-08-25T04:53:06Z | 2025-03-24T07:05:10Z | https://github.com/openapi-generators/openapi-python-client/issues/483 | [] | renovate[bot] | 0 |
predict-idlab/plotly-resampler | plotly | 265 | [BUG] FigureResampler show_dash in JupyterLab fails to display plot | **Describe the bug** :crayon:
>A clear and concise description of what the bug is.
Plotly-resampler figureresampler fails to load dash figures in a jupyterhub+jupyterlab on kubernetes (i.e. z2jh) environment. With jupyterhub on kubernetes, user notebook servers are spawned as pods and user access to those servers are proxied via jupyterhub.
Dash requires requests_pathname_prefix on init() and jupyter_server_url on run for it to display inline or provide correct link for external display. `show_dash_kwargs` is only passed to dash on run() and not init() so end result is it doesn't work.
**Reproducing the bug** :mag:
> Please provide steps & minimal viable code to reproduce the behavior.
> Giving this information makes it tremendously easier to work on your issue!
Snippet taken from plotly_resampler_basic_example.ipynb, main difference is adding extra dash kwargs for it to work in jupyterhub+jupyterlab on kubernetes behind jupyterhub proxy.
```
import pandas as pd
import numpy as np
import plotly.graph_objects as go
from plotly.subplots import make_subplots
import sys
# BEGIN Additional portion for jupyterhub
import os
import plotly.io as pio
pio.renderers.default = 'iframe' # or 'colab' or 'iframe' or 'iframe_connected' or 'sphinx_gallery'
port = 7004
service_prefix = os.getenv('JUPYTERHUB_SERVICE_PREFIX', '/')
domain = os.getenv('JUPYTERHUB_HTTP_REFERER', "https://<my jupyterhub domain>")
requests_pathname_prefix=f"{service_prefix}proxy/{port}/"
show_dash_kwargs={"requests_pathname_prefix":requests_pathname_prefix}
# END additional portion for jupyterhub
sys.path.append("..")
from plotly_resampler import FigureResampler, FigureWidgetResampler, EveryNthPoint
from plotly_resampler.aggregation import NoGapHandler, MedDiffGapHandler
n = 2_000_000
x = np.arange(n)
noisy_sine = (3 + np.sin(x / 2000) + np.random.randn(n) / 10) * x / (n / 4)
fig = FigureResampler(go.Figure(),show_dash_kwargs=show_dash_kwargs)
fig.add_trace(go.Scattergl(showlegend=True), hf_x=x, hf_y=noisy_sine, max_n_samples=300)
fig.add_trace(dict(x=x, y=noisy_sine + 1, name='sp1'), max_n_samples=2000)
fig.show_dash(mode="inline",jupyter_server_url=domain,port=port)
```
**Expected behavior** :wrench:
> Please give a clear and concise description of what you expected to happen.
Ability to pass kwargs to dash.Dash() init() and run() so that plot can display correctly in 'inline' or 'external' modes.
See workaround section for suggestion.
**Screenshots** :camera_flash:
> If applicable, add screenshots to help explain your problem.
Without workaround

With workaround/suggested solution

**Environment information**: (please complete the following information)
- OS: Rocky Linux release 8.8 (Green Obsidian) kernel 5.14.0-284.30.1.el9_2.x86_64
- Python environment:
- Python version: 3.11.6
- plotly-resampler environment: JupyterLab, tested with Chrome/Edge but problem should be browser agnostic
- plotly-resampler version: 0.9.1
```
# pip3 freeze |egrep -i "jupy|dash|plotly"
dash==2.14.0
dash-core-components==2.0.0
dash-html-components==2.0.0
dash-table==5.0.0
jupyter_client==8.4.0
jupyter_core==5.4.0
jupyterlab-widgets==3.0.9
jupytext==1.15.2
plotly==5.17.0 #have also tried with 5.18.0
plotly-resampler==0.9.1
```
**Workaround / Solution**
Allow passing through kwargs to dash init.
Replace `app = dash.Dash("local_app")` with `app = dash.Dash("local_app",**self._show_dash_kwargs)` in `plotly_resampler/figure_resampler/figure_resampler.py`
Example one liner sed to do code replacement
```
sed -i 's/app = dash.Dash("local_app")/app = dash.Dash("local_app",**self._show_dash_kwargs)/g' ..../plotly_resampler/figure_resampler/figure_resampler.py
```
Might be cleaner to have a dash_kwargs in addition to show_dash kwargs though.
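That split, constructor kwargs versus run kwargs, could be realized with a small helper; this is only an illustrative sketch, not the actual plotly-resampler API, and the key names listed are assumptions:

```python
def split_dash_kwargs(all_kwargs,
                      init_keys=("requests_pathname_prefix", "serve_locally")):
    """Separate kwargs meant for dash.Dash(...) from those meant for app.run(...)."""
    init_kwargs = {k: v for k, v in all_kwargs.items() if k in init_keys}
    run_kwargs = {k: v for k, v in all_kwargs.items() if k not in init_keys}
    return init_kwargs, run_kwargs
```

show_dash could then route each half to the right call instead of passing everything to run().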
**Additional context**
Side note, I can't get other modes to work
* figureresampler + inline_persistent -> I don't have jupyter-dash installed since dash 2.11+ natively supports jupyterlab **(expected)**
* figurewidgetresampler + any mode -> get a javascript error indicating it fails to load model class from jupyter-plotly | closed | 2023-10-31T06:46:44Z | 2023-11-24T07:56:43Z | https://github.com/predict-idlab/plotly-resampler/issues/265 | [
"bug"
] | miloaissatu | 2 |
akfamily/akshare | data-science | 5,856 | option_value_analysis_em() 仅返回 100 条数据 | > 欢迎加入专注于财经数据和量化投研的【数据科学实战】知识社区,
> 获取《AKShare-财经数据宝典》,宝典随 AKShare 同步更新,
> 里面汇集了财经数据的使用经验和指南,还分享了众多国内外财经数据源。
> 欢迎加入我们,交流财经数据问题,探索量化投研的世界!
> 详细信息参考:https://akshare.akfamily.xyz/learn.html
## 重要前提
遇到任何 AKShare 使用问题,请先将您本地的 AKShare 升级到**最新版**,可以通过如下命令升级:
```
pip install akshare --upgrade # Python 版本需要大于等于 3.9
```
## 如何提交问题
请提交以下相关信息,以更精准的解决问题。**不符合提交规范的 issue 会被关闭!**
> 由于开源项目维护工作量较大,本 issue 只接受接口报错问题。
如有财经数据方面的问题,请加入【数据科学实战】知识社区提问,感谢支持!
**详细问题描述**
1. 请先详细阅读 AKShare 文档中对应接口的使用方式:https://akshare.akfamily.xyz
2. 请检查操作系统版本,目前只支持 64 位主流操作系统
3. 请检查 Python 版本,目前只支持 3.9 以上的版本
4. 请确认 AKShare 版本,升级到最新版复现问题
akshare==1.16.33
7. 请提交相关接口的名称和相应的调用代码
>>> df = ak.option_value_analysis_em()
>>> df
期权代码 期权名称 最新价 时间价值 内在价值 隐含波动率 理论价格 标的名称 标的最新价 标的近一年波动率 到期日
0 10008930 科创50沽9月1400 0.3023 0.0493 0.253 40.77 0.3041 科创50ETF 1.147 41.38 2025-09-24
1 10008929 科创50购9月1400 0.0349 0.0349 0.000 32.08 0.0614 科创50ETF 1.147 41.38 2025-09-24
2 10008924 科创板50沽9月1350 NaN 0.0433 0.235 38.33 0.2861 科创板50ETF 1.115 41.00 2025-09-24
3 10008923 科创板50购9月1350 0.0345 0.0345 0.000 31.54 0.0610 科创板50ETF 1.115 41.00 2025-09-24
4 10008916 科创板50沽9月1300 NaN 0.0532 0.185 37.65 0.2485 科创板50ETF 1.115 41.00 2025-09-24
.. ... ... ... ... ... ... ... ... ... ... ...
95 10008815 300ETF购9月3900 0.2600 0.1630 0.097 16.50 0.3218 沪深300ETF 3.997 22.00 2025-09-24
96 10008814 300ETF购9月3800 0.3230 0.1260 0.197 16.64 0.3791 沪深300ETF 3.997 22.00 2025-09-24
97 10008813 300ETF购9月3700 0.3877 0.0907 0.297 16.11 0.4426 沪深300ETF 3.997 22.00 2025-09-24
98 10008812 300ETF购9月3600 NaN 0.0842 0.397 18.29 0.5121 沪深300ETF 3.997 22.00 2025-09-24
99 10008811 300ETF购9月3500 0.5500 0.0530 0.497 16.38 0.5872 沪深300ETF 3.997 22.00 2025-09-24
[100 rows x 11 columns]
9. Screenshot or description of the interface error
10. Expected correct result
I expect to get the full list; historically this interface returned roughly 700-800 rows.
Also, #5722 mentioned this problem, and it has reappeared in 1.16.33.
Thanks! | closed | 2025-03-11T03:19:10Z | 2025-03-11T09:42:26Z | https://github.com/akfamily/akshare/issues/5856 | [
"bug"
] | wugifer | 3 |
zappa/Zappa | flask | 1,225 | Zappa pex support | Thank you for the awesome project.
I am using zappa for deploying my monolith django project on aws lambda.
To move to microservices, I am using pantsbuild [django mono-repo ](https://github.com/pantsbuild/example-django) example project.
This allows me to build and deploy separate mini django services as pex files. I can run the pex file on a python-3.8:slim-buster with docker-compose.
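One packaging route for a pex on Lambda is the container-image runtime. The sketch below is entirely hypothetical: the pex name, the paths, and the handler module are assumptions, and it presumes pex's `PEX_TOOLS=1 ... venv` unpacking is available:

```dockerfile
# Hypothetical sketch -- adjust the pex name, Python version, and handler.
FROM public.ecr.aws/lambda/python:3.8

COPY dist/my_service.pex /opt/my_service.pex
# Unpack the pex into a venv layout so the Lambda runtime can import it directly.
RUN PEX_TOOLS=1 python3.8 /opt/my_service.pex venv /opt/service-venv
ENV PYTHONPATH=/opt/service-venv/lib/python3.8/site-packages

CMD ["my_service.handler.lambda_handler"]
```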
My question is: can I use Zappa to neatly deploy the docker-compose service on AWS Lambda? Or is there another way to deploy a pex on AWS Lambda? | closed | 2023-03-13T03:25:30Z | 2023-03-18T09:32:37Z | https://github.com/zappa/Zappa/issues/1225 | [] | ganeshprasadrao | 2 |
ray-project/ray | data-science | 50,928 | [core] Fix mock dependency | Subissue for #50718 | closed | 2025-02-27T00:05:02Z | 2025-03-02T22:03:15Z | https://github.com/ray-project/ray/issues/50928 | [
"enhancement",
"core"
] | Drice1999 | 1 |
huggingface/text-generation-inference | nlp | 3,005 | Quantized BNB-4bit models are not working. | ### System Info
Testing on 2x 4090 TI Super
```
- MODEL_ID=unsloth/Qwen2.5-Coder-32B-bnb-4bit
- MODEL_ID=unsloth/Mistral-Small-24B-Instruct-2501-bnb-4bit
```
```
text-generation-inference-1 | [rank1]: │ /usr/src/server/text_generation_server/utils/weights.py:275 in get_sharded │
text-generation-inference-1 | [rank1]: │ │
text-generation-inference-1 | [rank1]: │ 272 │ │ world_size = self.process_group.size() │
text-generation-inference-1 | [rank1]: │ 273 │ │ size = slice_.get_shape()[dim] │
text-generation-inference-1 | [rank1]: │ 274 │ │ assert ( │
text-generation-inference-1 | [rank1]: │ ❱ 275 │ │ │ size % world_size == 0 │
text-generation-inference-1 | [rank1]: │ 276 │ │ ), f"The choosen size {size} is not compatible with sharding o │
text-generation-inference-1 | [rank1]: │ 277 │ │ return self.get_partial_sharded( │
text-generation-inference-1 | [rank1]: │ 278 │ │ │ tensor_name, dim, to_device=to_device, to_dtype=to_dtype │
text-generation-inference-1 | [rank1]: │ │
text-generation-inference-1 | [rank1]: │ ╭───────────────────────────────── locals ─────────────────────────────────╮ │
text-generation-inference-1 | [rank1]: │ │ dim = 1 │ │
text-generation-inference-1 | [rank1]: │ │ f = <builtins.safe_open object at 0x77297422f7b0> │ │
text-generation-inference-1 | [rank1]: │ │ filename = '/data/hub/models--unsloth--Qwen2.5-Coder-32B-bnb-4bit/sn… │ │
text-generation-inference-1 | [rank1]: │ │ self = <text_generation_server.utils.weights.Weights object at │ │
text-generation-inference-1 | [rank1]: │ │ 0x772972e7a3d0> │ │
text-generation-inference-1 | [rank1]: │ │ size = 1 │ │
text-generation-inference-1 | [rank1]: │ │ slice_ = <builtins.PySafeSlice object at 0x772973da8f80> │ │
text-generation-inference-1 | [rank1]: │ │ tensor_name = 'model.layers.0.self_attn.o_proj.weight' │ │
text-generation-inference-1 | [rank1]: │ │ to_device = True │ │
text-generation-inference-1 | [rank1]: │ │ to_dtype = True │ │
text-generation-inference-1 | [rank1]: │ │ world_size = 2 │ │
text-generation-inference-1 | [rank1]: │ ╰──────────────────────────────────────────────────────────────────────────╯ │
text-generation-inference-1 | [rank1]: ╰──────────────────────────────────────────────────────────────────────────────╯
text-generation-inference-1 | [rank1]: AssertionError: The choosen size 1 is not compatible with sharding on 2 shards rank=1
text-generation-inference-1 | 2025-02-10T12:36:10.058627Z ERROR text_generation_launcher: Shard 1 failed to start
text-generation-inference-1 | 2025-02-10T12:36:10.058637Z INFO text_generation_launcher: Shutting down shards
text-generation-inference-1 | 2025-02-10T12:36:10.065243Z INFO shard-manager: text_generation_launcher: Terminating shard rank=0
text-generation-inference-1 | 2025-02-10T12:36:10.065344Z INFO shard-manager: text_generation_launcher: Waiting for shard to gracefully shutdown rank=0
text-generation-inference-1 | 2025-02-10T12:36:10.165431Z INFO shard-manager: text_generation_launcher: shard terminated rank=0
text-generation-inference-1 | Error: ShardCannotStart
```
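The assertion fires because prequantized bitsandbytes checkpoints store each weight as a packed uint8 tensor of shape (N, 1), two 4-bit values per byte, so dimension 1 has size 1 and cannot be split across 2 shards. A rough illustration in pure NumPy (not the actual bitsandbytes layout):

```python
import numpy as np

out_features, in_features, world_size = 64, 64, 2

# Unquantized fp16 weight: dim 1 (64) splits evenly across 2 GPUs.
w = np.zeros((out_features, in_features), dtype=np.float16)
assert w.shape[1] % world_size == 0

# Packed 4-bit storage: two nibbles per byte, flattened to shape (N, 1).
packed = np.zeros((w.size // 2, 1), dtype=np.uint8)
print(packed.shape)                  # (2048, 1) -> dim 1 has size 1
print(packed.shape[1] % world_size)  # 1: the "size 1 not compatible with 2 shards" check
```

That is why the unquantized checkpoint shards fine with `--quantize bitsandbytes-nf4`, while loading a repo that was already packed to 4-bit trips the sharding check.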
### Information
- [x] Docker
- [ ] The CLI directly
### Tasks
- [x] An officially supported command
- [ ] My own modifications
### Reproduction
```
text-generation-inference:
image: ghcr.io/huggingface/text-generation-inference:3.1.0
environment:
- HF_TOKEN=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
# - MODEL_ID=Qwen/Qwen2.5-Coder-7B-Instruct-AWQ
# - MODEL_ID=mistralai/Mistral-Small-24B-Instruct-2501
# - MODEL_ID=Qwen/Qwen2.5-Coder-32B-Instruct-AWQ
# - MODEL_ID=avoroshilov/DeepSeek-R1-Distill-Qwen-32B-GPTQ_4bit-128g
# - MODEL_ID=Valdemardi/DeepSeek-R1-Distill-Qwen-32B-AWQ
# - MODEL_ID=Qwen/Qwen2.5-Coder-32B-Instruct-GPTQ-Int4
- MODEL_ID=unsloth/Qwen2.5-Coder-32B-bnb-4bit
# - MODEL_ID=unsloth/Mistral-Small-24B-Instruct-2501-bnb-4bit
# - SHARDED=true
# - SHARDS=2
# - QUANTIZED=bitsandbytes
ports:
- "0.0.0.0:8099:80"
restart: "unless-stopped"
# command: "--quantize bitsandbytes-nf4 --max-input-tokens 30000"
deploy:
resources:
reservations:
devices:
- driver: nvidia
device_ids: ['0', '1']
capabilities: [gpu]
shm_size: '90g'
volumes:
- ~/.hf-docker-data:/data
networks:
- llmhost
```
### Expected behavior
Unquantized ones work fine with "--quantize bitsandbytes-nf4 --max-input-tokens 30000" | open | 2025-02-10T12:40:25Z | 2025-02-10T12:40:41Z | https://github.com/huggingface/text-generation-inference/issues/3005 | [] | v3ss0n | 0 |
the0demiurge/ShadowSocksShare | flask | 115 | [Free PC & mobile proxy] A free proxy tool with truly unlimited traffic that works on both PC and mobile | **PC and mobile devices now occupy a large part of our daily lives. Today I'd like to recommend a free proxy tool with truly unlimited traffic that works on both PC and mobile: it not only helps you reach the open internet, but also protects your online privacy. The Westworld app provides compelling security for PC and mobile, achievable with a single tap.**
Official site: [https://xbsj4621.fun/i/ask025](https://xbsj4621.fun/i/ask025)

PC and mobile operating systems are the most widely used platforms in the world today, with more than 2 billion monthly active devices and the number still growing. In my years of experience with PC and mobile proxy tools, Westworld has been the best performer on both systems.
If you don't want to become a victim of cyberattacks, I strongly recommend installing a proxy tool. VPN stands for Virtual Private Network; it creates a secure, encrypted connection that makes it difficult for others to snoop on your online activity. Most reliable VPN providers use strong encryption and IP masking, and promise a strict policy of not logging user activity.
Westworld also has one big advantage: a small loophole in its system makes permanent free use possible. As the screenshot above shows, new users get a free three-day trial, and the registration step accepts temporary email addresses. So as long as you switch to a new email every three days, you can use it for free indefinitely.
Of course that's not the only reason; there are many other strengths that led me to recommend Westworld today:
1. Ultra-fast: Westworld is one of the fastest VPNs I have ever used.
2. No battery worries: Westworld won't drain your phone's battery.
3. Privacy protection: With Westworld you can browse with peace of mind, because under the laws of the British Virgin Islands they are not required to log any online activity.
4. Unlimited devices: Even if you have multiple PCs and phones at home, you can easily use Westworld.
5. Easy to use: Westworld offers a hassle-free solution; you will barely notice it is there.
6. Free browsing: Westworld does not restrict your network experience; you can easily access any content you want.
**The Westworld app gives PC and mobile users strict security and privacy protection while opening up the internet's unlimited possibilities. You can watch your favorite shows or sports events on Netflix anytime, at extremely high speed, with ahead-of-time buffering.**
| open | 2024-05-14T03:23:04Z | 2024-05-14T03:23:04Z | https://github.com/the0demiurge/ShadowSocksShare/issues/115 | [] | stabbeen | 0 |
wkentaro/labelme | deep-learning | 786 | [Copy annotation] sometimes it is difficult to annotate same objects on different images, it will be really helpful if we can get a feature that can copy annotations from one image to another, if already exists, please explain. | **Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.
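As a workaround until such a feature exists: a labelme annotation file is plain JSON, so the shapes can be copied to another image's file. A rough stdlib-only sketch (the key names follow labelme's JSON layout; the helper itself is made up):

```python
import json
from pathlib import Path

def copy_annotations(src_json, dst_image_path, out_json):
    """Reuse the shapes from one labelme JSON for another image."""
    data = json.loads(Path(src_json).read_text())
    data["imagePath"] = dst_image_path
    data["imageData"] = None  # force labelme to reload pixels from imagePath
    Path(out_json).write_text(json.dumps(data, indent=2))

# Tiny demo file standing in for a real labelme export:
Path("frame_0001.json").write_text(json.dumps({
    "shapes": [{"label": "car", "points": [[0, 0], [10, 10]],
                "shape_type": "rectangle"}],
    "imagePath": "frame_0001.jpg",
    "imageData": "base64-goes-here",
    "imageHeight": 100,
    "imageWidth": 100,
}))
copy_annotations("frame_0001.json", "frame_0002.jpg", "frame_0002.json")
```

This only works cleanly when the two images share the same dimensions, which matches the "same objects on different images" use case described above.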
| closed | 2020-10-08T12:33:56Z | 2022-06-25T04:54:15Z | https://github.com/wkentaro/labelme/issues/786 | [] | vishalmandley | 0 |
feder-cr/Jobs_Applier_AI_Agent_AIHawk | automation | 586 | [HELP WANTED]: Answers provided are incorrect && experience filter not working | ### Issue description
**Description:**
I encountered the following issues while using GPT-4O and Gemini bots:
1. **Incorrect Details Filled by the Bot:**
- The bot is filling incorrect information in the forms, such as:
- I do not have Angular experience, but the bot is filling 2 years of experience in Angular (this is not mentioned in my `plain_text_resume.yaml`).
- My notice period is 0 days, but the bot is filling 2 days.
- My total experience is 1.4 years, but the bot is filling 4 years.
Based on a suggestion in a previous issue, I changed the temperature from 0.4 to 0.7, but this did not resolve the issue. It seems like the bot is filling details from the job description (JD) rather than my actual data.
2. **Experience Filter Not Working:**
- The experience filter appears to be malfunctioning. Despite configuring my experience level as follows, the bot is not filtering correctly:
```
experienceLevel:
internship: true
entry: true
associate: false
mid-senior level: false
director: false
executive: false
```
Could you please assist in resolving these issues? Let me know if you need any additional information.
Thank you.
### Specific tasks
_No response_
### Additional resources
_No response_
### Additional context
_No response_ | closed | 2024-10-23T13:14:04Z | 2024-10-25T09:09:20Z | https://github.com/feder-cr/Jobs_Applier_AI_Agent_AIHawk/issues/586 | [
"help wanted"
] | Abhishek09yadav | 5 |
davidsandberg/facenet | computer-vision | 654 | how to use for face verification | I am new to deep learning and I want to use this repository for face verification. Can you give me some advice (I am working on Ubuntu 16.04)? Thank you. | closed | 2018-03-02T14:07:50Z | 2018-04-04T04:03:37Z | https://github.com/davidsandberg/facenet/issues/654 | [] | EvergreenHZ | 1 |
microsoft/nni | pytorch | 5,305 | Questions on Model Compression: Pruning & Quantization | Thanks for the great project. I have some questions from using NNI.
1. Quantization doesn't need ModelSpeedup, right?
2. Is finetuning necessary for pruning and quantization?
3. Models are bigger after quantization. I found some tips in other issues, but I couldn't work it out. Is this phenomenon common? How is the degree of quantization reflected? | closed | 2023-01-03T06:54:24Z | 2023-02-24T05:17:18Z | https://github.com/microsoft/nni/issues/5305 | [] | MT010104 | 4 |
openapi-generators/openapi-python-client | rest-api | 125 | Response generation should handle responses of type object | **Describe the bug**
If I fix the function to allow int or string for the status code (related to #124) and my response schema type is object, responses will not generate even when the status code is a valid 200.
**Expected behavior**
If the response is an object, the generated code for that status code should reference the model of the object so that the data in the response can be validated.
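For illustration only, here is a rough sketch of the kind of code the generator could emit for this case. The class and function names are assumptions, not the generator's real output:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class LinkResponse:
    """Hypothetical model generated from the LinkResponse schema below."""
    account_id: Optional[str] = None
    provider_id: Optional[str] = None

    @classmethod
    def from_dict(cls, d: dict) -> "LinkResponse":
        data = d.get("data", {})
        return cls(account_id=data.get("account-id"),
                   provider_id=data.get("provider-id"))

def parse_response(status_code: int, body: dict):
    """Dispatch on the status code so a 200 body is parsed into the model."""
    if status_code == 200:
        return LinkResponse.from_dict(body)
    return None  # the default/Error branch is elided here

resp = parse_response(200, {"data": {"account-id": "abc", "provider-id": "p1"}})
print(resp.provider_id)
```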
**OpenAPI Spec File**
```
responses:
  "200":
    description: Success
    content:
      application/json:
        schema:
          $ref: "#/components/schemas/LinkResponse"
  default:
    description: Unknown error
    content:
      application/json:
        schema:
          $ref: "#/components/schemas/Error"

LinkResponse:
  type: object
  description: The response f
  properties:
    data:
      type: object
      description: The container object for a
      properties:
        account-id:
          type: string
          format: uuid
          description: unique identifier for the account
        provider-id:
          type: string
          description: provider id.

Error:
  type: object
  properties:
    code:
      type: integer
      description: An HTTP status code
      example: 500
    message:
      type: string
      description: A message describing why the error was thrown.
      example: The server has exploded.
```
**Desktop (please complete the following information):**
- OS: [e.g. macOS 10.15.1]
- Python Version: [e.g. 3.8.0]
- openapi-python-client version [e.g. 0.1.0]
| closed | 2020-08-04T19:41:49Z | 2020-09-26T15:30:27Z | https://github.com/openapi-generators/openapi-python-client/issues/125 | [
"🐞bug"
] | bladecoates | 1 |
statsmodels/statsmodels | data-science | 8,976 | AttributeError: 'Axes' object has no attribute 'is_first_col' | Describe the bug
The statsmodels version is 0.14.0, which was recently updated in Anaconda.
Calling scatter_ellipse() generates a message saying that
AttributeError: 'Axes' object has no attribute 'is_first_col',
even though I can't find an attribute called 'is_first_col' in scatter_ellipse(),
and it doesn't generate at least the 4 pictures I expected, just ONE picture.
the massage is as follows:
> File D:\ProgramData\anaconda3\Lib\site-packages\statsmodels\graphics\plot_grids.py:138, in scatter_ellipse(data, level, varnames, ell_kwds, plot_kwds, add_titles, keep_ticks, fig)
136 if add_titles:
137 ax.set_title('%s-%s' % (varnames[i], varnames[j]))
--> 138 if not ax.is_first_col():
139 if not keep_ticks:
140 ax.set_yticks([])
AttributeError: 'Axes' object has no attribute 'is_first_col'
Code Sample:
```python
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
from statsmodels.graphics.plot_grids import scatter_ellipse

data_raw = sns.load_dataset('iris')
labels = ['Sepal length', 'Sepal width',
          'Petal length', 'Petal width']
fig = plt.figure(figsize=(8, 8))
scatter_ellipse(data_raw.iloc[:, :-1],
                varnames=labels, fig=fig)

for s_idx in data_raw.species.unique():
    data = data_raw.loc[data_raw.species == s_idx].iloc[:, :-1]
    fig = plt.figure(figsize=(8, 8))
    scatter_ellipse(data, varnames=labels, fig=fig)
    fig.savefig('scatter ellipse' + s_idx + '.svg', format='svg')
```
Expected Output
I just want the IDE not to generate the error message and generate the correct number of pictures.
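As background, `Axes.is_first_col` was deprecated in Matplotlib 3.4 and removed in later releases in favor of `ax.get_subplotspec().is_first_col()`. Until statsmodels ships a fix (or you downgrade Matplotlib), a small shim can restore the old method. This is a workaround sketch, not an official fix, and it assumes no other code defines `is_first_col` differently:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the example runs without a display
from matplotlib.axes import Axes

# Restore the removed method by delegating to the SubplotSpec equivalent.
if not hasattr(Axes, "is_first_col"):
    Axes.is_first_col = lambda self: self.get_subplotspec().is_first_col()

import matplotlib.pyplot as plt

fig, axes = plt.subplots(2, 2)
print(axes[0][0].is_first_col(), axes[0][1].is_first_col())
```

Running the shim before calling `scatter_ellipse` avoids the AttributeError on Matplotlib versions where the method was removed.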
| open | 2023-08-15T11:36:24Z | 2023-08-15T11:36:24Z | https://github.com/statsmodels/statsmodels/issues/8976 | [] | pro-dreamer | 0 |
AntonOsika/gpt-engineer | python | 452 | command-"python -m gpt_engineer.main example" throwing error | **YOU MAY DELETE THE ENTIRE TEMPLATE BELOW.**
## Issue Template
## Expected Behavior
Please describe the behavior you are expecting.
## Current Behavior
What is the current behavior?
## Failure Information (for bugs)
Please help provide information about the failure if this is a bug. If it is not a bug, please remove the rest of this template.
### Steps to Reproduce
Please provide detailed steps for reproducing the issue.
1. step 1
2. step 2
3. you get it...
### Failure Logs
Please include any relevant log snippets or files here.
| closed | 2023-06-30T11:09:47Z | 2023-06-30T11:10:42Z | https://github.com/AntonOsika/gpt-engineer/issues/452 | [] | vipulpapriwal | 0 |
Textualize/rich | python | 2,800 | [BUG] Rich traceback is not working with @inject from "ets-labs/python-dependency-injector" | - [x] I've checked [docs](https://rich.readthedocs.io/en/latest/introduction.html) and [closed issues](https://github.com/Textualize/rich/issues?q=is%3Aissue+is%3Aclosed) for possible solutions.
- [x] I can't find my issue in the [FAQ](https://github.com/Textualize/rich/blob/master/FAQ.md).
**Describe the bug**
Hi, I opened the same issue at https://github.com/ets-labs/python-dependency-injector/issues/667 as well, since I don't know which project should fix this.
It happens when I use the @inject decorator.
**Platform**
<details>
<summary>Click to expand</summary>
What platform (Win/Linux/Mac) are you running on? What terminal software are you using?
I may ask you to copy and paste the output of the following commands. It may save some time if you do it now.
If you're using Rich in a terminal:
```
python -m rich.diagnose
pip freeze | grep rich
```


```sh
pip freeze | grep rich
rich==13.3.1
```
If you're using Rich in a Jupyter Notebook, run the following snippet in a cell
and paste the output in your bug report.
```python
from rich.diagnose import report
report()
```
</details>
| closed | 2023-02-11T20:13:30Z | 2023-03-06T15:29:03Z | https://github.com/Textualize/rich/issues/2800 | [
"Needs triage"
] | arkanmgerges | 8 |
coqui-ai/TTS | deep-learning | 3,291 | [Bug] Custom model inference error [Unresolved] | ### Describe the bug
Inference with a custom model fails for various reasons (language errors, inability to synthesize audio, unexpected pathing, JSON errors).
### To Reproduce
1.) Finetune model/Train model on Ljspeech dataset
2.) Run "tts --text "Text for TTS" --model_path path/to/model --config_path path/to/config.json --out_path speech.wav --language en"
3.) Errors [Language None is not supported. | raise TypeError("Invalid file: {0!r}".format(self.name))]
### Expected behavior
Produces a voice file with which to evaluate the model.
### Logs
```shell
(coqui) alpha78@----------:/mnt/q/Utilities/CUDA/TTS/TTS/server$ tts --text "Text for TTS" --model_path ./tts_models/en/ljspeech/ --config_path ./tts_models/en/ljspeech/config.json --out_path speech.wav --language en
> Using model: xtts
> Text: Text for TTS
> Text splitted to sentences.
['Text for TTS']
Traceback (most recent call last):
File "/home/alpha78/anaconda3/envs/coqui/bin/tts", line 8, in <module>
sys.exit(main())
File "/mnt/q/Utilities/CUDA/TTS/TTS/bin/synthesize.py", line 515, in main
wav = synthesizer.tts(
File "/mnt/q/Utilities/CUDA/TTS/TTS/utils/synthesizer.py", line 374, in tts
outputs = self.tts_model.synthesize(
File "/mnt/q/Utilities/CUDA/TTS/TTS/tts/models/xtts.py", line 392, in synthesize
return self.inference_with_config(text, config, ref_audio_path=speaker_wav, language=language, **kwargs)
File "/mnt/q/Utilities/CUDA/TTS/TTS/tts/models/xtts.py", line 400, in inference_with_config
"zh-cn" if language == "zh" else language in self.config.languages
AssertionError: ❗ Language None is not supported. Supported languages are ['en', 'es', 'fr', 'de', 'it', 'pt', 'pl', 'tr', 'ru', 'nl', 'cs', 'ar', 'zh-cn', 'hu', 'ko', 'ja']
(coqui) alpha78@----------:/mnt/q/Utilities/CUDA/TTS/TTS/server$ tts --text "Text for TTS" --model_path ./tts_models/en/ljspeech/ --config_path ./tts_models/en/ljspeech/config.json --out_path speech.wav --language en
> Using model: xtts
> Text: Text for TTS
> Text splitted to sentences.
['Text for TTS']
Traceback (most recent call last):
File "/home/alpha78/anaconda3/envs/coqui/bin/tts", line 8, in <module>
sys.exit(main())
File "/mnt/q/Utilities/CUDA/TTS/TTS/bin/synthesize.py", line 515, in main
wav = synthesizer.tts(
File "/mnt/q/Utilities/CUDA/TTS/TTS/utils/synthesizer.py", line 374, in tts
outputs = self.tts_model.synthesize(
File "/mnt/q/Utilities/CUDA/TTS/TTS/tts/models/xtts.py", line 392, in synthesize
return self.inference_with_config(text, config, ref_audio_path=speaker_wav, language=language, **kwargs)
File "/mnt/q/Utilities/CUDA/TTS/TTS/tts/models/xtts.py", line 415, in inference_with_config
return self.full_inference(text, ref_audio_path, language, **settings)
File "/home/alpha78/anaconda3/envs/coqui/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/mnt/q/Utilities/CUDA/TTS/TTS/tts/models/xtts.py", line 476, in full_inference
(gpt_cond_latent, speaker_embedding) = self.get_conditioning_latents(
File "/home/alpha78/anaconda3/envs/coqui/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/mnt/q/Utilities/CUDA/TTS/TTS/tts/models/xtts.py", line 351, in get_conditioning_latents
audio = load_audio(file_path, load_sr)
File "/mnt/q/Utilities/CUDA/TTS/TTS/tts/models/xtts.py", line 72, in load_audio
audio, lsr = torchaudio.load(audiopath)
File "/home/alpha78/anaconda3/envs/coqui/lib/python3.10/site-packages/torchaudio/_backend/utils.py", line 204, in load
return backend.load(uri, frame_offset, num_frames, normalize, channels_first, format, buffer_size)
File "/home/alpha78/anaconda3/envs/coqui/lib/python3.10/site-packages/torchaudio/_backend/soundfile.py", line 27, in load
return soundfile_backend.load(uri, frame_offset, num_frames, normalize, channels_first, format)
File "/home/alpha78/anaconda3/envs/coqui/lib/python3.10/site-packages/torchaudio/_backend/soundfile_backend.py", line 221, in load
with soundfile.SoundFile(filepath, "r") as file_:
File "/home/alpha78/anaconda3/envs/coqui/lib/python3.10/site-packages/soundfile.py", line 658, in __init__
self._file = self._open(file, mode_int, closefd)
File "/home/alpha78/anaconda3/envs/coqui/lib/python3.10/site-packages/soundfile.py", line 1212, in _open
raise TypeError("Invalid file: {0!r}".format(self.name))
TypeError: Invalid file: None
```
### Environment
```shell
{
"CUDA": {
"GPU": [
"NVIDIA GeForce RTX 3090"
],
"available": true,
"version": "12.1"
},
"Packages": {
"PyTorch_debug": false,
"PyTorch_version": "2.1.1+cu121",
"TTS": "0.20.6",
"numpy": "1.22.0"
},
"System": {
"OS": "Linux",
"architecture": [
"64bit",
"ELF"
],
"processor": "x86_64",
"python": "3.10.13",
"version": "#1 SMP Thu Oct 5 21:02:42 UTC 2023"
}
}
```
### Additional context
The documentation pages show two different ways to run inference with the model; neither worked. | closed | 2023-11-23T07:03:56Z | 2023-12-31T08:13:51Z | https://github.com/coqui-ai/TTS/issues/3291 | [
"bug"
] | 78Alpha | 3 |
ymcui/Chinese-LLaMA-Alpaca-2 | nlp | 385 | longbench error: RuntimeError: CUDA error: device-side assert triggered | ### The following items must be checked before submitting
- [X] Make sure you are using the latest code from the repository (git pull); some issues have already been resolved and fixed.
- [X] I have read the [project documentation](https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/wiki) and the [FAQ section](https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/wiki/常见问题), and I searched the issues for this problem without finding a similar issue or solution.
- [X] Third-party plugin issues: e.g. [llama.cpp](https://github.com/ggerganov/llama.cpp), [LangChain](https://github.com/hwchase17/langchain), [text-generation-webui](https://github.com/oobabooga/text-generation-webui), etc.; it is also recommended to look for solutions in the corresponding projects.
### Issue type
Model inference
### Base model
Chinese-LLaMA-2 (7B/13B)
### Operating system
Linux
### Describe the problem in detail
```
# Command executed:
model_path=chinese-llama-2-13b
data_class=zh
output_path=output
with_inst="false" # or "false" or "auto"
max_length=4096
python pred_llama2.py \
--model_path ${model_path} \
--predict_on ${data_class} \
--output_dir ${output_path} \
--with_inst ${with_inst} \
--max_length ${max_length}
```
### Dependencies (must be provided for code-related issues)
```
bitsandbytes 0.39.0
peft 0.6.0.dev0
sentence-transformers 2.2.2
sentencepiece 0.1.99
torch 2.0.1
torchaudio 2.0.2
torchdata 0.6.1
torchelastic 0.2.2
torchtext 0.15.2
torchvision 0.15.2
transformers 4.35.0.dev0 /workspace/transformers
```
### Run logs or screenshots

 | closed | 2023-11-02T12:45:14Z | 2023-11-06T08:14:51Z | https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/issues/385 | [] | panpanli521 | 5 |
aminalaee/sqladmin | sqlalchemy | 790 | Internationalization and Localization support | ### Checklist
- [X] There are no similar issues or pull requests for this yet.
### Is your feature related to a problem? Please describe.
It would be very cool to implement localization support for different languages
### Describe the solution you would like.
_No response_
### Describe alternatives you considered
_No response_
### Additional context
_No response_ | open | 2024-07-10T10:50:41Z | 2025-03-21T10:01:01Z | https://github.com/aminalaee/sqladmin/issues/790 | [] | gatepavel | 1 |
babysor/MockingBird | pytorch | 213 | Integrate tacotron2 | @babysor The current [example](https://www.bilibili.com/video/BV17Q4y1B7mY/) still sounds noticeably synthetic.
Have you considered updating the synthesizer to [tacotron2](https://github.com/NVIDIA/tacotron2)?
| closed | 2021-11-12T03:18:20Z | 2022-03-07T15:35:27Z | https://github.com/babysor/MockingBird/issues/213 | [] | castleKing1997 | 2 |
scikit-optimize/scikit-optimize | scikit-learn | 1,193 | AttributeError: 'Sum' object has no attribute 'gradient_x' when using scikit-learn RBF with scikit-optimize gp_minimize | I still get an error when using scikit-learn's RBF with scikit-optimize's gp_minimize; any idea how to solve it?
Here is the code to reproduce it:
```
from numpy import mean
from pandas import read_csv
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import RepeatedStratifiedKFold
from sklearn.svm import SVC
from skopt.space import Integer
from skopt.space import Real
from skopt.space import Categorical
from skopt.utils import use_named_args
from skopt import gp_minimize
from skopt import gp_minimize, dummy_minimize
from sklearn.gaussian_process.kernels import RBF
import time
import numpy as np
from skopt.learning import GaussianProcessRegressor
from skopt.learning.gaussian_process.kernels import Matern

# define the space of hyperparameters to search
search_space = list()
# search_space.append(Real(1e-6, 100.0, 'log-uniform', name='C'))
# search_space.append(Categorical(['linear', 'poly', 'rbf', 'sigmoid'], name='kernel'))
# search_space.append(Integer(1, 5, name='degree'))
search_space.append(Real(1e-6, 100.0, 'log-uniform', name='gamma'))

other_kernel = RBF(length_scale=1.0)
base_estimator = GaussianProcessRegressor(
    kernel=other_kernel,
    normalize_y=True, noise="gaussian",
    n_restarts_optimizer=0)

# define the function used to evaluate a given configuration
@use_named_args(search_space)
def evaluate_model(**params):
    return 0.7

# load dataset
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/ionosphere.csv'
dataframe = read_csv(url, header=None)
# split into input and output elements
data = dataframe.values
X, y = data[:, :-1], data[:, -1]
print(X.shape, y.shape)
# perform optimization
result = gp_minimize(evaluate_model, search_space, base_estimator=base_estimator)
# summarizing finding:
print('Best Accuracy: %.3f' % (1.0 - result.fun))
print('Best Parameters: %s' % (result.x))
```
The error log
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
[<ipython-input-5-795400b0eace>](https://localhost:8080/#) in <cell line: 48>()
46 print(X.shape, y.shape)
47 # perform optimization
---> 48 result = gp_minimize(evaluate_model, search_space,base_estimator=base_estimator)
49 # summarizing finding:
50 print('Best Accuracy: %.3f' % (1.0 - result.fun))
18 frames
[/usr/local/lib/python3.10/dist-packages/skopt/learning/gaussian_process/gpr.py](https://localhost:8080/#) in predict(self, X, return_std, return_cov, return_mean_grad, return_std_grad)
344
345 if return_mean_grad:
--> 346 grad = self.kernel_.gradient_x(X[0], self.X_train_)
347 grad_mean = np.dot(grad.T, self.alpha_)
348 # undo normalisation
AttributeError: 'Sum' object has no attribute 'gradient_x'
```
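The traceback points at the likely root cause: skopt's `GaussianProcessRegressor.predict` calls `gradient_x` on the fitted kernel (a `Sum` of the user kernel and a noise kernel), and only the kernels in `skopt.learning.gaussian_process.kernels` provide that method; the plain scikit-learn `RBF` does not. Importing `RBF` from that module, like the `Matern` import already in the snippet above, should avoid the error. The duck-typing mismatch can be sketched with stand-in classes (these are illustrative stand-ins, not the real kernel classes):

```python
class SklearnKernel:
    """Stand-in for a plain scikit-learn kernel: no gradient_x method."""

class SkoptKernel(SklearnKernel):
    """Stand-in for a skopt.learning kernel, which adds the gradient hook."""
    def gradient_x(self, x, X_train):
        return "gradient"

def predict_mean_grad(kernel):
    # mirrors gpr.py line 346: grad = self.kernel_.gradient_x(X[0], self.X_train_)
    return kernel.gradient_x(None, None)

try:
    predict_mean_grad(SklearnKernel())
except AttributeError as exc:
    print("plain sklearn kernel:", exc)
print("skopt kernel:", predict_mean_grad(SkoptKernel()))
```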
package versions
```
scikit-learn==1.2.2
scikit-optimize==0.9.0
```
Python 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0] on linux | open | 2023-11-26T19:59:56Z | 2023-11-26T21:48:36Z | https://github.com/scikit-optimize/scikit-optimize/issues/1193 | [] | mahdiabdollahpour | 3 |
httpie/cli | api | 1,300 | Unable to See Status Code in Output | ## Checklist
- [x] I've searched for similar issues.
- [x] I'm using the latest version of HTTPie.
---
## Minimal reproduction code and steps
1. Run local web service that returns 401 for endpoint :8080/get-values
2. Run HTTPie against endpoint
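A minimal stand-in for such a service can make this reproducible without the original backend. This sketch uses a raw socket on purpose so the status line carries no reason phrase, matching the bare `HTTP/1.1 401` line that curl reports below (the endpoint path and body are assumptions):

```python
import http.client
import socket
import threading

def serve_once(server_sock: socket.socket) -> None:
    """Accept one connection and reply with a reason-less 401 status line."""
    conn, _ = server_sock.accept()
    conn.recv(4096)  # read (and ignore) the request
    conn.sendall(b"HTTP/1.1 401\r\n"
                 b"Content-Type: application/json\r\n"
                 b"Content-Length: 2\r\n\r\n{}")
    conn.close()

server_sock = socket.socket()
server_sock.bind(("127.0.0.1", 0))
server_sock.listen(1)
threading.Thread(target=serve_once, args=(server_sock,), daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server_sock.getsockname()[1])
conn.request("GET", "/get-values")
resp = conn.getresponse()
print("status:", resp.status, "reason:", repr(resp.reason))
```

Pointing HTTPie at this server reproduces the missing-status-code output, since the response's reason phrase is empty.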
## Current result
**Curl output:**
```sh
$ curl -v 'http://localhost:8080/get-values'
* Trying ::1...
* TCP_NODELAY set
* Connected to localhost (::1) port 8080 (#0)
> GET /get-values HTTP/1.1
> Host: localhost:8080
> User-Agent: curl/7.64.1
> Accept: */*
>
< HTTP/1.1 401
< Content-Type: application/json
< Transfer-Encoding: chunked
< Date: Sat, 19 Feb 2022 00:01:22 GMT
<
* Connection #0 to host localhost left intact
* <body redacted>
```
**HTTPie output:**
```sh
$ http --verbose ':8080/get-values'
GET /get-values HTTP/1.1
Accept: */*
Accept-Encoding: gzip, deflate
Connection: keep-alive
Host: localhost:8080
User-Agent: HTTPie/3.0.2
HTTP/1.1
Connection: keep-alive
Content-Type: application/json
Date: Sat, 19 Feb 2022 00:02:54 GMT
Keep-Alive: timeout=60
Transfer-Encoding: chunked
<body redacted>
```
## Expected result
Expecting to see `HTTP/1.1 401` instead of just `HTTP/1.1 `.
## Debug output
Please re-run the command with `--debug`, then copy the entire command & output and paste both below:
```bash
$ http --debug ':8080/get-values'
HTTPie 3.0.2
Requests 2.27.1
Pygments 2.11.2
Python 3.10.2 (main, Feb 2 2022, 08:42:42) [Clang 13.0.0 (clang-1300.0.29.3)]
/usr/local/Cellar/httpie/3.0.2/libexec/bin/python3.10
Darwin 20.6.0
<Environment {'as_silent': <function Environment.as_silent at 0x10d7d0f70>,
'colors': 256,
'config': {'default_options': ['--style=fruity']},
'config_dir': PosixPath('/Users/<redacted>/.config/httpie'),
'devnull': <property object at 0x10d7c4cc0>,
'is_windows': False,
'log_error': <function Environment.log_error at 0x10d7d1000>,
'program_name': 'http',
'stderr': <_io.TextIOWrapper name='<stderr>' mode='w' encoding='utf-8'>,
'stderr_isatty': True,
'stdin': <_io.TextIOWrapper name='<stdin>' mode='r' encoding='utf-8'>,
'stdin_encoding': 'utf-8',
'stdin_isatty': True,
'stdout': <_io.TextIOWrapper name='<stdout>' mode='w' encoding='utf-8'>,
'stdout_encoding': 'utf-8',
'stdout_isatty': True}>
<PluginManager {'adapters': [],
'auth': [<class 'httpie.plugins.builtin.BasicAuthPlugin'>,
<class 'httpie.plugins.builtin.DigestAuthPlugin'>,
<class 'httpie.plugins.builtin.BearerAuthPlugin'>],
'converters': [],
'formatters': [<class 'httpie.output.formatters.headers.HeadersFormatter'>,
<class 'httpie.output.formatters.json.JSONFormatter'>,
<class 'httpie.output.formatters.xml.XMLFormatter'>,
<class 'httpie.output.formatters.colors.ColorFormatter'>]}>
>>> requests.request(**{'auth': None,
'data': RequestJSONDataDict(),
'headers': <HTTPHeadersDict('User-Agent': b'HTTPie/3.0.2')>,
'method': 'get',
'params': <generator object MultiValueOrderedDict.items at 0x10da93450>,
'url': 'http://localhost:8080/get-values'})
HTTP/1.1
Connection: keep-alive
Content-Type: application/json
Date: Sat, 19 Feb 2022 00:05:16 GMT
Keep-Alive: timeout=60
Transfer-Encoding: chunked
<body redacted>
``` | closed | 2022-02-19T00:11:30Z | 2022-03-03T16:28:04Z | https://github.com/httpie/cli/issues/1300 | [
"bug"
] | EarthCitizen | 5 |
robusta-dev/robusta | automation | 1,049 | Feature Request: Multiple alerts in on_prometheus_alert trigger | It would be great if the `on_prometheus_alert` trigger supported multiple alert names as a comma separated string or an array. E.g.:
```
- triggers:
    - on_prometheus_alert:
        alert_name: [ MyCustomCPUAlert, MyCustomMemoryAlert, MyCustomDiskAlert ]
  actions:
    - default_enricher: {}
``` | closed | 2023-08-23T10:36:16Z | 2023-12-06T22:52:44Z | https://github.com/robusta-dev/robusta/issues/1049 | [
"needs-triage"
] | otherguy | 4 |
coqui-ai/TTS | deep-learning | 2,792 | [Bug] Poor model training results using GlowTTS and Vits despite using 450 clean audio files | closed | 2023-07-24T11:58:02Z | 2023-08-22T06:08:55Z | https://github.com/coqui-ai/TTS/issues/2792 | [
"bug"
] | seoldami2b | 2 | |
zappa/Zappa | flask | 446 | [Migrated] Add on Docs: event_source from S3 with key_filters | Originally from: https://github.com/Miserlou/Zappa/issues/1181 by [ebertti](https://github.com/ebertti)
Something like this:
```
{
    "function": "module.my_function",
    "event_source": {
        "arn": "arn:aws:s3:::my-bucket",
        "key_filters": [
            {
                "type": "prefix",
                "value": "my-prefix"
            },
            {
                "type": "suffix",
                "value": ".jpg"
            }
        ],
        "events": [
            "s3:ObjectCreated:*"
        ]
    }
}
``` | closed | 2021-02-20T08:34:57Z | 2022-12-01T10:04:12Z | https://github.com/zappa/Zappa/issues/446 | [
"has-pr",
"next-release-candidate"
] | jneves | 1 |
explosion/spaCy | data-science | 12,635 | Latest version of NumPy doesn't have typeDict, but spaCy still uses it, which causes an issue during import | ## Error
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-6-afbeceda568c> in <module>
1 import warnings
2 warnings.filterwarnings("ignore", category=ImportWarning)
----> 3 import spacy
4
5
~\AppData\Roaming\Python\Python38\site-packages\spacy\__init__.py in <module>
4
5 # set library-specific custom warning handling before doing anything else
----> 6 from .errors import setup_default_warnings
7
8 setup_default_warnings() # noqa: E402
~\AppData\Roaming\Python\Python38\site-packages\spacy\errors.py in <module>
1 import warnings
----> 2 from .compat import Literal
3
4
5 class ErrorsWithCodes(type):
~\AppData\Roaming\Python\Python38\site-packages\spacy\compat.py in <module>
1 """Helpers for Python and platform compatibility."""
2 import sys
----> 3 from thinc.util import copy_array
4
5 try:
~\AppData\Roaming\Python\Python38\site-packages\thinc\__init__.py in <module>
3
4 from .about import __version__
----> 5 from .config import registry
6
7
~\AppData\Roaming\Python\Python38\site-packages\thinc\config.py in <module>
2 import confection
3 from confection import Config, ConfigValidationError, Promise, VARIABLE_RE
----> 4 from .types import Decorator
5
6
~\AppData\Roaming\Python\Python38\site-packages\thinc\types.py in <module>
6 import numpy
7 import sys
----> 8 from .compat import has_cupy, cupy
9
10 if has_cupy:
~\AppData\Roaming\Python\Python38\site-packages\thinc\compat.py in <module>
72
73 try:
---> 74 import h5py
75 except ImportError: # pragma: no cover
76 h5py = None
C:\ProgramData\Anaconda3\lib\site-packages\h5py\__init__.py in <module>
44 _errors.silence_errors()
45
---> 46 from ._conv import register_converters as _register_converters
47 _register_converters()
48
h5py\h5t.pxd in init h5py._conv()
h5py\h5t.pyx in init h5py.h5t()
~\AppData\Roaming\Python\Python38\site-packages\numpy\__init__.py in __getattr__(attr)
318 return Tester
319
--> 320 raise AttributeError("module {!r} has no attribute "
321 "{!r}".format(__name__, attr))
322
AttributeError: module 'numpy' has no attribute 'typeDict'
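The crash surfaces in the import chain `spacy` → `thinc` → `h5py`, not in spaCy's own code: `np.typeDict` (a long-deprecated alias) was removed in NumPy 1.24, and older h5py builds still reference it at import time. Upgrading h5py, or pinning `numpy<1.24` until the stack is updated, should resolve it. A quick check of what the installed NumPy provides:

```python
import numpy as np

major, minor = (int(part) for part in np.__version__.split(".")[:2])
removed = (major, minor) >= (1, 24)  # typeDict was removed in NumPy 1.24
print("numpy", np.__version__,
      "| typeDict present:", hasattr(np, "typeDict"),
      "| removal expected:", removed)
```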
## Your Environment
* Operating System: Windows
* Python Version Used: 3.8
* spaCy Version Used: 3.5.2
* Numpy version Used: 1.24.2
* Environment Information: Jupyter notebook used to install
| closed | 2023-05-15T07:04:33Z | 2023-06-15T00:02:18Z | https://github.com/explosion/spaCy/issues/12635 | [
"compat"
] | Deepikakolandasamy | 4 |
Yorko/mlcourse.ai | numpy | 13 | Link to the third article | Please add a link to the third article on Habr to the readme. | closed | 2017-03-17T07:43:27Z | 2017-03-17T08:05:13Z | https://github.com/Yorko/mlcourse.ai/issues/13 | [
"minor_fix"
] | loopdigga96 | 2 |
jschneier/django-storages | django | 1,410 | [s3] - Error when updating to v.1.14.3 client_config | https://github.com/jschneier/django-storages/pull/1386
Here is the error that we received:
```
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/django/core/files/storage/handler.py", line 35, in __getitem__
return self._storages[alias]
KeyError: 'reports'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/src/app/./manage.py", line 22, in <module>
main()
File "/usr/src/app/./manage.py", line 18, in main
execute_from_command_line(sys.argv)
File "/usr/local/lib/python3.9/site-packages/django/core/management/__init__.py", line 442, in execute_from_command_line
utility.execute()
File "/usr/local/lib/python3.9/site-packages/django/core/management/__init__.py", line 416, in execute
django.setup()
File "/usr/local/lib/python3.9/site-packages/django/__init__.py", line 24, in setup
apps.populate(settings.INSTALLED_APPS)
File "/usr/local/lib/python3.9/site-packages/django/apps/registry.py", line 116, in populate
app_config.import_models()
File "/usr/local/lib/python3.9/site-packages/django/apps/config.py", line 269, in import_models
self.models_module = import_module(models_module_name)
File "/usr/local/lib/python3.9/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
File "/usr/local/lib/python3.9/site-packages/ddtrace/internal/module.py", line 220, in _exec_module
self.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 850, in exec_module
File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
File "/usr/src/app/reports/models.py", line 24, in <module>
class Report(TimeStampedModel):
File "/usr/src/app/reports/models.py", line 28, in Report
storage=storages['reports'])
File "/usr/local/lib/python3.9/site-packages/django/core/files/storage/handler.py", line 43, in __getitem__
storage = self.create_storage(params)
File "/usr/local/lib/python3.9/site-packages/django/core/files/storage/handler.py", line 55, in create_storage
return storage_cls(**options)
File "/usr/local/lib/python3.9/site-packages/storages/backends/s3.py", line 330, in __init__
if self.client_config is None:
AttributeError: 'S3ReportsFileStorage' object has no attribute 'client_config'
```
See here - https://github.com/jschneier/django-storages/pull/1386/files#r1640348739
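The failure mode can be modeled with plain classes (a stand-in sketch of the pattern, not django-storages' real internals): the new base class reads `self.client_config` in `__init__`, but a subclass whose custom `setting_prefix` bypasses the usual setting machinery never receives that attribute. A class-level default is one possible workaround:

```python
class NewBaseStorage:
    """Stand-in for storages 1.14.3 S3Storage: __init__ reads self.client_config."""
    def __init__(self):
        if self.client_config is None:  # raises AttributeError if never set
            self.client_config = "default-config"

class BrokenStorage(NewBaseStorage):
    pass  # custom setting_prefix path: the attribute is never populated

class FixedStorage(NewBaseStorage):
    client_config = None  # class-level default sidesteps the AttributeError

try:
    BrokenStorage()
except AttributeError as exc:
    print("broken:", exc)
print("fixed:", FixedStorage().client_config)
```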
Note that we are currently using `S3Boto3Storage` which is deprecated. We have a custom storage like this that provides default settings, plus a number of custom settings via env variables:
```
from storages.backends.s3boto3 import S3Boto3Storage
from storages.utils import setting

class CustomS3Boto3Storage(S3Boto3Storage):
    # default settings via env vars
    ...

class S3StaticFileStorage(CustomS3Boto3Storage):
    setting_prefix = 'STATICFILES_STORAGE'

class S3ReportsFileStorage(CustomS3Boto3Storage):
    setting_prefix = 'REPORTS_STORAGE'
``` | closed | 2024-06-14T20:55:44Z | 2024-06-17T20:03:55Z | https://github.com/jschneier/django-storages/issues/1410 | [] | mgzwarrior | 3 |
zihangdai/xlnet | tensorflow | 4 | Anyone want to start a PyTorch version? | open | 2019-06-20T05:29:46Z | 2019-06-21T06:42:39Z | https://github.com/zihangdai/xlnet/issues/4 | [] | statham-stone | 4 |
opengeos/leafmap | streamlit | 547 | add_raster doesn't work for epsg:32645 | ### Environment Information
- leafmap version: 0.20.3
- Python version: 3.10.12
- Operating System: Mac M1
### Description
`add_raster` doesn't work for EPSG:32645 with a transform. Here's the [test data](https://github.com/opengeos/leafmap/files/12596364/test.tif.zip).
### What I Did
I have plotted the tiff file using cartopy and it works well:
```
import rioxarray
import xarray as xr
import matplotlib.pyplot as plt
from cartopy.crs import epsg as ccrs_from_epsg, PlateCarree
test = xr.open_dataset('/Users/xinz/Downloads/test.tif')
geoTransform = test.rio.transform()
minx = geoTransform[0]
maxy = geoTransform[3]
maxx = minx + geoTransform[1] * len(test.x)
miny = maxy + geoTransform[5] * len(test.y)
extent = [minx, maxx, miny, maxy]
test_crs = ccrs_from_epsg(test.rio.crs.to_epsg())
print(test.rio.crs.to_proj4())
fig, ax = plt.subplots(subplot_kw={'projection': PlateCarree()})
ax.set_extent(extent, crs=test_crs)
ax.imshow(test['band_data'].transpose(..., 'band'),
origin='upper',
transform=test_crs,
extent=extent
)
gl = ax.gridlines(draw_labels=True, dms=False, color = 'gray', linewidth=1, alpha=0.5, linestyle='--')
gl.top_labels = False
gl.right_labels = False
```
<img width="527" alt="image" src="https://github.com/opengeos/leafmap/assets/30388627/2b7c97dd-1a2f-4fb8-bfed-9165187006cc">
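The extent arithmetic in the snippet assumes a GDAL-style geotransform ordering `(minx, pixel_width, row_rotation, maxy, col_rotation, pixel_height)`. Factored out, it looks like this (a self-contained restatement with made-up numbers for illustration; note that if `rio.transform()` returns an `affine.Affine`, its coefficient ordering differs, with the origins at indices 2 and 5, so the values may need reordering):

```python
def extent_from_geotransform(gt, width, height):
    """Compute [minx, maxx, miny, maxy] from a GDAL-style geotransform."""
    minx = gt[0]
    maxy = gt[3]
    maxx = minx + gt[1] * width    # pixel width is positive
    miny = maxy + gt[5] * height   # pixel height is negative for a north-up raster
    return [minx, maxx, miny, maxy]

print(extent_from_geotransform((100.0, 0.5, 0.0, 50.0, 0.0, -0.5), 200, 100))
# [100.0, 200.0, 0.0, 50.0]
```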
Then, I tried to use `add_raster` to overlay the tiff on map:
```
import leafmap.foliumap as leafmap
m = leafmap.Map()
m.add_raster('/Users/xinz/Downloads/test.tif', nodata=0)
```
I got this error:
<details>
<summary>Click to see error</summary>
```
[2023-09-13 12:24:32,502] ERROR in app: Exception on [/api/bounds](https://file+.vscode-resource.vscode-cdn.net/api/bounds) [GET]
Traceback (most recent call last):
File "/Users/xinz/miniconda3/envs/enpt/lib/python3.10/site-packages/flask/app.py", line 1517, in full_dispatch_request
rv = self.dispatch_request()
File "/Users/xinz/miniconda3/envs/enpt/lib/python3.10/site-packages/flask/app.py", line 1503, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**req.view_args)
File "/Users/xinz/miniconda3/envs/enpt/lib/python3.10/site-packages/flask_restx/api.py", line 404, in wrapper
resp = resource(*args, **kwargs)
File "/Users/xinz/miniconda3/envs/enpt/lib/python3.10/site-packages/flask/views.py", line 84, in view
return current_app.ensure_sync(self.dispatch_request)(*args, **kwargs)
File "/Users/xinz/miniconda3/envs/enpt/lib/python3.10/site-packages/flask_restx/resource.py", line 46, in dispatch_request
resp = meth(*args, **kwargs)
File "/Users/xinz/miniconda3/envs/enpt/lib/python3.10/site-packages/localtileserver/tileserver/rest.py", line 248, in get
tile_source = self.get_tile_source()
File "/Users/xinz/miniconda3/envs/enpt/lib/python3.10/site-packages/localtileserver/tileserver/rest.py", line 206, in get_tile_source
return utilities.get_tile_source(filename, projection, encoding=encoding, style=sty)
File "/Users/xinz/miniconda3/envs/enpt/lib/python3.10/site-packages/localtileserver/tileserver/utilities.py", line 79, in get_tile_source
return large_image.open(str(path), projection=projection, style=style, encoding=encoding)
File "/Users/xinz/miniconda3/envs/enpt/lib/python3.10/site-packages/large_image/tilesource/__init__.py", line 174, in open
return getTileSource(*args, **kwargs)
File "/Users/xinz/miniconda3/envs/enpt/lib/python3.10/site-packages/large_image/tilesource/__init__.py", line 162, in getTileSource
return getTileSourceFromDict(AvailableTileSources, *args, **kwargs)
File "/Users/xinz/miniconda3/envs/enpt/lib/python3.10/site-packages/large_image/tilesource/__init__.py", line 145, in getTileSourceFromDict
sourceName = getSourceNameFromDict(availableSources, pathOrUri, *args, **kwargs)
File "/Users/xinz/miniconda3/envs/enpt/lib/python3.10/site-packages/large_image/tilesource/__init__.py", line 130, in getSourceNameFromDict
if availableSources[sourceName].canRead(pathOrUri, *args, **kwargs):
File "/Users/xinz/miniconda3/envs/enpt/lib/python3.10/site-packages/large_image/tilesource/base.py", line 2883, in canRead
cls(path, *args, **kwargs)
File "/Users/xinz/miniconda3/envs/enpt/lib/python3.10/site-packages/large_image/cache_util/cache.py", line 229, in __call__
raise exc
File "/Users/xinz/miniconda3/envs/enpt/lib/python3.10/site-packages/large_image/cache_util/cache.py", line 220, in __call__
instance = super().__call__(*args, **kwargs)
File "/Users/xinz/miniconda3/envs/enpt/lib/python3.10/site-packages/large_image_source_gdal/__init__.py", line 139, in __init__
self._initWithProjection(unitsPerPixel)
File "/Users/xinz/miniconda3/envs/enpt/lib/python3.10/site-packages/large_image_source_gdal/__init__.py", line 245, in _initWithProjection
self.levels = int(max(int(math.ceil(
ValueError: cannot convert float NaN to integer
---------------------------------------------------------------------------
HTTPError Traceback (most recent call last)
Cell In[7], line 4
1 import leafmap.foliumap as leafmap
3 m = leafmap.Map()
----> 4 m.add_raster('/Users/xinz/Downloads/test.tif', nodata=0)
File ~/miniconda3/envs/enpt/lib/python3.10/site-packages/leafmap/foliumap.py:405, in Map.add_raster(self, source, band, palette, vmin, vmax, nodata, attribution, layer_name, **kwargs)
390 tile_layer, tile_client = get_local_tile_layer(
391 source,
392 band=band,
(...)
401 **kwargs,
402 )
403 self.add_layer(tile_layer)
--> 405 bounds = tile_client.bounds() # [ymin, ymax, xmin, xmax]
406 bounds = (
407 bounds[2],
408 bounds[0],
409 bounds[3],
410 bounds[1],
411 ) # [minx, miny, maxx, maxy]
412 self.zoom_to_bounds(bounds)
File ~/miniconda3/envs/enpt/lib/python3.10/site-packages/localtileserver/client.py:537, in RestfulTileClient.bounds(self, projection, return_polygon, return_wkt)
531 def bounds(
532 self, projection: str = "EPSG:4326", return_polygon: bool = False, return_wkt: bool = False
533 ):
534 r = requests.get(
535     self.create_url(f"/api/bounds?units={projection}&projection={self.default_projection}")
536 )
--> 537 r.raise_for_status()
538 bounds = r.json()
539 extent = (bounds["ymin"], bounds["ymax"], bounds["xmin"], bounds["xmax"])
File ~/miniconda3/envs/enpt/lib/python3.10/site-packages/requests/models.py:1021, in Response.raise_for_status(self)
1016 http_error_msg = (
1017 f"{self.status_code} Server Error: {reason} for url: {self.url}"
1018 )
1020 if http_error_msg:
-> 1021 raise HTTPError(http_error_msg, response=self)
HTTPError: 500 Server Error: INTERNAL SERVER ERROR for url: http://127.0.0.1:64727/api/bounds?units=EPSG:4326&projection=EPSG:3857&filename=%2FUsers%2Fxinz%2FDownloads%2Ftest.tif
```
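For reference, the final `ValueError` can be reproduced with plain `math`: when the computed raster extent is NaN (e.g. missing or invalid georeferencing), the tile-level computation fails exactly as above. This is a simplified stand-in for the failing expression in `large_image_source_gdal`, not its actual code:

```python
import math

def compute_levels(extent_width, tile_size=256):
    # Simplified sketch of the failing line in _initWithProjection:
    # math.ceil() of NaN raises "cannot convert float NaN to integer".
    return int(max(math.ceil(math.log2(extent_width / tile_size)), 1))

print(compute_levels(65536))  # → 8
try:
    compute_levels(float("nan"))
except ValueError as exc:
    print("reproduced:", exc)  # → reproduced: cannot convert float NaN to integer
```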
</details> | closed | 2023-09-13T10:29:42Z | 2023-09-18T01:53:58Z | https://github.com/opengeos/leafmap/issues/547 | [
"bug"
] | zxdawn | 1 |
xinntao/Real-ESRGAN | pytorch | 658 | Mixing output images when doing inference with multiple images simultaneously if tiling is not 0. | Input Image 1

Input Image 2

................................................................................................................................................................................................
Output 1

Output 2

Furthermore, if the model is converted to ONNX or TensorRT, it does the same thing even without tiling.
| open | 2023-07-07T11:45:30Z | 2023-07-07T11:47:16Z | https://github.com/xinntao/Real-ESRGAN/issues/658 | [] | ssskhan | 0 |
alteryx/featuretools | data-science | 2,715 | Update pip and tqdm per dependabot alerts | closed | 2024-05-07T17:22:40Z | 2024-05-07T18:27:49Z | https://github.com/alteryx/featuretools/issues/2715 | [] | thehomebrewnerd | 0 | |
scikit-learn/scikit-learn | python | 30,889 | RFC Make `n_outputs_` consistent across regressors | The scikit-learn API defines `classes_` as part of the API for classifier.
A similar handy thing for regressor models, IMO, would be to know if it was fit on a single or multioutput target. Currently, some regressors expose the `n_outputs_` parameter, but other not. One can infer from the `intercept_` or `coef_` the number of target for liner model.
So I'm wondering if we could extend the API by extending `n_outputs_` for all regressors the same way we have `classes_` for classifiers?
Note that the tags do not help here because they inform whether or not an estimator is supporting multioutput.
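To make the proposal concrete, here is a hypothetical sketch (the class and its `fit` logic are illustrative assumptions, not actual scikit-learn code) of a regressor recording `n_outputs_` at fit time, mirroring how classifiers record `classes_`:

```python
class ToyRegressor:
    """Minimal estimator illustrating the proposed `n_outputs_` semantics."""

    def fit(self, X, y):
        # y is 1-D (single target) or 2-D (multioutput), mirroring sklearn's
        # convention; n_outputs_ = 1 for 1-D y, else the number of columns.
        first = y[0]
        self.n_outputs_ = len(first) if isinstance(first, (list, tuple)) else 1
        return self

single = ToyRegressor().fit([[0.0], [1.0]], [0.5, 1.5])
multi = ToyRegressor().fit([[0.0], [1.0]], [[0.5, 2.0], [1.5, 3.0]])
print(single.n_outputs_, multi.n_outputs_)  # → 1 2
```

With this in place, downstream code could branch on `estimator.n_outputs_` without backend-specific attribute sniffing.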
| open | 2025-02-24T13:09:08Z | 2025-02-26T08:22:46Z | https://github.com/scikit-learn/scikit-learn/issues/30889 | [
"API",
"RFC"
] | glemaitre | 5 |
jupyter-incubator/sparkmagic | jupyter | 582 | How to capture various events like livy session created, etc | Is there any way to capture sparkmagic or livy session specific events from the notebook itself? | open | 2019-10-18T11:26:21Z | 2019-11-03T16:12:37Z | https://github.com/jupyter-incubator/sparkmagic/issues/582 | [
"awaiting-submitter-response"
] | devender-yadav | 2 |
paperless-ngx/paperless-ngx | django | 7,823 | [BUG] Celery beat crashing due to wrong db file name | ### Description
Celery beat for Paperless is not working on 2.12.1. It crashes immediately when I try to start it.
It worked fine on 2.10.2 (before I made the upgrade).
This is a bare-metal installation on a VM.
The data folder looks like this, so I guess the filename is the issue:
root@paperless1:/opt/paperless/paperless-v2.12.1/paperless-ngx# ls -la /opt/paperless/data/
total 28
drwxr-x--- 4 paperless paperless 4096 Oct 1 13:42 .
drwxr-x--- 8 paperless paperless 4096 Sep 29 11:06 ..
-rw-r--r-- 1 paperless paperless 12288 Oct 1 13:43 **celerybeat-schedule.db.db**
drwxr-xr-x 2 paperless paperless 4096 Oct 1 13:33 index
drwxr-xr-x 2 paperless paperless 4096 Sep 30 01:31 log
For me this is a clear duplicate of https://github.com/paperless-ngx/paperless-ngx/issues/3739
Reboots didn't help. Only after renaming the file did everything start working again. I would like to leave this information here so other people having the same issue will be able to fix it. I'm not sure why the file got the strange name in the first place.
Feel free to close the case, since I managed to fix it, but you may want to consider checking the source code since I am not the only one experiencing this.
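For what it's worth, the doubled extension matches how Python's `shelve`/`dbm` layer names files: with the `dbm.ndbm` backend, opening a shelf at `celerybeat-schedule.db` (the path celery beat uses, per the config printed below) creates a file with an extra `.db` suffix appended by ndbm itself. A minimal stdlib-only sketch — which backend is active depends on how Python was built:

```python
import os
import shelve
import tempfile

# Open a shelf the way celery beat opens its schedule file. Depending on
# the dbm backend compiled into Python (ndbm, gdbm, or dumb), the files
# actually created on disk may gain an extra suffix such as ".db".
with tempfile.TemporaryDirectory() as tmp:
    base = os.path.join(tmp, "celerybeat-schedule.db")
    shelf = shelve.open(base)
    shelf["entries"] = {}
    shelf.close()
    created = sorted(os.listdir(tmp))

# With dbm.ndbm this typically prints ['celerybeat-schedule.db.db'].
print(created)
```

If the file is later reopened under a different name or backend, the `entries` lookup can fail, which would explain why renaming the file helped.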
Here's the log:
```
Oct 01 13:31:58 paperless1 systemd[1]: Started paperless-scheduler.service - Paperless Celery Beat.
Oct 01 13:31:59 paperless1 celery[2860]: [2024-10-01 13:31:59,995] [INFO] [celery.beat] beat: Starting...
Oct 01 13:32:00 paperless1 celery[2860]: [2024-10-01 13:32:00,003] [CRITICAL] [celery.beat] beat raised exception <class '_dbm.error'>: error('cannot add item to database')
Oct 01 13:32:00 paperless1 celery[2860]: Traceback (most recent call last):
Oct 01 13:32:00 paperless1 celery[2860]: File "/usr/lib/python3.11/shelve.py", line 111, in __getitem__
Oct 01 13:32:00 paperless1 celery[2860]: value = self.cache[key]
Oct 01 13:32:00 paperless1 celery[2860]: ~~~~~~~~~~^^^^^
Oct 01 13:32:00 paperless1 celery[2860]: KeyError: 'entries'
Oct 01 13:32:00 paperless1 celery[2860]: During handling of the above exception, another exception occurred:
Oct 01 13:32:00 paperless1 celery[2860]: Traceback (most recent call last):
Oct 01 13:32:00 paperless1 celery[2860]: File "/opt/paperless/paperless-v2.12.1/venv/lib/python3.11/site-packages/celery/beat.py", line 570, in _create_schedule
Oct 01 13:32:00 paperless1 celery[2860]: self._store['entries']
Oct 01 13:32:00 paperless1 celery[2860]: ~~~~~~~~~~~^^^^^^^^^^^
Oct 01 13:32:00 paperless1 celery[2860]: File "/usr/lib/python3.11/shelve.py", line 113, in __getitem__
Oct 01 13:32:00 paperless1 celery[2860]: f = BytesIO(self.dict[key.encode(self.keyencoding)])
Oct 01 13:32:00 paperless1 celery[2860]: ~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Oct 01 13:32:00 paperless1 celery[2860]: KeyError: b'entries'
Oct 01 13:32:00 paperless1 celery[2860]: During handling of the above exception, another exception occurred:
Oct 01 13:32:00 paperless1 celery[2860]: Traceback (most recent call last):
Oct 01 13:32:00 paperless1 celery[2860]: File "/opt/paperless/paperless-v2.12.1/venv/lib/python3.11/site-packages/celery/apps/beat.py", line 113, in start_scheduler
Oct 01 13:32:00 paperless1 celery[2860]: service.start()
Oct 01 13:32:00 paperless1 celery[2860]: File "/opt/paperless/paperless-v2.12.1/venv/lib/python3.11/site-packages/celery/beat.py", line 634, in start
Oct 01 13:32:00 paperless1 celery[2860]: humanize_seconds(self.scheduler.max_interval))
Oct 01 13:32:00 paperless1 celery[2860]: ^^^^^^^^^^^^^^
Oct 01 13:32:00 paperless1 celery[2860]: File "/opt/paperless/paperless-v2.12.1/venv/lib/python3.11/site-packages/kombu/utils/objects.py", line 40, in __get__
Oct 01 13:32:00 paperless1 celery[2860]: return super().__get__(instance, owner)
Oct 01 13:32:00 paperless1 celery[2860]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Oct 01 13:32:00 paperless1 celery[2860]: File "/usr/lib/python3.11/functools.py", line 1001, in __get__
Oct 01 13:32:00 paperless1 celery[2860]: val = self.func(instance)
Oct 01 13:32:00 paperless1 celery[2860]: ^^^^^^^^^^^^^^^^^^^
Oct 01 13:32:00 paperless1 celery[2860]: File "/opt/paperless/paperless-v2.12.1/venv/lib/python3.11/site-packages/celery/beat.py", line 677, in scheduler
Oct 01 13:32:00 paperless1 celery[2860]: return self.get_scheduler()
Oct 01 13:32:00 paperless1 celery[2860]: ^^^^^^^^^^^^^^^^^^^^
Oct 01 13:32:00 paperless1 celery[2860]: File "/opt/paperless/paperless-v2.12.1/venv/lib/python3.11/site-packages/celery/beat.py", line 668, in get_scheduler
Oct 01 13:32:00 paperless1 celery[2860]: return symbol_by_name(self.scheduler_cls, aliases=aliases)(
Oct 01 13:32:00 paperless1 celery[2860]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Oct 01 13:32:00 paperless1 celery[2860]: File "/opt/paperless/paperless-v2.12.1/venv/lib/python3.11/site-packages/celery/beat.py", line 513, in __init__
Oct 01 13:32:00 paperless1 celery[2860]: super().__init__(*args, **kwargs)
Oct 01 13:32:00 paperless1 celery[2860]: File "/opt/paperless/paperless-v2.12.1/venv/lib/python3.11/site-packages/celery/beat.py", line 264, in __init__
Oct 01 13:32:00 paperless1 celery[2860]: self.setup_schedule()
Oct 01 13:32:00 paperless1 celery[2860]: File "/opt/paperless/paperless-v2.12.1/venv/lib/python3.11/site-packages/celery/beat.py", line 541, in setup_schedule
Oct 01 13:32:00 paperless1 celery[2860]: self._create_schedule()
Oct 01 13:32:00 paperless1 celery[2860]: File "/opt/paperless/paperless-v2.12.1/venv/lib/python3.11/site-packages/celery/beat.py", line 574, in _create_schedule
Oct 01 13:32:00 paperless1 celery[2860]: self._store['entries'] = {}
Oct 01 13:32:00 paperless1 celery[2860]: ~~~~~~~~~~~^^^^^^^^^^^
Oct 01 13:32:00 paperless1 celery[2860]: File "/usr/lib/python3.11/shelve.py", line 125, in __setitem__
Oct 01 13:32:00 paperless1 celery[2860]: self.dict[key.encode(self.keyencoding)] = f.getvalue()
Oct 01 13:32:00 paperless1 celery[2860]: ~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Oct 01 13:32:00 paperless1 celery[2860]: _dbm.error: cannot add item to database
Oct 01 13:32:00 paperless1 celery[2860]: [2024-10-01 13:32:00,005] [WARNING] [celery.redirected] Traceback (most recent call last):
====
cut excess logs (the celery.redirected lines replay the same traceback character by character)
====
Oct 01 13:32:00 paperless1 celery[2860]: celery beat v5.4.0 (opalescent) is starting.
Oct 01 13:32:00 paperless1 celery[2860]: __ - ... __ - _
Oct 01 13:32:00 paperless1 celery[2860]: LocalTime -> 2024-10-01 13:31:59
Oct 01 13:32:00 paperless1 celery[2860]: Configuration ->
Oct 01 13:32:00 paperless1 celery[2860]: . broker -> redis://localhost:6379//
Oct 01 13:32:00 paperless1 celery[2860]: . loader -> celery.loaders.app.AppLoader
Oct 01 13:32:00 paperless1 celery[2860]: . scheduler -> celery.beat.PersistentScheduler
Oct 01 13:32:00 paperless1 celery[2860]: . db -> /opt/paperless/data/celerybeat-schedule.db
Oct 01 13:32:00 paperless1 celery[2860]: . logfile -> [stderr]@%INFO
Oct 01 13:32:00 paperless1 celery[2860]: . maxinterval -> 5.00 minutes (300s)
Oct 01 13:32:00 paperless1 systemd[1]: paperless-scheduler.service: Main process exited, code=exited, status=1/FAILURE
Oct 01 13:32:00 paperless1 systemd[1]: paperless-scheduler.service: Failed with result 'exit-code'.
Oct 01 13:32:00 paperless1 systemd[1]: paperless-scheduler.service: Consumed 2.448s CPU time.
```
### Steps to reproduce
1. Upgrade paperless from 2.10.2 to 2.12.1
2. Paperless starts showing bad behavior - an error message on the main page referring to the docs, or just a plain 504 error from the reverse proxy.
### Webserver logs
```bash
Oct 01 13:45:30 paperless1 gunicorn[639]: /opt/paperless/paperless-v2.12.1/venv/lib/python3.11/site-packages/django/http/response.py:517: Warning: StreamingHttpResponse must consume synchronous iterators in order to serve them asynchrono>
Oct 01 13:45:30 paperless1 gunicorn[639]: warnings.warn(
```
### Browser logs
_No response_
### Paperless-ngx version
2.12.1
### Host OS
Debian 12
### Installation method
Docker - official image
### System status
```json
I don't see any System Status tab
```
### Browser
Vivaldi
### Configuration changes
Yes, but nothing outstanding. I have set my own URL, language, OIDC integration, and OCR arguments.
### Please confirm the following
- [X] I believe this issue is a bug that affects all users of Paperless-ngx, not something specific to my installation.
- [X] This issue is not about the OCR or archive creation of a specific file(s). Otherwise, please see above regarding OCR tools.
- [X] I have already searched for relevant existing issues and discussions before opening this report.
- [X] I have updated the title field above with a concise description. | closed | 2024-10-01T12:18:40Z | 2024-11-12T03:05:36Z | https://github.com/paperless-ngx/paperless-ngx/issues/7823 | [
"duplicate"
] | SpiderD555 | 4 |
onnx/onnxmltools | scikit-learn | 203 | Link to appveyor CI is obsolete in readme.md | Last build on appveyor is 4 months old. It fails right now. | closed | 2018-12-19T12:26:13Z | 2019-01-16T18:57:53Z | https://github.com/onnx/onnxmltools/issues/203 | [] | xadupre | 1 |
jofpin/trape | flask | 311 | Can't get a connection to show on Control Panel of ngrok but the connection is established in terminal. | **So I figured out how to get all the fixes in order and was able to run the tool. I was able to make a connection with my windows machine and it was showing on trape local server. However I deleted the connection once it was expired and I had closed that tab in my browser. Now whenever I restart trape and generate a new link and open in that browser the connection is established in the terminal (see the screenshot)**

- **But in order to run any commands I also need that connection to show up in the ngrok control panel to get the user interface; however, it won't show there (see the screenshot):**

- **I can't even find a way to reset all this. Any time I start trape again, it takes me back to the same page, where the click count always increases but no connection is made, so there is no interface. I have also tried resetting the ngrok token but still end up on the same page with an increased click count and no connection (see screenshot):**

- **But when I open the same link in incognito mode, the connection is established, it shows up in the ngrok control panel, and I can get to the interface through the details button. The same works when running it in Firefox here in Kali: every time, a new connection is made and there is an option to open the interface through the details button.**


**### What is the fix for this problem? Is there a way to reset the ngrok server so that I'm starting from the beginning with no history of previous connections? Or is there a better solution? Please tell me; I have been troubleshooting all day.**
Textualize/rich | python | 3,034 | [BUG] svg Tables seem bugged when trying to generate an png from them | - [x] I've checked [docs](https://rich.readthedocs.io/en/latest/introduction.html) and [closed issues](https://github.com/Textualize/rich/issues?q=is%3Aissue+is%3Aclosed) for possible solutions.
- [x] I can't find my issue in the [FAQ](https://github.com/Textualize/rich/blob/master/FAQ.md).
**Describe the bug**
When I export a table to SVG and afterwards try to generate a PNG from it (with cairosvg and some online converters),
I get broken pictures. Possibly related or not: JetBrains IDEs can't display the SVGs either, which makes me think something might be broken there too.
I tried to fix the SVG manually, as shown in the attachments; this produced the expected PNGs but broke the rendering in the Chrome browser.
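Roughly what I'm doing (a minimal sketch; the cairosvg step stands in for whichever SVG-to-PNG converter is used, and is guarded so the sketch runs without it installed):

```python
from rich.console import Console
from rich.table import Table

# Render a small table and capture it as SVG with rich's built-in recorder.
table = Table(title="demo")
table.add_column("name")
table.add_column("value")
table.add_row("foo", "1")

console = Console(record=True, width=40)
console.print(table)
svg = console.export_svg(title="demo")

# Converting the SVG to PNG is the step that produces broken images for me.
try:
    import cairosvg
    png_bytes = cairosvg.svg2png(bytestring=svg.encode("utf-8"))
except ImportError:
    png_bytes = None
```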
**Platform**
<details>
<summary>Click to expand</summary>
What platform (Win/Linux/Mac) are you running on? What terminal software are you using?
multiple, server/wsl
╭───────────────────────── <class 'rich.console.Console'> ─────────────────────────╮
│ A high level console interface. │
│ │
│ ╭──────────────────────────────────────────────────────────────────────────────╮ │
│ │ <console width=228 ColorSystem.EIGHT_BIT> │ │
│ ╰──────────────────────────────────────────────────────────────────────────────╯ │
│ │
│ color_system = '256' │
│ encoding = 'utf-8' │
│ file = <_io.TextIOWrapper name='<stdout>' mode='w' encoding='utf-8'> │
│ height = 41 │
│ is_alt_screen = False │
│ is_dumb_terminal = False │
│ is_interactive = True │
│ is_jupyter = False │
│ is_terminal = True │
│ legacy_windows = False │
│ no_color = False │
│ options = ConsoleOptions( │
│ size=ConsoleDimensions(width=228, height=41), │
│ legacy_windows=False, │
│ min_width=1, │
│ max_width=228, │
│ is_terminal=True, │
│ encoding='utf-8', │
│ max_height=41, │
│ justify=None, │
│ overflow=None, │
│ no_wrap=False, │
│ highlight=None, │
│ markup=None, │
│ height=None │
│ ) │
│ quiet = False │
│ record = False │
│ safe_box = True │
│ size = ConsoleDimensions(width=228, height=41) │
│ soft_wrap = False │
│ stderr = False │
│ style = None │
│ tab_size = 8 │
│ width = 228 │
╰──────────────────────────────────────────────────────────────────────────────────╯
╭─── <class 'rich._windows.WindowsConsoleFeatures'> ────╮
│ Windows features available. │
│ │
│ ╭───────────────────────────────────────────────────╮ │
│ │ WindowsConsoleFeatures(vt=False, truecolor=False) │ │
│ ╰───────────────────────────────────────────────────╯ │
│ │
│ truecolor = False │
│ vt = False │
╰───────────────────────────────────────────────────────╯
╭────── Environment Variables ───────╮
│ { │
│ 'TERM': 'xterm-256color', │
│ 'COLORTERM': None, │
│ 'CLICOLOR': None, │
│ 'NO_COLOR': None, │
│ 'TERM_PROGRAM': None, │
│ 'COLUMNS': None, │
│ 'LINES': None, │
│ 'JUPYTER_COLUMNS': None, │
│ 'JUPYTER_LINES': None, │
│ 'JPY_PARENT_PID': None, │
│ 'VSCODE_VERBOSE_LOGGING': None │
│ } │
╰────────────────────────────────────╯
platform="Linux"
</details>
Actual:

"Expected" (this ends up in a non-broken PNG)


| closed | 2023-07-13T14:04:16Z | 2023-08-13T10:23:34Z | https://github.com/Textualize/rich/issues/3034 | [
"Needs triage"
] | MrMatch246 | 7 |
deepfakes/faceswap | machine-learning | 1,380 | inpaint error | Automatic1111 (1.8.0)
tab img2img -> tab inpaint
with or without mask
error:
2024-03-23 14:40:19,818 - FaceSwapLab - INFO - Try to use model : D:\stable-diffusion\webui\models\faceswaplab\inswapper_128.onnx████████████████████████████████████████████████████████████████████████████████| 31/31 [00:10<00:00, 2.96it/s]
d:\stable-diffusion\system\python\lib\site-packages\insightface\utils\transform.py:68: FutureWarning: `rcond` parameter will change to the default of machine precision times ``max(M, N)`` where M and N are the input matrix dimensions.
To use the future default and silence this warning we advise to pass `rcond=None`, to keep using the old, explicitly pass `rcond=-1`.
P = np.linalg.lstsq(X_homo, Y)[0].T # Affine matrix. 3 x 4
2024-03-23 14:40:21,199 - FaceSwapLab - INFO - Finished processing image, return 1 images
2024-03-23 14:40:21,199 - FaceSwapLab - INFO - 1 images swapped
2024-03-23 14:40:21,207 - FaceSwapLab - INFO - Add swp image to processed
2024-03-23 14:40:21,207 - FaceSwapLab - ERROR - Failed to swap face in postprocess method : 'tuple' object has no attribute 'height'
Traceback (most recent call last):
File "D:\stable-diffusion\webui\extensions\sd-webui-faceswaplab\scripts\faceswaplab.py", line 216, in postprocess
save_image(
File "D:\stable-diffusion\webui\modules\images.py", line 611, in save_image
if (image.height > 65535 or image.width > 65535) and extension.lower() in ("jpg", "jpeg") or (image.height > 16383 or image.width > 16383) and extension.lower() == "webp":
AttributeError: 'tuple' object has no attribute 'height'
Total progress: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 31/31 [00:12<00:00, 2.46it/s]
Total progress: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 31/31 [00:12<00:00, 2.96it/s]
In img2img it worked fine, but I want to change only the face and not recompute the whole image. | closed | 2024-03-23T13:43:44Z | 2024-03-23T13:44:25Z | https://github.com/deepfakes/faceswap/issues/1380 | [] | kalle07 | 1 |
graphql-python/graphene-django | django | 548 | How to customise mutation validation error messages | Hi,
I've been trying to customise mutation validation errors but so far haven't succeeded. Given the following query, I would like to be able to return `errors` in a different shape:
```gql
mutation CreateProject($projectName: String!, $description: String!) {
createProject(projectName: $projectName, description: $description) {
id
projectName
description
}
}
```
Standard errors look like
```json
{
"errors": [
{
"message": "Variable \"$description\" of required type \"String!\" was not provided.",
"locations": [
{
"line": 1,
"column": 47
}
]
}
]
}
```
What I would like to have is
```json
{
"errors": [
{
"$description": "Error message...",
"locations": [...]
}
}
```
So is it possible to adjust error response format?
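For what it's worth, the usual workaround (framework-agnostic; this sketch doesn't use graphene itself, and the field names simply mirror the query above) is to treat validation errors as data: collect per-field messages and return them in the payload instead of letting the resolver raise.

```python
import json

def validate_project(project_name, description):
    """Collect one message per offending field instead of raising."""
    errors = {}
    if not project_name:
        errors["$projectName"] = "Project name is required."
    if not description:
        errors["$description"] = "Description is required."
    return errors

errors = validate_project("demo", "")
payload = (
    {"errors": [errors]}
    if errors
    else {"data": {"createProject": {"projectName": "demo"}}}
)
print(json.dumps(payload))
```

graphene-django doesn't reshape the top-level `errors` array for you; returning a dedicated errors structure from the mutation payload, as sketched, is the common workaround.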
Thanks. | closed | 2018-10-30T20:48:50Z | 2023-05-05T11:23:36Z | https://github.com/graphql-python/graphene-django/issues/548 | [
"question",
"Docs enhancement"
] | sultaniman | 13 |
huggingface/text-generation-inference | nlp | 2,703 | CUDA Error: No kernel image is available for execution on the device | ### System Info
Hardware config:
GPU : Quadro P5000 16GB VRAM
CUDA Version: 12.2
NVIDIA-SMI 535.183.01
RAM 32GB
After executing the docker command:
docker run --gpus all \
--shm-size 2g \
-p 8080:80 \
-v $PWD:/data \
-e HF_TOKEN=***keyt*** \
ghcr.io/huggingface/text-generation-inference:2.3.1 \
--model-id meta-llama/Llama-3.2-1B \
--trust-remote-code
Getting Error : CUDA Error: no kernel image is available for execution on the device /usr/src/flash-attention/csrc/layer_norm/ln_fwd_kernels.cuh 236 rank=0
--------------------log-------------------------------------------
2024-10-28T09:20:28.210454Z INFO hf_hub: Token file not found "/data/token"
2024-10-28T09:20:28.210628Z INFO text_generation_launcher: Model supports up to 131072 but tgi will now set its default to 4096 instead. This is to save VRAM by refusing large prompts in order to allow more users on the same hardware. You can increase that size using `--max-batch-prefill-tokens=131122 --max-total-tokens=131072 --max-input-tokens=131071`.
2024-10-28T09:20:29.540317Z INFO text_generation_launcher: Using attention flashinfer - Prefix caching true
2024-10-28T09:20:29.540350Z INFO text_generation_launcher: Default `max_input_tokens` to 4095
2024-10-28T09:20:29.540359Z INFO text_generation_launcher: Default `max_total_tokens` to 4096
2024-10-28T09:20:29.540366Z INFO text_generation_launcher: Default `max_batch_prefill_tokens` to 4145
2024-10-28T09:20:29.540374Z INFO text_generation_launcher: Using default cuda graphs [1, 2, 4, 8, 16, 32]
2024-10-28T09:20:29.540385Z WARN text_generation_launcher: `trust_remote_code` is set. Trusting that model `meta-llama/Llama-3.2-1B` do not contain malicious code.
2024-10-28T09:20:29.540598Z INFO download: text_generation_launcher: Starting check and download process for meta-llama/Llama-3.2-1B
2024-10-28T09:20:33.468493Z INFO text_generation_launcher: Files are already present on the host. Skipping download.
2024-10-28T09:20:34.166805Z INFO download: text_generation_launcher: Successfully downloaded weights for meta-llama/Llama-3.2-1B
2024-10-28T09:20:34.167174Z INFO shard-manager: text_generation_launcher: Starting shard rank=0
2024-10-28T09:20:37.275145Z INFO text_generation_launcher: Using prefix caching = True
2024-10-28T09:20:37.275202Z INFO text_generation_launcher: Using Attention = flashinfer
2024-10-28T09:20:41.969535Z INFO text_generation_launcher: Server started at unix:///tmp/text-generation-server-0
2024-10-28T09:20:41.994397Z INFO shard-manager: text_generation_launcher: Shard ready in 7.809598311s rank=0
2024-10-28T09:20:42.074720Z INFO text_generation_launcher: Starting Webserver
2024-10-28T09:20:42.156666Z INFO text_generation_router_v3: backends/v3/src/lib.rs:90: Warming up model
2024-10-28T09:20:42.661551Z ERROR warmup{max_input_length=4095 max_prefill_tokens=4145 max_total_tokens=4096 max_batch_size=None}:warmup: text_generation_router_v3::client: backends/v3/src/client/mod.rs:54: Server error: transport error
Error: Backend(Warmup(Generation("transport error")))
2024-10-28T09:20:42.695663Z ERROR shard-manager: text_generation_launcher: Shard complete standard error output:
2024-10-28 09:20:35.725 | INFO | text_generation_server.utils.import_utils:<module>:75 - Detected system cuda
/opt/conda/lib/python3.11/site-packages/mamba_ssm/ops/selective_scan_interface.py:158: FutureWarning: `torch.cuda.amp.custom_fwd(args...)` is deprecated. Please use `torch.amp.custom_fwd(args..., device_type='cuda')` instead.
@custom_fwd
/opt/conda/lib/python3.11/site-packages/mamba_ssm/ops/selective_scan_interface.py:231: FutureWarning: `torch.cuda.amp.custom_bwd(args...)` is deprecated. Please use `torch.amp.custom_bwd(args..., device_type='cuda')` instead.
@custom_bwd
/opt/conda/lib/python3.11/site-packages/mamba_ssm/ops/triton/layernorm.py:507: FutureWarning: `torch.cuda.amp.custom_fwd(args...)` is deprecated. Please use `torch.amp.custom_fwd(args..., device_type='cuda')` instead.
@custom_fwd
/opt/conda/lib/python3.11/site-packages/mamba_ssm/ops/triton/layernorm.py:566: FutureWarning: `torch.cuda.amp.custom_bwd(args...)` is deprecated. Please use `torch.amp.custom_bwd(args..., device_type='cuda')` instead.
@custom_bwd
The argument `trust_remote_code` is to be used with Auto classes. It has no effect here and is ignored.
/opt/conda/lib/python3.11/site-packages/torch/distributed/c10d_logger.py:79: FutureWarning: You are using a Backend <class 'text_generation_server.utils.dist.FakeGroup'> as a ProcessGroup. This usage is deprecated since PyTorch 2.0. Please use a public API of PyTorch Distributed instead.
return func(*args, **kwargs)
CUDA Error: no kernel image is available for execution on the device /usr/src/flash-attention/csrc/layer_norm/ln_fwd_kernels.cuh 236 rank=0
2024-10-28T09:20:42.741725Z ERROR text_generation_launcher: Shard 0 crashed
2024-10-28T09:20:42.741753Z INFO text_generation_launcher: Terminating webserver
2024-10-28T09:20:42.741783Z INFO text_generation_launcher: Waiting for webserver to gracefully shutdown
Error: ShardFailed
2024-10-28T09:20:42.741824Z INFO text_generation_launcher: webserver terminated
2024-10-28T09:20:42.741839Z INFO text_generation_launcher: Shutting down shards
### Information
- [X] Docker
- [ ] The CLI directly
### Tasks
- [X] An officially supported command
- [ ] My own modifications
### Reproduction
Just run the docker command below:
docker run --gpus all \
--shm-size 2g \
-p 8080:80 \
-v $PWD:/data \
-e HF_TOKEN=***key*** \
ghcr.io/huggingface/text-generation-inference:2.3.1 \
--model-id meta-llama/Llama-3.2-1B \
--trust-remote-code
having hardware
Hardware config:
GPU : Quadro P5000 16GB VRAM
CUDA Version: 12.2
NVIDIA-SMI 535.183.01
RAM 32GB
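Likely root cause, for context: the Quadro P5000 is a Pascal card (compute capability 6.1, i.e. sm_61), while the flash-attention kernels bundled in recent TGI images are compiled only for newer GPU architectures, hence "no kernel image is available for execution on the device". A tiny sketch of the check (the 7.5 floor is an assumption; the exact minimum depends on the image's build flags):

```python
MIN_CC = (7, 5)  # assumed minimum compute capability for the prebuilt kernels

def prebuilt_kernels_supported(compute_capability):
    """Return True if the GPU's (major, minor) capability meets the assumed floor."""
    return compute_capability >= MIN_CC

print(prebuilt_kernels_supported((6, 1)))  # Quadro P5000 (Pascal) -> False
```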
### Expected behavior
It should start a localhost server with the LLM model endpoint. | open | 2024-10-28T09:21:12Z | 2024-10-28T09:21:12Z | https://github.com/huggingface/text-generation-inference/issues/2703 | [] | shubhamgajbhiye1994 | 0 |
ipython/ipython | jupyter | 14,732 | ipythonrc doesn't work on macOS | Hi.
I've added to the root of my project the `ipythonrc` file with new content:
```
exec_lines = ["%load_ext autoreload", "%autoreload 2"]
```
When I start `ipython` it doesn't load the instructions from the config file.
Maybe I need to place this file somewhere else, or pass an option to use it explicitly. Or my format is just incorrect. Please help me debug and fix this rc-file use case. Thanks 🙏🏻
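For reference, the mechanism IPython documents is a Python config file in a profile directory (create one with `ipython profile create`), not an `ipythonrc` file in the project root. For example, in `~/.ipython/profile_default/ipython_config.py`:

```python
# ~/.ipython/profile_default/ipython_config.py
c = get_config()  # injected by IPython/traitlets when the file is loaded
c.InteractiveShellApp.exec_lines = ["%load_ext autoreload", "%autoreload 2"]
```

A per-project config can be selected with `ipython --profile=<name>` or by pointing `IPYTHONDIR` at a directory inside the repository.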
### Bonus request
I know the modern way is a Python config file placed in the IPython directory. But I think it would be good to also allow a normal dot-prefixed `.ipythonrc` file, in which a default profile and a couple of optional profiles could be defined. That way, when someone clones my project, or I clone it on another machine, I don't need to set up IPython with the new options manually, since all options are defined in the repository root directory. I would like to see this change if possible.
mwaskom/seaborn | matplotlib | 2,891 | Plot.configure figsize doesn't persist if it's not the final method called | Because of this hack, which is not handled by `Plot._clone`:
https://github.com/mwaskom/seaborn/blob/v0.12.0b1/seaborn/_core/plot.py#L573 | closed | 2022-07-06T21:12:30Z | 2022-07-24T00:24:59Z | https://github.com/mwaskom/seaborn/issues/2891 | [
"bug",
"objects-plot"
] | mwaskom | 0 |
ymcui/Chinese-BERT-wwm | nlp | 4 | A small question about a detail of whole-word masking | In your work, when masking words, suppose the word is 哈利波特 (Harry Potter). In your method, is it guaranteed that whenever this word is masked it always takes the form [mask][mask][mask][mask], or can a form like [mask]利[mask][mask] occasionally appear? I'd like to know how you set this up (leaving aside the 80%/10%/10% random replacement probabilities). If it is the former, would this setting, which completely avoids partial co-occurrence, affect the results? | closed | 2019-06-20T12:02:10Z | 2021-05-19T04:25:04Z | https://github.com/ymcui/Chinese-BERT-wwm/issues/4 | [] | fudanchenjiahao | 9 |
browser-use/browser-use | python | 676 | Cloud API - task finishes immediately after starting | ### Bug Description
I've seen several instances of a new task being created and finishing immediately, before I can see anything in the live_url.
Example ID: 9a7ab454-f968-4dc0-8112-b33b03cfa213
### Reproduction Steps
1. Call the run task API: https://docs.browser-use.com/cloud/api-v10/run-task
2. Get task: https://docs.browser-use.com/cloud/api-v10/get-task
3. Get task immediately returns a finished state
### Code Sample
```python
See repro steps above
```
### Version
API v1.0
### LLM Model
Other (specify in description)
### Operating System
N/A
### Relevant Log Output
```shell
``` | closed | 2025-02-12T02:43:54Z | 2025-02-24T19:40:22Z | https://github.com/browser-use/browser-use/issues/676 | [
"bug"
] | edwardysun | 1 |
gradio-app/gradio | machine-learning | 9,963 | Transparency Settings Not Applying Consistently in Dark Mode | ### Describe the bug
Transparency settings for background elements in dark mode are not applied consistently across all component blocks in Gradio. Specific settings, such as `block_background_fill_dark` and `checkbox_background_color_dark`, fail to apply transparency in dark mode. **This issue does not occur in light mode, where transparency settings apply uniformly across all blocks.**
## Steps to Reproduce
1. Define a custom theme with transparency settings applied to dark mode, including `checkbox_background_color_dark`, as shown in the example code below.
2. Apply the theme to a Gradio interface with various components (textbox, checkbox, image, etc.).
3. Launch the interface and enforce dark mode by navigating to `http://127.0.0.1:7860/?__theme=dark`.
4. Observe the transparency inconsistencies across different blocks and the lack of transparency in the checkbox.
### Have you searched existing issues? 🔎
- [X] I have searched and found no existing issues
### Reproduction
```python
import gradio as gr
from gradio.themes import colors, sizes, Font, GoogleFont
class ForestOceanTheme(gr.themes.Ocean):
def __init__(
self,
*,
primary_hue: colors.Color | str = colors.Color(
c50="#E6F7E6", c100="#CFF0CF", c200="#A8E0A8", c300="#82D182",
c400="#5BC25B", c500="#34B134", c600="#299229", c700="#1E731E",
c800="#145514", c900="#0A370A", c950="#042704"
),
secondary_hue: colors.Color | str = colors.Color(
c50="#E6F7E6", c100="#A8E0A8", c200="#82D182", c300="#5BC25B",
c400="#34B134", c500="#299229", c600="#1E731E", c700="#145514",
c800="#0A370A", c900="#042704", c950="#001800"
),
neutral_hue: colors.Color | str = colors.zinc,
spacing_size: sizes.Size | str = sizes.spacing_md,
radius_size: sizes.Size | str = sizes.radius_xxl,
text_size: sizes.Size | str = sizes.text_md,
font: Font | str | list[Font | str] = (
GoogleFont("IBM Plex Sans"),
"ui-sans-serif",
"system-ui",
"sans-serif",
),
font_mono: Font | str | list[Font | str] = (
GoogleFont("Inter"),
"ui-monospace",
"Consolas",
"monospace",
),
):
super().__init__(
primary_hue=primary_hue,
secondary_hue=secondary_hue,
neutral_hue=neutral_hue,
spacing_size=spacing_size,
radius_size=radius_size,
text_size=text_size,
font=font,
font_mono=font_mono,
)
# Name the theme for identification
self.name = "forest_ocean_homogeneous_green"
# Set parameters for a subtle green gradient in light mode
super().set(
# More homogeneous background in light mode with subtle green
background_fill_primary="radial-gradient(circle at center, #E0F8E0 10%, #CFEFCF 40%, #D5EDD5 100%)",
# Component box styles with higher contrast and transparency in light mode
background_fill_secondary="rgba(255, 255, 255, 0.95)", # Slightly more opaque for better readability
block_border_color="#888888", # Darker gray for box border
block_border_width="1px",
block_radius="15px", # Rounded corners for a softer look
block_shadow="0 4px 10px rgba(0, 0, 0, 0.15)", # Enhanced shadow for depth
# High contrast for main text and labels
body_text_color="#1A1A1A", # Very dark gray for primary text
body_text_color_subdued="#333333", # Darker gray for subdued text
# Label (title) text for components
block_title_text_color="#000000", # Black for labels to improve contrast
block_title_text_color_dark="#FFFFFF", # White for labels in dark mode
# Input fields
input_background_fill="#FFFFFF", # Pure white for inputs
input_border_color="#555555", # Even darker gray border around input fields
input_border_width="1px",
# Primary button styling for light mode
button_primary_background_fill="linear-gradient(120deg, *primary_300 0%, *primary_400 50%, *primary_500 100%)",
button_primary_text_color="*neutral_50",
button_primary_background_fill_hover="linear-gradient(120deg, *primary_400 0%, *primary_500 60%, *primary_600 100%)",
# Dark mode settings with improved transparency and no green hue
background_fill_primary_dark="radial-gradient(circle at center, #020924 10%, #01071A 50%, #000615 100%)",
background_fill_secondary_dark="rgba(30, 30, 30, 0.2)", # Semi-transparent background for components
block_background_fill_dark="rgba(45, 45, 45, 0.2)", # Darker, more uniform transparent background
panel_background_fill_dark="rgba(45, 45, 55, 0.2)", # Additional transparency for panel-like elements
block_border_color_dark="#666666", # Darker gray border to ensure contrast
block_shadow_dark="0 4px 10px rgba(255, 255, 255, 0.1)", # Softer shadow for dark mode
checkbox_background_color_dark="rgba(30, 30, 30, 0.85)",
# Text and label settings for dark mode
body_text_color_dark="#E0E0E0", # Light gray for body text in dark mode
body_text_color_subdued_dark="#B0B0B0", # Subdued gray for secondary text in dark mode
# Primary button styling for dark mode
button_primary_background_fill_dark="linear-gradient(120deg, *secondary_600 0%, *primary_500 60%, *primary_600 100%)",
button_primary_background_fill_hover_dark="linear-gradient(120deg, *secondary_500 0%, *primary_500 60%, *primary_500 100%)",
button_primary_text_color_dark="*neutral_50",
)
# Define a dummy function to test component interactions
def process_text(text, number, mood, translate):
translation = "Translated text..." if translate else "Translation not selected."
return f"You entered: {text}\nSlider value: {number}\nMood: {mood}\n{translation}"
def process_image(image, brightness):
return image # In a real use case, apply brightness adjustment here
def play_audio_file(audio_file):
return audio_file # Simply returns the audio file for playback
with gr.Blocks(theme=ForestOceanTheme()) as demo:
gr.Markdown("## Comprehensive Web UI Test with Transparency in Dark Mode")
# Text Processing Section
gr.Markdown("### Text Processing")
with gr.Row():
text_input = gr.Textbox(label="Enter some text", placeholder="Type here...")
number_slider = gr.Slider(label="Select a number", minimum=0, maximum=100, value=50)
mood_dropdown = gr.Dropdown(
label="Select your mood",
choices=["Happy", "Sad", "Excited", "Anxious"],
value="Happy"
)
translate_checkbox = gr.Checkbox(label="Translate to another language", value=False)
process_button = gr.Button("Process Text")
output_text = gr.Textbox(label="Output", placeholder="Processed output will appear here")
process_button.click(
fn=process_text,
inputs=[text_input, number_slider, mood_dropdown, translate_checkbox],
outputs=output_text
)
# Image Upload Section
gr.Markdown("### Image Upload")
with gr.Row():
image_input = gr.Image(label="Upload an image", type="pil")
brightness_slider = gr.Slider(label="Adjust Brightness", minimum=0.5, maximum=1.5, value=1.0)
process_image_button = gr.Button("Process Image")
image_output = gr.Image(label="Processed Image")
process_image_button.click(
fn=process_image,
inputs=[image_input, brightness_slider],
outputs=image_output
)
# Audio Upload Section
gr.Markdown("### Audio Upload")
audio_input = gr.Audio(label="Upload an audio file", type="filepath")
play_audio = gr.Button("Play Audio")
audio_output = gr.Audio(label="Playback")
play_audio.click(
fn=play_audio_file,
inputs=audio_input,
outputs=audio_output
)
demo.launch()
```
### Screenshot


### Logs
_No response_
### System Info
```shell
Gradio Version: 5.5.0
Gradio Client Version: 1.4.2
Operating System: Ubuntu 22.04
Python Version: 3.12.7
Browsers Tested: Chrome, Firefox
```
### Severity
I can work around it | open | 2024-11-15T08:34:31Z | 2024-11-15T08:34:31Z | https://github.com/gradio-app/gradio/issues/9963 | [
"bug"
] | JSchmie | 0 |
Lightning-AI/pytorch-lightning | data-science | 20,565 | Max batches float(inf) handled incorrectly | ### Bug description
When using a dataloader which doesn't have `__len__` implemented, lightning adds a `max_batches` as `float("inf")` [here](https://github.com/Lightning-AI/pytorch-lightning/blob/a944e7744e57a5a2c13f3c73b9735edf2f71e329/src/lightning/pytorch/loops/evaluation_loop.py#L201) which then breaks [further on](https://github.com/Lightning-AI/pytorch-lightning/blob/a944e7744e57a5a2c13f3c73b9735edf2f71e329/src/lightning/pytorch/loops/evaluation_loop.py#L271).
### What version are you seeing the problem on?
v2.5
### How to reproduce the bug
I'm struggling to provide a simple repro, but it happens when loading a checkpoint, i.e. any time `self.resetting` is `True` in the eval loop.
### Error messages and logs
```
trainer.fit(
File "/venv/lib/python3.10/site-packages/lightning/pytorch/trainer/trainer.py", line 539, in fit
call._call_and_handle_interrupt(
File "/venv/lib/python3.10/site-packages/lightning/pytorch/trainer/call.py", line 47, in _call_and_handle_interrupt
return trainer_fn(*args, **kwargs)
File "/venv/lib/python3.10/site-packages/lightning/pytorch/trainer/trainer.py", line 575, in _fit_impl
self._run(model, ckpt_path=ckpt_path)
File "/venv/lib/python3.10/site-packages/lightning/pytorch/trainer/trainer.py", line 982, in _run
results = self._run_stage()
File "/venv/lib/python3.10/site-packages/lightning/pytorch/trainer/trainer.py", line 1026, in _run_stage
self.fit_loop.run()
File "/venv/lib/python3.10/site-packages/lightning/pytorch/loops/fit_loop.py", line 216, in run
self.advance()
File "/venv/lib/python3.10/site-packages/lightning/pytorch/loops/fit_loop.py", line 455, in advance
self.epoch_loop.run(self._data_fetcher)
File "/venv/lib/python3.10/site-packages/lightning/pytorch/loops/training_epoch_loop.py", line 150, in run
self.advance(data_fetcher)
File "/venv/lib/python3.10/site-packages/lightning/pytorch/loops/training_epoch_loop.py", line 270, in advance
self.val_loop.increment_progress_to_evaluation_end()
File "/venv/lib/python3.10/site-packages/lightning/pytorch/loops/evaluation_loop.py", line 271, in increment_progress_to_evaluation_end
max_batch = int(max(self.max_batches))
OverflowError: cannot convert float infinity to integer
```
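The failing expression is easy to reproduce in isolation, since `int()` on an infinite float always raises; any fix has to special-case the `float("inf")` sentinel before converting. A sketch of the idea (not the actual patch):

```python
batches = [float("inf")]  # what the loop stores when the dataloader has no __len__

try:
    int(max(batches))  # the expression that crashes in evaluation_loop.py
except OverflowError as exc:
    print(exc)  # cannot convert float infinity to integer

# guarded alternative: keep the sentinel as-is, convert only finite values
m = max(batches)
max_batch = m if m == float("inf") else int(m)
print(max_batch)
```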
### Environment
<details>
<summary>Current environment</summary>
```
#- PyTorch Lightning Version (e.g., 2.5.0): 2.5.0
#- PyTorch Version (e.g., 2.5): 2.5
#- Python version (e.g., 3.12): 3.10
#- OS (e.g., Linux): Ubuntu
#- CUDA/cuDNN version: CUDA12, cuDNN9
#- GPU models and configuration: A100
#- How you installed Lightning(`conda`, `pip`, source): pip
```
</details>
### More info
_No response_ | closed | 2025-01-29T13:30:08Z | 2025-03-14T10:48:34Z | https://github.com/Lightning-AI/pytorch-lightning/issues/20565 | [
"bug",
"needs triage",
"ver: 2.5.x"
] | dannyfriar | 0 |
agronholm/anyio | asyncio | 153 | Doctests with all possible backends | Hi!
I am using `anyio` to test our library: https://github.com/dry-python/returns
We do write a lot of unit tests in docs. So, I am wondering if it is possible to run the same way as test marked with `pytest.mark.anyio`?
Basically, I want three test (asyncio, trio, curio) runs for a single doctest.
Example: https://github.com/dry-python/returns/blob/master/returns/future.py#L47-L61 | closed | 2020-08-25T09:13:24Z | 2020-08-26T08:23:55Z | https://github.com/agronholm/anyio/issues/153 | [
"wontfix"
] | sobolevn | 6 |
lanpa/tensorboardX | numpy | 400 | API docs for previous versions and tags are missed for versions>1.2? | Firstly, thanks for your project!
I noticed that the API varies with the different versions, but we can only access the latest version's API docs on [the project site](https://tensorboardx.readthedocs.io/en/latest/index.html#). Could you generate the API docs for the previous versions (e.g., 1.2, 1.4)?
Moreover, I noticed the release tags only go up to version 1.2. Is it possible to add release tags for the later versions? | closed | 2019-04-03T12:13:39Z | 2019-04-04T12:43:53Z | https://github.com/lanpa/tensorboardX/issues/400 | [] | lijiaqi | 2 |
microsoft/MMdnn | tensorflow | 640 | Exception thrown: the train_val.prototxt should be provided with the input shape | Hi,
I am running the docker container to do Caffe model conversion to CoreML and I received the following assert message in the terminal:
Traceback (most recent call last):
File "/usr/local/bin/mmconvert", line 11, in <module>
sys.exit(_main())
File "/usr/local/lib/python3.5/dist-packages/mmdnn/conversion/_script/convert.py", line 102, in _main
ret = convertToIR._convert(ir_args)
File "/usr/local/lib/python3.5/dist-packages/mmdnn/conversion/_script/convertToIR.py", line 9, in _convert
transformer = CaffeTransformer(args.network, args.weights, "tensorflow", args.inputShape, phase = args.caffePhase)
File "/usr/local/lib/python3.5/dist-packages/mmdnn/conversion/caffe/transformer.py", line 315, in __init__
raise ConversionError('the train_val.prototxt should be provided with the input shape')
mmdnn.conversion.caffe.errors.ConversionError: the train_val.prototxt should be provided with the input shape
In my network's deploy.prototxt definition, instead of using "input_shape" to define the input, it actually defines an input layer like the following:
name: "GoogleNet"
layer {
name: "data"
type: "MemoryData"
top: "data"
top: "label"
memory_data_param {
batch_size: 1
channels: 3
height: 224
width: 224
}
transform_param {
crop_size: 224
mirror: false
mean_value: 104
mean_value: 117
mean_value: 123
}
}
Is this a known issue? How can I work around it? Thanks for your support.
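A workaround that often helps (an assumption to adapt, not a verified MMdnn recipe): replace the `MemoryData` layer in deploy.prototxt with a plain `Input` layer carrying an explicit shape, so the converter can read the input dimensions:

```protobuf
layer {
  name: "data"
  type: "Input"
  top: "data"
  input_param { shape: { dim: 1 dim: 3 dim: 224 dim: 224 } }
}
```

Alternatively, the shape can be passed explicitly via mmconvert's `--inputShape` option, which is visible as `args.inputShape` in the traceback above.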
| open | 2019-04-11T02:05:43Z | 2019-04-11T23:45:48Z | https://github.com/microsoft/MMdnn/issues/640 | [] | chenshousehold | 1 |
omar2535/GraphQLer | graphql | 126 | [FEATURE] Add Sphinx documentation | I already got started on a [branch ](https://github.com/omar2535/GraphQLer/tree/sphinx) that contains some boilerplate to generate [spinx](https://www.sphinx-doc.org/en/master/) documentation. Since GraphQLer has a lot of handy functions, it would be useful to have a [read-the-docs](https://docs.readthedocs.io/en/stable/) for GraphQLer built-ins. | open | 2024-11-10T15:09:33Z | 2024-11-10T15:09:33Z | https://github.com/omar2535/GraphQLer/issues/126 | [
"📃documentation",
"🥇good first issue"
] | omar2535 | 0 |
3b1b/manim | python | 1,457 | Chained animations don't work | ### Describe the bug
When running multiple animations in on self.play call, only the last one is executed.
**Code**:
Minimal code example:
```
class CoordinateSystemExample(Scene):
def construct(self):
dot = Dot(color=RED)
self.play(FadeIn(dot, scale=0.5))
self.play(
dot.animate.scale(75),
dot.animate.to_corner(UL),
run_time=2,
)
```
**Wrong display or Error traceback**:
What I get is:

What I expect is a huge dot in the top right corner.
### Additional context
The CoordinateSystemExample example exhibits the same issue: the axes and the dot get moved to the top right corner, but they don't get scaled like in the video.
| closed | 2021-04-02T13:17:49Z | 2021-04-08T21:22:03Z | https://github.com/3b1b/manim/issues/1457 | [
"bug"
] | rolisz | 1 |
Textualize/rich | python | 3,248 | [REQUEST] Support Console.input() for Live | **How would you improve Rich?**
Support Console.input() for Live Display
**What problem does it solve for you?**
Fix #1848
| closed | 2024-01-06T12:54:00Z | 2024-01-06T12:56:26Z | https://github.com/Textualize/rich/issues/3248 | [
"Needs triage"
] | rsp4jack | 3 |
cupy/cupy | numpy | 8,691 | Replace `testing.assert_warns` with `pytest.warns` | ### MEMO
- Not planning to remove `testing.assert_warns` itself for now, because it is a part of public APIs. | closed | 2024-10-22T11:58:11Z | 2024-10-24T07:10:56Z | https://github.com/cupy/cupy/issues/8691 | [
"cat:test",
"prio:low"
] | EarlMilktea | 2 |
sergree/matchering | numpy | 10 | Getting the mastered track | Hello, I'm back again.
Is there a way to get the mastered tracks outside the mg.process function? | closed | 2020-02-10T16:50:05Z | 2020-02-18T06:50:02Z | https://github.com/sergree/matchering/issues/10 | [] | GoodnessEzeokafor | 4 |
plotly/dash | jupyter | 2,326 | Pasting into a DataTable overwrites `data_previous` with new data | ## Environment
- Python 3.10.7
- `pip list | grep dash`
```
dash 2.7.0
dash-core-components 2.0.0
dash-html-components 2.0.0
dash-table 5.0.0
```
- same on all browsers I tested:
- on MacOS 12.6:
- Chrome 107.0.5304.110
- Version 16.0 (17614.1.25.9.10, 17614)
- Firefox 105.0.3 (64-bit)
- on Windows 10 Pro
- Microsoft Edge 107.0.1418.52 (Official build) (64-bit)
## Describe the bug
When doing a **paste** action (as in copy&paste), two things happen:
* `data` property gets updated
* ☝️ this is as expected
* `data_previous` property takes on the _pasted_ value
* ☝️ this seems to be a bug
### Expected behavior
After a paste, I expect `data_previous` property to contain last the data _before_ the paste.
### Why is this Important
Normally, comparing `data` with `data_previous` makes it easy to see what has changed. This is important when values in different columns are inter-dependent. This bug makes it impossible (without non-pretty workarounds) to tell which values were pasted into the table.
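For reference, this is the diff that works when `data_previous` behaves: a plain-Python sketch (independent of Dash) of comparing the two snapshots cell by cell, which is exactly what the paste bug breaks:

```python
def changed_cells(previous, current):
    """Return (row, column, old, new) for every cell that differs."""
    changes = []
    for i, (old_row, new_row) in enumerate(zip(previous, current)):
        for key in new_row:
            if old_row.get(key) != new_row.get(key):
                changes.append((i, key, old_row.get(key), new_row.get(key)))
    return changes

prev = [{"State": "AZ", "Number": 1}, {"State": "CA", "Number": 2}]
curr = [{"State": "AZ", "Number": 9}, {"State": "CA", "Number": 2}]
print(changed_cells(prev, curr))  # [(0, 'Number', 1, 9)]
```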
### Screen Recording
https://user-images.githubusercontent.com/1173748/202705073-96d56e50-b59f-4bab-8a77-df9e9bd1f352.mp4
### Example App
``` python
import dash
import pandas as pd
from dash import dash_table, dcc, html, Output, Input, State
df = pd.read_csv('https://raw.githubusercontent.com/plotly/datasets/master/solar.csv')
app = dash.Dash(__name__)
app.layout = html.Div([
dash_table.DataTable(
id='table',
columns=[{"name": i, "id": i, "editable": True} for i in df.columns],
data=df.to_dict('records'),
editable=True,
),
# counter for callback calls, to make sure the callback is not running multiple times:
html.H4("Callback count:"),
dcc.Input(id="count", value=0),
# current and old data for inspection:
html.H4("Current data:"),
html.Pre(id="data", children=""),
html.H4("Previous data:"),
html.Pre(id="prev_data", children=""),
])
@app.callback(
Output("data", "children"),
Output("prev_data", "children"),
Output("count", "value"),
Input("table", "data"),
State("table", "data_previous"),
State("count", "value"),
)
def show_data_and_prev_data(data, prev_data, count):
return str(pd.DataFrame(data)), str(pd.DataFrame(prev_data)), int(count) + 1
if __name__ == '__main__':
app.run_server(debug=True)
```
| open | 2022-11-18T12:45:48Z | 2024-08-13T19:22:44Z | https://github.com/plotly/dash/issues/2326 | [
"bug",
"dash-data-table",
"P3"
] | frnhr | 1 |
Gozargah/Marzban | api | 1,052 | Support FlClash client app | FlClash (which uses the Clash Meta core) sends requests with a "clash.meta" user agent, but in response Marzban sends the plain Clash template (without VLESS) instead of the ClashMeta template.
FlClash GitHub: https://github.com/chen08209/FlClash
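The fix presumably amounts to matching the more specific `clash.meta` token before the generic `clash` check when picking a subscription template. A sketch of the idea (not Marzban's actual code; the template names are illustrative):

```python
def pick_template(user_agent: str) -> str:
    """Choose a subscription template from the client's User-Agent string."""
    ua = (user_agent or "").lower()
    if "clash.meta" in ua or "clash-meta" in ua:
        return "clash-meta"   # Meta core: supports VLESS
    if "clash" in ua:
        return "clash"        # plain Clash: no VLESS
    return "v2ray"

print(pick_template("FlClash/0.8.0 clash.meta"))  # clash-meta
```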
Thanks... | closed | 2024-06-18T18:36:03Z | 2024-07-03T17:05:27Z | https://github.com/Gozargah/Marzban/issues/1052 | [
"Feature"
] | wikm360 | 0 |
mwaskom/seaborn | pandas | 2,907 | Width computation after histogram slightly wrong with log scale | Note the slight overlap here:
```python
(
so.Plot(tips, "total_bill")
.add(so.Bars(alpha=.3, edgewidth=0), so.Hist(bins=4))
.scale(x="log")
)
```

It becomes nearly imperceptible with more bins:
```
(
so.Plot(tips, "total_bill")
.add(so.Bars(alpha=.3, edgewidth=0), so.Hist(bins=8))
.scale(x="log")
)
```

This is not about `Bars`; `Bar` has it too:
```python
(
    so.Plot(tips, "total_bill")
    .add(so.Bar(alpha=.3, edgewidth=0, width=1), so.Hist(bins=4))
    .scale(x="log")
)
```

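For whoever picks this up: on a log-scaled axis, histogram bars tile exactly only when each bar's extent is taken directly as `[log10(lo), log10(hi)]` for its bin — a width computed separately in data units and converted afterwards can drift, which would explain a slight overlap. A pure-Python sanity check (illustration only, not seaborn internals; the edge values are made up):

```python
import math

# Hypothetical bin edges in data units, as Hist might produce them.
edges = [3.0, 10.0, 17.0, 24.0, 31.0]

# On a log axis the bar for bin [lo, hi) should span [log10(lo), log10(hi)).
spans = [(math.log10(lo), math.log10(hi)) for lo, hi in zip(edges, edges[1:])]

# Adjacent bars then share an edge exactly: no overlap and no gap.
for (_, left_hi), (right_lo, _) in zip(spans, spans[1:]):
    assert left_hi == right_lo
```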
| closed | 2022-07-14T11:51:32Z | 2023-01-21T01:43:20Z | https://github.com/mwaskom/seaborn/issues/2907 | [
"bug",
"objects-plot"
] | mwaskom | 0 |
vitalik/django-ninja | rest-api | 804 | Exclude field from depth (nested relation) on create_schema function | Is there a way to exclude a field from a nested relation using the `create_schema` function? Or is there another way to generate a schema that excludes a field from a nested relation?
Consider the following models:
```python
from django.db import models
from ninja.orm import create_schema


class Allergy(models.Model):
    name = models.CharField(max_length=100)


class Profile(models.Model):
    allergies = models.ManyToManyField(Allergy, blank=True, null=True)


ProfileInput = create_schema(
    Profile,
    depth=1,
    exclude=["allergies__id"]  # I want something like this or allergies.id
)
```
Is there a way to do this at the moment using `create_schema` or some other means? | closed | 2023-07-29T00:20:33Z | 2023-07-29T10:37:15Z | https://github.com/vitalik/django-ninja/issues/804 | [] | Abdoulrasheed | 2 |
jupyterlab/jupyter-ai | jupyter | 1,265 | [2.x] Disable the copy button in insecure contexts | Parent Issue Link: [#1259](https://github.com/jupyterlab/jupyter-ai/issues/1259)
## Description
Currently, the copy button in JupyterLab's interface (in the jupyter-ai extension) does not work when the JupyterLab server is served over HTTP (insecure context). Clipboard functionality requires a secure context (HTTPS or localhost); when the page is served via HTTP, copying to the clipboard is disabled.
This can cause confusion for users, as they may expect the copy button to function even in insecure contexts. To prevent this issue from affecting users, it would be helpful to disable the copy button when the JupyterLab environment is served via HTTP and provide a tooltip indicating that copying to the clipboard requires a secure context.
## Reproduce
1. Launch JupyterLab (version 4.3.5) in an insecure context (over HTTP, not HTTPS).
2. Open a notebook or chat with cells containing information that can be copied.
3. Observe that the Copy button appears to be clickable, but it doesn't perform any action when clicked.
4. Try copying content and pasting it into an external document (e.g., a Markdown file) and observe that it doesn't work.
5. Check that the copy button is still active (not grayed out or disabled), even though clipboard access is blocked in an insecure context.
## Output
```
268.89677276b0cf81d3f244.js?v=89677276b0cf81d3f244:1 Failed to copy text: TypeError: Cannot read properties of undefined (reading 'writeText')
    at 268.89677276b0cf81d3f244.js?v=89677276b0cf81d3f244:1:19034
    at onClick (268.89677276b0cf81d3f244.js?v=89677276b0cf81d3f244:1:21661)
```
## Proposed solution:
- Disable the copy button in insecure contexts (HTTP).
- Display a tooltip on hover for the disabled button that reads: "Copying to clipboard requires a secure context, which requires HTTPS if not on localhost."
## Expected behavior
- When JupyterLab is served over HTTP (insecure context), the copy button should be disabled.
- The disabled copy button should have a tooltip that reads: "Copying to clipboard requires a secure context, which requires HTTPS if not on localhost."
- In secure contexts (served over HTTPS or localhost), the copy button should remain enabled and functional.
| open | 2025-02-27T07:17:43Z | 2025-03-17T00:55:41Z | https://github.com/jupyterlab/jupyter-ai/issues/1265 | [
"enhancement",
"good first issue"
] | keerthi-swarna | 3 |
vimalloc/flask-jwt-extended | flask | 175 | TypeError: jwt_required() missing 1 required positional argument: 'fn' | Whenever I use the decorator jwt_required() I get this error
```
Traceback (most recent call last):
  File "app.py", line 2, in <module>
    from api.db import *
  File "/home/d4vinci/******/******/Project/******/api/db.py", line 51, in <module>
    @jwt_required()
TypeError: jwt_required() missing 1 required positional argument: 'fn'
```
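(Side note on the traceback shape: `missing 1 required positional argument: 'fn'` is exactly what Python raises when a plain decorator — one defined as `def jwt_required(fn): ...` — is invoked with parentheses. A minimal illustration of just that mechanism, not flask-jwt-extended's actual source:)

```python
# A plain (non-factory) decorator receives the function directly:
def jwt_required(fn):
    def wrapper(*args, **kwargs):
        return fn(*args, **kwargs)
    return wrapper


try:
    @jwt_required()  # calling it with () passes no `fn`
    def who():
        pass
except TypeError as exc:
    msg = str(exc)
    print(msg)  # jwt_required() missing 1 required positional argument: 'fn'
```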
and this is my auth function:

```python
@app.route('/api/db/auth', methods=['POST'])
def auth():
    if not request.is_json:
        return jsonify({"msg": "Missing JSON in request"}), 400
    email = request.json.get('email', None)
    password = request.json.get('password', None)
    if not username:
        return jsonify({"msg": "Missing email parameter"}), 400
    if not password:
        return jsonify({"msg": "Missing password parameter"}), 400
    if email in [u.email for u in User.objects.all()]:
        return jsonify({"msg": "Bad username or password"}), 401
    elif verify_hash(password, User.objects.filter(email=email).password):
        return jsonify({"msg": "Bad username or password"}), 401
    access_token = create_access_token(identity=username + password + "*" * 5)
    return jsonify(access_token=access_token), 200
```
and the function with the decorator is:

```python
@app.route("/api/db/whoami")
@jwt_required()
def who():
    current_user = get_jwt_identity()
    return jsonify(logged_in_as=current_user), 200
```
 | closed | 2018-07-19T10:11:50Z | 2022-04-11T15:31:17Z | https://github.com/vimalloc/flask-jwt-extended/issues/175 | [] | D4Vinci | 11
fastapi/sqlmodel | sqlalchemy | 258 | How to query View in sqlmodel | ### First Check
- [X] I added a very descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the SQLModel documentation, with the integrated search.
- [X] I already searched in Google "How to X in SQLModel" and didn't find any information.
- [X] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to SQLModel but to [Pydantic](https://github.com/samuelcolvin/pydantic).
- [X] I already checked if it is not related to SQLModel but to [SQLAlchemy](https://github.com/sqlalchemy/sqlalchemy).
### Commit to Help
- [X] I commit to help with one of those options 👆
### Example Code
```python
from typing import Optional

from sqlmodel import Field, Session, SQLModel, create_engine, select


class HeroTeamView(SQLModel):
    name: str
    secret_name: str
    age: Optional[int] = None


sqlite_file_name = "my.db"
db_url = f"mysql+mysqldb://{db_user}:{db_password}@{db_host}:{db_port}/{db_name}"
engine = create_engine(db_url, echo=True)

with Session(engine) as session:
    statement = select(HeroTeamView)
    orgs = session.exec(statement)
    print(f"orgs::{orgs}")
    org_list = orgs.fetchall()
```
### Description
I have a view (let's say `HeroTeamView`) created in a MySQL DB. I want to read from it. This view is essentially a left join of the Hero and Team tables, joined on `Hero.Id`.
As shown in the example above, as soon as I try to select `HeroTeamView` I get the error `'SQLModelMetaclass' object is not iterable`.
I am not quite sure I understand how to access the rows returned by the view.
Any pointers appreciated.
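For context, the database itself has no problem treating a view like a table for `SELECT`s — here is a stdlib-only `sqlite3` demo of the same Hero/Team shape — so the question is purely about how to declare the SQLModel mapping (whether e.g. `table=True` with `__tablename__` pointing at the view works is an untested assumption on my part):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE hero (id INTEGER PRIMARY KEY, name TEXT, team_id INTEGER);
    CREATE TABLE team (id INTEGER PRIMARY KEY, name TEXT);
    INSERT INTO team VALUES (1, 'Avengers');
    INSERT INTO hero VALUES (1, 'Deadpond', 1), (2, 'Rusty-Man', NULL);
    CREATE VIEW heroteamview AS
        SELECT hero.name AS hero_name, team.name AS team_name
        FROM hero LEFT JOIN team ON hero.team_id = team.id;
""")

# The view queries exactly like a table:
rows = con.execute(
    "SELECT hero_name, team_name FROM heroteamview ORDER BY hero_name"
).fetchall()
print(rows)  # [('Deadpond', 'Avengers'), ('Rusty-Man', None)]
```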
### Operating System
Windows
### Operating System Details
_No response_
### SQLModel Version
0.06
### Python Version
3.9.7
### Additional Context
I dont want to use Hero and Team tables directly to write a select query as there are multiple tables and joins in "real" world problem for me. Using Views provides me some obvious benefits like mentioned [here](https://stackoverflow.com/questions/7450423/what-are-benefits-of-using-view-in-database) | open | 2022-02-28T15:44:50Z | 2025-01-20T05:37:26Z | https://github.com/fastapi/sqlmodel/issues/258 | [
"question"
] | mrudulp | 15 |
ipython/ipython | data-science | 14,376 | How to call `InteractiveShellEmbed`-based custom excepthook via `jupyter-console --existing` |
**Main question:** How can I run an embedded IPython shell with a full-featured prompt (`simple_prompt=False`) in the namespace of a frame where an exception occurred in the context of a running IPython kernel of a Jupyter notebook?
**Explanation:**
I have the following usecase (which is almost solved, up to one major flaw):
In Jupyter notebook (python kernel) I run some nested function call `failing_function()`. Deep down in the call stack an exception happens.
I want to debug the situation *on the commandline* with my custom excepthook (`ipydex.ips_excepthook`; it opens an emebedded IPython shell right in the frame where the exception occurs and offers the possibility to move up in the frame stack, which I find more useful than the classical IPython debugger).
I can achieve this with the following steps: Within an ordinary python script:
```python
import sys
from jupyter_console.app import ZMQTerminalIPythonApp
# mimic jupyter-console --existing: attach shell to running kernel
sys.argv = [sys.argv[0], "--existing"]
app = ZMQTerminalIPythonApp.instance()
app.initialize(None)
super(ZMQTerminalIPythonApp, app).start()
code = """
import ipydex
import traceback
try:
failing_function()
except Exception as ex:
value, tb = traceback._parse_value_tb(ex, traceback._sentinel, traceback._sentinel)
ipydex.ips_excepthook(type(ex), ex, tb)
"""
# run this code in the context of the running kernel
app.shell.run_cell(code)
```
**Problem:**
This works, but with one flaw: It only offers the `simple_prompt`-mode of the embedded IPython shell. Background: My custom excepthook internally creates an instance of `from IPython.terminal.embed.InteractiveShellEmbed` and calls its mainloop (as intended). This mainloop has two different prompt-modes: default and simple. The default mode has many desired features (colored output, TAB-completion, ...), which I desire but the simple mode does not offer. The mode is determined by this line [IPython/terminal/interactiveshell.py#L134](https://github.com/ipython/ipython/blob/43f9010d15eff67ef6a63fd162c86a96ba57c376/IPython/terminal/interactiveshell.py#L134).
I tried to force the class variable `InteractiveShellEmbed.simple_prompt` to `False` but then I get `RuntimeError: This event loop is already running` (would have been too easy).
I see three possible ways to achieve the desired behavior:
- a) Let `InteractiveShellEmbed` run in default (i.e. not simple) mode inside the running `ZMQTerminalIPythonApp`.
- b) Run `ZMQTerminalIPythonApp.shell.mainloop()` directly in the namespace of the frame where the exception occurs (and offer the possibility to go up and down again in the frame stack).
- c) Temporarily overwrite `sys.excepthook` with my custom `ipydex.ips_excepthook` and just execute `failing_function()` without `try ... except`.
For a): The whole story with the custom excepthook is just my motivation. The problem boils down to achieving that `app.shell.run_cell('IPython.embed(colors="neutral")')` runs in default (not simple) prompt mode.
For b): I dont know whether that is possible because `InteractiveShellEmbed.mainloop` accepts key word args `local_ns` and `module` (global name space), whereas `ZMQTerminalIPythonApp.shell.mainloop` does not.
For c): This works perfectly for normal scripts but not when "injecting" code in a running jupyter kernel, because overwriting `sys.excepthook` does not seem to have an effect.
Derived questions:
1. Which of the directions (a, b, c) is the most promising?
2. What would be a good next step?
3. What other possibilities do I have? | open | 2024-03-28T10:42:30Z | 2024-03-28T10:42:30Z | https://github.com/ipython/ipython/issues/14376 | [] | cknoll | 0 |