repo_name (string, 9-75 chars) | topic (string, 30 classes) | issue_number (int64, 1-203k) | title (string, 1-976 chars) | body (string, 0-254k chars) | state (string, 2 classes) | created_at (string, 20 chars) | updated_at (string, 20 chars) | url (string, 38-105 chars) | labels (list, 0-9 items) | user_login (string, 1-39 chars) | comments_count (int64, 0-452)
|---|---|---|---|---|---|---|---|---|---|---|---|
satwikkansal/wtfpython | python | 364 | Typo in the text | https://github.com/satwikkansal/wtfpython/tree/master/translations/ru-russian
Typo — instead of:
> A similar optimization also applies to other **mutable** ("изменяемых") objects, such as empty tuples.
It should be:
> A similar optimization also applies to other **immutable** ("неизменяемых") objects, such as empty tuples. | open | 2024-12-26T14:46:45Z | 2025-02-06T09:12:28Z | https://github.com/satwikkansal/wtfpython/issues/364 | [] | mvdomrachev | 3 |
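The optimization the corrected sentence refers to can be checked directly. A minimal sketch (this reuse of empty immutable objects is a CPython implementation detail, not a language guarantee):

```python
# CPython reuses a single object for empty immutable literals
a, b = (), ()
print(a is b)    # True: the empty tuple is interned
s, t = "", ""
print(s is t)    # True: so is the empty string
x, y = [], []
print(x is y)    # False: mutable objects always get fresh instances
```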
ranaroussi/yfinance | pandas | 1,381 | previous_close not working on some stock symbols | A stock symbol that does not work:

```python
import yfinance as yf

ticker = yf.Ticker('SOKE.IS').fast_info
previous_close = ticker['previous_close']
print(previous_close)
```

A stock symbol that works:

```python
import yfinance as yf

ticker = yf.Ticker('MAVI.IS').fast_info
previous_close = ticker['previous_close']
print(previous_close)
```
Traceback:

```
Traceback (most recent call last):
  File "D:\Python\PyCharm\yfinancetesting\main.py", line 4, in <module>
    previous_close = ticker['previous_close']
    ~~~~~~^^^^^^^^^^^^^^^^^^
  File "D:\Python\PyCharm\yfinancetesting\venv\Lib\site-packages\yfinance\base.py", line 110, in __getitem__
    return getattr(self, k)
           ^^^^^^^^^^^^^^^^
  File "D:\Python\PyCharm\yfinancetesting\venv\Lib\site-packages\yfinance\base.py", line 247, in previous_close
    self._prev_close = float(prices["Close"].iloc[-2])
                             ~~~~~~~~~~~~~~~~~~~~^^^^
  File "D:\Python\PyCharm\yfinancetesting\venv\Lib\site-packages\pandas\core\indexing.py", line 1073, in __getitem__
    return self._getitem_axis(maybe_callable, axis=axis)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\Python\PyCharm\yfinancetesting\venv\Lib\site-packages\pandas\core\indexing.py", line 1625, in _getitem_axis
    self._validate_integer(key, axis)
  File "D:\Python\PyCharm\yfinancetesting\venv\Lib\site-packages\pandas\core\indexing.py", line 1557, in _validate_integer
    raise IndexError("single positional indexer is out-of-bounds")
IndexError: single positional indexer is out-of-bounds
```
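The traceback shows that `fast_info` computes `previous_close` as `prices["Close"].iloc[-2]`, which needs at least two rows of price history; some symbols return fewer. A defensive wrapper (a sketch of a workaround, not part of yfinance; the helper name is my own) avoids the hard failure:

```python
# hedged sketch: guard against fast_info raising when the price history
# has fewer than two rows, as in the SOKE.IS traceback above
def safe_previous_close(ticker):
    """Return previous_close, or None when it cannot be computed."""
    try:
        return ticker.fast_info['previous_close']
    except (IndexError, KeyError):
        return None
```

With `yf.Ticker('SOKE.IS')` this would return `None` instead of raising.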
- `yfinance` version = 0.2.9
- Python version = 3.11.1
- Windows 11
| closed | 2023-01-30T12:25:57Z | 2023-04-11T20:04:45Z | https://github.com/ranaroussi/yfinance/issues/1381 | [] | FrknKAYA | 7 |
ultralytics/ultralytics | computer-vision | 19,231 | Calculating AP and mAP for yolov8 model with some postprocessing techniques | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
Hi,
I have a YOLOv8 model with postprocessing techniques such as TTA (test-time augmentation).
I want to calculate the AP and mAP metrics for it, but when I use the code below, I get a lower mAP than with the plain YOLOv8 model. I would expect postprocessing techniques like TTA to give a higher mAP than the plain model.
When I analyze the results of the model with TTA, I see good IoU results.
I can't understand why. Maybe YOLOv8's algorithm for calculating mAP is different from mine?
Can you help me?
```python
import numpy as np


def compute_iou(box1, box2):
    """
    Compute Intersection over Union (IoU) between two bounding boxes.

    Box format: [x_min, y_min, x_max, y_max]
    """
    x_min_inter = max(box1[0], box2[0])
    y_min_inter = max(box1[1], box2[1])
    x_max_inter = min(box1[2], box2[2])
    y_max_inter = min(box1[3], box2[3])

    inter_width = max(0, x_max_inter - x_min_inter)
    inter_height = max(0, y_max_inter - y_min_inter)
    intersection = inter_width * inter_height

    box1_area = (box1[2] - box1[0]) * (box1[3] - box1[1])
    box2_area = (box2[2] - box2[0]) * (box2[3] - box2[1])
    union = box1_area + box2_area - intersection

    iou = intersection / union if union > 0 else 0
    print("IOU", iou)
    return iou


def calculate_ap(true_boxes, pred_boxes, confidence_scores, iou_threshold=0.3):
    """
    Calculate Average Precision (AP) for object detection.

    Parameters:
    - true_boxes: List of ground truth boxes in [x_min, y_min, x_max, y_max] format.
    - pred_boxes: List of predicted boxes in [x_min, y_min, x_max, y_max] format.
    - confidence_scores: List of confidence scores corresponding to pred_boxes.
    - iou_threshold: IoU threshold for a prediction to be considered a true positive.

    Returns:
    - AP (Average Precision) value.
    """
    # Raise an error if the number of prediction boxes does not match
    # the number of confidence scores
    if len(pred_boxes) != len(confidence_scores):
        raise ValueError("Number of prediction boxes and confidence scores must be equal.")

    # Sort the prediction boxes by confidence score (descending)
    pred_data = sorted(zip(pred_boxes, confidence_scores), key=lambda x: x[1], reverse=True)
    pred_boxes = [x[0] for x in pred_data]          # sorted prediction boxes
    confidence_scores = [x[1] for x in pred_data]   # sorted confidence scores

    tp = []  # true positives
    fp = []  # false positives
    matched_gt = set()  # keep track of matched ground-truth boxes

    for pred in pred_boxes:
        pred_box = pred  # coordinates are [x_min, y_min, x_max, y_max]
        max_iou = 0
        matched_gt_idx = -1
        for i, gt_box in enumerate(true_boxes):
            iou = compute_iou(pred_box, gt_box)
            if iou > max_iou:
                max_iou = iou
                matched_gt_idx = i

        # TP if IoU is above the threshold and the ground truth is not yet
        # matched, otherwise FP
        if max_iou >= iou_threshold and matched_gt_idx not in matched_gt:
            tp.append(1)
            fp.append(0)
            matched_gt.add(matched_gt_idx)
        else:
            tp.append(0)
            fp.append(1)

    # Compute cumulative precision and recall
    tp = np.cumsum(tp)
    fp = np.cumsum(fp)
    total_true = len(true_boxes)
    precisions = tp / (tp + fp)
    recalls = tp / total_true

    # Compute AP (integration via the trapezoidal rule)
    ap = np.trapz(precisions, recalls)
    return ap
```
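One possible source of the gap, worth noting: the trapezoidal rule only integrates over the recall values that were actually recorded, so the area between recall 0 and the first recorded recall point is dropped, which systematically lowers AP compared with interpolated schemes such as the 101-point method Ultralytics uses. A minimal sketch with toy numbers (assuming a hypothetical perfect run with two detections):

```python
import numpy as np

# np.trapz was renamed np.trapezoid in newer NumPy; pick whichever exists
trapezoid = getattr(np, "trapezoid", None) or np.trapz

# precision/recall pairs from a hypothetical perfect two-detection run
precisions = np.array([1.0, 1.0])
recalls = np.array([0.5, 1.0])

# the plain trapezoidal rule ignores the area below the first recall point
print(trapezoid(precisions, recalls))                          # 0.5

# prepending the (recall=0, precision=1) anchor recovers the full area
print(trapezoid(np.r_[1.0, precisions], np.r_[0.0, recalls]))  # 1.0
```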
### Additional
_No response_ | closed | 2025-02-13T11:19:13Z | 2025-02-17T12:26:30Z | https://github.com/ultralytics/ultralytics/issues/19231 | [
"question",
"detect"
] | teengineer | 3 |
apify/crawlee-python | automation | 1,012 | Add a separate `crawlee-cli` PyPI package | - It should depend on `crawlee[cli]` and expose a `crawlee` script (CLI entrypoint)
- It is more user-friendly to run than the square-brackets variant
- Follow up to https://github.com/apify/crawlee-python/pull/1011 | open | 2025-02-24T10:35:59Z | 2025-02-24T10:40:19Z | https://github.com/apify/crawlee-python/issues/1012 | [
"t-tooling"
] | janbuchar | 0 |
streamlit/streamlit | data-science | 9,998 | Will Streamlit components support different languages later? | ### Checklist
- [X] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar feature requests.
- [X] I added a descriptive title and summary to this issue.
### Summary
I recently upgraded Streamlit to version 1.41.0 and used the st.date_input component. I found that the month and weekday names display in English, but my users can only read Chinese. Is there any way to display Chinese in Streamlit components?
### Why?
_No response_
### How?
_No response_
### Additional Context
_No response_ | closed | 2024-12-11T02:06:04Z | 2024-12-11T18:44:36Z | https://github.com/streamlit/streamlit/issues/9998 | [
"type:enhancement"
] | phoenixor | 3 |
streamlit/streamlit | python | 10,165 | `st.logo` makes printing look bad | ### Checklist
- [X] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar issues.
- [X] I added a very descriptive title to this issue.
- [X] I have provided sufficient information below to help reproduce this issue.
### Summary
When printing an app that uses `st.logo` (e.g. [our mega tester app](https://mega-tester.streamlit.app/)), the logo is on the left side of the page and creates a wide left padding on all pages, even beyond the first one:

And when opening the sidebar, the logo shows up twice:

### Reproducible Code Example
```Python
import streamlit as st
st.logo("https://streamlit.io/images/brand/streamlit-mark-color.png")
```
### Steps To Reproduce
Run this app, then click on "Print" in the app menu.
### Expected Behavior
If the sidebar is closed, the logo should show up on the top left of the first page. There should be no padding on the left side.
If the sidebar is open, the logo should show up only on top of the sidebar.
### Current Behavior
See screenshots above.
### Is this a regression?
- [ ] Yes, this used to work in a previous version.
### Debug info
- Streamlit version: 1.41.0
- Python version:
- Operating System:
- Browser:
### Additional Information
_No response_ | closed | 2025-01-11T19:35:16Z | 2025-01-14T13:26:26Z | https://github.com/streamlit/streamlit/issues/10165 | [
"type:bug",
"status:confirmed",
"priority:P2",
"feature:st.logo"
] | jrieke | 1 |
Lightning-AI/LitServe | fastapi | 431 | Failing tests with `wrap_litserve_start` Never exit | ## 🐛 Bug: Failing tests with `wrap_litserve_start` Never exit
<!-- A clear and concise description of what the bug is. -->
### To Reproduce
1. Modify any test that uses `wrap_litserve_start` to force a failure.
2. Run the test and observe that it never exits.
<!-- If you have a code sample, error messages, stack traces, please provide it here as well -->
#### Code sample

```sh
pytest tests/test_simple.py::test_workers_health_custom_path
```
<!-- Ideally attach a minimal code sample to reproduce the described issue.
Minimal means having the shortest code but still preserving the bug. -->
### Expected behaviour
<!-- A clear and concise description of what you expected to happen. -->
The test should exit gracefully once it completes, even if it fails.
### Environment
If you published a Studio with your bug report, we can automatically get this information. Otherwise, please describe:
- PyTorch/Jax/Tensorflow Version (e.g., 1.0):
- OS (e.g., Linux):
- How you installed PyTorch (`conda`, `pip`, source):
- Build command you used (if compiling from source):
- Python version:
- CUDA/cuDNN version:
- GPU models and configuration:
- Any other relevant information:
### Additional context
<!-- Add any other context about the problem here. -->
| closed | 2025-02-18T05:47:19Z | 2025-02-18T10:18:10Z | https://github.com/Lightning-AI/LitServe/issues/431 | [
"bug",
"help wanted"
] | bhimrazy | 0 |
jonaswinkler/paperless-ng | django | 753 | [Errno 13] Permission denied: '/usr/src/paperless/src/../media/documents/archive' | hi!
i installed paperless-ng via docker script on my RPi4, and web interface, mail import etc. works just fine!
But processing Documents produces an error: PermissionError: [Errno 13] Permission denied: '/usr/src/paperless/src/../media/documents/archive'
```
[Errno 13] Permission denied: '/usr/src/paperless/src/../media/documents/archive' :
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/django_q/cluster.py", line 436, in worker
res = f(*task["args"], **task["kwargs"])
File "/usr/src/paperless/src/documents/tasks.py", line 81, in consume_file
task_id=task_id
File "/usr/src/paperless/src/documents/consumer.py", line 186, in try_consume_file
self.pre_check_directories()
File "/usr/src/paperless/src/documents/consumer.py", line 106, in pre_check_directories
os.makedirs(settings.ARCHIVE_DIR, exist_ok=True)
File "/usr/local/lib/python3.7/os.py", line 223, in makedirs
mkdir(name, mode)
PermissionError: [Errno 13] Permission denied: '/usr/src/paperless/src/../media/documents/archive'
```
It seems like a permission problem, so I followed [Troubleshooting](https://paperless-ng.readthedocs.io/en/latest/troubleshooting.html#permission-denied-errors-in-the-consumption-directory).
My user (pi) has USERMAP_UID and USERMAP_GID 1000.
Is this a problem with docker-compose? Where do I need to change the permissions?
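One common first step (a sketch of an ops fix, not an official answer): make the bind-mounted media directory on the host owned by the UID/GID the container runs as. The `MEDIA_DIR` path is an assumption; use whatever host path docker-compose.yml maps to `/usr/src/paperless/media`.

```shell
# the consumer runs as USERMAP_UID/USERMAP_GID (1000:1000 here), so the
# host-side media directory must be writable by that user
MEDIA_DIR=./media   # assumption: adjust to the volume path in docker-compose.yml
sudo chown -R 1000:1000 "$MEDIA_DIR"
```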
Thank you for bringing paperless back on track!
| closed | 2021-03-13T20:58:31Z | 2021-03-13T22:14:19Z | https://github.com/jonaswinkler/paperless-ng/issues/753 | [] | kopfpolster | 3 |
AUTOMATIC1111/stable-diffusion-webui | pytorch | 16,126 | [Bug]: Stable diffusion model failed to load | ### Checklist
- [x] The issue exists after disabling all extensions
- [x] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [x] The issue exists in the current version of the webui
- [ ] The issue has not been reported before recently
- [x] The issue has been reported before but has not been fixed yet
### What happened?
Stable diffusion model failed to load
### Steps to reproduce the problem
I just installed it and the Stable Diffusion model fails to load. I tried multiple versions, but the issue persists.
### What should have happened?
Stable diffusion model failed to load
### What browsers do you use to access the UI ?
Google Chrome
### Sysinfo
Windows 11, Chrome, RTX 3060
### Console logs
```Shell
venv "D:\stable-diffusion-webui\venv\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: v1.9.4
Commit hash: feee37d75f1b168768014e4634dcb156ee649c05
Launching Web UI with arguments: --xformers
Loading weights [6ce0161689] from D:\stable-diffusion-webui\models\Stable-diffusion\v1-5-pruned-emaonly.safetensors
Creating model from config: D:\stable-diffusion-webui\configs\v1-inference.yaml
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
D:\stable-diffusion-webui\venv\lib\site-packages\huggingface_hub\file_download.py:1132: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
warnings.warn(
Startup time: 22.4s (prepare environment: 5.8s, import torch: 7.7s, import gradio: 1.7s, setup paths: 2.1s, initialize shared: 0.4s, other imports: 1.2s, list SD models: 0.2s, load scripts: 1.7s, create ui: 0.7s, gradio launch: 0.7s).
creating model quickly: SSLError
Traceback (most recent call last):
File "D:\stable-diffusion-webui\venv\lib\site-packages\urllib3\connectionpool.py", line 466, in _make_request
self._validate_conn(conn)
File "D:\stable-diffusion-webui\venv\lib\site-packages\urllib3\connectionpool.py", line 1095, in _validate_conn
conn.connect()
File "D:\stable-diffusion-webui\venv\lib\site-packages\urllib3\connection.py", line 652, in connect
sock_and_verified = _ssl_wrap_socket_and_match_hostname(
File "D:\stable-diffusion-webui\venv\lib\site-packages\urllib3\connection.py", line 805, in _ssl_wrap_socket_and_match_hostname
ssl_sock = ssl_wrap_socket(
File "D:\stable-diffusion-webui\venv\lib\site-packages\urllib3\util\ssl_.py", line 465, in ssl_wrap_socket
ssl_sock = _ssl_wrap_socket_impl(sock, context, tls_in_tls, server_hostname)
File "D:\stable-diffusion-webui\venv\lib\site-packages\urllib3\util\ssl_.py", line 509, in _ssl_wrap_socket_impl
return ssl_context.wrap_socket(sock, server_hostname=server_hostname)
File "C:\Users\kingy\AppData\Local\Programs\Python\Python310\lib\ssl.py", line 513, in wrap_socket
return self.sslsocket_class._create(
File "C:\Users\kingy\AppData\Local\Programs\Python\Python310\lib\ssl.py", line 1071, in _create
self.do_handshake()
File "C:\Users\kingy\AppData\Local\Programs\Python\Python310\lib\ssl.py", line 1342, in do_handshake
self._sslobj.do_handshake()
ssl.SSLEOFError: EOF occurred in violation of protocol (_ssl.c:997)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "D:\stable-diffusion-webui\venv\lib\site-packages\urllib3\connectionpool.py", line 789, in urlopen
response = self._make_request(
File "D:\stable-diffusion-webui\venv\lib\site-packages\urllib3\connectionpool.py", line 490, in _make_request
raise new_e
urllib3.exceptions.SSLError: EOF occurred in violation of protocol (_ssl.c:997)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "D:\stable-diffusion-webui\venv\lib\site-packages\requests\adapters.py", line 667, in send
resp = conn.urlopen(
File "D:\stable-diffusion-webui\venv\lib\site-packages\urllib3\connectionpool.py", line 843, in urlopen
retries = retries.increment(
File "D:\stable-diffusion-webui\venv\lib\site-packages\urllib3\util\retry.py", line 519, in increment
raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type]
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /openai/clip-vit-large-patch14/resolve/main/vocab.json (Caused by SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:997)')))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "D:\stable-diffusion-webui\modules\sd_disable_initialization.py", line 85, in transformers_utils_hub_get_file_from_cache
res = original(url, *args, local_files_only=False, **kwargs)
File "D:\stable-diffusion-webui\venv\lib\site-packages\transformers\utils\hub.py", line 417, in cached_file
resolved_file = hf_hub_download(
File "D:\stable-diffusion-webui\venv\lib\site-packages\huggingface_hub\utils\_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
File "D:\stable-diffusion-webui\venv\lib\site-packages\huggingface_hub\file_download.py", line 1221, in hf_hub_download
return _hf_hub_download_to_cache_dir(
File "D:\stable-diffusion-webui\venv\lib\site-packages\huggingface_hub\file_download.py", line 1282, in _hf_hub_download_to_cache_dir
(url_to_download, etag, commit_hash, expected_size, head_call_error) = _get_metadata_or_catch_error(
File "D:\stable-diffusion-webui\venv\lib\site-packages\huggingface_hub\file_download.py", line 1722, in _get_metadata_or_catch_error
metadata = get_hf_file_metadata(url=url, proxies=proxies, timeout=etag_timeout, headers=headers)
File "D:\stable-diffusion-webui\venv\lib\site-packages\huggingface_hub\utils\_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
File "D:\stable-diffusion-webui\venv\lib\site-packages\huggingface_hub\file_download.py", line 1645, in get_hf_file_metadata
r = _request_wrapper(
File "D:\stable-diffusion-webui\venv\lib\site-packages\huggingface_hub\file_download.py", line 372, in _request_wrapper
response = _request_wrapper(
File "D:\stable-diffusion-webui\venv\lib\site-packages\huggingface_hub\file_download.py", line 395, in _request_wrapper
response = get_session().request(method=method, url=url, **params)
File "D:\stable-diffusion-webui\venv\lib\site-packages\requests\sessions.py", line 589, in request
resp = self.send(prep, **send_kwargs)
File "D:\stable-diffusion-webui\venv\lib\site-packages\requests\sessions.py", line 703, in send
r = adapter.send(request, **kwargs)
File "D:\stable-diffusion-webui\venv\lib\site-packages\huggingface_hub\utils\_http.py", line 66, in send
return super().send(request, *args, **kwargs)
File "D:\stable-diffusion-webui\venv\lib\site-packages\requests\adapters.py", line 698, in send
raise SSLError(e, request=request)
requests.exceptions.SSLError: (MaxRetryError("HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /openai/clip-vit-large-patch14/resolve/main/vocab.json (Caused by SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:997)')))"), '(Request ID: 82ee8644-e864-4181-aecb-34d1c85551f6)')
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "D:\stable-diffusion-webui\venv\lib\site-packages\urllib3\connectionpool.py", line 466, in _make_request
self._validate_conn(conn)
File "D:\stable-diffusion-webui\venv\lib\site-packages\urllib3\connectionpool.py", line 1095, in _validate_conn
conn.connect()
File "D:\stable-diffusion-webui\venv\lib\site-packages\urllib3\connection.py", line 652, in connect
sock_and_verified = _ssl_wrap_socket_and_match_hostname(
File "D:\stable-diffusion-webui\venv\lib\site-packages\urllib3\connection.py", line 805, in _ssl_wrap_socket_and_match_hostname
ssl_sock = ssl_wrap_socket(
File "D:\stable-diffusion-webui\venv\lib\site-packages\urllib3\util\ssl_.py", line 465, in ssl_wrap_socket
ssl_sock = _ssl_wrap_socket_impl(sock, context, tls_in_tls, server_hostname)
File "D:\stable-diffusion-webui\venv\lib\site-packages\urllib3\util\ssl_.py", line 509, in _ssl_wrap_socket_impl
return ssl_context.wrap_socket(sock, server_hostname=server_hostname)
File "C:\Users\kingy\AppData\Local\Programs\Python\Python310\lib\ssl.py", line 513, in wrap_socket
return self.sslsocket_class._create(
File "C:\Users\kingy\AppData\Local\Programs\Python\Python310\lib\ssl.py", line 1071, in _create
self.do_handshake()
File "C:\Users\kingy\AppData\Local\Programs\Python\Python310\lib\ssl.py", line 1342, in do_handshake
self._sslobj.do_handshake()
ssl.SSLEOFError: EOF occurred in violation of protocol (_ssl.c:997)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "D:\stable-diffusion-webui\venv\lib\site-packages\urllib3\connectionpool.py", line 789, in urlopen
response = self._make_request(
File "D:\stable-diffusion-webui\venv\lib\site-packages\urllib3\connectionpool.py", line 490, in _make_request
raise new_e
urllib3.exceptions.SSLError: EOF occurred in violation of protocol (_ssl.c:997)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "D:\stable-diffusion-webui\venv\lib\site-packages\requests\adapters.py", line 667, in send
resp = conn.urlopen(
File "D:\stable-diffusion-webui\venv\lib\site-packages\urllib3\connectionpool.py", line 843, in urlopen
retries = retries.increment(
File "D:\stable-diffusion-webui\venv\lib\site-packages\urllib3\util\retry.py", line 519, in increment
raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type]
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /openai/clip-vit-large-patch14/resolve/main/vocab.json (Caused by SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:997)')))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\kingy\AppData\Local\Programs\Python\Python310\lib\threading.py", line 973, in _bootstrap
self._bootstrap_inner()
File "C:\Users\kingy\AppData\Local\Programs\Python\Python310\lib\threading.py", line 1016, in _bootstrap_inner
self.run()
File "C:\Users\kingy\AppData\Local\Programs\Python\Python310\lib\threading.py", line 953, in run
self._target(*self._args, **self._kwargs)
File "D:\stable-diffusion-webui\modules\initialize.py", line 149, in load_model
shared.sd_model # noqa: B018
File "D:\stable-diffusion-webui\modules\shared_items.py", line 175, in sd_model
return modules.sd_models.model_data.get_sd_model()
File "D:\stable-diffusion-webui\modules\sd_models.py", line 620, in get_sd_model
load_model()
File "D:\stable-diffusion-webui\modules\sd_models.py", line 723, in load_model
sd_model = instantiate_from_config(sd_config.model)
File "D:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\util.py", line 89, in instantiate_from_config
return get_obj_from_str(config["target"])(**config.get("params", dict()))
File "D:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 563, in __init__
self.instantiate_cond_stage(cond_stage_config)
File "D:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 630, in instantiate_cond_stage
model = instantiate_from_config(config)
File "D:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\util.py", line 89, in instantiate_from_config
return get_obj_from_str(config["target"])(**config.get("params", dict()))
File "D:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\encoders\modules.py", line 103, in __init__
self.tokenizer = CLIPTokenizer.from_pretrained(version)
File "D:\stable-diffusion-webui\venv\lib\site-packages\transformers\tokenization_utils_base.py", line 1784, in from_pretrained
resolved_vocab_files[file_id] = cached_file(
File "D:\stable-diffusion-webui\modules\sd_disable_initialization.py", line 94, in transformers_tokenization_utils_base_cached_file
return transformers_utils_hub_get_file_from_cache(self.transformers_tokenization_utils_base_cached_file, url, *args, **kwargs)
File "D:\stable-diffusion-webui\modules\sd_disable_initialization.py", line 88, in transformers_utils_hub_get_file_from_cache
return original(url, *args, local_files_only=False, **kwargs)
File "D:\stable-diffusion-webui\venv\lib\site-packages\transformers\utils\hub.py", line 417, in cached_file
resolved_file = hf_hub_download(
File "D:\stable-diffusion-webui\venv\lib\site-packages\huggingface_hub\utils\_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
File "D:\stable-diffusion-webui\venv\lib\site-packages\huggingface_hub\file_download.py", line 1221, in hf_hub_download
return _hf_hub_download_to_cache_dir(
File "D:\stable-diffusion-webui\venv\lib\site-packages\huggingface_hub\file_download.py", line 1282, in _hf_hub_download_to_cache_dir
(url_to_download, etag, commit_hash, expected_size, head_call_error) = _get_metadata_or_catch_error(
File "D:\stable-diffusion-webui\venv\lib\site-packages\huggingface_hub\file_download.py", line 1722, in _get_metadata_or_catch_error
metadata = get_hf_file_metadata(url=url, proxies=proxies, timeout=etag_timeout, headers=headers)
File "D:\stable-diffusion-webui\venv\lib\site-packages\huggingface_hub\utils\_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
File "D:\stable-diffusion-webui\venv\lib\site-packages\huggingface_hub\file_download.py", line 1645, in get_hf_file_metadata
r = _request_wrapper(
File "D:\stable-diffusion-webui\venv\lib\site-packages\huggingface_hub\file_download.py", line 372, in _request_wrapper
response = _request_wrapper(
File "D:\stable-diffusion-webui\venv\lib\site-packages\huggingface_hub\file_download.py", line 395, in _request_wrapper
response = get_session().request(method=method, url=url, **params)
File "D:\stable-diffusion-webui\venv\lib\site-packages\requests\sessions.py", line 589, in request
resp = self.send(prep, **send_kwargs)
File "D:\stable-diffusion-webui\venv\lib\site-packages\requests\sessions.py", line 703, in send
r = adapter.send(request, **kwargs)
File "D:\stable-diffusion-webui\venv\lib\site-packages\huggingface_hub\utils\_http.py", line 66, in send
return super().send(request, *args, **kwargs)
File "D:\stable-diffusion-webui\venv\lib\site-packages\requests\adapters.py", line 698, in send
raise SSLError(e, request=request)
requests.exceptions.SSLError: (MaxRetryError("HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /openai/clip-vit-large-patch14/resolve/main/vocab.json (Caused by SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:997)')))"), '(Request ID: 1bf98f0e-72c0-40e1-9a7f-8d1f383da075)')
Failed to create model quickly; will retry using slow method.
loading stable diffusion model: SSLError
Traceback (most recent call last):
File "D:\stable-diffusion-webui\venv\lib\site-packages\urllib3\connectionpool.py", line 466, in _make_request
self._validate_conn(conn)
File "D:\stable-diffusion-webui\venv\lib\site-packages\urllib3\connectionpool.py", line 1095, in _validate_conn
conn.connect()
File "D:\stable-diffusion-webui\venv\lib\site-packages\urllib3\connection.py", line 652, in connect
sock_and_verified = _ssl_wrap_socket_and_match_hostname(
File "D:\stable-diffusion-webui\venv\lib\site-packages\urllib3\connection.py", line 805, in _ssl_wrap_socket_and_match_hostname
ssl_sock = ssl_wrap_socket(
File "D:\stable-diffusion-webui\venv\lib\site-packages\urllib3\util\ssl_.py", line 465, in ssl_wrap_socket
ssl_sock = _ssl_wrap_socket_impl(sock, context, tls_in_tls, server_hostname)
File "D:\stable-diffusion-webui\venv\lib\site-packages\urllib3\util\ssl_.py", line 509, in _ssl_wrap_socket_impl
return ssl_context.wrap_socket(sock, server_hostname=server_hostname)
File "C:\Users\kingy\AppData\Local\Programs\Python\Python310\lib\ssl.py", line 513, in wrap_socket
return self.sslsocket_class._create(
File "C:\Users\kingy\AppData\Local\Programs\Python\Python310\lib\ssl.py", line 1071, in _create
self.do_handshake()
File "C:\Users\kingy\AppData\Local\Programs\Python\Python310\lib\ssl.py", line 1342, in do_handshake
self._sslobj.do_handshake()
ssl.SSLEOFError: EOF occurred in violation of protocol (_ssl.c:997)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "D:\stable-diffusion-webui\venv\lib\site-packages\urllib3\connectionpool.py", line 789, in urlopen
response = self._make_request(
File "D:\stable-diffusion-webui\venv\lib\site-packages\urllib3\connectionpool.py", line 490, in _make_request
raise new_e
urllib3.exceptions.SSLError: EOF occurred in violation of protocol (_ssl.c:997)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "D:\stable-diffusion-webui\venv\lib\site-packages\requests\adapters.py", line 667, in send
resp = conn.urlopen(
File "D:\stable-diffusion-webui\venv\lib\site-packages\urllib3\connectionpool.py", line 843, in urlopen
retries = retries.increment(
File "D:\stable-diffusion-webui\venv\lib\site-packages\urllib3\util\retry.py", line 519, in increment
raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type]
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /openai/clip-vit-large-patch14/resolve/main/vocab.json (Caused by SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:997)')))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\kingy\AppData\Local\Programs\Python\Python310\lib\threading.py", line 973, in _bootstrap
self._bootstrap_inner()
File "C:\Users\kingy\AppData\Local\Programs\Python\Python310\lib\threading.py", line 1016, in _bootstrap_inner
self.run()
File "C:\Users\kingy\AppData\Local\Programs\Python\Python310\lib\threading.py", line 953, in run
self._target(*self._args, **self._kwargs)
File "D:\stable-diffusion-webui\modules\initialize.py", line 149, in load_model
shared.sd_model # noqa: B018
File "D:\stable-diffusion-webui\modules\shared_items.py", line 175, in sd_model
return modules.sd_models.model_data.get_sd_model()
File "D:\stable-diffusion-webui\modules\sd_models.py", line 620, in get_sd_model
load_model()
File "D:\stable-diffusion-webui\modules\sd_models.py", line 732, in load_model
sd_model = instantiate_from_config(sd_config.model)
File "D:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\util.py", line 89, in instantiate_from_config
return get_obj_from_str(config["target"])(**config.get("params", dict()))
File "D:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 563, in __init__
self.instantiate_cond_stage(cond_stage_config)
File "D:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 630, in instantiate_cond_stage
model = instantiate_from_config(config)
File "D:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\util.py", line 89, in instantiate_from_config
return get_obj_from_str(config["target"])(**config.get("params", dict()))
File "D:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\encoders\modules.py", line 103, in __init__
self.tokenizer = CLIPTokenizer.from_pretrained(version)
File "D:\stable-diffusion-webui\venv\lib\site-packages\transformers\tokenization_utils_base.py", line 1784, in from_pretrained
resolved_vocab_files[file_id] = cached_file(
File "D:\stable-diffusion-webui\venv\lib\site-packages\transformers\utils\hub.py", line 417, in cached_file
resolved_file = hf_hub_download(
File "D:\stable-diffusion-webui\venv\lib\site-packages\huggingface_hub\utils\_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
File "D:\stable-diffusion-webui\venv\lib\site-packages\huggingface_hub\file_download.py", line 1221, in hf_hub_download
return _hf_hub_download_to_cache_dir(
File "D:\stable-diffusion-webui\venv\lib\site-packages\huggingface_hub\file_download.py", line 1282, in _hf_hub_download_to_cache_dir
(url_to_download, etag, commit_hash, expected_size, head_call_error) = _get_metadata_or_catch_error(
File "D:\stable-diffusion-webui\venv\lib\site-packages\huggingface_hub\file_download.py", line 1722, in _get_metadata_or_catch_error
metadata = get_hf_file_metadata(url=url, proxies=proxies, timeout=etag_timeout, headers=headers)
File "D:\stable-diffusion-webui\venv\lib\site-packages\huggingface_hub\utils\_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
File "D:\stable-diffusion-webui\venv\lib\site-packages\huggingface_hub\file_download.py", line 1645, in get_hf_file_metadata
r = _request_wrapper(
File "D:\stable-diffusion-webui\venv\lib\site-packages\huggingface_hub\file_download.py", line 372, in _request_wrapper
response = _request_wrapper(
File "D:\stable-diffusion-webui\venv\lib\site-packages\huggingface_hub\file_download.py", line 395, in _request_wrapper
response = get_session().request(method=method, url=url, **params)
File "D:\stable-diffusion-webui\venv\lib\site-packages\requests\sessions.py", line 589, in request
resp = self.send(prep, **send_kwargs)
File "D:\stable-diffusion-webui\venv\lib\site-packages\requests\sessions.py", line 703, in send
r = adapter.send(request, **kwargs)
File "D:\stable-diffusion-webui\venv\lib\site-packages\huggingface_hub\utils\_http.py", line 66, in send
return super().send(request, *args, **kwargs)
File "D:\stable-diffusion-webui\venv\lib\site-packages\requests\adapters.py", line 698, in send
raise SSLError(e, request=request)
requests.exceptions.SSLError: (MaxRetryError("HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /openai/clip-vit-large-patch14/resolve/main/vocab.json (Caused by SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:997)')))"), '(Request ID: f816923e-3cc5-4044-917e-88d76ef4c1f2)')
Stable diffusion model failed to load
Applying attention optimization: xformers... done.
Loading weights [6ce0161689] from D:\stable-diffusion-webui\models\Stable-diffusion\v1-5-pruned-emaonly.safetensors
Creating model from config: D:\stable-diffusion-webui\configs\v1-inference.yaml
Exception in thread Thread-18 (load_model):
Traceback (most recent call last):
File "C:\Users\kingy\AppData\Local\Programs\Python\Python310\lib\threading.py", line 1016, in _bootstrap_inner
self.run()
File "C:\Users\kingy\AppData\Local\Programs\Python\Python310\lib\threading.py", line 953, in run
self._target(*self._args, **self._kwargs)
File "D:\stable-diffusion-webui\modules\initialize.py", line 154, in load_model
devices.first_time_calculation()
File "D:\stable-diffusion-webui\modules\devices.py", line 271, in first_time_calculation
conv2d(x)
TypeError: 'NoneType' object is not callable
creating model quickly: SSLError
Traceback (most recent call last):
File "D:\stable-diffusion-webui\venv\lib\site-packages\urllib3\connectionpool.py", line 466, in _make_request
self._validate_conn(conn)
File "D:\stable-diffusion-webui\venv\lib\site-packages\urllib3\connectionpool.py", line 1095, in _validate_conn
conn.connect()
File "D:\stable-diffusion-webui\venv\lib\site-packages\urllib3\connection.py", line 652, in connect
sock_and_verified = _ssl_wrap_socket_and_match_hostname(
File "D:\stable-diffusion-webui\venv\lib\site-packages\urllib3\connection.py", line 805, in _ssl_wrap_socket_and_match_hostname
ssl_sock = ssl_wrap_socket(
File "D:\stable-diffusion-webui\venv\lib\site-packages\urllib3\util\ssl_.py", line 465, in ssl_wrap_socket
ssl_sock = _ssl_wrap_socket_impl(sock, context, tls_in_tls, server_hostname)
File "D:\stable-diffusion-webui\venv\lib\site-packages\urllib3\util\ssl_.py", line 509, in _ssl_wrap_socket_impl
return ssl_context.wrap_socket(sock, server_hostname=server_hostname)
File "C:\Users\kingy\AppData\Local\Programs\Python\Python310\lib\ssl.py", line 513, in wrap_socket
return self.sslsocket_class._create(
File "C:\Users\kingy\AppData\Local\Programs\Python\Python310\lib\ssl.py", line 1071, in _create
self.do_handshake()
File "C:\Users\kingy\AppData\Local\Programs\Python\Python310\lib\ssl.py", line 1342, in do_handshake
self._sslobj.do_handshake()
ssl.SSLEOFError: EOF occurred in violation of protocol (_ssl.c:997)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "D:\stable-diffusion-webui\venv\lib\site-packages\urllib3\connectionpool.py", line 789, in urlopen
response = self._make_request(
File "D:\stable-diffusion-webui\venv\lib\site-packages\urllib3\connectionpool.py", line 490, in _make_request
raise new_e
urllib3.exceptions.SSLError: EOF occurred in violation of protocol (_ssl.c:997)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "D:\stable-diffusion-webui\venv\lib\site-packages\requests\adapters.py", line 667, in send
resp = conn.urlopen(
File "D:\stable-diffusion-webui\venv\lib\site-packages\urllib3\connectionpool.py", line 843, in urlopen
retries = retries.increment(
File "D:\stable-diffusion-webui\venv\lib\site-packages\urllib3\util\retry.py", line 519, in increment
raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type]
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /openai/clip-vit-large-patch14/resolve/main/vocab.json (Caused by SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:997)')))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "D:\stable-diffusion-webui\modules\sd_disable_initialization.py", line 85, in transformers_utils_hub_get_file_from_cache
res = original(url, *args, local_files_only=False, **kwargs)
File "D:\stable-diffusion-webui\venv\lib\site-packages\transformers\utils\hub.py", line 417, in cached_file
resolved_file = hf_hub_download(
File "D:\stable-diffusion-webui\venv\lib\site-packages\huggingface_hub\utils\_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
File "D:\stable-diffusion-webui\venv\lib\site-packages\huggingface_hub\file_download.py", line 1221, in hf_hub_download
return _hf_hub_download_to_cache_dir(
File "D:\stable-diffusion-webui\venv\lib\site-packages\huggingface_hub\file_download.py", line 1282, in _hf_hub_download_to_cache_dir
(url_to_download, etag, commit_hash, expected_size, head_call_error) = _get_metadata_or_catch_error(
File "D:\stable-diffusion-webui\venv\lib\site-packages\huggingface_hub\file_download.py", line 1722, in _get_metadata_or_catch_error
metadata = get_hf_file_metadata(url=url, proxies=proxies, timeout=etag_timeout, headers=headers)
File "D:\stable-diffusion-webui\venv\lib\site-packages\huggingface_hub\utils\_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
File "D:\stable-diffusion-webui\venv\lib\site-packages\huggingface_hub\file_download.py", line 1645, in get_hf_file_metadata
r = _request_wrapper(
File "D:\stable-diffusion-webui\venv\lib\site-packages\huggingface_hub\file_download.py", line 372, in _request_wrapper
response = _request_wrapper(
File "D:\stable-diffusion-webui\venv\lib\site-packages\huggingface_hub\file_download.py", line 395, in _request_wrapper
response = get_session().request(method=method, url=url, **params)
File "D:\stable-diffusion-webui\venv\lib\site-packages\requests\sessions.py", line 589, in request
resp = self.send(prep, **send_kwargs)
File "D:\stable-diffusion-webui\venv\lib\site-packages\requests\sessions.py", line 703, in send
r = adapter.send(request, **kwargs)
File "D:\stable-diffusion-webui\venv\lib\site-packages\huggingface_hub\utils\_http.py", line 66, in send
return super().send(request, *args, **kwargs)
File "D:\stable-diffusion-webui\venv\lib\site-packages\requests\adapters.py", line 698, in send
raise SSLError(e, request=request)
requests.exceptions.SSLError: (MaxRetryError("HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /openai/clip-vit-large-patch14/resolve/main/vocab.json (Caused by SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:997)')))"), '(Request ID: 610fc321-cc9c-4d0a-98e4-9df09ea1bec2)')
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "D:\stable-diffusion-webui\venv\lib\site-packages\urllib3\connectionpool.py", line 466, in _make_request
self._validate_conn(conn)
File "D:\stable-diffusion-webui\venv\lib\site-packages\urllib3\connectionpool.py", line 1095, in _validate_conn
conn.connect()
File "D:\stable-diffusion-webui\venv\lib\site-packages\urllib3\connection.py", line 652, in connect
sock_and_verified = _ssl_wrap_socket_and_match_hostname(
File "D:\stable-diffusion-webui\venv\lib\site-packages\urllib3\connection.py", line 805, in _ssl_wrap_socket_and_match_hostname
ssl_sock = ssl_wrap_socket(
File "D:\stable-diffusion-webui\venv\lib\site-packages\urllib3\util\ssl_.py", line 465, in ssl_wrap_socket
ssl_sock = _ssl_wrap_socket_impl(sock, context, tls_in_tls, server_hostname)
File "D:\stable-diffusion-webui\venv\lib\site-packages\urllib3\util\ssl_.py", line 509, in _ssl_wrap_socket_impl
return ssl_context.wrap_socket(sock, server_hostname=server_hostname)
File "C:\Users\kingy\AppData\Local\Programs\Python\Python310\lib\ssl.py", line 513, in wrap_socket
return self.sslsocket_class._create(
File "C:\Users\kingy\AppData\Local\Programs\Python\Python310\lib\ssl.py", line 1071, in _create
self.do_handshake()
File "C:\Users\kingy\AppData\Local\Programs\Python\Python310\lib\ssl.py", line 1342, in do_handshake
self._sslobj.do_handshake()
ssl.SSLEOFError: EOF occurred in violation of protocol (_ssl.c:997)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "D:\stable-diffusion-webui\venv\lib\site-packages\urllib3\connectionpool.py", line 789, in urlopen
response = self._make_request(
File "D:\stable-diffusion-webui\venv\lib\site-packages\urllib3\connectionpool.py", line 490, in _make_request
raise new_e
urllib3.exceptions.SSLError: EOF occurred in violation of protocol (_ssl.c:997)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "D:\stable-diffusion-webui\venv\lib\site-packages\requests\adapters.py", line 667, in send
resp = conn.urlopen(
File "D:\stable-diffusion-webui\venv\lib\site-packages\urllib3\connectionpool.py", line 843, in urlopen
retries = retries.increment(
File "D:\stable-diffusion-webui\venv\lib\site-packages\urllib3\util\retry.py", line 519, in increment
raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type]
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /openai/clip-vit-large-patch14/resolve/main/vocab.json (Caused by SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:997)')))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\kingy\AppData\Local\Programs\Python\Python310\lib\threading.py", line 973, in _bootstrap
self._bootstrap_inner()
File "C:\Users\kingy\AppData\Local\Programs\Python\Python310\lib\threading.py", line 1016, in _bootstrap_inner
self.run()
File "D:\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
result = context.run(func, *args)
File "D:\stable-diffusion-webui\venv\lib\site-packages\gradio\utils.py", line 707, in wrapper
response = f(*args, **kwargs)
File "D:\stable-diffusion-webui\modules\ui.py", line 1154, in <lambda>
update_image_cfg_scale_visibility = lambda: gr.update(visible=shared.sd_model and shared.sd_model.cond_stage_key == "edit")
File "D:\stable-diffusion-webui\modules\shared_items.py", line 175, in sd_model
return modules.sd_models.model_data.get_sd_model()
File "D:\stable-diffusion-webui\modules\sd_models.py", line 620, in get_sd_model
load_model()
File "D:\stable-diffusion-webui\modules\sd_models.py", line 723, in load_model
sd_model = instantiate_from_config(sd_config.model)
File "D:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\util.py", line 89, in instantiate_from_config
return get_obj_from_str(config["target"])(**config.get("params", dict()))
File "D:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 563, in __init__
self.instantiate_cond_stage(cond_stage_config)
File "D:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 630, in instantiate_cond_stage
model = instantiate_from_config(config)
File "D:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\util.py", line 89, in instantiate_from_config
return get_obj_from_str(config["target"])(**config.get("params", dict()))
File "D:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\encoders\modules.py", line 103, in __init__
self.tokenizer = CLIPTokenizer.from_pretrained(version)
File "D:\stable-diffusion-webui\venv\lib\site-packages\transformers\tokenization_utils_base.py", line 1784, in from_pretrained
resolved_vocab_files[file_id] = cached_file(
File "D:\stable-diffusion-webui\modules\sd_disable_initialization.py", line 94, in transformers_tokenization_utils_base_cached_file
return transformers_utils_hub_get_file_from_cache(self.transformers_tokenization_utils_base_cached_file, url, *args, **kwargs)
File "D:\stable-diffusion-webui\modules\sd_disable_initialization.py", line 88, in transformers_utils_hub_get_file_from_cache
return original(url, *args, local_files_only=False, **kwargs)
File "D:\stable-diffusion-webui\venv\lib\site-packages\transformers\utils\hub.py", line 417, in cached_file
resolved_file = hf_hub_download(
File "D:\stable-diffusion-webui\venv\lib\site-packages\huggingface_hub\utils\_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
File "D:\stable-diffusion-webui\venv\lib\site-packages\huggingface_hub\file_download.py", line 1221, in hf_hub_download
return _hf_hub_download_to_cache_dir(
File "D:\stable-diffusion-webui\venv\lib\site-packages\huggingface_hub\file_download.py", line 1282, in _hf_hub_download_to_cache_dir
(url_to_download, etag, commit_hash, expected_size, head_call_error) = _get_metadata_or_catch_error(
File "D:\stable-diffusion-webui\venv\lib\site-packages\huggingface_hub\file_download.py", line 1722, in _get_metadata_or_catch_error
metadata = get_hf_file_metadata(url=url, proxies=proxies, timeout=etag_timeout, headers=headers)
File "D:\stable-diffusion-webui\venv\lib\site-packages\huggingface_hub\utils\_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
File "D:\stable-diffusion-webui\venv\lib\site-packages\huggingface_hub\file_download.py", line 1645, in get_hf_file_metadata
r = _request_wrapper(
File "D:\stable-diffusion-webui\venv\lib\site-packages\huggingface_hub\file_download.py", line 372, in _request_wrapper
response = _request_wrapper(
File "D:\stable-diffusion-webui\venv\lib\site-packages\huggingface_hub\file_download.py", line 395, in _request_wrapper
response = get_session().request(method=method, url=url, **params)
File "D:\stable-diffusion-webui\venv\lib\site-packages\requests\sessions.py", line 589, in request
resp = self.send(prep, **send_kwargs)
File "D:\stable-diffusion-webui\venv\lib\site-packages\requests\sessions.py", line 703, in send
r = adapter.send(request, **kwargs)
File "D:\stable-diffusion-webui\venv\lib\site-packages\huggingface_hub\utils\_http.py", line 66, in send
return super().send(request, *args, **kwargs)
File "D:\stable-diffusion-webui\venv\lib\site-packages\requests\adapters.py", line 698, in send
raise SSLError(e, request=request)
requests.exceptions.SSLError: (MaxRetryError("HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /openai/clip-vit-large-patch14/resolve/main/vocab.json (Caused by SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:997)')))"), '(Request ID: 7b1db2cb-5341-432c-9847-c0092dba1312)')
Failed to create model quickly; will retry using slow method.
loading stable diffusion model: SSLError
Traceback (most recent call last):
File "D:\stable-diffusion-webui\venv\lib\site-packages\urllib3\connectionpool.py", line 466, in _make_request
self._validate_conn(conn)
File "D:\stable-diffusion-webui\venv\lib\site-packages\urllib3\connectionpool.py", line 1095, in _validate_conn
conn.connect()
File "D:\stable-diffusion-webui\venv\lib\site-packages\urllib3\connection.py", line 652, in connect
sock_and_verified = _ssl_wrap_socket_and_match_hostname(
File "D:\stable-diffusion-webui\venv\lib\site-packages\urllib3\connection.py", line 805, in _ssl_wrap_socket_and_match_hostname
ssl_sock = ssl_wrap_socket(
File "D:\stable-diffusion-webui\venv\lib\site-packages\urllib3\util\ssl_.py", line 465, in ssl_wrap_socket
ssl_sock = _ssl_wrap_socket_impl(sock, context, tls_in_tls, server_hostname)
File "D:\stable-diffusion-webui\venv\lib\site-packages\urllib3\util\ssl_.py", line 509, in _ssl_wrap_socket_impl
return ssl_context.wrap_socket(sock, server_hostname=server_hostname)
File "C:\Users\kingy\AppData\Local\Programs\Python\Python310\lib\ssl.py", line 513, in wrap_socket
return self.sslsocket_class._create(
File "C:\Users\kingy\AppData\Local\Programs\Python\Python310\lib\ssl.py", line 1071, in _create
self.do_handshake()
File "C:\Users\kingy\AppData\Local\Programs\Python\Python310\lib\ssl.py", line 1342, in do_handshake
self._sslobj.do_handshake()
ssl.SSLEOFError: EOF occurred in violation of protocol (_ssl.c:997)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "D:\stable-diffusion-webui\venv\lib\site-packages\urllib3\connectionpool.py", line 789, in urlopen
response = self._make_request(
File "D:\stable-diffusion-webui\venv\lib\site-packages\urllib3\connectionpool.py", line 490, in _make_request
raise new_e
urllib3.exceptions.SSLError: EOF occurred in violation of protocol (_ssl.c:997)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "D:\stable-diffusion-webui\venv\lib\site-packages\requests\adapters.py", line 667, in send
resp = conn.urlopen(
File "D:\stable-diffusion-webui\venv\lib\site-packages\urllib3\connectionpool.py", line 843, in urlopen
retries = retries.increment(
File "D:\stable-diffusion-webui\venv\lib\site-packages\urllib3\util\retry.py", line 519, in increment
raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type]
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /openai/clip-vit-large-patch14/resolve/main/vocab.json (Caused by SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:997)')))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\kingy\AppData\Local\Programs\Python\Python310\lib\threading.py", line 973, in _bootstrap
self._bootstrap_inner()
File "C:\Users\kingy\AppData\Local\Programs\Python\Python310\lib\threading.py", line 1016, in _bootstrap_inner
self.run()
File "D:\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
result = context.run(func, *args)
File "D:\stable-diffusion-webui\venv\lib\site-packages\gradio\utils.py", line 707, in wrapper
response = f(*args, **kwargs)
File "D:\stable-diffusion-webui\modules\ui.py", line 1154, in <lambda>
update_image_cfg_scale_visibility = lambda: gr.update(visible=shared.sd_model and shared.sd_model.cond_stage_key == "edit")
File "D:\stable-diffusion-webui\modules\shared_items.py", line 175, in sd_model
return modules.sd_models.model_data.get_sd_model()
File "D:\stable-diffusion-webui\modules\sd_models.py", line 620, in get_sd_model
load_model()
File "D:\stable-diffusion-webui\modules\sd_models.py", line 732, in load_model
sd_model = instantiate_from_config(sd_config.model)
File "D:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\util.py", line 89, in instantiate_from_config
return get_obj_from_str(config["target"])(**config.get("params", dict()))
File "D:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 563, in __init__
self.instantiate_cond_stage(cond_stage_config)
File "D:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 630, in instantiate_cond_stage
model = instantiate_from_config(config)
File "D:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\util.py", line 89, in instantiate_from_config
return get_obj_from_str(config["target"])(**config.get("params", dict()))
File "D:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\encoders\modules.py", line 103, in __init__
self.tokenizer = CLIPTokenizer.from_pretrained(version)
File "D:\stable-diffusion-webui\venv\lib\site-packages\transformers\tokenization_utils_base.py", line 1784, in from_pretrained
resolved_vocab_files[file_id] = cached_file(
File "D:\stable-diffusion-webui\venv\lib\site-packages\transformers\utils\hub.py", line 417, in cached_file
resolved_file = hf_hub_download(
File "D:\stable-diffusion-webui\venv\lib\site-packages\huggingface_hub\utils\_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
File "D:\stable-diffusion-webui\venv\lib\site-packages\huggingface_hub\file_download.py", line 1221, in hf_hub_download
return _hf_hub_download_to_cache_dir(
File "D:\stable-diffusion-webui\venv\lib\site-packages\huggingface_hub\file_download.py", line 1282, in _hf_hub_download_to_cache_dir
(url_to_download, etag, commit_hash, expected_size, head_call_error) = _get_metadata_or_catch_error(
File "D:\stable-diffusion-webui\venv\lib\site-packages\huggingface_hub\file_download.py", line 1722, in _get_metadata_or_catch_error
metadata = get_hf_file_metadata(url=url, proxies=proxies, timeout=etag_timeout, headers=headers)
File "D:\stable-diffusion-webui\venv\lib\site-packages\huggingface_hub\utils\_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
File "D:\stable-diffusion-webui\venv\lib\site-packages\huggingface_hub\file_download.py", line 1645, in get_hf_file_metadata
r = _request_wrapper(
File "D:\stable-diffusion-webui\venv\lib\site-packages\huggingface_hub\file_download.py", line 372, in _request_wrapper
response = _request_wrapper(
File "D:\stable-diffusion-webui\venv\lib\site-packages\huggingface_hub\file_download.py", line 395, in _request_wrapper
response = get_session().request(method=method, url=url, **params)
File "D:\stable-diffusion-webui\venv\lib\site-packages\requests\sessions.py", line 589, in request
resp = self.send(prep, **send_kwargs)
File "D:\stable-diffusion-webui\venv\lib\site-packages\requests\sessions.py", line 703, in send
r = adapter.send(request, **kwargs)
File "D:\stable-diffusion-webui\venv\lib\site-packages\huggingface_hub\utils\_http.py", line 66, in send
return super().send(request, *args, **kwargs)
File "D:\stable-diffusion-webui\venv\lib\site-packages\requests\adapters.py", line 698, in send
raise SSLError(e, request=request)
requests.exceptions.SSLError: (MaxRetryError("HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /openai/clip-vit-large-patch14/resolve/main/vocab.json (Caused by SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:997)')))"), '(Request ID: 825164c6-3d61-4bb3-b0f8-bca68fe6ea84)')
Stable diffusion model failed to load
```
### Additional information
_No response_ | open | 2024-07-02T07:26:58Z | 2024-07-13T20:28:38Z | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/16126 | [
"bug-report"
] | kingyoolee | 2 |
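Every chained failure in the log above bottoms out in the same TLS handshake error while fetching `vocab.json` for the CLIP tokenizer from huggingface.co. A commonly suggested mitigation — assuming the tokenizer files are already present in the local Hugging Face cache — is to run in offline mode so no connection is attempted at all. The environment variable names below are the standard Hugging Face ones; the launch commands are hypothetical for this install:

```shell
# Force the Hugging Face libraries to use only the local cache and never open
# a TLS connection to huggingface.co (sidesteps the SSLEOFError entirely).
export HF_HUB_OFFLINE=1
export TRANSFORMERS_OFFLINE=1
# ...then launch the webui as usual, e.g. ./webui.sh (Linux) or webui-user.bat (Windows)
```

If the cache is empty, the files still have to be downloaded once from a network without TLS interception before offline mode can work.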
kymatio/kymatio | numpy | 478 | Sphinx gallery does not create thumbnails for all examples | See for example https://www.kymat.io/gallery_2d/regularized_inverse_scattering_MNIST.html#sphx-glr-gallery-2d-regularized-inverse-scattering-mnist-py and various other examples: the figure is neither displayed on the example page, nor in the gallery of thumbnails. I believe this is because of the
```
if __name__ == '__main__':
    main()
```
structure. Moving all the instructions from inside the `main()` function to the top level of the file should solve the problem. I'm happy to make a PR with these changes if the devs here agree. | closed | 2020-01-05T21:17:58Z | 2020-01-27T02:46:19Z | https://github.com/kymatio/kymatio/issues/478 | [] | emmanuelle | 5
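The restructuring the reporter proposes can be sketched in a few lines — sphinx-gallery executes an example file top to bottom as a script, so statements hidden behind the `__main__` guard never run during the docs build and no figure (hence no thumbnail) is produced. A minimal sketch, not the actual example file:

```python
# Before: everything lives behind the __main__ guard, so sphinx-gallery's
# top-to-bottom execution of the file runs nothing and captures no output:
#
#     def main():
#         data = [x ** 2 for x in range(5)]
#         print(data)
#
#     if __name__ == '__main__':
#         main()
#
# After: hoist the body to module level so the gallery build executes it.
data = [x ** 2 for x in range(5)]
print(data)  # -> [0, 1, 4, 9, 16]
```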
pytest-dev/pytest-selenium | pytest | 78 | INTERNALERROR: KeyError: 'startdir' | Hi, I'm using py.test and django. After installing `pytest-selenium`, I got the following error.
Can anyone shed some light on this?
```
$ py.test
Test session starts (platform: darwin, Python 3.5.2, pytest 3.0.2, pytest-sugar 0.7.1)
INTERNALERROR> Traceback (most recent call last):
INTERNALERROR> File "/Users/jon/.virtualenvs/webapp/lib/python3.5/site-packages/_pytest/main.py", line 94, in wrap_session
INTERNALERROR> config.hook.pytest_sessionstart(session=session)
INTERNALERROR> File "/Users/jon/.virtualenvs/webapp/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 724, in __call__
INTERNALERROR> return self._hookexec(self, self._nonwrappers + self._wrappers, kwargs)
INTERNALERROR> File "/Users/jon/.virtualenvs/webapp/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 338, in _hookexec
INTERNALERROR> return self._inner_hookexec(hook, methods, kwargs)
INTERNALERROR> File "/Users/jon/.virtualenvs/webapp/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 333, in <lambda>
INTERNALERROR> _MultiCall(methods, kwargs, hook.spec_opts).execute()
INTERNALERROR> File "/Users/jon/.virtualenvs/webapp/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 596, in execute
INTERNALERROR> res = hook_impl.function(*args)
INTERNALERROR> File "/Users/jon/.virtualenvs/webapp/lib/python3.5/site-packages/pytest_sugar.py", line 217, in pytest_sessionstart
INTERNALERROR> lines = self.config.hook.pytest_report_header(config=self.config)
INTERNALERROR> File "/Users/jon/.virtualenvs/webapp/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 724, in __call__
INTERNALERROR> return self._hookexec(self, self._nonwrappers + self._wrappers, kwargs)
INTERNALERROR> File "/Users/jon/.virtualenvs/webapp/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 338, in _hookexec
INTERNALERROR> return self._inner_hookexec(hook, methods, kwargs)
INTERNALERROR> File "/Users/jon/.virtualenvs/webapp/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 333, in <lambda>
INTERNALERROR> _MultiCall(methods, kwargs, hook.spec_opts).execute()
INTERNALERROR> File "/Users/jon/.virtualenvs/webapp/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 593, in execute
INTERNALERROR> args = [all_kwargs[argname] for argname in hook_impl.argnames]
INTERNALERROR> File "/Users/jon/.virtualenvs/webapp/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 593, in <listcomp>
INTERNALERROR> args = [all_kwargs[argname] for argname in hook_impl.argnames]
INTERNALERROR> KeyError: 'startdir'
```
I tested this with both selenium 3.0.0b2 and 2.53.6 -- same error. When I uninstalled `pytest-selenium` the error went away.
[Here's an example project](https://github.com/jondelmil/example), sorry it's so big but it's the quickest way to get where I am now.
To reproduce, `pip install -r requirements/test.txt && pip install -r requirements/local.txt`, then `py.test`
I also had to properly space line 49 in `/Users/jon/.virtualenvs/example/lib/python3.5/site-packages/selenium/webdriver/safari/webdriver.py` to fall under the if statement, per [this commit](https://github.com/SeleniumHQ/selenium/commit/055325c09d783b6b6c9cbdeaa0828384048b0222).
| closed | 2016-09-06T22:38:08Z | 2016-09-06T23:39:03Z | https://github.com/pytest-dev/pytest-selenium/issues/78 | [] | jmillxyz | 7 |
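For context on the `KeyError: 'startdir'` in the traceback above: pluggy fills each hook implementation's parameters by name from the kwargs the caller supplied (`args = [all_kwargs[argname] ...]` in the stack trace), so a hookimpl that declares `startdir` crashes whenever a caller — here pytest-sugar — invokes the hook without passing it. A minimal sketch of a tolerant implementation (illustrative only, not the actual pytest-selenium code):

```python
# pluggy matches hookimpl arguments by name from the caller's kwargs, so a
# hook that declares `startdir` raises KeyError when a caller omits it.
# A tolerant implementation simply doesn't request the optional argument:
def pytest_report_header(config):
    return ["example plugin: header line"]
```

An implementation like this keeps working whether or not the caller supplies `startdir`.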
GibbsConsulting/django-plotly-dash | plotly | 154 | Add demo website to documentation and demo templates | There is now a demo website at https://djangoplotlydash.com
It can be used as a live example of the library, so links should be added to the documentation and also to the template for the demo itself. | closed | 2019-05-16T16:38:46Z | 2019-05-17T16:08:26Z | https://github.com/GibbsConsulting/django-plotly-dash/issues/154 | [
"enhancement"
] | GibbsConsulting | 0 |
noirbizarre/flask-restplus | api | 613 | Can the body only be called "payload"? | In the generated Swagger document, can the body parameter only be called `payload`? I want to change the name to something more meaningful.
```
@au.route('/authenticate')
@au.response(400, 'params error')
class Authenticate(Resource):
@au.doc('Get a accessToken')
@au.doc(body=auth)
@au.marshal_with(token)
def post(self):
``` | open | 2019-03-27T07:42:34Z | 2020-01-16T02:58:23Z | https://github.com/noirbizarre/flask-restplus/issues/613 | [
"Needed: Feedback"
] | ELvisZxc | 3 |
lukas-blecher/LaTeX-OCR | pytorch | 18 | google/protobuf/pyext/descriptor.cc:358: bad argument to internal function | Hello, i got this error went i run python pix2tex.
I have torch 1.7 with cuda.
| closed | 2021-06-01T09:49:56Z | 2021-06-01T16:22:37Z | https://github.com/lukas-blecher/LaTeX-OCR/issues/18 | [] | geek3000 | 6 |
babysor/MockingBird | pytorch | 682 | Is it normal that the attention plot shows no clear diagonal while training the model? | So far I have only run 7k steps, and the training material was all carefully selected and processed by me, but the attention plot still never shows a clear diagonal line. Is the problem with the material, or is the amount of training not enough yet? Is it worth waiting longer to see?

| open | 2022-07-27T08:21:46Z | 2022-07-30T12:20:58Z | https://github.com/babysor/MockingBird/issues/682 | [] | Ternloli | 1 |
JaidedAI/EasyOCR | machine-learning | 569 | Newbie question: how to produce output as a TXT file | Hi, sorry for the very newbie question and my broken English.
I am not a Python programmer, so getting the install to this "working state" is quite miraculous for me.
For the record, I use Windows 10 and run "easyocr.exe" from cmd, and this is the result:
```
([[384, 46], [862, 46], [862, 124], [384, 124]], '元旱安,好久不旯', 0.5539275336640896)
```
(later I learned about "--detail 0")
My question is: how do I send the result to a TXT file?
I tried
```
easyocr.exe -l ch_sim -f Source.jpeg > result.txt
```
but the result is an error.
Thanks for the great job and thanks for reading; have a nice day | closed | 2021-10-15T13:23:30Z | 2024-03-28T12:51:04Z | https://github.com/JaidedAI/EasyOCR/issues/569 | [] | kucingkembar | 3
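The command-line redirection above should normally work, but the same thing can be done explicitly from Python. The `easyocr.Reader`/`readtext` names below are from easyocr's documented Python API; the file-writing helper is just illustrative:

```python
from pathlib import Path

def save_lines(lines, path):
    """Write one recognized string per line to a UTF-8 text file."""
    Path(path).write_text("\n".join(lines), encoding="utf-8")

# Assumed easyocr usage (requires `pip install easyocr`):
#
#   import easyocr
#   reader = easyocr.Reader(["ch_sim", "en"])
#   lines = reader.readtext("Source.jpeg", detail=0)  # detail=0 -> strings only
#   save_lines(lines, "result.txt")
```

With `detail=0`, `readtext` returns plain strings, so each recognized line lands on its own line of `result.txt`.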
flasgger/flasgger | flask | 434 | The imp module is deprecated in favour of importlib; | It seems this line needs to be updated:
https://github.com/flasgger/flasgger/blob/ce98b046cbec2b2eb8c6de6df4b5409c06a8e539/flasgger/utils.py#L5
```
File "/usr/local/Cellar/python@3.8/3.8.2/Frameworks/Python.framework/Versions/3.8/lib/python3.8/imp.py", line 31, in <module>
warnings.warn("the imp module is deprecated in favour of importlib; "
DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
```
produced using: `PYTHONWARNINGS=error` in a flask project that uses flasgger.
| closed | 2020-09-30T15:14:11Z | 2022-09-03T02:04:56Z | https://github.com/flasgger/flasgger/issues/434 | [] | kamyar | 8 |
slackapi/bolt-python | fastapi | 818 | Scopes required to allow DM-ing bot directly | Hello! Got a feature request that users would like to DM the bot directly instead of asking in channels.
<img width="969" alt="image" src="https://user-images.githubusercontent.com/67022327/216746049-0f6e4c82-b95f-4bff-b74e-3eb665dc79ff.png">
Here are the scopes I have set. I thought adding `im:read` and `im:write` would work, but it did not. I did reinstall the app after adding the scopes.

| closed | 2023-02-04T03:38:17Z | 2023-02-07T02:08:57Z | https://github.com/slackapi/bolt-python/issues/818 | [
"question"
] | asontha | 4 |
autokey/autokey | automation | 557 | Autokey breaks dependencies in the latest KDE Neon. | ## Classification:
Bug
## Reproducibility:
Always
## Version
AutoKey version: 0.95.10
Used GUI (Gtk, Qt, or both): Qt
If the problem is known to be present in more than one version, please list all of those.
Installed via: PPA (nemonein)
Linux Distribution: KDE Neon
## Summary
In the latest update of KDE Neon (a few days ago), I got these messages.
```
Starting pkgProblemResolver with broken count: 1
Starting 2 pkgProblemResolver with broken count: 1
Investigating (0) python3-pyqt5:amd64 < 5.14.1+dfsg-3+20.04+focal+build13 -> 5.15.4+dfsg-1+20.04+focal+unstable+build20 @ii umU Ib >
Broken python3-pyqt5:amd64 breaks on python3-pyqt5.qsci:amd64 < 2.11.2+dfsg-6 @ii mK > (< 2.11.5~)
Considering python3-pyqt5.qsci:amd64 0 as a solution to python3-pyqt5:amd64 8
Added python3-pyqt5.qsci:amd64 to the remove list
Fixing python3-pyqt5:amd64 via remove of python3-pyqt5.qsci:amd64
Investigating (0) autokey-qt:amd64 < 0.95.10-0-0build1~ubuntu20.04 @ii mK Ib >
Broken autokey-qt:amd64 depends on python3-pyqt5.qsci:amd64 < 2.11.2+dfsg-6 @ii mR >
Considering python3-pyqt5.qsci:amd64 0 as a solution to autokey-qt:amd64 0
Removing autokey-qt:amd64 rather than change python3-pyqt5.qsci:amd64
```
Maybe these problems are already solved in 0.96?
| closed | 2021-06-06T08:09:28Z | 2021-06-12T16:51:06Z | https://github.com/autokey/autokey/issues/557 | [
"upstream bug",
"installation/configuration"
] | nemonein | 11 |
streamlit/streamlit | deep-learning | 10,758 | Unable to see tracebacks in console for app exceptions | ### Checklist
- [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar issues.
- [x] I added a very descriptive title to this issue.
- [x] I have provided sufficient information below to help reproduce this issue.
### Summary
Hello, I have encountered a very frustrating issue. I am working on a somewhat complex app, and at some point during development, streamlit stopped giving me tracebacks when exceptions are raised. This includes the console; nothing is being logged except the exception message like "ZeroDivisionError: division by zero". I can't find any reason why this would occur, and I have scoured the forums / google / github issues but found nothing. I tried explicitly setting the .streamlit/config.toml to have showErrorDetails = "full" but this changes nothing.
I thought it might have something to do with fragments, or cached resources, but even after removing these and restarting the server, I'm still not getting tracebacks. As a sort of minimum repro, I added "1/0" as the first line in my app file, and it still only shows me "ZeroDivisionError: division by zero" with no traceback when I rerun the app. I have observed that this only happens later in the app's execution; if I let the app run and crash on its own, any subsequent page refresh will no longer show me tracebacks, even if it was showing before.
I can't reproduce the issue from scratch and I have no idea what is different about my production app. Any advice would be appreciated.
### Reproducible Code Example
```Python
# this code snippet does not work in isolation - there is some other root cause
# if this error is triggered before any other operation, there will be a traceback in the console
# triggering this after an app refresh only prints "ZeroDivisionError: division by zero" and no traceback - even if it's the first line of code
try:
1/0
except Exception:
import traceback
print(traceback.format_exc())
```
### Steps To Reproduce
1. Add the caught zero division error as first line of code
2. Observe proper traceback in console
3. App continues execution and encounters an error
4. Observe error message but no traceback shown in console
5. Refresh the page
6. Observe that now there's no traceback in console for the zero division error
### Expected Behavior
I expect to see tracebacks in the console so I can debug my code more easily.
### Current Behavior
_No response_
### Is this a regression?
- [ ] Yes, this used to work in a previous version.
### Debug info
- Streamlit version: 1.43.2
- Python version: 3.11.11
- Operating System: Windows WSL
- Browser: Chrome
### Additional Information
_No response_ | closed | 2025-03-12T21:52:06Z | 2025-03-13T16:10:11Z | https://github.com/streamlit/streamlit/issues/10758 | [
"type:bug",
"status:needs-triage"
] | cc-c4 | 2 |
graphistry/pygraphistry | jupyter | 473 | [BUG] hackernews demo fails on merge branch | On `http://localhost/notebook/lab/tree/demos/ai/Introduction/Ask-HackerNews-Demo.ipynb`:
```
File /opt/conda/envs/rapids/lib/python3.8/site-packages/graphistry/feature_utils.py:652, in impute_and_scale_df(df, use_scaler, impute, n_quantiles, output_distribution, quantile_range, n_bins, encode, strategy, keep_n_decimals)
629 def impute_and_scale_df(
630 df: pd.DataFrame,
631 use_scaler: str = "robust",
(...)
639 keep_n_decimals: int = 5,
640 ) -> Tuple[pd.DataFrame, Pipeline]:
642 transformer = get_preprocessing_pipeline(
643 impute=impute,
644 use_scaler=use_scaler,
(...)
650 strategy=strategy,
651 )
--> 652 res = fit_pipeline(df, transformer, keep_n_decimals=keep_n_decimals)
654 return res, transformer
File /opt/conda/envs/rapids/lib/python3.8/site-packages/graphistry/feature_utils.py:622, in fit_pipeline(X, transformer, keep_n_decimals)
619 columns = X.columns
620 index = X.index
--> 622 X = transformer.fit_transform(X)
623 if keep_n_decimals:
624 X = np.round(X, decimals=keep_n_decimals) # type: ignore # noqa
File /opt/conda/envs/rapids/lib/python3.8/site-packages/sklearn/pipeline.py:437, in Pipeline.fit_transform(self, X, y, **fit_params)
410 """Fit the model and transform with the final estimator.
411
412 Fits all the transformers one after the other and transform the
(...)
434 Transformed samples.
435 """
436 fit_params_steps = self._check_fit_params(**fit_params)
--> 437 Xt = self._fit(X, y, **fit_params_steps)
439 last_step = self._final_estimator
440 with _print_elapsed_time("Pipeline", self._log_message(len(self.steps) - 1)):
File /opt/conda/envs/rapids/lib/python3.8/site-packages/sklearn/pipeline.py:339, in Pipeline._fit(self, X, y, **fit_params_steps)
336 def _fit(self, X, y=None, **fit_params_steps):
337 # shallow copy of steps - this should really be steps_
338 self.steps = list(self.steps)
--> 339 self._validate_steps()
340 # Setup the memory
341 memory = check_memory(self.memory)
File /opt/conda/envs/rapids/lib/python3.8/site-packages/sklearn/pipeline.py:243, in Pipeline._validate_steps(self)
237 # We allow last estimator to be None as an identity transformation
238 if (
239 estimator is not None
240 and estimator != "passthrough"
241 and not hasattr(estimator, "fit")
242 ):
--> 243 raise TypeError(
244 "Last step of Pipeline should implement fit "
245 "or be the string 'passthrough'. "
246 "'%s' (type %s) doesn't" % (estimator, type(estimator))
247 )
TypeError: Last step of Pipeline should implement fit or be the string 'passthrough'. '<function identity at 0x7fc7b4870430>' (type <class 'function'>) doesn't
``` | open | 2023-05-01T06:13:05Z | 2023-05-26T23:48:30Z | https://github.com/graphistry/pygraphistry/issues/473 | [
"bug"
] | lmeyerov | 4 |
babysor/MockingBird | deep-learning | 789 | Running demo_toolbox.py reports an error | **Summary [Problem summary (one sentence)]**
A clear and concise description of what the issue is.
Running demo_toolbox.py reports an error.
Arguments:
datasets_root: None
vc_mode: False
enc_models_dir: encoder\saved_models
syn_models_dir: synthesizer\saved_models
voc_models_dir: vocoder\saved_models
extractor_models_dir: ppg_extractor\saved_models
convertor_models_dir: ppg2mel\saved_models
cpu: False
seed: None
no_mp3_support: False
qt.qpa.plugin: Could not find the Qt platform plugin "windows" in ""
This application failed to start because no Qt platform plugin could be initialized. Reinstalling the application may fix this problem.
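This error usually means Qt cannot find its bundled platform plugins. A hedged workaround sketch: point `QT_QPA_PLATFORM_PLUGIN_PATH` at the plugin folder inside the installed PyQt5 package before the GUI starts (the exact subfolder varies between PyQt5 releases, so verify it against your install):

```python
import os

def qt_platform_plugin_dir(binding_dir):
    """Build the path to Qt's 'platforms' plugin folder inside a PyQt5 install.

    PyQt5 wheels ship plugins under either Qt5/plugins or Qt/plugins
    depending on the release, so check which folder actually exists.
    """
    for sub in ("Qt5", "Qt"):
        candidate = os.path.join(binding_dir, sub, "plugins", "platforms")
        if os.path.isdir(candidate):
            return candidate
    # fall back to the newer layout if neither exists on disk
    return os.path.join(binding_dir, "Qt5", "plugins", "platforms")

# Hypothetical usage before launching the toolbox GUI:
#   import PyQt5
#   os.environ["QT_QPA_PLATFORM_PLUGIN_PATH"] = qt_platform_plugin_dir(
#       os.path.dirname(PyQt5.__file__))
```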
**Env & To Reproduce [Environment & reproduction]**
Describe the environment, code version, and model you used.
python=3.10.8
**Screenshots [if any]**
If applicable, add screenshots to help
| open | 2022-11-23T14:45:29Z | 2022-12-15T13:40:49Z | https://github.com/babysor/MockingBird/issues/789 | [] | jasonyun | 1 |
piskvorky/gensim | nlp | 2906 | Add 32/64-bit reporting to issue template | It'd be useful if the issue template also requested whether the local Python is a 32-bit or 64-bit executable. Adding this line to the "Please provide the output of" code should be enough:
```python
import struct; print(8 * struct.calcsize("P"))
``` | closed | 2020-07-29T18:37:04Z | 2020-07-29T22:14:54Z | https://github.com/piskvorky/gensim/issues/2906 | [
"housekeeping"
] | gojomo | 3 |
mars-project/mars | pandas | 3,185 | [BUG] Ray shuffle load fetch shuffle raises AssertionError | <!--
Thank you for your contribution!
Please review https://github.com/mars-project/mars/blob/master/CONTRIBUTING.rst before opening an issue.
-->
**Describe the bug**
A clear and concise description of what the bug is.
```python
mars/tensor/indexing/tests/test_indexing_execution.py:309 (test_setitem_fancy_index_execution)
setup = <mars.deploy.oscar.session.SyncSession object at 0x1242bbbe0>
def test_setitem_fancy_index_execution(setup):
rs = np.random.RandomState(0)
raw = rs.randint(0, 10, size=(11, 12))
# index is a ndarray, value is a scalar
arr = tensor(raw.copy(), chunk_size=5)
idx = rs.randint(0, 11, (5,))
arr[idx] = 20
> res = arr.execute().fetch()
mars/tensor/indexing/tests/test_indexing_execution.py:319:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
mars/core/entity/tileables.py:462: in execute
result = self.data.execute(session=session, **kw)
mars/core/entity/executable.py:144: in execute
return execute(self, session=session, **kw)
mars/deploy/oscar/session.py:1890: in execute
return session.execute(
mars/deploy/oscar/session.py:1684: in execute
execution_info: ExecutionInfo = fut.result(
../../.pyenv/versions/3.8.7/lib/python3.8/concurrent/futures/_base.py:439: in result
return self.__get_result()
../../.pyenv/versions/3.8.7/lib/python3.8/concurrent/futures/_base.py:388: in __get_result
raise self._exception
mars/deploy/oscar/session.py:1870: in _execute
await execution_info
../../.pyenv/versions/3.8.7/lib/python3.8/asyncio/tasks.py:695: in _wrap_awaitable
return (yield from awaitable.__await__())
mars/deploy/oscar/session.py:105: in wait
return await self._aio_task
mars/deploy/oscar/session.py:953: in _run_in_background
raise task_result.error.with_traceback(task_result.traceback)
mars/services/task/supervisor/processor.py:369: in run
await self._process_stage_chunk_graph(*stage_args)
mars/services/task/supervisor/processor.py:247: in _process_stage_chunk_graph
chunk_to_result = await self._executor.execute_subtask_graph(
mars/services/task/execution/ray/executor.py:485: in execute_subtask_graph
input_object_refs = await self._load_subtask_inputs(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <mars.services.task.execution.ray.executor.RayTaskExecutor object at 0x13f889ee0>
stage_id = 'xG2oeZ77OGqURoRWnXO9bk0I'
subtask = <Subtask id=SRyg6Deb4AzAjglTkOaDAkBB results=[TensorIndexSetValue(f01fe54c1f4a19cb552c05860d76c114_0)]>
context = {'5ee475fb6e8060867f5a1c97421059b3_0': ObjectRef(16310a0f0a45af5cffffffffffffffffffffffff0100000001000000)}
shuffle_manager = <mars.services.task.execution.ray.shuffle.ShuffleManager object at 0x14295b6a0>
async def _load_subtask_inputs(
self,
stage_id: str,
subtask: Subtask,
context: Dict,
shuffle_manager: ShuffleManager,
):
"""
Load input object refs of subtask from context.
It updates the context if the input object refs are fetched from
the meta service.
"""
input_object_refs = []
key_to_get_meta = {}
# for non-shuffle chunks, chunk key will be used for indexing object refs.
# for shuffle chunks, mapper subtasks will have only one mapper chunk, and all outputs for mapper
# subtask will be shuffle blocks, the downstream reducers will receive inputs in the mappers order.
start_chunks = _get_start_chunks(subtask.chunk_graph)
for index, start_chunk in enumerate(start_chunks):
if isinstance(start_chunk.op, Fetch):
chunk_key = start_chunk.key
# pure_depend data is not used, skip it.
if chunk_key in subtask.pure_depend_keys:
input_object_refs.append(None)
elif chunk_key in context:
input_object_refs.append(context[chunk_key])
else:
input_object_refs.append(None)
key_to_get_meta[index] = self._meta_api.get_chunk_meta.delay(
chunk_key, fields=["object_refs"]
)
elif isinstance(start_chunk.op, FetchShuffle):
> assert len(start_chunks) == 1, start_chunks
E AssertionError: [Chunk <op=TensorFetch, key=5ee475fb6e8060867f5a1c97421059b3_0>, Chunk <op=TensorFetchShuffle, key=73d37b69ae018bc2c4f66072dd0bfb7b_0>]
mars/services/task/execution/ray/executor.py:678: AssertionError
```
**To Reproduce**
To help us reproduce this bug, please provide the information below:
1. Your Python version
2. The version of Mars you use
3. Versions of crucial packages, such as numpy, scipy and pandas
4. Full stack of the error.
5. Minimized code to reproduce the error.
**Expected behavior**
A clear and concise description of what you expected to happen.
**Additional context**
Add any other context about the problem here.
| closed | 2022-07-12T10:00:43Z | 2022-08-08T11:16:55Z | https://github.com/mars-project/mars/issues/3185 | [
"type: bug",
"mod: ray integration",
"shuffle"
] | fyrestone | 1 |
ultralytics/ultralytics | machine-learning | 18805 | How to Plot and Visualize Feature Maps for Each Layer in a YOLOv11 Model | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
Hi, I'm training a model using YOLOv11 and I've enabled visualization by setting **visualize=True** in my training script. However, I'm not seeing the feature maps from the convolutional layers, which I expected to be part of the visual output. Could you guide me on how to access or enable the visualization of feature maps for each convolutional layer during training? Here's the relevant part of my code:
```python
from ultralytics import YOLO

if __name__ == "__main__":
    model = YOLO("yolo11n.pt", task="detect")  # Load model and make predictions

    # Train the model
    results = model.train(
        data="C:/Users/User/Desktop/University/Annotated_dataset/data.yaml",  # path to your data configuration file
        epochs=350,  # number of epochs
        imgsz=640,  # image size
        batch=32,  # batch size
        device='0',  # cuda device (use '0' for GPU, 'cpu' for CPU)
        visualize=True,  # plot results
    )
```
### Additional
_No response_ | open | 2025-01-21T16:39:23Z | 2025-01-21T22:39:08Z | https://github.com/ultralytics/ultralytics/issues/18805 | [
"question",
"detect"
] | WengYan0619 | 2 |
albumentations-team/albumentations | deep-learning | 1880 | [Feature Request] Make hardcoded parameter `shadow_intensity = 0.5` a parameter | The proposal is to add a `shadow_intensity_range: tuple[float, float]` parameter to the `RandomShadow` class, sample from it, and use it in the `add_shadow` function. | closed | 2024-08-14T16:23:35Z | 2024-08-16T06:36:35Z | https://github.com/albumentations-team/albumentations/issues/1880 | [
"enhancement"
] | ternaus | 4 |
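A minimal sketch of the sampling proposed above; the function name and the validation are illustrative, and the default range mirrors the current hardcoded 0.5:

```python
import random

def sample_shadow_intensity(shadow_intensity_range=(0.5, 0.5)):
    """Sample a shadow intensity from a range.

    The previously hardcoded 0.5 becomes the degenerate default range
    (0.5, 0.5); both bounds are assumed to live in [0, 1].
    """
    low, high = shadow_intensity_range
    if not 0 <= low <= high <= 1:
        raise ValueError("shadow_intensity_range must satisfy 0 <= low <= high <= 1")
    return random.uniform(low, high)
```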
nteract/papermill | jupyter | 814 | Use maintained bumpversion fork | ## 🐛 Bug
bumpversion has not been maintained in over 5 years, and its README recommends using one of the maintained forks.
Would you accept a PR to use bump-my-version instead?
https://github.com/callowayproject/bump-my-version
| open | 2024-12-22T02:47:03Z | 2024-12-22T02:47:03Z | https://github.com/nteract/papermill/issues/814 | [
"bug",
"help wanted"
] | jgarte | 0 |
nvbn/thefuck | python | 768 | Syntax errors in fish shell | <!-- If you have any issue with The Fuck, sorry about that, but we will do what we
can to fix that. Actually, maybe we already have, so first thing to do is to
update The Fuck and see if the bug is still there. -->
<!-- If it is (sorry again), check if the problem has not already been reported and
if not, just open an issue on [GitHub](https://github.com/nvbn/thefuck) with
the following basic information: -->
The output of `thefuck --version` (something like `The Fuck 3.1 using Python 3.5.0`):
The Fuck 3.25 using Python 3.6.4
Your shell and its version (`bash`, `zsh`, *Windows PowerShell*, etc.):
fish, version 2.7.1
Your system (Debian 7, ArchLinux, Windows, etc.):
MacOS High Sierra 10.13.2
How to reproduce the bug:
Place into config.fish `thefuck --alias | source`, as in instructions.
On terminal startup, fish gives syntax error messages relating to the use of `&&` rather than `; and` in fish, plus more relating to variable setting. Essentially, it seems that `thefuck --alias` emits a script that only runs in bash.
| closed | 2018-01-07T21:53:16Z | 2019-01-01T10:31:04Z | https://github.com/nvbn/thefuck/issues/768 | [] | Genora51 | 4 |
jupyter-book/jupyter-book | jupyter | 1,585 | Documentation or feature to enable glue to work with tables | ### Description / Summary
Following the docs [here](https://jupyterbook.org/content/executable/output-insert.html#the-glue-figure-directive) I can use syntax like this to generate a table; however, it gets called a figure and the formatting is very plain when rendering to PDF:
Code cell to generate glue table:
```
import pandas as pd
from sklearn.datasets import load_iris
from myst_nb import glue
data = load_iris()
df = pd.DataFrame(data.data, columns=data.feature_names)
glue("df_tbl", df)
```
markdown cell to glue in table:
<img width="650" alt="Screen Shot 2022-01-03 at 1 03 42 PM" src="https://user-images.githubusercontent.com/6865016/147980123-b73732a8-c73c-4d78-a5d7-1728368ab09d.png">
Resultant table when rendered to pdf:
<img width="755" alt="Screen Shot 2022-01-03 at 1 04 20 PM" src="https://user-images.githubusercontent.com/6865016/147980166-86685a87-4899-4438-a04a-959655e5f7d8.png">
However, if I create a table using markdown, I can use the MyST syntax to get it labelled as a table and formatted nicely in pdf...
markdown cell to create markdown table:
<img width="580" alt="Screen Shot 2022-01-03 at 1 05 33 PM" src="https://user-images.githubusercontent.com/6865016/147980287-486ceb96-cc8b-4a66-869f-459a26849049.png">
Resultant table when rendered to pdf:
<img width="790" alt="Screen Shot 2022-01-03 at 1 06 00 PM" src="https://user-images.githubusercontent.com/6865016/147980347-181c4eba-ad71-4ff8-b719-fff2d92dc756.png">
### Value / benefit
Is it possible to make a nicely labelled and rendered table like that from a pandas data frame in Jupyter book yet using glue? If so, is there documentation for this, or if you show me how, I would be happy to create a PR to add it. If not, this would be a very valuable feature for Jupyter books - and comparable features exist in `bookdown` (the R world's alternative to Jupyter book).
### Implementation details
_No response_
### Tasks to complete
_No response_ | open | 2022-01-03T21:09:03Z | 2022-01-03T21:09:03Z | https://github.com/jupyter-book/jupyter-book/issues/1585 | [
"enhancement"
] | ttimbers | 0 |
tradingstrategy-ai/web3-ethereum-defi | pytest | 135 | Better tracking for Uniswap v3 price estimation | - Currently `estimate_sell_received_amount` (and co) return the raw amount
To track the slippage better, we should also return
- Block number
- Mid-price at the time of the reading
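These extra fields could travel together in a small result object rather than a bare raw amount. An illustrative sketch (the names and types are assumptions, not the library's actual API):

```python
from dataclasses import dataclass
from decimal import Decimal

@dataclass(frozen=True)
class SellEstimate:
    """Raw estimate plus the context needed to diagnose slippage later."""
    amount_out_raw: int   # what estimate_sell_received_amount returns today
    block_number: int     # chain state the quote was read at
    mid_price: Decimal    # mid-price at the time of the reading
```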
This allows us to post-mortem diagnose why slippage tolerance was exceeded | closed | 2023-07-16T18:17:09Z | 2023-07-25T11:33:04Z | https://github.com/tradingstrategy-ai/web3-ethereum-defi/issues/135 | [
"priority: P1"
] | miohtama | 0 |
neuml/txtai | nlp | 537 | Why do memory usages differ by 3Gb in these 2 scenarios? | Scenario 1: I pull the agnews dataset (60 MB) and save the first 10K records into an index. It takes about 4.32 GB in memory after completion.
Scenario 2: I load from the same path, and it only takes about 1.4 GB.
Why is there a difference here?
```
# scenario 1 - 4.32 GB
import txtai

# Build hybrid index
embeddings = txtai.Embeddings({
"method": "sentence-transformers",
"path": "BAAI/bge-small-en",
"content": True,
"hybrid": False,
"backend": "faiss",
# "faiss": {
# "mmap": True,
# }
})
embeddings.index(stream(dataset, "text", 10000))
embeddings.save("location")
embeddings.search("Kennedy")
# scenario 2 - 1.42 GB
from txtai import Embeddings

embeddings = Embeddings()
embeddings.load("/Users/wingsangvincentliu/Documents/UnlostApp.nosync/txtai_hybrid_false_mmap_true")
embeddings.search("Kennedy")
``` | closed | 2023-08-28T18:36:57Z | 2023-09-22T16:57:48Z | https://github.com/neuml/txtai/issues/537 | [] | Vincent-liuwingsang | 14 |
polarsource/polar | fastapi | 4,721 | Resend API is blocking | It blocks the worker loop, especially when Resend API has hiccups and timeouts. Two things:
* [x] Make the API request ourselves using `httpx`, remove their wrapper.
* [x] Make email sending an individual task so the whole job doesn't fail if the email sending fails; allowing us to retry it.
My # 1 priority before the holidays. | closed | 2024-12-20T12:54:58Z | 2024-12-20T15:01:15Z | https://github.com/polarsource/polar/issues/4721 | [
"bug"
] | frankie567 | 1 |
plotly/dash-core-components | dash | 292 | Input type “number” increment bug | https://community.plot.ly/t/input-type-number-increment-issue/13277
If min is set to 0.0001 and one tries to increment the value from 1 to 2, it instead goes from 1 to 1.0001. The next number it goes to is 0.0001. | closed | 2018-09-04T15:33:31Z | 2018-11-05T20:31:29Z | https://github.com/plotly/dash-core-components/issues/292 | [] | oliverbrace | 0 |
mljar/mljar-supervised | scikit-learn | 276 | Error for 5_Default_CatBoost | Stop training after the first fold. Time needed to train on the first fold 821.0 seconds. The time estimate for training on all folds is larger than total_time_limit.
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/supervised/base_automl.py", line 907, in _fit
trained = self.train_model(params)
File "/usr/local/lib/python3.6/dist-packages/supervised/base_automl.py", line 300, in train_model
mf.train(model_path)
File "/usr/local/lib/python3.6/dist-packages/supervised/model_framework.py", line 172, in train
self.callbacks.on_learner_train_end()
File "/usr/local/lib/python3.6/dist-packages/supervised/callbacks/callback_list.py", line 15, in on_learner_train_end
cb.on_learner_train_end(logs)
File "/usr/local/lib/python3.6/dist-packages/supervised/callbacks/total_time_constraint.py", line 44, in on_learner_train_end
"Stop training after the first fold. "
supervised.exceptions.AutoMLException: Stop training after the first fold. Time needed to train on the first fold 821.0 seconds. The time estimate for training on all folds is larger than total_time_limit.
| closed | 2020-12-26T04:33:37Z | 2021-01-11T22:07:15Z | https://github.com/mljar/mljar-supervised/issues/276 | [] | ijeffking | 2 |
ShishirPatil/gorilla | api | 112 | [feature]: Implement GitHub Workflow for Pre-Commit Checks and Validation | # Description
As we continue to enhance the development process of our repository, it's essential to ensure code quality and consistency. This issue aims to implement an automated GitHub Workflow that performs pre-commit checks and validation on every push. By doing so, we can catch potential issues early in the development cycle and maintain a high standard of code quality.
# Proposed Solution
We'll set up a GitHub Actions workflow using a YAML configuration file. This workflow will trigger on every push to any branch, initiating a series of checks that we define. These checks may include linting, code formatting, running tests, and other validations tailored to our project's requirements.
# Tasks
- [ ] Create a .github/workflows directory within the repository.
- [ ] Inside the workflows directory, add a YAML file named pre-commit-checks.yml.
- [ ] Define the workflow trigger to run on every push event to any branch.
- [ ] Specify the necessary jobs and steps within the YAML file for pre-commit checks.
- [ ] Integrate tools or scripts for code linting, formatting, testing, and other relevant checks.
- [ ] Set up conditional steps to display logs or messages in case of check failures.
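For illustration, a minimal sketch of what `pre-commit-checks.yml` could look like; the tool choices (flake8, pytest) and action versions here are placeholders, not decided requirements:

```yaml
# .github/workflows/pre-commit-checks.yml (illustrative sketch)
name: Pre-commit checks

on: push  # trigger on every push to any branch

jobs:
  checks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-python@v4
        with:
          python-version: "3.10"
      - name: Install check tools
        run: pip install flake8 pytest
      - name: Lint
        run: flake8 .
      - name: Run tests
        run: pytest
      - name: Report failure details
        if: failure()
        run: echo "A check failed; see the step logs above for details."
```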
# Acceptance Criteria
- The GitHub Workflow should trigger automatically on every push to any branch.
- The workflow should successfully perform the specified pre-commit checks and validations.
- In case of check failures, the workflow should display logs or messages indicating the issues.
For more information, @ShishirPatil will be here to help!
| open | 2023-08-23T09:50:37Z | 2023-08-23T09:58:33Z | https://github.com/ShishirPatil/gorilla/issues/112 | [
"enhancement"
] | rajveer43 | 0 |
Miserlou/Zappa | django | 1,609 | "exception_handler" does not work. | <!--- Provide a general summary of the issue in the Title above -->
## Context
<!--- Provide a more detailed introduction to the issue itself, and why you consider it to be a bug -->
<!--- Also, please make sure that you are running Zappa _from a virtual environment_ and are using Python 2.7/3.6 -->
### **"When I set custom handler in zappa_settings, it isn't applied."**
I found this when I set up the Sentry integration in a Django project.
I installed zappa_sentry and then set exception_handler as below.
````
{
"production": {
...
"exception_handler": "zappa_sentry.unhandled_exceptions",
...
}
}
````
Then I ran `zappa update production` and raised an error on purpose.
In zappa tail,
````
Traceback (most recent call last):
File "/var/task/django/core/handlers/exception.py", line 34, in inner
response = get_response(request)
File "/var/task/django/core/handlers/base.py", line 156, in _get_response
response = self.process_exception_by_middleware(e, request)
File "/var/task/django/core/handlers/base.py", line 154, in _get_response
response = response.render()
File "/var/task/django/template/response.py", line 106, in render
self.content = self.rendered_content
File "/var/task/django/template/response.py", line 81, in rendered_content
template = self.resolve_template(self.template_name)
File "/var/task/django/template/response.py", line 65, in resolve_template
return get_template(template, using=self.using)
File "/var/task/django/template/loader.py", line 19, in get_template
raise TemplateDoesNotExist(template_name, chain=chain)
django.template.exceptions.TemplateDoesNotExist: admin/change_list__.html
````
However, there was no error report in sentry.
So I made custom handler like this,
````
something.py
def handle_error(e, exception, context):
print("BOOOOOOOM!")
return False
````
then I set exception_handler in zappa_settings to 'something.handle_error'.
I raise error on purpose and see zappa tail production.
I expect "BOOOOOOOM" and traceback, but there is no "BOOOOOOOM" print, just traceback above.
When I set handle_error function return True, still zappa tail print same traceback above.
Even I set exception_handler like this random string,
````
{
"production": {
...
"exception_handler": "qwer",
...
}
}
````
still zappa tail print same traceback above.
Of course, I enter command "zappa update production" before every test and I even tried redeploy(zappa undeploy production, and zappa deploy production), but still same.
## Expected Behavior
<!--- Tell us what should happen -->
custom exception handler applied.
## Actual Behavior
<!--- Tell us what happens instead -->
Any exception handler doesn't applied.
## Possible Fix
<!--- Not obligatory, but suggest a fix or reason for the bug -->
## Steps to Reproduce
<!--- Provide a link to a live example, or an unambiguous set of steps to -->
<!--- reproduce this bug include code to reproduce, if relevant -->
1. set exception_handler in zappa_settings to any custom handler
2. raise error
3. see zappa_tail or sentry or any method that can verify whether exception_handler works.
## Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
* Zappa version used: 0.46.1, 0.46.2 (I tested two versions, and still same result.)
* Operating System and Python version: Docker lambci/lambda:build-python3.6
* The output of `pip freeze`:
argcomplete==1.9.3
Babel==2.6.0
base58==1.0.0
boto==2.49.0
boto3==1.7.74
botocore==1.12.4
certifi==2018.8.24
cfn-flip==1.0.3
chardet==3.0.4
click==6.7
coolsms-python-sdk==2.0.3
Django==2.1.1
django-phonenumber-field==2.0.1
django-storages==1.7
djangorestframework==3.8.2
djangorestframework-camel-case==1.0b1
docutils==0.14
durationpy==0.5
future==0.16.0
hjson==3.0.1
idna==2.7
jmespath==0.9.3
kappa==0.6.0
lambda-packages==0.20.0
phonenumbers==8.9.13
placebo==0.8.2
psycopg2-binary==2.7.5
pycryptodome==3.6.6
python-dateutil==2.7.3
python-dotenv==0.9.1
python-slugify==1.2.4
pytz==2018.5
PyYAML==3.12
raven==6.9.0
requests==2.19.1
s3transfer==0.1.13
six==1.11.0
toml==0.9.6
tqdm==4.19.1
troposphere==2.3.3
Unidecode==1.0.22
urllib3==1.23
Werkzeug==0.14.1
wsgi-request-logger==0.4.6
zappa==0.46.2
zappa-sentry==0.2.3
* Link to your project (optional):
* Your `zappa_settings.py`:
{
"production": {
"aws_region": "ap-northeast-2",
"django_settings": "config.settings.production",
"project_name": "blahblah",
"runtime": "python3.6",
"s3_bucket": "blahblah",
"certificate_arn": "blahblah",
"domain": "blahblah",
"vpc_config": {
"SubnetIds": ["blahblah"],
"SecurityGroupIds": ["blahblah"]
},
"exception_handler": "qwer",
"environment_variables": {
"SENTRY_DSN": "blahblah"
}
}
}
| closed | 2018-09-14T15:13:15Z | 2018-09-19T14:13:56Z | https://github.com/Miserlou/Zappa/issues/1609 | [] | ohjeyong | 2 |
jofpin/trape | flask | 288 | CVEs fixed? | Hi,
Have you fixed the following CVEs:
CVE-2019-13489
CVE-2019-13488
If so, in what commits?
thanks in advance!! | open | 2020-12-31T13:15:58Z | 2020-12-31T13:15:58Z | https://github.com/jofpin/trape/issues/288 | [] | OS-WS | 0 |
dfm/corner.py | data-visualization | 89 | Plotting a range of truth values | I'm looking for a way to plot a range of truth values, sort of like a confidence interval. Ideally this would take the form of a semi-opaque wide line, just like the standard truths function, but the width of the line would represent the confidence width.
Is there any way to do that? I have been playing around with the axes object, maybe that's a way forward.
```
fig = corner.corner(data)
fig.axes[3].axhline(value, lw=2)
``` | open | 2016-11-04T13:30:41Z | 2016-11-27T17:23:31Z | https://github.com/dfm/corner.py/issues/89 | [] | albin-n | 1 |
httpie/cli | python | 688 | Tox on Travis uses system python for -e py27 and -e py37 | https://travis-ci.org/jakubroztocil/httpie/jobs/403274486
```
Error: python 2.7.14 is already installed
To upgrade to 3.7.0, run `brew upgrade python`
```
```
============================= test session starts ==============================
platform darwin -- Python 2.7.14, pytest-3.6.3, py-1.5.4, pluggy-0.6.0 -- /Users/travis/build/jakubroztocil/httpie/.tox/py37/bin/python2.7
```
---
https://travis-ci.org/jakubroztocil/httpie/jobs/403274485#L5847
```
==> Installing python@2
==> Downloading https://homebrew.bintray.com/bottles/python@2-2.7.15_1.sierra.bo
==> Pouring python@2-2.7.15_1.sierra.bottle.tar.gz
```
```
============================= test session starts ==============================
platform darwin -- Python 2.7.14, pytest-3.6.3, py-1.5.4, pluggy-0.6.0 -- /Users/travis/build/jakubroztocil/httpie/.tox/py27/bin/python
```
| closed | 2018-07-12T20:11:27Z | 2018-09-07T17:24:59Z | https://github.com/httpie/cli/issues/688 | [] | jkbrzt | 1 |
pykaldi/pykaldi | numpy | 33 | In Protobuf installation, google test is not pulled. | Adding
```
git submodule update --init --recursive
```
in the protobuf directory does this. | closed | 2018-04-15T16:20:56Z | 2018-04-20T18:21:09Z | https://github.com/pykaldi/pykaldi/issues/33 | [] | alexraju91 | 3 |
encode/httpx | asyncio | 2,738 | Cookies are not being imported properly? | Hello,
I think it's worth opening an issue report, because after countless hours spent reading the docs I did not find any solution for my issue, which occurs while setting cookies on the Client or on the request itself.
I import my cookies from Selenium Chrome, and they end up looking like this:
<details>
<summary>Cookies exported after emulation</summary>
{'_ga_NJ8BH4ETQS': 'GS1.1.1686190607.1.1.1686190608.0.0.0', 'PHPSESSID': '1mmh22v3cv3emq03mfnmp9at7c', '_gat_gtag_UA_146586338_1': '1', '__cf_bm': 'd9.lRGrcxe8773zqF2kEIqkHjPBypcLAXW9IUbZw2xs-1686190608-0-AZYm7BpJjSQzc3vtweTrL3ZShFZxdzsuxNjLbdy16F/prX4DBvmzfV1D1saoGLss3Q==', '_gid': 'GA1.2.400962821.1686190607', '_ga': 'GA1.1.1668825872.1686190607', 'cf_clearance': 'swD.Mvx4O5.TO3.WhMUxQRY9zgaWMaV5U8.Q18fSXaE-1686190600-0-250'}
</details>
Then I use that output in `httpx.AsyncClient(..., cookies=browser_cookies, ...)`, and only one cookie is present in the client — I am completely clueless as to why.
<details>
<summary>Debug view</summary>
https://i.imgur.com/STBZEEZ.png
</details>
I have tried setting the cookies on the AsyncClient only, on the build_request method only, and on both at the same time. No matter whether I pass the cookies dict as an argument to the calls above or use httpx.Cookies and its set method for each cookie, I always see only PHPSESSID in the requests, and all of them fail by getting stuck on the Cloudflare page because the other cookies are not set.
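For context (my understanding, worth verifying against the httpx docs): `httpx.Cookies` is backed by the standard library's cookie jar, where every cookie carries a domain/path attribute and a jar only sends a cookie to matching hosts — importing browser cookies as a bare `{name: value}` dict discards those attributes. A stdlib sketch of the underlying mechanism, with illustrative domains:

```python
from http.cookiejar import Cookie, CookieJar

def make_cookie(name, value, domain):
    """Build a http.cookiejar.Cookie carrying an explicit domain."""
    return Cookie(
        0, name, value, None, False, domain, True,
        domain.startswith("."), "/", True, False, None, False,
        None, None, {},
    )

jar = CookieJar()
jar.set_cookie(make_cookie("PHPSESSID", "abc", "example.com"))
jar.set_cookie(make_cookie("cf_clearance", "xyz", ".example.com"))
print(sorted(c.name for c in jar))  # both survive, each with its own domain
```

With httpx itself, the equivalent would be `cookies = httpx.Cookies()` plus `cookies.set(name, value, domain=...)` per imported cookie.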
Has anyone faced an issue like this, or can someone point me in the right direction to solve this mess? | closed | 2023-06-08T02:38:45Z | 2023-06-08T04:24:54Z | https://github.com/encode/httpx/issues/2738 | [] | OpsecGuy | 0 |
janosh/pymatviz | plotly | 160 | Add `density_scatter_plotly` | one very important plot type that i've been meaning to port over to `pymatviz` is a truly scalable version of `density_scatter` but using `plotly` as backend to get interactive tooltips. those can be very useful when making parity plots of e.g. of [machine learning models tested on large datasets](https://matbench-discovery.materialsproject.org/preprint#parity-plots):

`density_scatter` could either be upgraded by adding a `backend="plotly"` option or by making a new plot function `density_scatter_plotly`. i prefer the 1st option from a user-facing API perspective but think the code would end up more readable taking the 2nd option. still, option 1 probably better.
given [matbench discovery already has several examples](https://github.com/search?q=repo%3Ajanosh%2Fmatbench-discovery%20bin_df_cols&type=code) of how to make such plots, it would largely be a case of wrapping [the way `bin_df_cols` is used here](https://github.com/janosh/matbench-discovery/blob/b10e608f5c0d3bec6f77097dcd84e5e2f11b4a58/scripts/model_figs/parity_energy_models.py#L61-L95) into a more streamlined user-friendly function. | closed | 2024-06-18T01:17:21Z | 2024-06-20T15:02:16Z | https://github.com/janosh/pymatviz/issues/160 | [
"enhancement",
"plotly",
"scatter"
] | janosh | 4 |
AUTOMATIC1111/stable-diffusion-webui | deep-learning | 16,910 | Someone is impersonating AUTOMATIC1111 for crypto | Thanks to @Jessiebase and users on Discord
It has been brought to our attention that someone on Twitter (X) is impersonating @AUTOMATIC1111 and using project to promote a crypto wallet
- for details see orignal post #16908
We have not set up any form of donation to this project, nor am I aware that @AUTOMATIC1111 has any personal donation means
As far as I'm aware, no one has been in contact with @AUTOMATIC1111 since 2024/10/19
Consider any announcement that did not come directly from GitHub to be an impersonation of some kind
"announcement"
] | w-e-w | 0 |
jupyter/nbviewer | jupyter | 759 | Error 503 No healthy backends | Getting the following when trying to view any notebook or the viewer homepage (http://nbviewer.jupyter.org/)
Error 503 No healthy backends
No healthy backends
Guru Mediation:
Details: cache-mdw17327-MDW 1516977568 4278289699
Varnish cache server

| closed | 2018-01-26T14:41:09Z | 2021-04-02T12:02:54Z | https://github.com/jupyter/nbviewer/issues/759 | [
"status:Duplicate",
"tag:Public Service"
] | snotskie | 4 |
Neoteroi/BlackSheep | asyncio | 327 | Can operations in OpenAPI Specification be sorted? | Can operations be sorted alphabetically?

| closed | 2023-04-10T11:55:53Z | 2023-06-17T11:06:00Z | https://github.com/Neoteroi/BlackSheep/issues/327 | [] | RobertoPrevato | 6 |
ShishirPatil/gorilla | api | 807 | After using raft_local.py in Gorilla to generate the training dataset, it seems that the subsequent code for fine-tuning the model is not included or visible | After using raft_local.py in Gorilla to generate the training dataset, it seems that the subsequent code for fine-tuning the model is not included or visible.
Could you provide more details about this part? Specifically, after using raft_local.py to generate the training dataset, are there any specific steps, configurations, or scripts that are missing for fine-tuning the model? If so, clarifying these details would help address the issue more effectively. | closed | 2024-12-02T09:53:56Z | 2024-12-05T08:37:39Z | https://github.com/ShishirPatil/gorilla/issues/807 | [
"hosted-openfunctions-v2"
] | belief888 | 2 |
Farama-Foundation/PettingZoo | api | 599 | ImportError | Hello,
I would like to try the PettingZoo library and installed version 1.14.0.
Nevertheless, I can't import any environment correctly. For example:
`from pettingzoo.butterfly import pistonball_v5`
Gives me the following error:
```
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
/var/folders/5b/8tkq7qs17k734h1j8jft220w0000gn/T/ipykernel_2921/1496936942.py in <module>
----> 1 from pettingzoo.butterfly import pistonball_v5
~/opt/anaconda3/envs/ReinforcementLearning/lib/python3.9/site-packages/pettingzoo/butterfly/__init__.py in __getattr__(env_name)
3
4 def __getattr__(env_name):
----> 5 return depricated_handler(env_name, __path__, __name__)
~/opt/anaconda3/envs/ReinforcementLearning/lib/python3.9/site-packages/pettingzoo/utils/deprecated_module.py in depricated_handler(env_name, module_path, module_name)
40 return DeprecatedModule(name, version, alt_version)
41 else:
---> 42 raise ImportError(f"cannot import name '{env_name}' from '{module_name}'")
ImportError: cannot import name 'pistonball_v5' from 'pettingzoo.butterfly'
```
My environment: macOs, Anaconda3, Python3.9, JupyterLab.
Any idea what the problem might be?
Best regards | closed | 2021-12-31T17:25:21Z | 2021-12-31T17:52:04Z | https://github.com/Farama-Foundation/PettingZoo/issues/599 | [] | christophelebrun | 1 |
junyanz/pytorch-CycleGAN-and-pix2pix | deep-learning | 1,365 | Discrepancy of dataset size in the two domains | Hello,
I'm trying to adopt the idea of CycleGAN on solving a domain mapping problem. Say I have domain A and B, and I want a function mapping data from B to A.
The problem is that the amount of available data in the two domains differ significantly: I have ~20k entries in domain A and only ~4k entries in domain B.
It seems that most CycleGAN-like implementations (this original one and others for different tasks) load the same amount of data (a batch) from the two domains and train the two semi-cycles simultaneously, which implies that only the domain with the smaller amount of data is exhaustively used for training, while a large proportion of the data in the other domain is not used at all.
That is, in my case, only ~4k out of ~20k entries in domain A are used for training.
The mitigation I currently have in mind is to duplicate entries in domain B so that the total amounts of data in the two domains are basically the same. Is this a good idea? Please give me some suggestions.
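Duplicating entries may not be necessary: this repo's `UnalignedDataset` already wraps a domain by indexing it modulo its size (and, unless `--serial_batches` is set, picks the other domain's index at random), so the larger domain can be fully covered each epoch. A minimal deterministic sketch of the modulo idea:

```python
def make_pairs(domain_a, domain_b):
    """Yield (a, b) pairs so the larger domain is fully covered once per epoch."""
    n = max(len(domain_a), len(domain_b))
    for i in range(n):
        yield domain_a[i % len(domain_a)], domain_b[i % len(domain_b)]

pairs = list(make_pairs(list(range(20000)), list(range(4000))))
# 20000 pairs: each domain-A entry appears once, domain-B entries repeat 5x
```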
Thanks! | closed | 2022-01-11T04:01:36Z | 2022-01-14T01:18:26Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1365 | [] | marcmk6 | 2 |
piskvorky/gensim | data-science | 3,509 | library stubs are missing | <!--
**IMPORTANT**:
- Use the [Gensim mailing list](https://groups.google.com/g/gensim) to ask general or usage questions. Github issues are only for bug reports.
- Check [Recipes&FAQ](https://github.com/RaRe-Technologies/gensim/wiki/Recipes-&-FAQ) first for common answers.
Github bug reports that do not include relevant information and context will be closed without an answer. Thanks!
-->
it seems `gensim` is missing library stubs.
*mypy* says: `Cannot find implementation or library stub for module named "gensim" [import-not-found]`
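Until stubs ship (in the package itself or on typeshed), the standard mypy workaround is a per-package override in the mypy config — this silences the error rather than fixing the missing stubs:

```ini
# mypy.ini (or the [mypy] section of setup.cfg)
[mypy-gensim.*]
ignore_missing_imports = True
```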
also I have tried *typeshed* (`pip install types-gensim`) but it wasn't available. | open | 2024-01-08T13:54:14Z | 2024-01-10T18:45:52Z | https://github.com/piskvorky/gensim/issues/3509 | [] | kkasra12 | 1 |
hankcs/HanLP | nlp | 1,264 | Can segmentation results be returned without part-of-speech tags? | <!--
Notes and the version number are required; otherwise the issue will not be answered. If you want a reply as soon as possible, please fill in the template carefully. Thanks for your cooperation.
-->
## Checklist
Please confirm the following:
* I have carefully read the following documents and did not find an answer:
 - [Home page documentation](https://github.com/hankcs/HanLP)
 - [wiki](https://github.com/hankcs/HanLP/wiki)
 - [FAQ](https://github.com/hankcs/HanLP/wiki/FAQ)
* I have searched for my question via [Google](https://www.google.com/#newwindow=1&q=HanLP) and the [issue search](https://github.com/hankcs/HanLP/issues) and did not find an answer either.
* I understand that the open-source community is a free community gathered out of shared interest and assumes no responsibility or obligation. I will speak politely and thank everyone who helps me.
* [x] I type x inside the brackets here to confirm the items above.
## Version
<!-- For release builds, give the jar file name without the extension; for GitHub repository builds, state whether it is the master or portable branch -->
The current latest version is: 1.7.4
The version I am using is: 1.7.4
<!-- The fields above are required; feel free to write below -->
## My question
```java
System.out.println(HanLP.segment("你好,欢迎使用HanLP汉语处理包!"));
```
outputs
```
[你好/l, ,/w, 欢迎/v, 使用/v, HanLP/nx, 汉语/nz, 处理/v, 包/v, !/w]
```
I would like to know whether there is a parameter, or some other interface, that supports skipping part-of-speech tagging after segmentation, e.g.
```
[你好, ,, 欢迎, 使用, HanLP, 汉语, 处理, 包, !]
```
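For what it's worth, one caller-side workaround, sketched here in Python, is to strip the trailing `/nature` tag from each term's string form (in the Java API the per-term word is `term.word`, and there is reportedly a global `HanLP.Config.ShowTermNature = false` toggle — treat both names as assumptions to verify against the docs):

```python
def strip_natures(terms):
    """Drop the trailing '/nature' part-of-speech tag from each term string."""
    return [t.rsplit("/", 1)[0] for t in terms]

print(strip_natures(["你好/l", ",/w", "HanLP/nx"]))  # ['你好', ',', 'HanLP']
```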
| closed | 2019-08-15T09:21:58Z | 2019-08-16T09:55:06Z | https://github.com/hankcs/HanLP/issues/1264 | [
"question"
] | lxw0109 | 2 |
junyanz/pytorch-CycleGAN-and-pix2pix | pytorch | 1,031 | CelebA face generation met problem? | 
Hi there,
I apply cycle consistent loss on conditional GAN to keep the identity. Do you have any idea why the generated images are like this?
I appreciate your help!
Thanks!
| open | 2020-05-19T01:09:31Z | 2020-05-24T00:13:30Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1031 | [] | EvaFlower | 5 |
ymcui/Chinese-LLaMA-Alpaca | nlp | 198 | Windows model merge: assert not torch.allclose(first_weight_old, first_weight) | Thank you for using the issue template. Please provide the relevant information by following the steps below. Issues with relatively complete information will be handled with priority — thanks for your cooperation.
*Tip: put an x inside [ ] to tick a box. Delete these two lines when filing. Keep only the options that apply and delete the rest.*
### Must-check items before asking
- [x] Since the related dependencies are updated frequently, make sure you have followed the relevant steps in the [Wiki](https://github.com/ymcui/Chinese-LLaMA-Alpaca/wiki)
- [x] I have read the [FAQ section](https://github.com/ymcui/Chinese-LLaMA-Alpaca/wiki/常见问题), searched the issues, and found no similar problem or solution
- [x] Third-party plugin problems: e.g. [llama.cpp](https://github.com/ggerganov/llama.cpp), [text-generation-webui](https://github.com/oobabooga/text-generation-webui), [LlamaChat](https://github.com/alexrozanski/LlamaChat), etc. — it is also recommended to look for a solution in the corresponding project
### Problem type
Base model:
- [x] LLaMA
- [ ] Alpaca
Problem type:
- [ ] Download problem
- [x] Model conversion and merging problem
- [ ] Model inference problem (🤗 transformers)
- [ ] Model quantization and deployment problem (llama.cpp, text-generation-webui, LlamaChat)
- [ ] Output quality problem
- [ ] Other problem
### Detailed description
After running merge_llama_with_chinese_lora.py on Windows, the models all finish loading, but the exception-handling part then raises an error
### Screenshot or log

*(If necessary) please provide a text log or a screenshot so that we can better understand the problem.*
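For context, the assertion named in the title is a post-merge sanity check: after the LoRA deltas are applied, the first weight tensor must differ from its pre-merge copy, so `assert not torch.allclose(...)` fires exactly when the merge changed nothing (e.g. the LoRA weights were never actually loaded). A stdlib sketch of the check, with made-up numbers:

```python
import math

def allclose(a, b, rel_tol=1e-5, abs_tol=1e-8):
    return all(math.isclose(x, y, rel_tol=rel_tol, abs_tol=abs_tol)
               for x, y in zip(a, b))

first_weight_old = [0.1, 0.2, 0.3]                   # copy taken before merging
first_weight = [w + 0.05 for w in first_weight_old]  # pretend a LoRA delta was applied
assert not allclose(first_weight_old, first_weight)  # passes only if the weights changed
```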
| closed | 2023-04-22T05:22:49Z | 2023-05-08T00:03:37Z | https://github.com/ymcui/Chinese-LLaMA-Alpaca/issues/198 | [
"stale"
] | zyxdm | 4 |
CorentinJ/Real-Time-Voice-Cloning | deep-learning | 1,069 | Result is not what i expected | Maybe i'm missing something, but i've got some voice sample from videogame Thief II, and i used this file to make my text sound like character from the game. It doesn't. I even recorded the process, take a look?
https://youtu.be/lDbpoaaBJSo
| open | 2022-05-25T19:46:01Z | 2022-05-31T23:33:29Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1069 | [] | vorob1 | 3 |
seleniumbase/SeleniumBase | web-scraping | 3,133 | uc_gui_click_captcha() isn't working with CF on Indeed | This was working fine until this morning.
The behavior *did* change a little bit after upgrading to 4.30.8, but it still isn't getting past the captcha without manual intervention (i.e., duplicating the tab, manually clicking the captcha)
```
driver = Driver(uc=True)
driver.uc_open_with_reconnect("https://www.indeed.com/", 6)
if is_securityCheck():
driver.uc_gui_click_captcha()
```
| closed | 2024-09-13T11:00:39Z | 2024-09-13T12:22:05Z | https://github.com/seleniumbase/SeleniumBase/issues/3133 | [
"duplicate",
"UC Mode / CDP Mode"
] | nicq10 | 1 |
gee-community/geemap | jupyter | 459 | Unexpected result for style=None in Map.add_geojson() |
### Environment Information
- geemap version: 0.8.15
- Python version: 3.8
- Operating System: macOS 10.14.6
### Description
On https://geemap.org/geemap/#geemap.geemap.Map.add_geojson, `style=None` is used, which I copied (likely from another snippet) and found to give unexpected results — see the attached image. Using `style={"color": "black"}` instead gave the expected result.
### What I Did
```python
import geemap
import requests
m = geemap.Map(center=[0, 0], zoom=1)
url = ("https://raw.githubusercontent.com/telegeography/www.submarinecablemap.com"
"/master/public/api/v2/cable/cable-geo.json")
data = requests.get(url).json()
m.add_geojson(data, style=None) # ok: style={"color": "black"}
m.layout.height = "200px"
m
```
<img width="827" alt="Screenshot 2021-05-06 at 22 44 31" src="https://user-images.githubusercontent.com/1001778/117364790-a0d0f480-aebe-11eb-87f5-e48dc17b0334.png">
<img width="829" alt="Screenshot 2021-05-06 at 22 45 17" src="https://user-images.githubusercontent.com/1001778/117364858-b80fe200-aebe-11eb-93f7-35de78a9b795.png">
| closed | 2021-05-06T21:02:33Z | 2021-05-07T04:15:23Z | https://github.com/gee-community/geemap/issues/459 | [
"bug"
] | deeplook | 3 |
zappa/Zappa | django | 591 | [Migrated] ERROR: To modify pip, please run the following command | Originally from: https://github.com/Miserlou/Zappa/issues/1537 by [karthy1988](https://github.com/karthy1988)
While deploying code, the dependencies try to package "pip", which throws the error: "ERROR: To modify pip, please run the following command"
https://github.com/Miserlou/Zappa/blob/d99c193e32733946fb52a4f9b2bdfd1d2929ba49/zappa/core.py#L411 | closed | 2021-02-20T12:26:18Z | 2023-08-17T01:07:59Z | https://github.com/zappa/Zappa/issues/591 | [
"bug"
] | jneves | 6 |
robinhood/faust | asyncio | 384 | Error fetching metadata for topic | ## Checklist
- [x] I have included information about relevant versions
- [x] I have verified that the issue persists when using the `master` branch of Faust.
## Steps to reproduce
start a worker with new topic defined
## Expected behavior
auto create the topic
## Actual behavior
```bash
ESC[36mworker-socketio_1 |ESC[0m [2019-07-13 15:01:28,020: ERROR]: Error fetching metadata for topic api_visited: <class 'kafka.errors.UnknownError'>
ESC[36mworker-socketio_1 |ESC[0m [2019-07-13 15:01:28,021: ERROR]: Error fetching metadata for topic download_url: <class 'kafka.errors.UnknownError'>
...
ESC[36mworker-socketio_1 |ESC[0m [2019-07-13 15:01:29,018: WARNING]: Ignoring missing topic: 'api_visited'
ESC[36mworker-socketio_1 |ESC[0m [2019-07-13 15:01:29,018: WARNING]: Ignoring missing topic: 'download_url'
```
## Full traceback
```pytb
Paste the full traceback (if there is any)
```
# Versions
* Python version: 3.7.4
* Faust version: 1.7.3
* Operating system: all
* Kafka version: 2.1
* RocksDB version (if applicable)
Restarting Kafka solves the problem. The debug logs show error_code `-1`; I'm not sure whether there's a better way to handle or surface such an issue in `faust`.
| open | 2019-07-15T06:47:19Z | 2019-07-15T07:29:06Z | https://github.com/robinhood/faust/issues/384 | [] | DeoLeung | 4 |
raphaelvallat/pingouin | pandas | 318 | shift_plot | Thank you for suggesting that I open an issue on GitHub after my unsuccessful pull requests. You're right, my experience on GitHub is low, and using the issue topics might be a better way to address the problem. I originally edited the HTML in my previous pull request because I didn't know how to use Sphinx, but now I understand its logic better. Based on your suggestion, I edited the second figure in the shift plot documentation and added it to the helper section. However, I wasn't able to achieve the desired result in my fork. You can see the change I made in my fork, and if you think it's appropriate, I can create a pull request for it.
Here is my Kaggle notebook: [Shift plot example](https://www.kaggle.com/code/zzettrkalpakbal/shift-plot)
This figure need for doc

Here is the my commit : [commit](https://github.com/turkalpmd/pingouin_plot_shift/commit/653bc8b982bac54998e54967c86303e9143cdca3) | closed | 2022-12-07T06:58:48Z | 2022-12-10T19:04:23Z | https://github.com/raphaelvallat/pingouin/issues/318 | [
"docs/testing :book:"
] | turkalpmd | 2 |
nltk/nltk | nlp | 3,028 | already downloaded 'wordnet' but can't find it | Hello, I ran `nltk.download('wordnet')` in a Jupyter notebook on Linux.
```
[nltk_data] Downloading package wordnet to
[nltk_data] /home/xxxx_linux/nltk_data...
[nltk_data] Package wordnet is already up-to-date!
```
Then I ran
```py
from nltk.stem.wordnet import WordNetLemmatizer
WordNetLemmatizer().lemmatize("better kip")
```
and got this error message:
```
LookupError Traceback (most recent call last)
File ~/.cache/pypoetry/virtualenvs/disaster-response-pipeline-project-Ber-KOyS-py3.8/lib/python3.8/site-packages/nltk/corpus/util.py:80, in LazyCorpusLoader.__load(self)
79 except LookupError as e:
---> 80 try: root = nltk.data.find('{}/{}'.format(self.subdir, zip_name))
81 except LookupError: raise e
File ~/.cache/pypoetry/virtualenvs/disaster-response-pipeline-project-Ber-KOyS-py3.8/lib/python3.8/site-packages/nltk/data.py:673, in find(resource_name, paths)
672 resource_not_found = '\n%s\n%s\n%s\n' % (sep, msg, sep)
--> 673 raise LookupError(resource_not_found)
LookupError:
**********************************************************************
Resource wordnet not found.
Please use the NLTK Downloader to obtain the resource:
>>> import nltk
>>> nltk.download('wordnet')
Searched in:
- '/home/xxxx_linux/nltk_data'
- '/usr/share/nltk_data'
- '/usr/local/share/nltk_data'
- '/usr/lib/nltk_data'
- '/usr/local/lib/nltk_data'
...
- '/usr/local/lib/nltk_data'
- '/home/xxxx_linux/.cache/pypoetry/virtualenvs/disaster-response-pipeline-project-Ber-KOyS-py3.8/nltk_data'
- '/home/xxxx_linux/.cache/pypoetry/virtualenvs/disaster-response-pipeline-project-Ber-KOyS-py3.8/lib/nltk_data'
**********************************************************************
```
In the terminal, I also tried ` cd /home/xxxx_linux/nltk_data`; I only found two folders: `corpora tokenizers`
Anyone know what is the reason to cause it? I assumed it should download successfully, but it is not there. | closed | 2022-07-28T12:26:08Z | 2025-03-13T22:16:06Z | https://github.com/nltk/nltk/issues/3028 | [] | canfang-feng | 19 |
mljar/mljar-supervised | scikit-learn | 514 | Please remove warning messages | File binary_classifier.py was started on Windows 10 using Python 3.8 with Anaconda. AutoML is working well, but there are two recurring warnings:
-`'evals_result' argument is deprecated and will be removed in a future release of LightGBM. Pass 'record_evaluation()' callback via 'callbacks' argument instead.`
-`'early_stopping_rounds' argument is deprecated and will be removed in a future release of LightGBM. Pass 'early_stopping()' callback via 'callbacks' argument instead.` | closed | 2022-02-14T11:16:17Z | 2022-02-14T12:33:12Z | https://github.com/mljar/mljar-supervised/issues/514 | [] | maciekmalachowski | 1 |
deepset-ai/haystack | machine-learning | 8,671 | Adding 'setup.py' and 'pyproject.toml' for Local Installation | **Is your feature request related to a problem? Please describe.**
Currently, `haystack` and its related PyPI packages (e.g., `haystack-core-integrations`) lack `setup.py` and sometimes `pyproject.toml` files in their repositories. This creates challenges for developers who want to install the packages locally for testing or development purposes. Without these files:
- Local installations (e.g., via `pip install -e .`) are not straightforward.
- Developers cannot easily modify the source code and test changes locally without complex workarounds.
- The absence of a `pyproject.toml` file limits compatibility with modern Python packaging standards (PEP 517/518), making it harder to manage dependencies and build processes.
**Describe the solution you'd like**
I propose the addition of both `setup.py` and `pyproject.toml` files to all related repositories. This would provide a robust and flexible solution for local installation, testing, and development workflows.
**Describe alternatives you've considered**
None
**Additional context**
Adding these files aligns with Python packaging best practices and enhances compatibility with modern tools (e.g., Docker). In my opinion, this feature would significantly improve the developer experience and encourage more contributions from the community. | closed | 2024-12-23T06:19:16Z | 2024-12-31T12:14:22Z | https://github.com/deepset-ai/haystack/issues/8671 | [] | d-kleine | 2 |
suitenumerique/docs | django | 392 | Redirect user to their last doc | ## Feature Request
**Is your feature request related to a problem or unsupported use case? Please describe.**
As a user 70% of the time I want to get back to the last doc I was editing.
Right now I have to look at the doc index and find the doc I want to work on every time.
**Describe the solution you'd like**
Instead of always redirecting users to the index I'd prefer to be redirected to the last doc I was editing.
Including when I'm logging back in.
**Discovery, Documentation, Adoption, Migration Strategy**
Notion does this, I find it quite handy.
| open | 2024-10-28T10:12:59Z | 2024-11-21T11:35:07Z | https://github.com/suitenumerique/docs/issues/392 | [
"needs design"
] | virgile-dev | 0 |
ScottfreeLLC/AlphaPy | scikit-learn | 22 | Samples not working | The examples do not work. Is it possible to get examples that work, or that are ready to use? Would it be possible to show how to fix the errors? | closed | 2018-06-25T18:48:11Z | 2018-09-15T21:01:50Z | https://github.com/ScottfreeLLC/AlphaPy/issues/22 | [] | ConsultingSecurty | 0 |
vi3k6i5/flashtext | nlp | 3 | example is 12x slower than regex | This code is actually 12x faster:
```python
import re
reg = re.compile("Big Apple|New York")
for line in big_text: # wikipedia dump
reg.findall(line)
``` | closed | 2017-09-03T17:14:44Z | 2017-09-06T17:43:31Z | https://github.com/vi3k6i5/flashtext/issues/3 | [
"question"
] | kootenpv | 3 |
nolar/kopf | asyncio | 327 | [PR] Extended syntax for filtering and accessing labels/annotations/patches/body parts | > <a href="https://github.com/nolar"><img align="left" height="50" src="https://avatars0.githubusercontent.com/u/544296?v=4"></a> A pull request by [nolar](https://github.com/nolar) at _2020-03-11 10:22:06+00:00_
> Original URL: https://github.com/zalando-incubator/kopf/pull/327
> Merged by [nolar](https://github.com/nolar) at _2020-03-12 08:37:49+00:00_
## What do these changes do?
Extend the resource bodies, body-parts, and patches with properties making them easier to use, and add few handler kwargs for well-known and often-used fields (labels/annotations).
## Description
Previously, it was quite common to use complicated Python structures to access the body/patch fields like this:
```python
# OLD, as it was before:
@kopf.on.create(..., annotations={'some-marker': None})
def assign_ids(body, patch, **_):
marker = body['metadata']['annotations']['some-marker']
run_id = body.get('metadata', {}).get('labels', {}).get('run-id')
field = body.get('spec', {}).get('field')
patch.setdefault('status', {})['key'] = 'value'
patch.setdefault('metadata', {}).setdefault('labels', {})['another-marker'] = 'value'
```
With this PR, well-known structures are exposed as body/patch properties, and always behave as "live views" into their original dicts — i.e. every update on the dict is reflected in all views, and every update on a view is reflected in the original dict (similar to how KeysView, ValuesView, ItemsView work in Python itself).
```python
# NEW, to be used:
@kopf.on.create(..., annotations={'some-marker': kopf.PRESENT})
def assign_ids(body, patch, labels, annotations, **_):
marker = annotations['some-marker']
run_id = labels.get('run-id')
field = body.spec.get('field')
patch.status['key'] = 'value'
patch.meta.labels['another-marker'] = 'value'
```
Specifically:
* JSON-decoded objects/dicts/fields are now explicitly typed as "raw" to make their origin clear. These are the lightweight structures coming directly from the K8s API.
* `body` kwarg is now always a "live view" to the original RawBody JSON-decoded dict, same as it already was for `spec`, `meta`, `status` kwargs before.
* `labels` & `annotations` kwargs are added to all resource-related handlers.
* `body.spec|meta|status` and `body.meta.labels|annotations` are the "live views" to a raw dict, persistent across multiple uses, and are exactly the same objects as provided in the kwargs (i.e. not re-created for every new handler).
* `patch` follows the same `.spec|meta|status` semantics with "mutable live views".
In addition, the labels/annotations filters are changed to deprecate `None` as an "any value" marker, as it was confusing ("some value is None" does not sound natural). Instead, `kopf.PRESENT` or `kopf.ABSENT` tokens should be used for labels/annotations filters.
**Purpose:** All of the above adds some conventional features now for the event-driven handlers, but is critically important for the upcoming daemons/timers, where the bodies are updated over time, while the handlers that use these bodies, run in a sequential execution flow (unlike short event handlers).
For the end users, there are no big changes except for the new kwargs and properties available.
**Slightly breaking:** `body` is now not a `dict`, but a custom class, therefore it is not JSON-serialisable. Use `dict(body)` to make it JSON-serialisable. But there were no expected usage patterns to JSON-serialise the whole body, and here is a quick solution — so, should be fine.
## Issues/PRs
<!-- Cross-referencing is highly useful in hindsight. Put the main issue, and all the related/affected/causing/preceding issues and PRs related to this change. -->
> Issues: #19
## Type of changes
- New feature (non-breaking change which adds functionality)
## Checklist
- [x] The code addresses only the mentioned problem, and this problem only
- [x] I think the code is well written
- [x] Unit tests for the changes exist
- [x] Documentation reflects the changes
- [x] If you provide code modification, please add yourself to `CONTRIBUTORS.txt`
<!-- Are there any questions or uncertainties left?
Any tasks that have to be done to complete the PR? -->
| closed | 2020-08-18T20:03:48Z | 2020-08-23T20:56:19Z | https://github.com/nolar/kopf/issues/327 | [
"enhancement",
"archive"
] | kopf-archiver[bot] | 0 |
docarray/docarray | fastapi | 1,570 | AnyDoc not working as expected | I believe this should work:
```python
from docarray import DocList
from docarray.base_doc import AnyDoc, BaseDoc
from typing import Dict
class ProcessingTestDocConditions(BaseDoc):
text: str
tags: Dict[str, int]
input_da = DocList[ProcessingTestDocConditions](
[ProcessingTestDocConditions(text='type1', tags={'type': 1}),
ProcessingTestDocConditions(text='type2', tags={'type': 2})])
from docarray.base_doc import AnyDoc
aux = DocList[AnyDoc].from_protobuf(input_da.to_protobuf())
assert len(aux.id) == 2
```
```
Traceback (most recent call last):
File "/home/joan/jina/docarray/docarray/array/doc_list/doc_list.py", line 296, in from_protobuf
return super().from_protobuf(pb_msg)
File "/home/joan/jina/docarray/docarray/array/doc_list/io.py", line 119, in from_protobuf
return cls(cls.doc_type.from_protobuf(doc_proto) for doc_proto in pb_msg.docs)
File "/home/joan/jina/docarray/docarray/array/doc_list/doc_list.py", line 128, in __init__
super().__init__(docs)
File "/home/joan/jina/docarray/docarray/array/doc_list/doc_list.py", line 155, in _validate_docs
for doc in docs:
File "/home/joan/jina/docarray/docarray/array/doc_list/io.py", line 119, in <genexpr>
return cls(cls.doc_type.from_protobuf(doc_proto) for doc_proto in pb_msg.docs)
File "/home/joan/jina/docarray/docarray/base_doc/mixins/io.py", line 243, in from_protobuf
pb_msg.data[field_name], field_name
File "/home/joan/jina/docarray/docarray/base_doc/mixins/io.py", line 320, in _get_content_from_node_proto
field_type = cls.__fields__[field_name].type_ if field_name else None
KeyError: 'tags'
``` | closed | 2023-05-24T10:43:47Z | 2023-05-24T14:21:29Z | https://github.com/docarray/docarray/issues/1570 | [] | JoanFM | 3 |
developmentseed/lonboard | data-visualization | 491 | [EPIC] Enable mixed geometry type visualization | ## Context
I have a GeoParquet file with geometries of mixed type (points, linestrings, polygons, multipolygons). I'd like to have it visualized at once, just like calling `explore` on a GeoDataFrame object with folium. Would it be possible to stack multiple layers on top of each other? I tried to filter out geometries based on the type in GeoArrow beforehand, but I'm also struggling with that (https://github.com/geoarrow/geoarrow-python/issues/46).
## Issue
Calling `viz(pa_table)` results in `ValueError: Geometry type combination is not supported (['POINT', 'LINESTRING', 'POLYGON', 'MULTIPOLYGON'])`
| closed | 2024-04-29T21:14:27Z | 2024-05-05T17:23:16Z | https://github.com/developmentseed/lonboard/issues/491 | [] | RaczeQ | 9 |
ultralytics/ultralytics | machine-learning | 19,787 | Dataset Validation Issue: Non-Sequential Class Indices Cause Errors in data.yaml | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
Hi Ultralytics Team,
I'm encountering an issue when using a YOLO dataset with non-sequential class IDs. The dataset was previously trained with specific sequential class IDs, and I am re-finetuning it by adding new images for specific classes while keeping the same class IDs. The problem is that i am re-finetuning on specific chosen classes that have low confidence scores, so their IDs are not sequential. However, YOLO expects class IDs to be sequential (0 to nc-1), which causes a validation error in ultralytics/yolo/data/utils.py. If i change the IDs to be sequential it will affect the previous fine tuning as a class may have two IDs
### Additional
This is the error message
RuntimeError: Dataset '/kaggle/working/data/data.yaml' error ❌
'14-class dataset requires class indices 0-13, but you have invalid class indices 6-35 defined in your dataset YAML.'
| open | 2025-03-19T19:03:08Z | 2025-03-19T20:07:19Z | https://github.com/ultralytics/ultralytics/issues/19787 | [
"question",
"detect"
] | mariamattia181 | 4 |
LAION-AI/Open-Assistant | python | 3,133 | Using Hidden Engrams for long context | Currently, the OA models have a fairly short window of context (compared to other models). While efforts are underway to expand the context size, I suggest that we use hidden engrams (https://github.com/AeroScripts/HiddenEngrams) to expand the model's context size temporarily. Previously, Aero (the maker of hidden engrams) used this on GPT-J with some very good results. Given that models with instruction finetuning produce more predictable and usable output than models that only have causal LM pretraining, it can be reasonably inferred that hidden engrams would be even more effective at producing the desired output than what was previously observed. I could take a look at implementing this, but if anyone else wants to try their hand at making a notebook demo, that would be fine too.
| closed | 2023-05-12T14:57:31Z | 2023-12-06T03:50:58Z | https://github.com/LAION-AI/Open-Assistant/issues/3133 | [
"feature",
"ml"
] | SummerSigh | 1 |
benbusby/whoogle-search | flask | 763 | [BUG] <Instance gets ratelimited if more than 2-3 devices use it> | **Describe the bug**
When I use Whoogle on mobile, there are no issues with it. I also use Kaffeine to keep it alive. But if I start using it on desktop as well, it gets rate limited within a day or two.
**To Reproduce**
Steps to reproduce the behavior:
1. Use whoogle on device #1
2. Use whoogle on device #2
3. Add your instance to Kaffeine (probably Heroku only)
**Deployment Method**
- [ ✓] Heroku (one-click deploy)
- [ ] Docker
- [ ] `run` executable
- [ ] pip/pipx
- [ ] Other: [describe setup]
**Version of Whoogle Search**
- [ ] Latest build from [source] (i.e. GitHub, Docker Hub, pip, etc)
- [ ] Version [version number]
- [ ✓] Not sure
| open | 2022-05-21T18:05:56Z | 2022-06-05T06:37:41Z | https://github.com/benbusby/whoogle-search/issues/763 | [
"bug"
] | peternrdstrm | 1 |
localstack/localstack | python | 12,014 | bug: DynamoDB - using terraform on table with on-demand capacity + GSI behaves oddly | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
using terraform + localstack to create a dynamodb table with pay_per_request billing mode + a GSI
- the table creation using `terraform apply` works fine
- if I subsequently run `terraform plan` (without any changes), it thinks it needs to update the GSI
- if I then run `terraform apply` again, I get an error like `deleting AWS DynamoDB Table (testddb): GSI (): operation error DynamoDB: UpdateTable, https response error StatusCode: 400, RequestID: 91c88f41-9599-402f-908c-637b5a05f713, api error ValidationException: Invalid global secondary index name`
### Expected Behavior
After creating a table with `terraform apply`, if I make no changes, then subsequent calls to `terraform plan` should produce no changes.
### How are you starting LocalStack?
With a docker-compose file
### Steps To Reproduce
#### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`)
docker compose up -d
```
version: '3.8'

services:
  localstack:
    image: localstack/localstack:latest
    ports:
      - "4566:4566"
      - "4510:4510"
    volumes:
      - ${USERPROFILE}\local-stack-data:/persisted-data # LHS of `:` can be any path you choose (not sure if paths outside "Users" behave)
    container_name: localstack-test
    restart: always
    healthcheck:
      test:
        - CMD
        - bash
        - -c
        - awslocal sqs list-queues
      interval: 300s
      timeout: 10s
      start_period: 10s
      start_interval: 5s
```
#### Client commands (e.g., AWS SDK code snippet, or sequence of "awslocal" commands)
```
terraform init
terraform apply --auto-approve
terraform plan
```
with `.tf` file
```
provider "aws" {
  access_key = "test"
  secret_key = "test"
  region     = "us-west-2"

  endpoints {
    sts      = "http://localhost:4566"
    dynamodb = "http://localhost:4566"
    sqs      = "http://localhost:4566"
    s3       = "http://s3.localhost.localstack.cloud:4566"
  }
}

terraform {
  required_version = ">= 1.3.6"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

resource "aws_dynamodb_table" "test" {
  provider       = aws
  name           = "testddb"
  billing_mode   = "PAY_PER_REQUEST"
  stream_enabled = false
  hash_key       = "TID"

  attribute {
    name = "TID"
    type = "S"
  }

  attribute {
    name = "TRID"
    type = "S"
  }

  ttl {
    attribute_name = "ExpireAtEpochSeconds"
    enabled        = true
  }

  global_secondary_index {
    name            = "TransactionRecordID"
    hash_key        = "TRID"
    projection_type = "ALL"
  }

  lifecycle {
    prevent_destroy = false
  }
}
```
### Environment
```markdown
- OS: Windows (my host machine)
- LocalStack:
LocalStack version: 3.8.2.dev120
LocalStack Docker image sha: sha256:70a953b6dd1201e00d710dc57bfcfc79ee20d6c21b798be573a8ec0e2d5f11f4
LocalStack build date: 2024-11-15
LocalStack build git hash: 75436efc5
```
### Anything else?
I did an experiment where
- I first created the table in provisioned mode and specified read and write capacities for the table and the GSI (using `terraform apply`)
- I changed the table mode to pay_per_request and removed the table's read and write capacities (but left the capacities in the index block) and ran `terraform apply`
- subsequent calls to `terraform plan` behaved as I expected: i.e. no changes to apply
- the difference seems to be that in this case the describe-table call returns a `ProvisionedThroughput` element under the `GlobalSecondaryIndexes[0]` element, while in the first case there is no such element [I used terraform's debug logging to determine this]
- `describe-table` command against real aws (via aws cli) contains the `ProvisionedThroughput` element even for an on-demand billing mode table | closed | 2024-12-10T20:00:40Z | 2025-01-13T22:49:03Z | https://github.com/localstack/localstack/issues/12014 | [
"type: bug",
"area: integration/terraform",
"aws:dynamodb",
"status: in progress"
] | bkindersleytrulioo | 4 |
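Regarding the `describe-table` discrepancy described in the localstack issue above, here is a small stdlib-only sketch of the check involved. The dict literals are hand-written approximations of the two response shapes, not captured API output:

```python
# Sketch: detect whether a DescribeTable response carries a ProvisionedThroughput
# block on its first GSI -- the field whose presence reportedly differed between
# LocalStack and real AWS for on-demand tables.

def gsi_has_provisioned_throughput(describe_table_response):
    table = describe_table_response.get("Table", {})
    gsis = table.get("GlobalSecondaryIndexes", [])
    return bool(gsis) and "ProvisionedThroughput" in gsis[0]

# Hand-written example shapes (not real captured responses):
localstack_like = {"Table": {"GlobalSecondaryIndexes": [
    {"IndexName": "TransactionRecordID"}]}}
aws_like = {"Table": {"GlobalSecondaryIndexes": [
    {"IndexName": "TransactionRecordID",
     "ProvisionedThroughput": {"ReadCapacityUnits": 0, "WriteCapacityUnits": 0}}]}}
```

Running this comparison against both endpoints (e.g. via `boto3` or the AWS CLI) makes it easy to confirm which shape a given deployment returns.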
keras-team/keras | machine-learning | 20,474 | NO _loss_tracker on train_on_batch because compile model multiple times. Possible Bug. | The same GAN code works perfectly in Keras 3.3 but doesn't work in Keras 3.6. There is an error on `train_on_batch` that I believe is caused by a bug introduced by a change in Keras 3.6.
The code is this:
```python
import numpy as np
import matplotlib.pyplot as plt
import random
from keras.datasets import mnist
from keras.utils import plot_model
from keras.models import Sequential, Model
from keras.layers import (Input, Conv2D, Dense, Activation,
                          Flatten, Reshape, Dropout,
                          UpSampling2D, MaxPooling2D,
                          BatchNormalization, LeakyReLU, Conv2DTranspose,
                          GlobalMaxPooling2D)
from keras.losses import BinaryCrossentropy
from keras.optimizers import Adam
from keras.metrics import Mean, Accuracy
from keras.backend import backend
from keras.random import SeedGenerator, uniform, normal
from keras import ops

!pip list | grep keras

(X_train, Y_train), (X_test, Y_test) = mnist.load_data()
X_train = X_train.astype('float32')/127.5 - 1
X_train = np.expand_dims(X_train, axis = 3)


def create_generator():
    generator = Sequential(
        [
            Input(shape = (100,)),
            Dense(7 * 7 * 128),
            LeakyReLU(0.2),
            Reshape((7, 7, 128)),
            Conv2DTranspose(128, 4, 2, "same"),
            LeakyReLU(0.2),
            Conv2DTranspose(256, 4, 2, "same"),
            LeakyReLU(0.2),
            Conv2D(1, 7, padding = "same", activation = "tanh"),
        ],
        name = "generator",
    )
    return generator


def create_discriminator():
    discriminator = Sequential(
        [
            Input(shape = (28, 28, 1)),
            Conv2D(64, 3, 2, "same"),
            LeakyReLU(0.2),
            Conv2D(128, 3, 2, "same"),
            LeakyReLU(0.2),
            Conv2D(256, 3, 2, "same"),
            LeakyReLU(0.2),
            Flatten(),
            Dropout(0.2),
            Dense(1, activation = "sigmoid"),
        ],
        name = "discriminator",
    )
    return discriminator


generator = create_generator()
discriminator = create_discriminator()
discriminator.compile(loss = 'binary_crossentropy', optimizer = Adam(), metrics = ['accuracy'])
discriminator.trainable = False

### Print for debugging/showing the error
print('---Debugging/show the error after compiled discriminator and before compiled combined---')
print('discriminator.compiled ->', discriminator.compiled)
print('discriminator.optimizer ->', discriminator.optimizer)
print('discriminator.train_function ->', discriminator.train_function)
print('discriminator.train_step ->', discriminator.train_step)
print('discriminator.metrics ->', discriminator.metrics)
print('discriminator._loss_tracker ->', discriminator._loss_tracker)
print('discriminator._jit_compile ->', discriminator._jit_compile)
###

z = Input(shape=(100,))
img = generator(z)
validity = discriminator(img)
combined = Model(z, validity)
combined.compile(loss = 'binary_crossentropy', optimizer = Adam())

### Print for debugging/showing the error
print('---Debugging/show the error after compiled discriminator and combined---')
print('discriminator.compiled ->', discriminator.compiled)
print('discriminator.optimizer ->', discriminator.optimizer)
print('discriminator.train_function ->', discriminator.train_function)
print('discriminator.train_step ->', discriminator.train_step)
print('discriminator.metrics ->', discriminator.metrics)
print('discriminator._loss_tracker ->', discriminator._loss_tracker)
print('discriminator._jit_compile ->', discriminator._jit_compile)
###


def train(X_train, generator, discriminator, combined, epochs, batch_size = 32, sample_interval = 100):
    valid = np.ones((batch_size, 1))
    fake = np.zeros((batch_size, 1))
    history = {
        'd_loss' : [],
        'd_acc' : [],
        'g_loss' : []
    }

    for epoch in range(epochs):
        print("----EPOCH " + str(epoch) + '-----')
        for batch in range(int(len(X_train)/batch_size)):
            # Train the Discriminator
            noise = np.random.normal(0, 1, (batch_size, 100))
            gen_imgs = generator.predict(noise, verbose = 0)
            imgs = X_train[batch*batch_size : (batch+1)*batch_size]

            # Print for debugging/show the error
            print('---Debugging/show the error---')
            print('discriminator.compiled ->', discriminator.compiled)
            print('discriminator.optimizer ->', discriminator.optimizer)
            print('discriminator._loss_tracker ->', discriminator._loss_tracker)
            print('discriminator._jit_compile ->', discriminator._jit_compile)

            d_loss_real = discriminator.train_on_batch(imgs, valid)
            d_loss_fake = discriminator.train_on_batch(gen_imgs, fake)
            d_loss = 0.5 * np.add(d_loss_real, d_loss_fake)

            # Train the Generator
            noise = np.random.normal(0, 1, (batch_size, 100))
            g_loss = combined.train_on_batch(noise, valid)

            # Save losses
            history['d_loss'].append(d_loss[0])
            history['d_acc'].append(d_loss[1])
            history['g_loss'].append(g_loss[0])


train(X_train, generator, discriminator, combined, epochs = 2, batch_size = 256, sample_interval = 100)
```
The error is this:
```
Cell In[13], line 128, in train(X_train, generator, discriminator, combined, epochs, batch_size, sample_interval)
    124 print('discriminator._loss_tracker ->', discriminator._loss_tracker)
    125 print('discriminator._jit_compile ->', discriminator._jit_compile)
--> 128 d_loss_real = discriminator.train_on_batch(imgs, valid)
    129 d_loss_fake = discriminator.train_on_batch(gen_imgs, fake)
    130 d_loss = 0.5 * np.add(d_loss_real, d_loss_fake)

File ~/.local/lib/python3.10/site-packages/keras/src/backend/torch/trainer.py:468, in TorchTrainer.train_on_batch(self, x, y, sample_weight, class_weight, return_dict)
    465 self._symbolic_build(data_batch=data)
    466 self.make_train_function()
--> 468 logs = self.train_function([data])
    469 logs = tree.map_structure(lambda x: np.array(x), logs)
    470 if return_dict:

File ~/.local/lib/python3.10/site-packages/keras/src/backend/torch/trainer.py:117, in TorchTrainer.make_train_function.<locals>.one_step_on_data(data)
    115 """Runs a single training step on a batch of data."""
    116 data = data[0]
--> 117 return self.train_step(data)

File ~/.local/lib/python3.10/site-packages/keras/src/backend/torch/trainer.py:55, in TorchTrainer.train_step(self, data)
    50 self.zero_grad()
    52 loss = self._compute_loss(
    53     x=x, y=y, y_pred=y_pred, sample_weight=sample_weight, training=True
    54 )
--> 55 self._loss_tracker.update_state(
    56     loss, sample_weight=tree.flatten(x)[0].shape[0]
    57 )
    58 if self.optimizer is not None:
    59     loss = self.optimizer.scale_loss(loss)

AttributeError: 'NoneType' object has no attribute 'update_state'.
```
The error says that `self._loss_tracker.update_state` fails because `_loss_tracker` is `None`, when it should be `metrics_module.Mean(name="loss")` since the model has been compiled.

The prints that I wrote in the code show that, after compiling the discriminator and before compiling the combined model:
```
---Debugging/show the error after compiled discriminator and before compiled combined---
discriminator.compiled -> True
discriminator.optimizer -> <keras.src.optimizers.adam.Adam object at 0x77ecf6bb0a00>
discriminator.train_function -> None
discriminator.train_step -> <bound method TensorFlowTrainer.train_step of <Sequential name=discriminator, built=True>>
discriminator.metrics -> [<Mean name=loss>, <CompileMetrics name=compile_metrics>]
discriminator._loss_tracker -> <Mean name=loss>
discriminator._jit_compile -> True
```
However, after compiling the combined model:
```
discriminator.compiled -> True
discriminator.optimizer -> <keras.src.optimizers.adam.Adam object at 0x77ecf6bb0a00>
discriminator.train_function -> None
discriminator.train_step -> <bound method TensorFlowTrainer.train_step of <Sequential name=discriminator, built=True>>
discriminator.metrics -> []
discriminator._loss_tracker -> None
discriminator._jit_compile -> True
```
So the problem is that compiling the combined model erases the metrics (loss tracker, ...) of the discriminator, which it shouldn't, while still leaving the discriminator marked as compiled even though its compilation has effectively been undone. I believe the bug lies in a change introduced in Keras 3.6 to `compile` in `keras/src/trainers/trainer.py`:

The function `self._clear_previous_trainer_metrics` not only clears the metrics of `combined` but also those of `discriminator`, which leaves the discriminator without proper metrics.
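To illustrate the mechanism without Keras, here is a stdlib-only mock. `TinyTrainer` is invented for illustration and is not the real Keras class (the real logic lives in `keras/src/trainers/trainer.py`); it also sketches the workaround of re-compiling the inner model after compiling the outer one:

```python
# Stdlib-only mock of the behaviour described above. compile() on one trainer
# walks its sub-trainers and clears their loss trackers, mimicking what
# _clear_previous_trainer_metrics appears to do in Keras 3.6.

class TinyTrainer:
    def __init__(self, sub_trainers=()):
        self.sub_trainers = list(sub_trainers)
        self.compiled = False
        self._loss_tracker = None

    def compile(self):
        # Clearing sub-trainer metrics is the problematic step.
        for sub in self.sub_trainers:
            sub._loss_tracker = None
        self._loss_tracker = "Mean(name='loss')"
        self.compiled = True

discriminator = TinyTrainer()
discriminator.compile()

combined = TinyTrainer(sub_trainers=[discriminator])
combined.compile()

# The bug: discriminator is still flagged as compiled but lost its tracker.
broken = discriminator.compiled and discriminator._loss_tracker is None

# Workaround sketch: re-compile the inner model after compiling the outer one.
discriminator.compile()
fixed = discriminator._loss_tracker is not None
```

This is only a conceptual sketch of the state transitions; the actual fix belongs in Keras itself.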
My pull request for this possible bug is: https://github.com/keras-team/keras/pull/20473
I tried the code with all three backends and the error always happens.
I hope it helps! :)
| closed | 2024-11-08T17:19:38Z | 2024-12-06T16:31:34Z | https://github.com/keras-team/keras/issues/20474 | [
"type:Bug"
] | TheMGGdev | 5 |
s3rius/FastAPI-template | asyncio | 224 | The address docker 0.0.0.0:8000 cannot be accessed from an external computer | Problem: while all selected components function correctly inside the Docker environment, I can't access http://localhost:8000 from my computer.
Solution: To resolve this, it is necessary to include **ALL** port definitions/mappings in the Docker Compose file.
ex:
```yaml
  # ... (other service settings)
  restart: always
  ports:
    - 8000:8000
  env_file:
    - .env
```
| closed | 2024-08-30T09:09:47Z | 2024-12-14T10:27:07Z | https://github.com/s3rius/FastAPI-template/issues/224 | [] | yucelz | 1 |
sktime/pytorch-forecasting | pandas | 1,303 | UserWarning with target EncoderNormalizer | - PyTorch-Forecasting version: 1.0.0
- PyTorch version: 2.0.0
- Python version: Python 3.10.8
- Operating System: Ubuntu
I am using a target EncoderNormalizer which produces the following warning:
```
sklearn/base.py:409: UserWarning:
X does not have valid feature names, but StandardScaler was fitted with feature names
```
Why could this happen?
Should I be concerned about this? | open | 2023-05-15T10:03:13Z | 2024-02-26T14:11:06Z | https://github.com/sktime/pytorch-forecasting/issues/1303 | [] | terbed | 2 |
kennethreitz/responder | graphql | 294 | Document how to do file uploads | After a bit of trial and error (and poking around in the codebase), I think I’ve worked out how to process file uploads:
```python
import responder

api = responder.API()


@api.route("/")
async def upload_file(req, resp):
    @api.background.task
    def process_data(data):
        open("uploaded_file.txt", "wb").write(data["file"])

    data = await req.media(format="files")
    process_data(data)
    resp.media = {"success": "ok"}


if __name__ == "__main__":
    api.run(port=8210)
```
But as far as I can tell, this isn't documented anywhere, and for all I know this has some obvious bug that’s going to cause me grief later. Since responder seems to have support for this, it’d be great to have an example in the docs to show how easy it is! | closed | 2019-02-17T22:40:44Z | 2019-03-21T03:07:05Z | https://github.com/kennethreitz/responder/issues/294 | [
"feature",
"documentation"
] | alexwlchan | 4 |
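To round out a docs example for the upload endpoint above, the client side is just a standard multipart/form-data request. The sketch below builds such a body by hand with the stdlib, purely to show the wire format (in practice something like `requests.post(url, files={"file": ...})` produces the same thing):

```python
import uuid

def build_multipart(field_name, filename, payload: bytes):
    """Hand-rolled multipart/form-data body, as a browser or HTTP client would send."""
    boundary = uuid.uuid4().hex
    body = (
        f"--{boundary}\r\n"
        f'Content-Disposition: form-data; name="{field_name}"; filename="{filename}"\r\n'
        f"Content-Type: application/octet-stream\r\n\r\n"
    ).encode() + payload + f"\r\n--{boundary}--\r\n".encode()
    content_type = f"multipart/form-data; boundary={boundary}"
    return content_type, body

content_type, body = build_multipart("file", "hello.txt", b"hello world")
```

The `name="file"` part is what ends up as the `data["file"]` key in the server-side `req.media(format="files")` call.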
holoviz/panel | jupyter | 7,031 | JSComponent with Bytes parameter raises [object ArrayBuffer] is not cloneable | I'm on the latest `main` branch of Panel.
I was trying to create an `AudioStream` widget using the `JSComponent`. The `value` parameter should be a `Bytes` parameter. But it raises an exception in the browser console.
```python
import panel as pn
import param

from panel.custom import JSComponent

pn.extension()


class AudioStream(JSComponent):
    value = param.Bytes()

    _esm = """
    export function render({ model }) {
        console.log("Hello")
    }
    """


audio_stream = AudioStream()
audio_stream.servable()
```
When I serve the application I see

```bash
Error rendering Bokeh items: Error: [object ArrayBuffer] is not cloneable
at a.clone (bokeh.min.js?v=4885298c46175e086cef50e41170e1d47cb77faa4020ef403fb6a65d313bcc55ee11e012be061d7dc6affb7fe3e44e03140a12dce87566a478efea820562e68d:197:679)
at N.default_value (bokeh.min.js?v=4885298c46175e086cef50e41170e1d47cb77faa4020ef403fb6a65d313bcc55ee11e012be061d7dc6affb7fe3e44e03140a12dce87566a478efea820562e68d:180:764)
at N.initialize (bokeh.min.js?v=4885298c46175e086cef50e41170e1d47cb77faa4020ef403fb6a65d313bcc55ee11e012be061d7dc6affb7fe3e44e03140a12dce87566a478efea820562e68d:184:1240)
at r.initialize_props (bokeh.min.js?v=4885298c46175e086cef50e41170e1d47cb77faa4020ef403fb6a65d313bcc55ee11e012be061d7dc6affb7fe3e44e03140a12dce87566a478efea820562e68d:180:4419)
at p._decode_object_ref (bokeh.min.js?v=4885298c46175e086cef50e41170e1d47cb77faa4020ef403fb6a65d313bcc55ee11e012be061d7dc6affb7fe3e44e03140a12dce87566a478efea820562e68d:213:4647)
at p._decode (bokeh.min.js?v=4885298c46175e086cef50e41170e1d47cb77faa4020ef403fb6a65d313bcc55ee11e012be061d7dc6affb7fe3e44e03140a12dce87566a478efea820562e68d:213:1739)
at p._decode_plain_object (bokeh.min.js?v=4885298c46175e086cef50e41170e1d47cb77faa4020ef403fb6a65d313bcc55ee11e012be061d7dc6affb7fe3e44e03140a12dce87566a478efea820562e68d:213:2300)
at p._decode (bokeh.min.js?v=4885298c46175e086cef50e41170e1d47cb77faa4020ef403fb6a65d313bcc55ee11e012be061d7dc6affb7fe3e44e03140a12dce87566a478efea820562e68d:213:1026)
at p._decode_object_ref (bokeh.min.js?v=4885298c46175e086cef50e41170e1d47cb77faa4020ef403fb6a65d313bcc55ee11e012be061d7dc6affb7fe3e44e03140a12dce87566a478efea820562e68d:213:4623)
at p._decode (bokeh.min.js?v=4885298c46175e086cef50e41170e1d47cb77faa4020ef403fb6a65d313bcc55ee11e012be061d7dc6affb7fe3e44e03140a12dce87566a478efea820562e68d:213:1739)
``` | closed | 2024-07-28T06:54:51Z | 2024-07-28T12:19:46Z | https://github.com/holoviz/panel/issues/7031 | [] | MarcSkovMadsen | 1 |
aminalaee/sqladmin | sqlalchemy | 398 | Convertor for PhoneNumberType | ### Checklist
- [X] The bug is reproducible against the latest release or `master`.
- [X] There are no similar issues or pull requests to fix it yet.
### Describe the bug
If a field of a model has `PhoneNumberType` from [sqlalchemy_utils](https://pypi.org/project/SQLAlchemy-Utils/), you cannot create or edit records.
### Steps to reproduce the bug
1. Make a model with a `PhoneNumberType` field.
2. Add it to the admin panel.
3. Try to create a new record or update an existing one.
### Expected behavior
Can edit and create new records.
### Actual behavior
Getting an Internal Server Error.
### Debugging material
```shell
Traceback (most recent call last):
File "/home/aryadovoy/Documents/coding/back/.venv/lib/python3.11/site-packages/uvicorn/protocols/http/httptools_impl.py", line 419, in run_asgi
result = await app( # type: ignore[func-returns-value]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/aryadovoy/Documents/coding/back/.venv/lib/python3.11/site-packages/uvicorn/middleware/proxy_headers.py", line 78, in __call__
return await self.app(scope, receive, send)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/aryadovoy/Documents/coding/back/.venv/lib/python3.11/site-packages/fastapi/applications.py", line 270, in __call__
await super().__call__(scope, receive, send)
File "/home/aryadovoy/Documents/coding/back/.venv/lib/python3.11/site-packages/starlette/applications.py", line 124, in __call__
await self.middleware_stack(scope, receive, send)
File "/home/aryadovoy/Documents/coding/back/.venv/lib/python3.11/site-packages/starlette/middleware/errors.py", line 184, in __call__
raise exc
File "/home/aryadovoy/Documents/coding/back/.venv/lib/python3.11/site-packages/starlette/middleware/errors.py", line 162, in __call__
await self.app(scope, receive, _send)
File "/home/aryadovoy/Documents/coding/back/.venv/lib/python3.11/site-packages/starlette/middleware/cors.py", line 84, in __call__
await self.app(scope, receive, send)
File "/home/aryadovoy/Documents/coding/back/.venv/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 79, in __call__
raise exc
File "/home/aryadovoy/Documents/coding/back/.venv/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 68, in __call__
await self.app(scope, receive, sender)
File "/home/aryadovoy/Documents/coding/back/.venv/lib/python3.11/site-packages/fastapi/middleware/asyncexitstack.py", line 21, in __call__
raise e
File "/home/aryadovoy/Documents/coding/back/.venv/lib/python3.11/site-packages/fastapi/middleware/asyncexitstack.py", line 18, in __call__
await self.app(scope, receive, send)
File "/home/aryadovoy/Documents/coding/back/.venv/lib/python3.11/site-packages/starlette/routing.py", line 706, in __call__
await route.handle(scope, receive, send)
File "/home/aryadovoy/Documents/coding/back/.venv/lib/python3.11/site-packages/starlette/routing.py", line 443, in handle
await self.app(scope, receive, send)
File "/home/aryadovoy/Documents/coding/back/.venv/lib/python3.11/site-packages/starlette/applications.py", line 124, in __call__
await self.middleware_stack(scope, receive, send)
File "/home/aryadovoy/Documents/coding/back/.venv/lib/python3.11/site-packages/starlette/middleware/errors.py", line 184, in __call__
raise exc
File "/home/aryadovoy/Documents/coding/back/.venv/lib/python3.11/site-packages/starlette/middleware/errors.py", line 162, in __call__
await self.app(scope, receive, _send)
File "/home/aryadovoy/Documents/coding/back/.venv/lib/python3.11/site-packages/starlette/middleware/sessions.py", line 86, in __call__
await self.app(scope, receive, send_wrapper)
File "/home/aryadovoy/Documents/coding/back/.venv/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 79, in __call__
raise exc
File "/home/aryadovoy/Documents/coding/back/.venv/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 68, in __call__
await self.app(scope, receive, sender)
File "/home/aryadovoy/Documents/coding/back/.venv/lib/python3.11/site-packages/starlette/routing.py", line 706, in __call__
await route.handle(scope, receive, send)
File "/home/aryadovoy/Documents/coding/back/.venv/lib/python3.11/site-packages/starlette/routing.py", line 276, in handle
await self.app(scope, receive, send)
File "/home/aryadovoy/Documents/coding/back/.venv/lib/python3.11/site-packages/starlette/routing.py", line 66, in app
response = await func(request)
^^^^^^^^^^^^^^^^^^^
File "/home/aryadovoy/Documents/coding/back/.venv/lib/python3.11/site-packages/sqladmin/authentication.py", line 56, in wrapper_decorator
return await func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/aryadovoy/Documents/coding/back/.venv/lib/python3.11/site-packages/sqladmin/application.py", line 465, in edit
Form = await model_view.scaffold_form()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/aryadovoy/Documents/coding/back/.venv/lib/python3.11/site-packages/sqladmin/models.py", line 1005, in scaffold_form
return await get_model_form(
^^^^^^^^^^^^^^^^^^^^^
File "/home/aryadovoy/Documents/coding/back/.venv/lib/python3.11/site-packages/sqladmin/forms.py", line 536, in get_model_form
field = await converter.convert(
^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/aryadovoy/Documents/coding/back/.venv/lib/python3.11/site-packages/sqladmin/forms.py", line 302, in convert
converter = self.get_converter(prop=prop)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/aryadovoy/Documents/coding/back/.venv/lib/python3.11/site-packages/sqladmin/forms.py", line 255, in get_converter
raise NoConverterFound( # pragma: nocover
sqladmin.exceptions.NoConverterFound: Could not find field converter for column phone_number (<class 'sqlalchemy_utils.types.phone_number.PhoneNumberType'>).
```
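The missing piece is a converter registered for `PhoneNumberType`. Conceptually, sqladmin's converter lookup is a type-to-form-field registry along the lines of the stdlib-only sketch below — all names here are invented for illustration (the real implementation is in the `forms.py` shown in the traceback), but it shows the kind of registration that would make the error go away:

```python
# Stdlib-only sketch of a type -> form-field converter registry. It only
# illustrates the lookup that raises NoConverterFound in the traceback above;
# none of these names are the real sqladmin API.

class NoConverterFound(Exception):
    pass

CONVERTERS = {}

def converts(type_name):
    """Register a converter function for a column type name."""
    def decorator(func):
        CONVERTERS[type_name] = func
        return func
    return decorator

def get_converter(column_type_name):
    try:
        return CONVERTERS[column_type_name]
    except KeyError:
        raise NoConverterFound(
            f"Could not find field converter for {column_type_name}."
        )

@converts("PhoneNumberType")  # the registration that is missing today
def convert_phone_number(column):
    # A phone number naturally maps to a telephone-style text input.
    return {"field": "TelField", "column": column}

field = get_converter("PhoneNumberType")("phone_number")
```

In real sqladmin, the equivalent workaround is to override the form field for that column on the `ModelView` (or to add a proper converter upstream, as the maintainers would).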
### Environment
OS: Fedora Linux 35 (KDE Plasma) x86_64
Python: 3.11.0
SQLAdmin: 0.8.0
### Additional context
_No response_ | closed | 2022-12-20T11:38:16Z | 2023-02-03T09:22:53Z | https://github.com/aminalaee/sqladmin/issues/398 | [] | aryadovoy | 5 |
healthchecks/healthchecks | django | 182 | Description field for Checks | In addition to Name and Tags, allow users to enter a free form description for each check. Perhaps allow a subset of markdown for simple formatting.
This would be useful to add some quick information on "what to do when this check goes down", it could link out to wiki / knowledge base articles, etc. | closed | 2018-07-31T13:13:09Z | 2018-08-20T15:16:13Z | https://github.com/healthchecks/healthchecks/issues/182 | [] | cuu508 | 0 |
lux-org/lux | pandas | 39 | Show widget in Out[...] to conform to Pandas semantics | Currently, the Lux widget is rendered through the [IPython.display](https://ipython.readthedocs.io/en/stable/api/generated/IPython.display.html) module, which is not directly tied to the output. This makes the widget show up before the Out[...] in Jupyter.
There is a way to display the Jupyter widget through the [Output display](https://ipywidgets.readthedocs.io/en/latest/examples/Output%20Widget.html). However, since the LuxDataFrame overrides the Pandas display, we have to suppress the Pandas output in the `__repr__` function.
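The `__repr__` suppression can be prototyped without Jupyter at all. The stdlib-only sketch below uses an invented `LuxLikeFrame` stand-in (not the real LuxDataFrame) just to show the idea of silencing the text repr while routing display through a rich-output hook:

```python
# Invented stand-in for the idea above: make the plain repr empty so pandas-style
# text output is suppressed, and let the rich display hook render the widget.

class LuxLikeFrame:
    def __init__(self, data):
        self.data = data

    def __repr__(self):
        # Suppress the default text output; the widget handles display instead.
        return ""

    def _repr_html_(self):
        # What a Jupyter Output widget would render in Out[...].
        return "<div class='lux-widget'>widget goes here</div>"

frame = LuxLikeFrame([1, 2, 3])
```

In a notebook, IPython prefers `_repr_html_` over `__repr__` for the Out[...] cell, which is exactly the ordering this issue wants to exploit.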
This issue and #56 are related and should be addressed together. | closed | 2020-07-19T10:26:39Z | 2021-04-20T15:55:59Z | https://github.com/lux-org/lux/issues/39 | [
"priority"
] | dorisjlee | 1 |
pandas-dev/pandas | pandas | 61,043 | BUG: `.str.replace()` with capture groups does not play nice with string methods | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
Code
```python
import pandas as pd
c = pd.Series("THE QUICK BROWN FOX JUMPS OVER THE LAZY DOG")
x, y, z = "\\b(FOX|THE)\\b", "_ABC_", "\\1_ABC_"
print(c.str.replace(x, y.lower(), regex=True))
print(c.str.replace(x, z.lower(), regex=True))
```
Output
```
0 _abc_ QUICK BROWN _abc_ JUMPS OVER _abc_ LAZY DOG
dtype: object
0 THE_abc_ QUICK BROWN FOX_abc_ JUMPS OVER THE_a...
dtype: object
```
### Issue Description
The `.lower()` string method inconsistently modifies the `repl` argument when the latter includes a regex capture group.
### Expected Behavior
I would expect `.lower()` to modify all characters in `repl`, including those in the capture group (or a warning stating otherwise).
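For context, this behaviour matches plain `re`: `.lower()` only lowercases the literal characters of the replacement template, while the backreference `\1` survives and re-inserts the matched text verbatim at substitution time. Lowercasing the captured group itself requires a callable `repl` (which `Series.str.replace` also accepts when `regex=True`):

```python
import re

s = "THE QUICK BROWN FOX JUMPS OVER THE LAZY DOG"
pat = r"\b(FOX|THE)\b"

template = r"\1_ABC_".lower()  # -> "\\1_abc_": the backreference is untouched
literal_lowered = re.sub(pat, template, s)

# A callable repl lowercases the matched group itself:
group_lowered = re.sub(pat, lambda m: (m.group(1) + "_ABC_").lower(), s)
```

So the second pandas example in the report behaves as the `re` module documents, even though a warning in pandas about backreferences surviving string methods would admittedly be friendlier.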
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 0691c5cf90477d3503834d983f69350f250a6ff7
python : 3.10.16
python-bits : 64
OS : Darwin
OS-release : 24.3.0
Version : Darwin Kernel Version 24.3.0: Thu Jan 2 20:24:16 PST 2025; root:xnu-11215.81.4~3/RELEASE_ARM64_T6000
machine : arm64
processor : arm
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 2.2.3
numpy : 2.2.2
pytz : 2025.1
dateutil : 2.9.0.post0
pip : 25.0
Cython : None
sphinx : None
IPython : 8.33.0
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.13.3
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : 2025.2.0
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : 3.1.5
lxml.etree : None
matplotlib : None
numba : None
numexpr : None
odfpy : None
openpyxl : 3.1.5
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : 19.0.1
pyreadstat : None
pytest : None
python-calamine : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : 2.0.1
xlsxwriter : None
zstandard : None
tzdata : 2025.1
qtpy : None
pyqt5 : None
</details> | closed | 2025-03-03T22:00:55Z | 2025-03-03T22:57:07Z | https://github.com/pandas-dev/pandas/issues/61043 | [
"Bug",
"Strings"
] | noahblakesmith | 1 |
davidsandberg/facenet | tensorflow | 941 | is this #openSource Project???can i use this for my project?? | open | 2018-12-23T17:43:06Z | 2018-12-23T17:43:06Z | https://github.com/davidsandberg/facenet/issues/941 | [] | RajHarry | 0 | |
pytorch/pytorch | machine-learning | 149,306 | `context_parallel` fails for training with `RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation` | ### 🐛 Describe the bug
Hi, I am from Hugging Face and we are trying to use `context_parallel` (with both stable and nightly torch). However, for training, it fails with
> RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation
I have created a minimal reproducible example in which a very simple model, `DummyModel`, is defined in the script. The same error occurs for a real model (Qwen 2.5) too.
The same error happens for both `SDPBackend.FLASH_ATTENTION` and `SDPBackend.EFFICIENT_ATTENTION`.
### To reproduce
Run the following script on a multi-GPU machine (I am using a single cloud machine with 4 A10 GPUs), as
1. python script.py
2. torchrun --nproc-per-node=2 script.py --distributed
3. torchrun --nproc-per-node=2 script.py --distributed --use-cp
where 1. (not using any distributed stuff) and 2. (distributed, without CP) succeed and **3. (distributed with CP) fails**.
### script.py
```python
import torch

torch.autograd.set_detect_anomaly(True)


class DummyOutput:
    def __init__(self, loss, logits, attn_out):
        self.loss = loss
        self.logits = logits
        self.attn_out = attn_out

    def __str__(self):
        return str({"loss": self.loss, "logits": self.logits, "attn_out": self.attn_out})


class DummyModel(torch.nn.Module):
    def __init__(self, vocab_size, hidden_dim, n_heads, is_causal=True):
        super().__init__()
        self.vocab_size = vocab_size
        self.hidden_dim = hidden_dim
        self.n_heads = n_heads
        self.head_dim = hidden_dim // n_heads
        self.is_causal = is_causal

        self.embedding = torch.nn.Embedding(num_embeddings=vocab_size, embedding_dim=self.hidden_dim)
        self.linear = torch.nn.Linear(hidden_dim, hidden_dim)
        self.q = torch.nn.Linear(hidden_dim, hidden_dim)
        self.k = torch.nn.Linear(hidden_dim, hidden_dim)
        self.v = torch.nn.Linear(hidden_dim, hidden_dim)
        self.atnn_out = torch.nn.Linear(hidden_dim, hidden_dim)
        self.proj = torch.nn.Linear(hidden_dim, vocab_size)

    # h being [batch_size, seq_len, hidden_dim]
    # we convert it to q, k, v here
    def forward(self, input_ids, labels=None):
        embeddings = self.embedding(input_ids)
        hidden_states = self.linear(embeddings)

        # we need to change it to q, k, v with [batch_size, n_head, seq_len, head_dim]
        # first, projection to get to [batch_size, seq_len, head_dim]
        q = self.q(hidden_states)
        k = self.k(hidden_states)
        v = self.v(hidden_states)

        batch_size = 1
        # reshape to [batch_size, n_head, seq_len, head_dim]
        q = q.view(batch_size, -1, self.n_heads, self.head_dim).transpose(1, 2)
        k = k.view(batch_size, -1, self.n_heads, self.head_dim).transpose(1, 2)
        v = v.view(batch_size, -1, self.n_heads, self.head_dim).transpose(1, 2)

        attn_out = F.scaled_dot_product_attention(q, k, v, is_causal=self.is_causal)

        # back to [batch_size, n_head, seq_len, head_dim]
        # need contiguous for training
        hidden = attn_out.transpose(1, 2).contiguous().view(batch_size, -1, self.n_heads * self.head_dim)

        atnn_out = self.atnn_out(hidden)
        logits = self.proj(atnn_out)

        loss = None
        if labels is not None:
            loss = torch.nn.functional.cross_entropy(logits.transpose(1, 2), labels)

        return DummyOutput(loss=loss, logits=logits, attn_out=attn_out)


def check(distributed=False, use_cp=False):
    device = "cuda"
    dtype = torch.bfloat16
    sdpa_backend = SDPBackend.FLASH_ATTENTION
    is_causal = True

    input_ids = torch.randint(low=8, high=64, size=(1, 64), device=device)
    labels = torch.clone(input_ids)

    model = DummyModel(vocab_size=128, hidden_dim=128, n_heads=4, is_causal=is_causal)
    model = model.to(device, dtype=dtype)
    model.eval()

    if distributed:
        dist.broadcast(input_ids, src=0)
        dist.broadcast(labels, src=0)
        rank = torch.distributed.get_node_local_rank()
        model = DDP(model, device_ids=[rank])

    optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)
    model.train()

    for step in range(3):
        model.zero_grad()
        optimizer.zero_grad()

        with sdpa_kernel(sdpa_backend):
            if use_cp:
                with context_parallel(
                    cp_mesh, buffers=(input_ids, labels), buffer_seq_dims=(1, 1)
                ):
                    outputs = model(input_ids, labels=labels)
            else:
                outputs = model(input_ids=input_ids, labels=labels)

        loss = outputs.loss
        print(f"device: {loss.device} | step: {step} | loss = {loss.detach().to('cpu').float().numpy()}")
        loss.backward()
        optimizer.step()


if __name__ == '__main__':
    # python3 temp.py
    # torchrun --nproc-per-node=2 temp.py --distributed
    # torchrun --nproc-per-node=2 temp.py --distributed --use-cp
    import argparse

    parser = argparse.ArgumentParser()
    parser.add_argument("--distributed", action="store_true", default=False)
    parser.add_argument("--use-cp", action="store_true", default=False)
    parser.add_argument("--nproc-per-node", type=int, default=1)
    args = parser.parse_args()

    import torch
    import torch.nn.functional as F
    from torch.nn.attention import sdpa_kernel, SDPBackend

    distributed = args.distributed
    use_cp = args.use_cp

    if distributed:
        from torch.distributed.device_mesh import init_device_mesh
        from torch.nn.parallel import DistributedDataParallel as DDP
        import torch.distributed as dist

        if use_cp:
            from torch.distributed.tensor.experimental import context_parallel

        world_size = args.nproc_per_node
        cp_mesh = init_device_mesh("cuda", (world_size,))

    check(distributed=distributed, use_cp=use_cp)
```
### Error log
```bash
root@dff7b35823a9:/transformers# torchrun --nproc-per-node=2 script.py --distributed --use-cp
W0317 08:57:27.892000 1659 torch/distributed/run.py:766]
W0317 08:57:27.892000 1659 torch/distributed/run.py:766] *****************************************
W0317 08:57:27.892000 1659 torch/distributed/run.py:766] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
W0317 08:57:27.892000 1659 torch/distributed/run.py:766] *****************************************
[rank1]: Traceback (most recent call last):
[rank1]: File "/transformers/script.py", line 149, in <module>
[rank1]: check(distributed=distributed, use_cp=use_cp)
[rank1]: File "/transformers/script.py", line 105, in check
[rank1]: with context_parallel(
[rank1]: File "/usr/lib/python3.10/contextlib.py", line 135, in __enter__
[rank1]: return next(self.gen)
[rank1]: File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 36, in generator_context
[rank1]: response = gen.send(None)
[rank1]: File "/usr/local/lib/python3.10/dist-packages/torch/distributed/tensor/experimental/_attention.py", line 1345, in context_parallel
[rank1]: chunks = _context_parallel_buffers(mesh, buffers, buffer_seq_dims)
[rank1]: File "/usr/local/lib/python3.10/dist-packages/torch/distributed/tensor/experimental/_attention.py", line 1287, in _context_parallel_buffers
[rank1]: new_buffers.append(sharder.shard(buffer, mesh, seq_dim))
[rank1]: File "/usr/local/lib/python3.10/dist-packages/torch/distributed/tensor/experimental/_attention.py", line 1244, in shard
[rank1]: cp_rank = mesh.get_local_rank()
[rank1]: File "/usr/local/lib/python3.10/dist-packages/torch/distributed/device_mesh.py", line 946, in get_local_rank
[rank1]: mesh_dim_group = not_none(self.get_group(mesh_dim))
[rank1]: File "/usr/local/lib/python3.10/dist-packages/torch/distributed/device_mesh.py", line 781, in get_group
[rank1]: _find_pg_by_ranks_and_tag(*self._dim_group_infos[mesh_dim][:2]) # type: ignore[index]
[rank1]: IndexError: list index out of range
device: cuda:0 | step: 0 | loss = 4.84375
/usr/local/lib/python3.10/dist-packages/torch/autograd/graph.py:824: UserWarning: Error detected in NllLoss2DBackward0. Traceback of forward call that caused the error:
File "/transformers/script.py", line 149, in <module>
check(distributed=distributed, use_cp=use_cp)
File "/transformers/script.py", line 108, in check
outputs = model(input_ids, labels=labels)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/parallel/distributed.py", line 1637, in forward
else self._run_ddp_forward(*inputs, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/parallel/distributed.py", line 1464, in _run_ddp_forward
return self.module(*inputs, **kwargs) # type: ignore[index]
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
File "/transformers/script.py", line 68, in forward
loss = torch.nn.functional.cross_entropy(logits.transpose(1, 2), labels)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/functional.py", line 3494, in cross_entropy
return torch._C._nn.cross_entropy_loss(
(Triggered internally at /pytorch/torch/csrc/autograd/python_anomaly_mode.cpp:122.)
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
[rank0]: Traceback (most recent call last):
[rank0]: File "/transformers/script.py", line 149, in <module>
[rank0]: check(distributed=distributed, use_cp=use_cp)
[rank0]: File "/transformers/script.py", line 115, in check
[rank0]: loss.backward()
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_tensor.py", line 648, in backward
[rank0]: torch.autograd.backward(
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/autograd/__init__.py", line 353, in backward
[rank0]: _engine_run_backward(
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/autograd/graph.py", line 824, in _engine_run_backward
[rank0]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
[rank0]: RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.LongTensor [1, 1, 64]] is at version 2; expected version 1 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!
[rank0]:[W317 08:57:31.906052155 ProcessGroupNCCL.cpp:1497] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
W0317 08:57:31.821000 1659 torch/distributed/elastic/multiprocessing/api.py:900] Sending process 1708 closing signal SIGTERM
E0317 08:57:31.985000 1659 torch/distributed/elastic/multiprocessing/api.py:874] failed (exitcode: 1) local_rank: 1 (pid: 1709) of binary: /usr/bin/python3
Traceback (most recent call last):
File "/usr/local/bin/torchrun", line 8, in <module>
sys.exit(main())
File "/usr/local/lib/python3.10/dist-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 355, in wrapper
return f(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/distributed/run.py", line 892, in main
run(args)
File "/usr/local/lib/python3.10/dist-packages/torch/distributed/run.py", line 883, in run
elastic_launch(
File "/usr/local/lib/python3.10/dist-packages/torch/distributed/launcher/api.py", line 139, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/usr/local/lib/python3.10/dist-packages/torch/distributed/launcher/api.py", line 270, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
script.py FAILED
------------------------------------------------------------
Failures:
<NO_OTHER_FAILURES>
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
time : 2025-03-17_08:57:31
host : dff7b35823a9
rank : 1 (local_rank: 1)
exitcode : 1 (pid: 1709)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================
```
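For reference, the `rank0` failure is autograd's version-counter check firing. A minimal illustration of that error class, using only stock PyTorch (no DDP or context parallelism involved):

```python
import torch

x = torch.ones(3, requires_grad=True)
y = 2 * x                 # non-leaf tensor
z = (y ** 2).sum()        # pow's backward saves y, recording its version
y.add_(1.0)               # in-place edit bumps y's version counter
try:
    z.backward()
except RuntimeError as err:
    # "one of the variables needed for gradient computation has been
    # modified by an inplace operation ..."
    print(type(err).__name__)
```

In the report above, the buffers passed to `context_parallel` are sharded in place and reused across optimizer steps, which is one way such a version mismatch can arise; whether that is the actual root cause here is for the maintainers to confirm.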
### Versions
```bash
PyTorch version: 2.8.0.dev20250315+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.20.3
Libc version: glibc-2.35
Python version: 3.10.12 (main, Feb 4 2025, 14:57:36) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.10.234-225.895.amzn2.x86_64-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A10G
GPU 1: NVIDIA A10G
GPU 2: NVIDIA A10G
GPU 3: NVIDIA A10G
Nvidia driver version: 550.144.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 48
On-line CPU(s) list: 0-47
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7R32
CPU family: 23
Model: 49
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 1
Stepping: 0
BogoMIPS: 5599.99
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_ts
c rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy cr8_legacy
abm sse4a misalignsse 3dnowprefetch topoext ssbd ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 clzero xsaveerptr rdpru wbnoinvd arat npt nrip_save rdpid
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 768 KiB (24 instances)
L1i cache: 768 KiB (24 instances)
L2 cache: 12 MiB (24 instances)
L3 cache: 96 MiB (6 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-47
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec rstack overflow: Mitigation; safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] intel-extension-for-pytorch==2.3.0
[pip3] mypy-extensions==1.0.0
[pip3] natten==0.15.1+torch220cu121
[pip3] numpy==1.24.3
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.25.1
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] onnx==1.17.0
[pip3] onnxconverter-common==1.13.0
[pip3] onnxruntime==1.21.0
[pip3] onnxruntime-tools==1.7.0
[pip3] pytorch-triton==3.3.0+git96316ce5
[pip3] tf2onnx==1.16.1
[pip3] torch==2.8.0.dev20250315+cu126
[pip3] torchaudio==2.6.0.dev20250315+cu126
[pip3] torchvision==0.22.0.dev20250315+cu126
[pip3] triton==3.2.0
```
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | open | 2025-03-17T09:04:23Z | 2025-03-17T21:32:56Z | https://github.com/pytorch/pytorch/issues/149306 | [
"oncall: distributed",
"module: context parallel"
] | ydshieh | 1 |
Josh-XT/AGiXT | automation | 903 | NameError: name 'all_responses' is not defined | ### Description
Instruct, Chains, Smart Instruct
```
agixt-agixt-1 | INFO: 172.18.0.5:52316 - "GET /api/chain/Smart%20Instruct/args HTTP/1.1" 200 OK
agixt-streamlit-1 | 2023-08-12 15:47:00.417 Uncaught app exception
agixt-streamlit-1 | Traceback (most recent call last):
agixt-streamlit-1 | File "/usr/local/lib/python3.10/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 552, in _run_script
agixt-streamlit-1 | exec(code, module.__dict__)
agixt-streamlit-1 | File "/app/pages/0-Agent Interactions.py", line 99, in <module>
agixt-streamlit-1 | all_responses=all_responses,
agixt-streamlit-1 | NameError: name 'all_responses' is not defined
```
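The traceback is the classic "name bound on one code path only" failure; a minimal sketch of the pattern (illustrative only, not AGiXT's actual page code):

```python
def render_page(learned=False):
    if learned:
        all_responses = ["..."]          # only bound when learning ran first
    # Reaching this line without the branch above leaves the name unbound,
    # which raises a NameError (UnboundLocalError is its subclass).
    return dict(all_responses=all_responses)

try:
    render_page(learned=False)
except NameError as err:
    print(err)
```

The usual fix is to initialize `all_responses` (e.g. to an empty list) before the conditional, which matches the observation that visiting the Learning page first makes the error go away.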
### Steps to Reproduce the Bug
ver 1.3.93
### Expected Behavior
If I go first to Learning and learn some file, after that it works.
### Operating System
- [X] Linux
- [ ] Microsoft Windows
- [ ] Apple MacOS
- [ ] Android
- [ ] iOS
- [ ] Other
### Python Version
- [ ] Python <= 3.9
- [X] Python 3.10
- [ ] Python 3.11
### Environment Type - Connection
- [X] Local - You run AGiXT in your home network
- [ ] Remote - You access AGiXT through the internet
### Runtime environment
- [X] Using docker compose
- [ ] Using local
- [ ] Custom setup (please describe above!)
### Acknowledgements
- [X] I have searched the existing issues to make sure this bug has not been reported yet.
- [X] I am using the latest version of AGiXT.
- [X] I have provided enough information for the maintainers to reproduce and diagnose the issue. | closed | 2023-08-12T15:51:07Z | 2023-08-13T00:07:18Z | https://github.com/Josh-XT/AGiXT/issues/903 | [
"type | report | bug",
"needs triage"
] | mongolu | 2 |
PaddlePaddle/models | nlp | 5,414 | Can the ResNet pretrained weights for the fluid version still be downloaded? | https://github.com/PaddlePaddle/models/tree/develop/PaddleCV/image_classification | open | 2021-12-10T04:39:02Z | 2024-02-26T05:08:22Z | https://github.com/PaddlePaddle/models/issues/5414 | [] | renmada | 0 |
pennersr/django-allauth | django | 3,263 | User roles pulled from Keycloak | Dear developers, is there any straightforward way to pull the `roles` for the user from the Keycloak environment? I didn't find this feature in the django-allauth docs (or at least it is not obvious to me). Thanks a lot! | closed | 2023-02-16T14:21:07Z | 2023-07-03T20:53:02Z | https://github.com/pennersr/django-allauth/issues/3263 | [] | maxeval | 2 |
ploomber/ploomber | jupyter | 1,137 | Rendering Glitch in DAG Plot Generated by `ploomber plot` | I used the `ploomber plot` command to generate a Directed Acyclic Graph (DAG) plot of my pipeline, and I encountered a rendering glitch when viewing the generated HTML file. The plot appears glitched or incorrectly rendered in certain browsers like Safari, while it renders correctly in others.
Steps to Reproduce:
1. Run `ploomber plot` to generate the DAG plot.
2. Open the generated HTML file in Safari.
3. Observe the rendering issue, which makes the text of each node overlap in the top-left corner while the nodes themselves are blank.
Expected Behavior:
The DAG plot should render correctly and consistently across different browsers, including Safari, when generated using `ploomber plot`.
Additional Information:
Ploomber Version: 0.23.0
Safari Version: 17.0
Operating System: macOS Sonoma (14.0 (23A344))
Screenshots:


| open | 2023-10-03T16:39:52Z | 2023-10-04T00:19:21Z | https://github.com/ploomber/ploomber/issues/1137 | [] | rajatmjain | 2 |
wkentaro/labelme | deep-learning | 895 | [Feature] | **Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.
| closed | 2021-07-25T17:34:06Z | 2021-07-25T20:34:59Z | https://github.com/wkentaro/labelme/issues/895 | [] | Apidwalin | 0 |
davidteather/TikTok-Api | api | 694 | Need some help | Hi, I hope I'm in the right place and that you can help me.
I have reverse engineered an Android app and sniffed the traffic with Charles Proxy,
but the app seems to communicate via WSS or RTSP (I don't know exactly) in some places
and my question is how can I sniff this traffic? Thanks so much :)
This is the project: https://github.com/ageof/MyDlink-API-Python/ | closed | 2021-09-16T11:50:40Z | 2022-02-14T03:02:39Z | https://github.com/davidteather/TikTok-Api/issues/694 | [] | ageof | 1 |
donnemartin/system-design-primer | python | 769 | how to write a code about input code | open | 2023-04-13T20:55:27Z | 2023-06-06T10:47:14Z | https://github.com/donnemartin/system-design-primer/issues/769 | [
"needs-review"
] | nora-1234 | 4 | |
MaartenGr/BERTopic | nlp | 2,036 | [Guided Topic Modeling] ValueError: setting an array element with a sequence. | Hi, I am trying to run the example code given in https://maartengr.github.io/BERTopic/getting_started/guided/guided.html#example and got an error.
Example code:
```python
from bertopic import BERTopic
from sklearn.datasets import fetch_20newsgroups
docs = fetch_20newsgroups(subset='all', remove=('headers', 'footers', 'quotes'))["data"]
seed_topic_list = [["drug", "cancer", "drugs", "doctor"],
["windows", "drive", "dos", "file"],
["space", "launch", "orbit", "lunar"]]
topic_model = BERTopic(seed_topic_list=seed_topic_list)
topics, probs = topic_model.fit_transform(docs)
```
Error:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/reneechou/git/BERTopic/bertopic/_bertopic.py", line 400, in fit_transform
y, embeddings = self._guided_topic_modeling(embeddings)
File "/Users/reneechou/git/BERTopic/bertopic/_bertopic.py", line 3770, in _guided_topic_modeling
embeddings[indices] = np.average([embeddings[indices], seed_topic_embeddings[seed_topic]], weights=[3, 1])
File "/Users/reneechou/miniconda3/envs/bertopic/lib/python3.10/site-packages/numpy/lib/function_base.py", line 511, in average
a = np.asanyarray(a)
ValueError: setting an array element with a sequence. The requested array has an inhomogeneous shape after 1 dimensions. The detected shape was (2,) + inhomogeneous part.
```
The issue happened when calculating the (weighted) averages between a set of documents (`embeddings[indices]`) and their seed topic embeddings (`seed_topic_embeddings[seed_topic]`), where `np.average` cannot calculate the averages between a 2D array and a 1D array.
This issue can be solved by broadcasting the 1D array to match the shape of the 2D array, and calculating the averages along axis 0.
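As a standalone illustration of that shape fix (toy data, not BERTopic's internals):

```python
import numpy as np

emb = np.arange(6.0).reshape(3, 2)     # three document embeddings, dim 2
seed = np.array([10.0, 20.0])          # one seed-topic embedding, dim 2

# np.average([emb, seed], weights=[3, 1]) fails: the pair is a ragged sequence.
# Tiling the seed row to emb's shape yields a regular (2, 3, 2) stack:
tiled = np.tile(seed, (emb.shape[0], 1))
avg = np.average([emb, tiled], axis=0, weights=[3, 1])
# First row: (3 * [0, 1] + 1 * [10, 20]) / 4 == [2.5, 5.75]
print(avg[0])
```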
Original code (https://github.com/MaartenGr/BERTopic/blob/master/bertopic/_bertopic.py#L3766):
```python
# Average the document embeddings related to the seeded topics with the
# embedding of the seeded topic to force the documents in a cluster
for seed_topic in range(len(seed_topic_list)):
indices = [index for index, topic in enumerate(y) if topic == seed_topic]
embeddings[indices] = np.average([embeddings[indices], seed_topic_embeddings[seed_topic]], weights=[3, 1])
```
Modified code:
```python
# Average the document embeddings related to the seeded topics with the
# embedding of the seeded topic to force the documents in a cluster
for seed_topic in range(len(seed_topic_list)):
indices = [index for index, topic in enumerate(y) if topic == seed_topic]
embeddings_ = embeddings[indices]
seed_topic_embeddings_ = seed_topic_embeddings[seed_topic]
seed_topic_embeddings_ = np.tile(seed_topic_embeddings_, (embeddings_.shape[0], 1))
embeddings[indices] = np.average([embeddings_, seed_topic_embeddings_], axis=0, weights=[3, 1])
``` | closed | 2024-06-05T13:26:27Z | 2024-08-07T09:29:12Z | https://github.com/MaartenGr/BERTopic/issues/2036 | [] | RTChou | 8 |
jeffknupp/sandman2 | sqlalchemy | 52 | standalone license file? | most projects have a `LICENSE` file in their root folder. Sandman 1 had this. Is the absence here intentional? If so, the reason for this should be noted in [README.md](https://github.com/jeffknupp/sandman2/blob/master/README.md). I'm genuinely interested -- Thanks! | closed | 2016-11-15T20:56:42Z | 2016-11-16T18:55:01Z | https://github.com/jeffknupp/sandman2/issues/52 | [] | swharden | 1 |
google-research/bert | tensorflow | 1,158 | does tokenizer support emoji? | Hi, I have the code below and it always encodes emoji as `[UNK]`. Can someone tell me what I should do? Thanks

```python
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased', do_lower_case=True)
s = " 😃 hello how are you"
tokenizer.tokenize(s)
# ['[UNK]', 'hello', 'how', 'are', 'you']
``` | closed | 2020-10-07T19:54:12Z | 2020-10-12T00:16:41Z | https://github.com/google-research/bert/issues/1158 | [] | steveguang | 0 |
deeppavlov/DeepPavlov | nlp | 1,123 | Problem with training my own TF-IDF Ranker on Windows 10. | Hello, I created my own config file based on ru_ranker_tfidf_wiki.json:
```
{
"dataset_reader": {
"class_name": "odqa_reader",
"data_path": "~/Desktop/ODQA/train_data",
"save_path": "~/Desktop/ODQA/tfidf.db",
"dataset_format": "txt"
},
"dataset_iterator": {
"class_name": "sqlite_iterator",
"shuffle": false,
"load_path": "~/Desktop/ODQA/tfidf.db"
},
"chainer": {
"in": [
"docs"
],
"in_y": [
"doc_ids",
"doc_nums"
],
"out": [
"tfidf_doc_ids"
],
"pipe": [
{
"class_name": "hashing_tfidf_vectorizer",
"id": "vectorizer",
"fit_on": [
"docs",
"doc_ids",
"doc_nums"
],
"save_path": "~/Desktop/ODQA/tfidf_matrix.npz",
"load_path": "~/Desktop/ODQA/tfidf_matrix.npz",
"tokenizer": {
"class_name": "ru_tokenizer",
"lemmas": true,
"ngram_range": [
1,
2
]
}
},
{
"class_name": "tfidf_ranker",
"top_n": 5,
"in": [
"docs"
],
"out": [
"tfidf_doc_ids",
"tfidf_doc_scores"
],
"vectorizer": "#vectorizer"
}
]
},
"train": {
"validate_best": false,
"test_best": false,
"batch_size": 10000
},
"metadata": {
"variables": {
"ROOT_PATH": "~/.deeppavlov",
"DOWNLOADS_PATH": "{ROOT_PATH}/downloads",
"MODELS_PATH": "{ROOT_PATH}/models"
},
"requirements": [],
"labels": {
"server_utils": "Ranker"
}
}
}
```
When I try to train the ranker, I run into the following problem:
```
C:\Users\apantiuk\AppData\Local\Continuum\anaconda3\envs\bot_env\python.exe C:/Users/apantiuk/Desktop/ODQA/main.py
2020-01-28 13:41:46.721 INFO in 'deeppavlov.dataset_readers.odqa_reader'['odqa_reader'] at line 57: Reading files...
2020-01-28 13:41:46.722 INFO in 'deeppavlov.dataset_readers.odqa_reader'['odqa_reader'] at line 134: Building the database...
0%| | 0/300 [00:00<?, ?it/s]
0it [00:00, ?it/s]2020-01-28 13:41:48.416 INFO in 'deeppavlov.dataset_readers.odqa_reader'['odqa_reader'] at line 57: Reading files...
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\Users\apantiuk\AppData\Local\Continuum\anaconda3\envs\bot_env\lib\multiprocessing\spawn.py", line 105, in spawn_main
exitcode = _main(fd)
File "C:\Users\apantiuk\AppData\Local\Continuum\anaconda3\envs\bot_env\lib\multiprocessing\spawn.py", line 114, in _main
prepare(preparation_data)
File "C:\Users\apantiuk\AppData\Local\Continuum\anaconda3\envs\bot_env\lib\multiprocessing\spawn.py", line 225, in prepare
_fixup_main_from_path(data['init_main_from_path'])
File "C:\Users\apantiuk\AppData\Local\Continuum\anaconda3\envs\bot_env\lib\multiprocessing\spawn.py", line 277, in _fixup_main_from_path
run_name="__mp_main__")
File "C:\Users\apantiuk\AppData\Local\Continuum\anaconda3\envs\bot_env\lib\runpy.py", line 263, in run_path
pkg_name=pkg_name, script_name=fname)
File "C:\Users\apantiuk\AppData\Local\Continuum\anaconda3\envs\bot_env\lib\runpy.py", line 96, in _run_module_code
mod_name, mod_spec, pkg_name, script_name)
File "C:\Users\apantiuk\AppData\Local\Continuum\anaconda3\envs\bot_env\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:\Users\apantiuk\Desktop\ODQA\main.py", line 9, in <module>
ranker = train_model(model_config)
File "C:\Users\apantiuk\AppData\Local\Continuum\anaconda3\envs\bot_env\lib\site-packages\deeppavlov\__init__.py", line 32, in train_model
train_evaluate_model_from_config(config, download=download, recursive=recursive)
File "C:\Users\apantiuk\AppData\Local\Continuum\anaconda3\envs\bot_env\lib\site-packages\deeppavlov\core\commands\train.py", line 92, in train_evaluate_model_from_config
data = read_data_by_config(config)
File "C:\Users\apantiuk\AppData\Local\Continuum\anaconda3\envs\bot_env\lib\site-packages\deeppavlov\core\commands\train.py", line 58, in read_data_by_config
return reader.read(data_path, **reader_config)
File "C:\Users\apantiuk\AppData\Local\Continuum\anaconda3\envs\bot_env\lib\site-packages\deeppavlov\dataset_readers\odqa_reader.py", line 81, in read
self._build_db(save_path, dataset_format, expand_path(data_path))
File "C:\Users\apantiuk\AppData\Local\Continuum\anaconda3\envs\bot_env\lib\site-packages\deeppavlov\dataset_readers\odqa_reader.py", line 130, in _build_db
Path(save_path).unlink()
File "C:\Users\apantiuk\AppData\Local\Continuum\anaconda3\envs\bot_env\lib\pathlib.py", line 1294, in unlink
self._accessor.unlink(self)
PermissionError: [WinError 32] The process cannot access the file because it is being used by another process: 'C:\\Users\\apantiuk\\Desktop\\ODQA\\tfidf.db'
```
Execution of the program doesn't stop and goes into a loop while showing the error.
I run into this problem only on my local Windows 10 Pro machine; everything is fine on Google Colab. | closed | 2020-01-28T11:08:26Z | 2020-01-28T14:34:56Z | https://github.com/deeppavlov/DeepPavlov/issues/1123 | [] | matherialist | 2 |
apify/crawlee-python | automation | 1,079 | Integrate `impit` as HTTP client | - Impit is already available on PyPI: [impit](https://pypi.org/project/impit/).
- Integrate it as the HTTP client, and make it a default one, while keeping `HTTPX` optional.
- Breaking change?
- The `apify_fingerprint_datapoints`, `browserforge` and `httpx[brotli,http2,zstd]` packages could then be removed from the mandatory dependencies.
- Relates to: https://github.com/apify/crawlee-python/issues/1077.
| open | 2025-03-13T08:00:33Z | 2025-03-24T10:25:27Z | https://github.com/apify/crawlee-python/issues/1079 | [
"enhancement",
"t-tooling"
] | vdusek | 0 |
flairNLP/flair | pytorch | 2,962 | Can't load local NER model | Hi,
I trained a NER model following your tutorial [here](https://medium.com/thecyphy/training-custom-ner-model-using-flair-df1f9ea9c762), which by the way is very nice and clear, thank you so much. However, when I try to load the trained tagger, `SequenceTagger` doesn't seem to recognize the local path. It keeps trying to download the model from the Hugging Face repo. Here's the error I got:
```
Traceback (most recent call last):
File "flair_evaluate.py", line 9, in <module>
model = SequenceTagger.load('resources/tagger/wikiner_model/final-model.pt')
File "/share/home/cao/.conda/envs/bilstm_crf_Study/lib/python3.8/site-packages/flair/nn/model.py", line 134, in load
model_file = cls._fetch_model(str(model_path))
File "/share/home/cao/.conda/envs/bilstm_crf_Study/lib/python3.8/site-packages/flair/models/sequence_tagger_model.py", line 924, in _fetch_model
model_path = cached_download(
File "/share/home/cao/.conda/envs/bilstm_crf_Study/lib/python3.8/site-packages/huggingface_hub/file_download.py", line 665, in cached_download
_raise_for_status(r)
File "/share/home/cao/.conda/envs/bilstm_crf_Study/lib/python3.8/site-packages/huggingface_hub/utils/_errors.py", line 169, in _raise_for_status
raise e
File "/share/home/cao/.conda/envs/bilstm_crf_Study/lib/python3.8/site-packages/huggingface_hub/utils/_errors.py", line 131, in _raise_for_status
response.raise_for_status()
File "/share/home/cao/.conda/envs/bilstm_crf_Study/lib/python3.8/site-packages/requests/models.py", line 1021, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 404 Client Error: Not Found for url: https://huggingface.co/resources/tagger/wikiner_model/final-model.pt/resolve/main/pytorch_model.bin (Request ID: XH-Tc3TQnKyPEKgmNWpJE)
```
I have checked the paths; the files and directories are all there with the right names. I tried using absolute paths and changing the cache root parameter, but nothing seems to work. Do you have an idea how I can fix this?
Thank you in advance.
**Environment:**
- OS : Linux debian 5.10.140-1
- Version : flair 0.11.3
| closed | 2022-10-13T15:33:55Z | 2024-09-13T10:43:10Z | https://github.com/flairNLP/flair/issues/2962 | [
"bug"
] | DanrunFR | 3 |
junyanz/pytorch-CycleGAN-and-pix2pix | pytorch | 998 | How to use even more CPU's |
Hi,
I have been getting great results for inference but now wish to train a new model.
I have started the training with the following command for 800x600 images:
`python3 train.py --dataroot datasets/style_constable --name constable_cyclegan --model cycle_gan --preprocess scale_width_and_crop --load_size 800 --crop_size 800 --gpu_ids -1`
I have CPUs only, but quite a lot of them (120).
The inference and the training example above use about 22 CPUs on average (varying between 8 and 28), as monitored by the Linux top command.
Do you know whether it is possible to increase the parallelism even further? I'm assuming that there is no parameter for this, but was wondering whether it could be forced using some batching or a larger-sized work unit?
Thanks again. | open | 2020-04-18T06:19:58Z | 2020-04-22T16:48:18Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/998 | [] | Adrian-1234 | 4 |
sanic-org/sanic | asyncio | 2,718 | ASGI exceptions outside handlers should be able to pass status | An exception raised in the lifecycle (but not handler) of a request on an ASGI-served application should be able to pass a status code.
---
> In my experiment, raising a `BadRequest` in `asgi.py` will cause the ASGI server return 500 error, instead of 400. This seems out of the scope of this PR, but I am wondering whether we should change the raised error to `ServerError` or others before we figure out how to let ASGI server return a 400 error.
_Originally posted by @ChihweiLHBird in https://github.com/sanic-org/sanic/pull/2606#discussion_r1133448078_
| open | 2023-03-19T20:11:45Z | 2023-07-04T19:39:15Z | https://github.com/sanic-org/sanic/issues/2718 | [
"help wanted",
"beginner"
] | ahopkins | 0 |
darrenburns/posting | rest-api | 115 | Searching and/or filtering for response bodies | Especially for big requests, search and filter functionality can be a game changer.
A regular vim search `/` would already go a long way. Filtering via jq would be an outstanding feature. | closed | 2024-10-16T20:28:08Z | 2024-11-18T17:23:39Z | https://github.com/darrenburns/posting/issues/115 | [] | ttytm | 2 |
globaleaks/globaleaks-whistleblowing-software | sqlalchemy | 3,081 | sqlalchemy integrity error wraps db-api integrity error | So when I created a new tenant from the admin account and then went in to create a new admin account only for that tenant on the subdomain, I got an error like this. I still get a notification that I have created an admin user and can log in.
I don't know why this error occurs, but I'm going to test it more.
@evilaliv3 By the way, I will soon use Kali Linux to try to pentest GlobaLeaks; is this something I can do without hitting any legal rules?
Version: 4.4.4
sqlalchemy.exc.IntegrityError Wraps a DB-API IntegrityError.
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 1236, in _execute_context
cursor, statement, parameters, context
File "/usr/lib/python3/dist-packages/sqlalchemy/engine/default.py", line 536, in do_execute
cursor.execute(statement, parameters)
sqlite3.IntegrityError: UNIQUE constraint failed: user.tid, user.username
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/twisted/internet/defer.py", line 1416, in _inlineCallbacks
result = result.throwExceptionIntoGenerator(g)
File "/usr/lib/python3/dist-packages/twisted/python/failure.py", line 491, in throwExceptionIntoGenerator
return g.throw(self.type, self.value, self.tb)
File "/usr/lib/python3/dist-packages/globaleaks/handlers/admin/user.py", line 161, in post
user = yield create_user(self.request.tid, request, self.request.language)
File "/usr/lib/python3/dist-packages/twisted/python/threadpool.py", line 250, in inContext
result = inContext.theWork()
File "/usr/lib/python3/dist-packages/twisted/python/threadpool.py", line 266, in <lambda>
inContext.theWork = lambda: context.call(ctx, func, *args, **kw)
File "/usr/lib/python3/dist-packages/twisted/python/context.py", line 122, in callWithContext
return self.currentContext().callWithContext(ctx, func, *args, **kw)
File "/usr/lib/python3/dist-packages/twisted/python/context.py", line 85, in callWithContext
return func(*args,**kw)
File "/usr/lib/python3/dist-packages/globaleaks/orm.py", line 152, in _wrap
result = function(session, *args, **kwargs)
File "/usr/lib/python3/dist-packages/globaleaks/handlers/admin/user.py", line 70, in create_user
return user_serialize_user(session, db_create_user(session, tid, request, language), language)
File "/usr/lib/python3/dist-packages/globaleaks/handlers/admin/user.py", line 54, in db_create_user
session.flush()
File "/usr/lib/python3/dist-packages/sqlalchemy/orm/session.py", line 2446, in flush
self._flush(objects)
File "/usr/lib/python3/dist-packages/sqlalchemy/orm/session.py", line 2584, in _flush
transaction.rollback(_capture_exception=True)
File "/usr/lib/python3/dist-packages/sqlalchemy/util/langhelpers.py", line 67, in __exit__
compat.reraise(exc_type, exc_value, exc_tb)
File "/usr/lib/python3/dist-packages/sqlalchemy/util/compat.py", line 277, in reraise
raise value
File "/usr/lib/python3/dist-packages/sqlalchemy/orm/session.py", line 2544, in _flush
flush_context.execute()
File "/usr/lib/python3/dist-packages/sqlalchemy/orm/unitofwork.py", line 416, in execute
rec.execute(self)
File "/usr/lib/python3/dist-packages/sqlalchemy/orm/unitofwork.py", line 583, in execute
uow,
File "/usr/lib/python3/dist-packages/sqlalchemy/orm/persistence.py", line 245, in save_obj
insert,
File "/usr/lib/python3/dist-packages/sqlalchemy/orm/persistence.py", line 1116, in _emit_insert_statements
statement, params
File "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 980, in execute
return meth(self, multiparams, params)
File "/usr/lib/python3/dist-packages/sqlalchemy/sql/elements.py", line 273, in _execute_on_connection
return connection._execute_clauseelement(self, multiparams, params)
File "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 1099, in _execute_clauseelement
distilled_params,
File "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 1240, in _execute_context
e, statement, parameters, cursor, context
File "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 1458, in _handle_dbapi_exception
util.raise_from_cause(sqlalchemy_exception, exc_info)
File "/usr/lib/python3/dist-packages/sqlalchemy/util/compat.py", line 296, in raise_from_cause
reraise(type(exception), exception, tb=exc_tb, cause=cause)
File "/usr/lib/python3/dist-packages/sqlalchemy/util/compat.py", line 276, in reraise
raise value.with_traceback(tb)
File "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 1236, in _execute_context
cursor, statement, parameters, context
File "/usr/lib/python3/dist-packages/sqlalchemy/engine/default.py", line 536, in do_execute
cursor.execute(statement, parameters)
sqlalchemy.exc.IntegrityError: (sqlite3.IntegrityError) UNIQUE constraint failed: user.tid, user.username [SQL: 'INSERT INTO user (id, tid, creation_date, username, salt, hash_alg, password, name, description, public_name, role, state, last_login, mail_address, language, password_change_needed, password_change_date, crypto_prv_key, crypto_pub_key, crypto_rec_key, crypto_bkp_key, crypto_escrow_prv_key, crypto_escrow_bkp1_key, crypto_escrow_bkp2_key, change_email_address, change_email_token, change_email_date, reset_password_token, reset_password_date, notification, forcefully_selected, can_delete_submission, can_postpone_expiration, can_grant_access_to_reports, can_edit_general_settings, readonly, two_factor_enable, two_factor_secret, reminder_date, pgp_key_fingerprint, pgp_key_public, pgp_key_expiration, clicked_recovery_key) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)'] [parameters: ('b622cded-974d-47e8-9ba3-b88040933840', 6, '2021-10-28 06:28:04.350468', 'demo', 'db+msVpwl88CS9h3qs3XCA==', 'ARGON2', 'Gkm6iA0KJWNP8KePsSFCZcKl+KDbm4nuKmQR0HeTbj8=', 'demo', '{"en": ""}', 'demo', 0, 1, '1970-01-01 00:00:00.000000', 'info@upgradeit.dk', 'da', 1, '2021-10-28 06:28:01.817359', 'dsAqRDMHrGBvoY5cilwc4lJjPNt8q2yOtUHDbkgYz4/ox5zlmqa2KEVUrCUa8hH95rtuvTC4L43Rs+QApfSql9P4/7nznd0hD3Q+2STccO4lrdzj', 'wtqKo6hjC9XI4DRmBZvdYcRF9tC5QWq1PIJHAJHd8RQ=', 'Mqay1+Re9rZiLfwiU+ZD/e1IuyUe0Y/qmM+hefDHmQMS/Bx2iQLndCjYKMLoe7tjxgqxfs24hsuZ3qAdK+Uj3BIBMqrqmhW0HQC3z+pcgQM=', 'bOTiGi6KEBQRvg4GCOWMc1Yy71NEQMIicuXV36NsVCBOKuTmYAGpv3RPulTWs4f7TpwsgnzRLPdtuvXr4kh+Foo/oCDR3qez3TM2NysnPPofvmi/', '', 'mTQpISgK2EyZ3RltK4RdJS3+hRSAnZe8c1S2o6vYX34Fg/evxKIf4TP7uexa2tAhF63K5Z1mm9b7QoGwJQl4D/sgdbtraX4t+xTUin6rW90nky4nW7yQeEsRJ1Q=', 'vPCsXPr1S2KRDyv0p64fDmX2HDkt/RyhkQ//FJ+htW9PYN9yM9vTI5a+UU1jEmCB8L/3sufXcVXYlxJf7dOyr+6FQTpPkOcyjoEXZ329m+fZpRRC4eXJdN7AvwU=', '', None, '1970-01-01 00:00:00.000000', None, 
datetime.datetime(1970, 1, 1, 0, 0), 1, 0, 0, 0, 0, 0, 0, 0, '', '1970-01-01 00:00:00.000000', '', '', '1970-01-01 00:00:00.000000', 0)] (Background on this error at: http://sqlalche.me/e/gkpj)
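The failing constraint is the composite `(tid, username)` uniqueness on the `user` table, i.e. the same username was inserted twice for the same tenant id. A minimal stdlib reproduction (schema heavily simplified from the real one):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE user (
        id TEXT PRIMARY KEY,
        tid INTEGER,
        username TEXT,
        UNIQUE (tid, username)
    )
""")
conn.execute("INSERT INTO user VALUES ('a', 6, 'demo')")
try:
    # Same tenant id + same username -> the same error as in the traceback.
    conn.execute("INSERT INTO user VALUES ('b', 6, 'demo')")
except sqlite3.IntegrityError as e:
    print(e)  # UNIQUE constraint failed: user.tid, user.username
# A different tenant may reuse the username without conflict.
conn.execute("INSERT INTO user VALUES ('c', 7, 'demo')")
```

This suggests the request tried to create the user under a tenant that already has a `demo` user (or under the wrong `tid`).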
| closed | 2021-10-28T06:38:18Z | 2021-10-28T09:48:06Z | https://github.com/globaleaks/globaleaks-whistleblowing-software/issues/3081 | [] | strinux | 2 |
graphdeco-inria/gaussian-splatting | computer-vision | 1,048 | Very Very different PSNR between the original and the latest version | I have followed this project since August 2023. (Many thanks to the relevant workers!) Recently I noticed the program has been updated a lot, so I also updated the code on my computer. However, when I trained on the same directory, which was created by running COLMAP on a series of images, I got a very different PSNR after the same amount of training. With the original version, which only uses a loss function like the following,

```python
Ll1 = l1_loss(image, gt_image)
loss = (1.0 - opt.lambda_dssim) * Ll1 + opt.lambda_dssim * (1.0 - ssim(image, gt_image))
```

the PSNR is around 20 or 30. However, with the latest version, whose loss function looks like

```python
Ll1 = l1_loss(image, gt_image)
if FUSED_SSIM_AVAILABLE:
    ssim_value = fused_ssim(image.unsqueeze(0), gt_image.unsqueeze(0))
else:
    ssim_value = ssim(image, gt_image)
loss = (1.0 - opt.lambda_dssim) * Ll1 + opt.lambda_dssim * (1.0 - ssim_value)
```

I got a PSNR of around 7 or 8, much lower. I have verified that both versions use the same training_report function, which prints the PSNR. However, observed with the naked eye, the difference between the rendered images is very tiny: a large gap in the PSNR numbers, but a small one in the images. I want to know why.
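As a sanity check on what those numbers mean: for images in [0, 1], PSNR = 10 · log10(1 / MSE), so the two ranges imply very different average per-pixel errors (pure-stdlib arithmetic, no project code):

```python
import math

def psnr(mse, max_val=1.0):
    # PSNR in dB for a given mean squared error.
    return 10 * math.log10(max_val ** 2 / mse)

def rmse_for_psnr(p, max_val=1.0):
    # Invert: PSNR = 10*log10(max^2 / mse)  =>  mse = max^2 / 10^(PSNR/10)
    return math.sqrt(max_val ** 2 / 10 ** (p / 10))

print(rmse_for_psnr(30))  # ~0.032: ~3% average per-pixel error
print(rmse_for_psnr(8))   # ~0.398: ~40% average per-pixel error
```

A PSNR of 8 corresponds to roughly 40% per-pixel RMS error, which should be glaringly visible. If the renders look nearly identical, the reported number itself is suspect, e.g. the image value range or how training_report averages the metric.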
Really appreciate!!!
| open | 2024-11-05T04:20:41Z | 2025-02-14T14:46:57Z | https://github.com/graphdeco-inria/gaussian-splatting/issues/1048 | [] | wla-98 | 4 |