repo_name stringlengths 9 75 | topic stringclasses 30
values | issue_number int64 1 203k | title stringlengths 1 976 | body stringlengths 0 254k | state stringclasses 2
values | created_at stringlengths 20 20 | updated_at stringlengths 20 20 | url stringlengths 38 105 | labels listlengths 0 9 | user_login stringlengths 1 39 | comments_count int64 0 452 |
|---|---|---|---|---|---|---|---|---|---|---|---|
davidsandberg/facenet | computer-vision | 903 | det.npy | I'm so new in this field. when i follow your code, i couldn't understand what ids det1,2,3.npy files.
I thought it is a model file, but it wasnt. anybody solve what i want to know? Thanks | open | 2018-10-26T12:24:50Z | 2018-10-26T13:02:30Z | https://github.com/davidsandberg/facenet/issues/903 | [] | sangkyuleeKOR | 1 |
ploomber/ploomber | jupyter | 435 | Ploomber fails to install a virtual environment on a system with py2 and py3 installed | After `ploomber install` I got this:
> /usr/bin/python: No module named venv
> Error: An error occurred when executing command: python -m venv venv-chq
> Original error message: Command '('python', '-m', 'venv', 'venv-chq')' returned non-zero exit status 1.
It looks like ploomber calls plain python, which on my system defaults to py2 that doesn't have venv.
My system is CentOS with py2.7.5 (python) and py3.7.6 (python3)
I wonder if py2 interferes only with creation of a virtual environment, which can be remedied manually, or there are more surprises ahead.
Thanks,
Marcin | closed | 2021-12-09T17:38:29Z | 2022-02-14T01:08:20Z | https://github.com/ploomber/ploomber/issues/435 | [] | mgierdal | 7 |
aleju/imgaug | deep-learning | 337 | AssertionError when working with ImageDataGenerator | I've been trying to implement IMGAUG to work with Keras's ImageDataGenerator (and have found the thread where this is discussed), but am running into an Assertion Error with no explanation during the fit command
`history = model.fit_generator(train_generator,
steps_per_epoch = steps_per_epoch,
epochs=epochs,
workers=4,
validation_data=validation_generator,
validation_steps=validation_steps)`
This yields the error:
```
AssertionError Traceback (most recent call last)
<ipython-input-39-e7ff542a51eb> in <module>()
10 workers=4,
11 validation_data=validation_generator,
---> 12 validation_steps=validation_steps)
18 frames
/usr/local/lib/python3.6/dist-packages/imgaug/augmenters/blend.py in blend_alpha(image_fg, image_bg, alpha, eps)
94 """
95 assert image_fg.shape == image_bg.shape
---> 96 assert image_fg.dtype.kind == image_bg.dtype.kind
97 # TODO switch to gate_dtypes()
98 assert image_fg.dtype.name not in ["float128"]
```
`
I think this has to do with my images not being in the numpy unit8 format I believe, but I'm not fully sure about this, as documentation says that at times it will work with floats too? Any advice is appreciated | open | 2019-06-14T06:06:47Z | 2019-06-15T08:32:25Z | https://github.com/aleju/imgaug/issues/337 | [] | EXJUSTICE | 3 |
huggingface/transformers | python | 36,086 | The docstring doesn't match the file name in utils/check_table.py | ### System Info

### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
check file docstring
### Expected behavior
change into correct file name | closed | 2025-02-07T09:16:09Z | 2025-02-10T15:48:04Z | https://github.com/huggingface/transformers/issues/36086 | [
"bug"
] | kkscilife | 2 |
opengeos/leafmap | streamlit | 355 | Leaf maps options question | Does leaf maps offer?
1. Direction API to draw path From source to destination.
2. Direction API to calculate the distance from the source to the destination.
3. Show multiple pin locations on the maps.
@Pop these are the points that need to clear from the Leafmaps support team. | closed | 2023-02-03T07:06:36Z | 2023-02-03T20:33:43Z | https://github.com/opengeos/leafmap/issues/355 | [
"Feature Request"
] | ziskind60 | 1 |
langmanus/langmanus | automation | 44 | openai.InternalServerError: Unable to round-trip http request to upstream: dial tcp [::1]:11434: connectex: No connection could be made because the target machine actively refused it. | closed | 2025-03-19T03:47:23Z | 2025-03-20T02:13:42Z | https://github.com/langmanus/langmanus/issues/44 | [] | bigsinger | 2 | |
pytorch/pytorch | machine-learning | 149,040 | Only 2D, 3D, 4D, 5D padding with non-constant padding are supported for now | ### 🐛 Describe the bug
Hello, thanks for sharing the work.
I encountered an issue while running my ESPnet-based TTS script on Windows. Here is the error message I got:
G:\code> & g:/University_documents_over_four_years/AI语音/.conda/python.exe g:/code/tts1.py
Failed to import Flash Attention, using ESPnet default: No module named 'flash_attn'
Loaded spembs for speaker: test
emedding shape:(1, 1, 512)
Fbank feature shape: torch.Size([1, 512, 80])
Traceback (most recent call last):
File "g:\code\tts1.py", line 95, in <module>
result = text2speech(text_input, speech=speech_tensor, spembs=embedding)
File "G:\University_documents_over_four_years\AI语音\.conda\lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
File "G:\University_documents_over_four_years\AI语音\.conda\lib\site-packages\espnet2\bin\tts_inference.py", line 196, in __call__
output_dict = self.model.inference(**batch, **cfg)
File "G:\University_documents_over_four_years\AI语音\.conda\lib\site-packages\espnet2\tts\espnet_model.py", line 256, in inference
feats = self.feats_extract(speech[None])[0][0]
File "G:\University_documents_over_four_years\AI语音\.conda\lib\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "G:\University_documents_over_four_years\AI语音\.conda\lib\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
File "G:\University_documents_over_four_years\AI语音\.conda\lib\site-packages\espnet2\tts\feats_extract\log_mel_fbank.py", line 88, in forward
input_stft, feats_lens = self.stft(input, input_lengths)
File "G:\University_documents_over_four_years\AI语音\.conda\lib\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "G:\University_documents_over_four_years\AI语音\.conda\lib\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
File "G:\University_documents_over_four_years\AI语音\.conda\lib\site-packages\espnet2\layers\stft.py", line 105, in forward
output = torch.stft(input.float(), **stft_kwargs)
File "G:\University_documents_over_four_years\AI语音\.conda\lib\site-packages\torch\functional.py", line 707, in stft
input = F.pad(input.view(extended_shape), [pad, pad], pad_mode)
File "G:\University_documents_over_four_years\AI语音\.conda\lib\site-packages\torch\nn\functional.py", line 5209, in pad
return torch._C._nn.pad(input, pad, mode, value)
NotImplementedError: Only 2D, 3D, 4D, 5D padding with non-constant padding are supported for now
Has anyone encountered this issue before? How can I fix it?
Thanks in advance! 🙏
### Versions
PyTorch version: 2.3.1+cpu
Is debug build: False
CUDA used to build PyTorch: Could not collect
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 11 家庭中文版 (10.0.26100 64 位)
GCC version: Could not collect
Clang version: Could not collect
CMake version: Could not collect
Libc version: N/A
Python version: 3.10.16 | packaged by Anaconda, Inc. | (main, Dec 11 2024, 16:19:12) [MSC v.1929 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.26100-SP0
Is CUDA available: False
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3060 Laptop GPU
Nvidia driver version: 566.07
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Name: 12th Gen Intel(R) Core(TM) i7-12700H
Manufacturer: GenuineIntel
Family: 198
Architecture: 9
ProcessorType: 3
DeviceID: CPU0
CurrentClockSpeed: 2300
MaxClockSpeed: 2300
L2CacheSize: 11776
L2CacheSpeed: None
Revision: None
Versions of relevant libraries:
[pip3] numpy==1.22.4
[pip3] pytorch-lightning==2.5.0.post0
[pip3] torch==2.3.1
[pip3] torch-complex==0.4.4
[pip3] torchaudio==2.3.1
[pip3] torchmetrics==1.6.1
[pip3] torchvision==0.18.1
[conda] blas 1.0 mkl
[conda] cpuonly 2.0 0 pytorch
[conda] mkl 2021.4.0 pypi_0 pypi
[conda] mkl-service 2.4.0 py310h827c3e9_2
[conda] mkl_fft 1.3.11 py310h827c3e9_0
[conda] mkl_random 1.2.8 py310hc64d2fc_0
[conda] numpy 1.22.4 pypi_0 pypi
[conda] pytorch 2.3.1 cpu_py310h0ce1571_0
[conda] pytorch-lightning 2.5.0.post0 pypi_0 pypi
[conda] pytorch-mutex 1.0 cpu pytorch
[conda] torch-complex 0.4.4
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki | open | 2025-03-12T13:54:45Z | 2025-03-14T19:01:28Z | https://github.com/pytorch/pytorch/issues/149040 | [
"module: nn",
"triaged"
] | fallen-leaves-web | 1 |
yeongpin/cursor-free-vip | automation | 149 | 建议更改CloudFlare人机验证逻辑 | 建議更改CloudFlare人機驗證邏輯 | 方案一:使用 cloudscraper 库(推荐)
这个库专门用于处理 Cloudflare 的反爬机制,可自动完成 JavaScript 验证挑战。
python
复制
import cloudscraper
# 创建 scraper 对象
scraper = cloudscraper.create_scraper(
browser={
'browser': 'firefox',
'platform': 'windows',
'mobile': False
}
)
url = "https://目标网站.com"
try:
# 发送请求(自动处理验证)
response = scraper.get(url)
# 检查响应状态
if response.status_code == 200:
print("成功绕过验证!")
print(response.text[:500]) # 打印部分内容
else:
print("请求失败,状态码:", response.status_code)
except Exception as e:
print("发生错误:", str(e))
安装依赖:
bash
复制
pip install cloudscraper
方案二:Selenium 自动化浏览器
适用于需要交互操作的复杂验证(如点击验证按钮):
python
复制
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
import time
options = webdriver.ChromeOptions()
# options.add_argument("--headless") # 无头模式
driver = webdriver.Chrome(options=options)
url = "https://目标网站.com"
try:
driver.get(url)
# 等待 Cloudflare 验证完成(最长30秒)
WebDriverWait(driver, 30).until(
EC.presence_of_element_located((By.TAG_NAME, "body"))
)
print("页面标题:", driver.title)
print("页面内容:", driver.page_source[:500])
except Exception as e:
print("发生错误:", str(e))
finally:
driver.quit()
繁體版本:
方案壹:使用 cloudscraper 庫(推薦)
這個庫專門用於處理 Cloudflare 的反爬機制,可自動完成 JavaScript 驗證挑戰。
python
復制
import cloudscraper
# 創建 scraper 對象
scraper = cloudscraper.create_scraper(
browser={
'browser': 'firefox',
'platform': 'windows',
'mobile': False
}
)
url = "https://目標網站.com"
try:
# 發送請求(自動處理驗證)
response = scraper.get(url)
# 檢查響應狀態
if response.status_code == 200:
print("成功繞過驗證!")
print(response.text[:500]) # 打印部分內容
else:
print("請求失敗,狀態碼:", response.status_code)
except Exception as e:
print("發生錯誤:", str(e))
安裝依賴:
bash
復制
pip install cloudscraper
方案二:Selenium 自動化瀏覽器
適用於需要交互操作的復雜驗證(如點擊驗證按鈕):
python
復制
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
import time
options = webdriver.ChromeOptions()
# options.add_argument("--headless") # 無頭模式
driver = webdriver.Chrome(options=options)
url = "https://目標網站.com"
try:
driver.get(url)
# 等待 Cloudflare 驗證完成(最長30秒)
WebDriverWait(driver, 30).until(
EC.presence_of_element_located((By.TAG_NAME, "body"))
)
print("頁面標題:", driver.title)
print("頁面內容:", driver.page_source[:500])
except Exception as e:
print("發生錯誤:", str(e))
finally:
driver.quit() | closed | 2025-03-06T12:39:15Z | 2025-03-06T12:41:53Z | https://github.com/yeongpin/cursor-free-vip/issues/149 | [] | sysedger | 0 |
kizniche/Mycodo | automation | 953 | Cannot activate Bang-Bang Hysteretic function | Trying to Activate a Bang-Bang Hysteretic function after initial configuration and saving. Errors with "Error: Error: receiving: timeout"
- Mycodo Version: [e.g. 8.0.3] 8.9.1
- Raspberry Pi Version: [e.g. 3B+] a02082
- Raspbian OS Version: [e.g. Buster Lite] Linux raspberrypi 5.10.17-v7+
1. Go to Setup -> Function
2. Click on Activate on Bang-Bang Hysteretic function
3. After a period of time, page will refresh with success messages "Success: Function Activate (SQL)
Success: Activate Controller" and error message "Error: Error: receiving: timeout"
Expect function to enable and start working.


Brand new installation using SHT31D sensor, all sensor values reporting (temp, humidity, dewpoint, VPD) and PID Controller, Conditional Controllers activate and work fine.
| closed | 2021-03-16T03:05:17Z | 2021-03-17T04:43:12Z | https://github.com/kizniche/Mycodo/issues/953 | [
"bug",
"Fixed and Committed"
] | JustinCredible- | 5 |
babysor/MockingBird | pytorch | 578 | 打开web端后,左侧显示模式选择,但是右侧报错,如下 | ImportError: DLL load failed while importing shell: 找不到指定的程序。
Traceback:
File "D:\anaconda\envs\aivoice\lib\site-packages\streamlit\scriptrunner\script_runner.py", line 443, in _run_script
exec(code, module.__dict__)
File "C:\Users\giant\AppData\Local\Temp\tmpp_sbx1jh.py", line 14, in <module>
render_streamlit_ui()
File "D:\voice\MockingBird-main\mkgui\base\ui\streamlit_ui.py", line 838, in render_streamlit_ui
opyrator = getOpyrator(mode)
File "D:\voice\MockingBird-main\mkgui\base\ui\streamlit_ui.py", line 818, in getOpyrator
from mkgui.app import synthesize
File "D:\voice\MockingBird-main\mkgui\app.py", line 6, in <module>
from encoder import inference as encoder
File "D:\voice\MockingBird-main\encoder\inference.py", line 3, in <module>
from encoder.audio import preprocess_wav # We want to expose this function from here
File "D:\voice\MockingBird-main\encoder\audio.py", line 7, in <module>
import librosa
File "D:\anaconda\envs\aivoice\lib\site-packages\librosa\__init__.py", line 211, in <module>
from . import core
File "D:\anaconda\envs\aivoice\lib\site-packages\librosa\core\__init__.py", line 5, in <module>
from .convert import * # pylint: disable=wildcard-import
File "D:\anaconda\envs\aivoice\lib\site-packages\librosa\core\convert.py", line 7, in <module>
from . import notation
File "D:\anaconda\envs\aivoice\lib\site-packages\librosa\core\notation.py", line 8, in <module>
from ..util.exceptions import ParameterError
File "D:\anaconda\envs\aivoice\lib\site-packages\librosa\util\__init__.py", line 84, in <module>
from .files import * # pylint: disable=wildcard-import
File "D:\anaconda\envs\aivoice\lib\site-packages\librosa\util\files.py", line 28, in <module>
__data_path = os.environ.get("LIBROSA_DATA_DIR", pooch.os_cache("librosa"))
File "D:\anaconda\envs\aivoice\lib\site-packages\pooch\utils.py", line 99, in os_cache
return Path(appdirs.user_cache_dir(project))
File "D:\anaconda\envs\aivoice\lib\site-packages\appdirs.py", line 293, in user_cache_dir
path = os.path.normpath(_get_win_folder("CSIDL_LOCAL_APPDATA"))
File "D:\anaconda\envs\aivoice\lib\site-packages\appdirs.py", line 480, in _get_win_folder_with_pywin32
from win32com.shell import shellcon, shell
请问有没有大佬知道如何解决 | closed | 2022-05-25T11:07:10Z | 2022-10-26T15:01:26Z | https://github.com/babysor/MockingBird/issues/578 | [] | novolands | 3 |
deezer/spleeter | tensorflow | 175 | [Feature] Generating a vocal time code file for karaoke | ## Description
I don't know how this software work. But it's simply amazing! :-)
I would love to use it for creating karaoke music file, i.e. I have to synchronize the lyrics. I'm wondering whether it's possible to generate the time code information while splitting the vocal, of course not each single words, but between silence and vocals.
For instance, the software generates a text file with the timestamps of vocal starts and ends.
| closed | 2019-12-09T23:26:44Z | 2019-12-30T14:59:46Z | https://github.com/deezer/spleeter/issues/175 | [
"enhancement",
"feature"
] | bigboss97 | 2 |
miguelgrinberg/Flask-Migrate | flask | 176 | Duplicate migrations in non default schemas (Postgres) | I recently ran into this issue: https://stackoverflow.com/questions/40577640/flask-migrate-using-different-postgres-schemas-table-args-schema-te
As described in my answer there, this problem can be fixed by setting `include_schemas = True` in the `configure` call, but I find this behavior surprising / borderline buggy.
If `flask db migrate` will generate migrations for models in non-default schemas by default, it seems natural that it should also scan the DB for tables in non-default schemas. Otherwise you end up in this very perplexing situation where every time your run `flask db migrate` it recreates the exact same table creation migration. | closed | 2017-11-30T04:52:05Z | 2019-01-13T22:21:30Z | https://github.com/miguelgrinberg/Flask-Migrate/issues/176 | [
"question",
"auto-closed"
] | jolleon | 2 |
allure-framework/allure-python | pytest | 33 | Add documentation | closed | 2017-02-06T14:12:21Z | 2018-07-16T12:22:48Z | https://github.com/allure-framework/allure-python/issues/33 | [
"theme:old pytest",
"type:enhancement"
] | baev | 2 | |
pandas-dev/pandas | python | 60,284 | BUG?: using `None` as replacement value in `replace()` typically upcasts to object dtype | I noticed that in certain cases, when replacing a value with `None`, that we always cast to object dtype, regardless of whether the dtype of the calling series can actually hold None (at least, when considering `None` just as a generic "missing value" indicator).
For example, a float Series can hold `None` in the sense of holding missing values, which is how `None` is treated in setitem:
```python
>>> ser = pd.Series([1, 2, 3], dtype="float")
>>> ser[1] = None
>>> ser
0 1.0
1 NaN
2 3.0
dtype: float64
```
However, when using `replace()` to change the value 2.0 with None, it depends on the exact way to specify the to_replace/value combo, but typically it will upcast to object:
```python
# with list
>>> ser.replace([1, 2], [10, None])
0 10.0
1 None
2 3.0
dtype: object
# with Series -> here it gives NaN but that is because the Series constructor already coerces the None
>>> ser.replace(pd.Series({1: 10, 2: None}))
0 10.0
1 NaN
2 3.0
dtype: float64
# with scalar replacements
>>> ser.replace(1, 10).replace(2, None)
0 10.0
1 None
2 3.0
dtype: object
```
In all the above cases, when replacing `None` with `np.nan`, it of course just results in a float Series with NaN.
The reason for this is two-fold. First, in `Block._replace_coerce` there is a check specifically for `value is None` and in that case we always cast to object dtype:
https://github.com/pandas-dev/pandas/blob/5f23aced2f97f2ed481deda4eaeeb049d6c7debe/pandas/core/internals/blocks.py#L906-L910
The above is used when replacing with a list of values. But for the scalar case, we also cast to object dtype because in this case we check for `if self._can_hold_element(value)` to do the replacement with a simple setitem (and if not cast to object dtype first before trying again). But it seems that `can_hold_element(np.array([], dtype=float), None)` gives False ..
---
Everything is tested with current main (3.0.0.dev), but I see the same behaviour on older releases (2.0 and 1.5)
---
Somewhat related issue:
* https://github.com/pandas-dev/pandas/issues/29024 | open | 2024-11-12T10:45:46Z | 2024-11-12T12:45:38Z | https://github.com/pandas-dev/pandas/issues/60284 | [
"Bug",
"Missing-data",
"replace",
"API - Consistency"
] | jorisvandenbossche | 2 |
pyg-team/pytorch_geometric | deep-learning | 10,029 | ToUndirected strange behaviour | ### 🐛 Describe the bug
I found this when checking the ToUndirected transformer on batches and am confused by the behaviour.
I have a list of directed graphs which I want to undirect. I have N nodes, N*(N-1)/2 edges and just as many edge attributes.
When I undirect the individual graphs first and then batch them, or batch the individual graphs and then undirect the whole batch, I wouldn't expect a difference.
As you can see from the example, it seems that all relevant tensors are indeed the same.
Yet, when I use the `to_data_list` argument, the undirectedness is lost in one of the cases, but not in the other.
We also get different numbers of graphs out after the to_data_list and it seems that the way of first undirecting and then batching is the correct and save way to do, but the other way around would be nice to work. Is this a bug or intended behaviour for some mathematical reason?
```python
import torch
from torch_geometric.data import Batch, Data
from torch_geometric.transforms import ToUndirected
undirected_transformer = ToUndirected()
edge_index1 = torch.tensor([[0, 1, 2], [1, 2, 0]], dtype=torch.long)
x1 = torch.tensor([[1], [2], [3]], dtype=torch.float)
edge_attr1 = torch.tensor([[0.1], [0.2], [0.3]], dtype=torch.float)
edge_index2 = torch.tensor([[0, 1], [1, 0]], dtype=torch.long)
x2 = torch.tensor([[4], [5]], dtype=torch.float)
edge_attr2 = torch.tensor([[0.4], [0.5]], dtype=torch.float)
data1 = Data(x=x1, edge_index=edge_index1, edge_attr=edge_attr1)
data2 = Data(x=x2, edge_index=edge_index2, edge_attr=edge_attr2)
undirected_batch = undirected_transformer(Batch.from_data_list([data1, data2]))
batch_from_undirected = Batch.from_data_list([undirected_transformer(data1), undirected_transformer(data2)])
assert undirected_batch.is_undirected()
assert batch_from_undirected.is_undirected()
assert (undirected_batch.x == batch_from_undirected.x).all()
assert (undirected_batch.edge_index == batch_from_undirected.edge_index).all()
assert (undirected_batch.edge_attr == batch_from_undirected.edge_attr).all()
assert (undirected_batch.batch == batch_from_undirected.batch).all()
assert all( [data.is_undirected() for data in batch_from_undirected.to_data_list()] )
assert not all( [data.is_undirected() for data in undirected_batch.to_data_list()] )
assert len([data for data in batch_from_undirected.to_data_list()]) == 2
assert len([data for data in undirected_batch] ) == 5
```
Thank you for your help!
### Versions
Collecting environment information...
PyTorch version: 2.6.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 15.1.1 (arm64)
GCC version: Could not collect
Clang version: 16.0.0 (clang-1600.0.26.6)
CMake version: version 3.26.3
Libc version: N/A
Python version: 3.12.7 (main, Oct 1 2024, 02:05:46) [Clang 15.0.0 (clang-1500.1.0.2.5)] (64-bit runtime)
Python platform: macOS-15.1.1-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M1 Max
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] pytorch-lightning==2.5.0.post0
[pip3] torch==2.6.0
[pip3] torch-geometric==2.6.1
[pip3] torchmetrics==1.6.1
[conda] numpy 1.26.4 pypi_0 pypi | open | 2025-02-13T17:01:08Z | 2025-03-12T16:54:26Z | https://github.com/pyg-team/pytorch_geometric/issues/10029 | [
"bug"
] | HCelion | 1 |
MilesCranmer/PySR | scikit-learn | 207 | [BUG] *Installation difficulties on Windows 10, Py3.9* | **Describe the bug**
Facing issues while installing pysr through
import pysr
pysr.install()
**julia.tools.PyCallInstallError: Precompiling PyCall failed.**
Fatal Python error: init_import_size: Failed to import the site module
**Version (please include the following information):**
- OS: Windows (x86_64-w64-mingw32)
Microsoft Windows [Version 10.0.19042.2130]
- Julia version: Version 1.8.2 (2022-09-29)
- Python version: Python 3.9.13
- PySR version: (0.11.4)
- Does the bug still appear with the latest version of PySR?: Yes
**Configuration**
- What are your PySR settings?
- What dataset are you running on?: No dataset
- If possible, please share a minimal code example that produces the error.
**Error message**
```
Julia Version 1.8.2
Commit 36034abf26 (2022-09-29 15:21 UTC)
Platform Info:
OS: Windows (x86_64-w64-mingw32)
Microsoft Windows [Version 10.0.19042.2130]
CPU: Intel(R) Core(TM) i7-8850H CPU @ 2.60GHz:
speed user nice sys idle irq
#1-12 2592 MHz 10071994 0 7263058 161157713 326495 ticks
Memory: 15.723960876464844 GB (6015.3671875 MB free)
Uptime: 14874.625 sec
Load Avg: 0.0 0.0 0.0
WORD_SIZE: 64
LIBM: libopenlibm
LLVM: libLLVM-13.0.1 (ORCJIT, skylake)
Threads: 1 on 12 virtual cores
Environment:
JULIA_PROJECT = @pysr-0.11.4
HOMEDRIVE = C:
HOMEPATH = \Users\nakk116
PATH = C:\JCP\symreg\Scripts;C:\Program Files (x86)\Google\Chrome\Application;C:\windows\system32;C:\windows;C:\windows\System32\Wbem;C:\windows\System32\WindowsPowerShell\v1.0\;C:\windows\System32\OpenSSH\;C:\Program Files (x86)\DVC (Data Version Control);C:\Program Files\Git\cmd;C:\Users\nakk116\AppData\Local\Microsoft\WindowsApps;;C:\Users\nakk116\AppData\Local\Programs\Microsoft VS Code\bin;C:\Users\nakk116\AppData\Local\Programs\Julia-1.8.2\bin
PATHEXT = .COM;.EXE;.BAT;.CMD;.VBS;.VBE;.JS;.JSE;.WSF;.WSH;.MSC
PSMODULEPATH = C:\Program Files\WindowsPowerShell\Modules;C:\windows\system32\WindowsPowerShell\v1.0\Modules
│ ArgumentError: Package PyCall [438e738f-606a-5dbb-bf0a-cddfbfd45ab0] is required but does not seem to be installed:
│ - Run `Pkg.instantiate()` to install all recorded dependencies.
│
│ Stacktrace:
│ [1] _require(pkg::Base.PkgId)
│ @ Base .\loading.jl:1306
│ [2] _require_prelocked(uuidkey::Base.PkgId)
│ @ Base .\loading.jl:1200
│ [3] macro expansion
│ @ .\lock.jl:223 [inlined]
│ [4] require(uuidkey::Base.PkgId)
│ @ Base .\loading.jl:1195
│ [5] top-level scope
│ @ C:\JCP\symreg\Lib\site-packages\julia\install.jl:36
│ [6] include(mod::Module, _path::String)
│ @ Base .\Base.jl:419
│ [7] exec_options(opts::Base.JLOptions)
│ @ Base .\client.jl:303
│ [8] _start()
│ @ Base .\client.jl:522
└ @ Main C:\JCP\symreg\Lib\site-packages\julia\install.jl:38
[ Info: Installing PyCall...
Installing known registries into `C:\Users\nakk116\.julia`
Updating registry at `C:\Users\nakk116\.julia\registries\General.toml`
Resolving package versions...
Installed Parsers ──────── v2.4.2
Installed VersionParsing ─ v1.3.0
Installed Conda ────────── v1.7.0
...
[83775a58] + Zlib_jll v1.2.12+3
[8e850b90] + libblastrampoline_jll v5.1.1+0
[8e850ede] + nghttp2_jll v1.48.0+0
Building Conda ─→ `C:\Users\nakk116\.julia\scratchspaces\44cfe95a-1eb2-52ea-b672-e2afdf69b78f\6e47d11ea2776bc5627421d59cdcc1296c058071\build.log`
Building PyCall → `C:\Users\nakk116\.julia\scratchspaces\44cfe95a-1eb2-52ea-b672-e2afdf69b78f\53b8b07b721b77144a0fbbbc2675222ebf40a02d\build.log`
Precompiling project...
8 dependencies successfully precompiled in 8 seconds. 4 already precompiled.
**Precompiling PyCall...
Fatal Python error: init_import_size: Failed to import the site module**
Python runtime state: initialized
Traceback (most recent call last):
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.9_3.9.3568.0_x64__qbz5n2kfra8p0\lib\site.py", line 73, in <module>
import os
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.9_3.9.3568.0_x64__qbz5n2kfra8p0\lib\os.py", line 29, in <module>
from _collections_abc import _check_methods
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.9_3.9.3568.0_x64__qbz5n2kfra8p0\lib\_collections_abc.py", line 12, in <module>
GenericAlias = type(list[int])
TypeError: 'type' object is not subscriptable
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\JCP\symreg\lib\site-packages\pysr\julia_helpers.py", line 78, in install
julia.install(quiet=quiet)
File "C:\JCP\symreg\lib\site-packages\julia\tools.py", line 127, in install
raise PyCallInstallError("Precompiling")
julia.tools.PyCallInstallError: Precompiling PyCall failed.
```
** Important information from Julia may be printed before Python's Traceback **
| closed | 2022-10-18T22:17:12Z | 2024-02-12T09:04:30Z | https://github.com/MilesCranmer/PySR/issues/207 | [
"bug"
] | ganatma | 4 |
tqdm/tqdm | jupyter | 1,264 | Update using current "total" (and not increment size) | - [x] I have marked all applicable categories:
+ [ ] documentation request (i.e. "X is missing from the documentation." If instead I want to ask "how to use X?" I understand [StackOverflow#tqdm] is more appropriate)
+ [x] new feature request
- [x] I have visited the [source website], and in particular
read the [known issues]
- [x] I have searched through the [issue tracker] for duplicates
- [x] I have mentioned version numbers, operating system and
environment, where applicable:
```python
import tqdm, sys
print(tqdm.__version__, sys.version, sys.platform)
```
I want to loop over a list that is a cumulative sum, and I want to use the current value of the sum to update the progress bar. Example of what I would like to do (but which does not work):
```python
import tqdm
items = [0, 10, 50, 100, 3000]
pbar = tqdm.tqdm(total=max(items))
for i in items:
pbar.update(n=i)
```
What does work:
```python
import tqdm
items = [0, 10, 50, 100, 3000]
pbar = tqdm.tqdm(total=max(items))
for i in items:
pbar.n = i
pbar.refresh()
```
See https://stackoverflow.com/questions/69604872/tqdm-by-percentage .
Note:
```python
import tqdm, sys
print(tqdm.__version__, sys.version, sys.platform)
```
gives
```none
4.62.3 3.9.7 | packaged by conda-forge | (default, Sep 29 2021, 19:22:19)
[Clang 11.1.0 ] darwin
```
[source website]: https://github.com/tqdm/tqdm/
[known issues]: https://github.com/tqdm/tqdm/#faq-and-known-issues
[issue tracker]: https://github.com/tqdm/tqdm/issues?q=
[StackOverflow#tqdm]: https://stackoverflow.com/questions/tagged/tqdm
| open | 2021-10-20T10:14:24Z | 2024-11-05T09:59:03Z | https://github.com/tqdm/tqdm/issues/1264 | [] | tdegeus | 6 |
ultralytics/ultralytics | computer-vision | 19,822 | Can you fix up the custom model performance for the Android export? | https://github.com/user-attachments/assets/ef60262e-e196-4bef-9f6f-73bbf9f59458
### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar bug report.
### Ultralytics YOLO Component
_No response_
### Bug
Can you fix up the custom model performance for the Android export?
On the RTX 4090, it works great. Would be nice if it worked as good on Android too.
https://github.com/user-attachments/assets/dbe27289-8cf3-4e89-96b8-cd1e449c4a06
### Environment
s23 ultra
### Minimal Reproducible Example
s23 ultra
### Additional
_No response_
### Are you willing to submit a PR?
- [ ] Yes I'd like to help by submitting a PR! | open | 2025-03-22T11:21:33Z | 2025-03-22T11:49:54Z | https://github.com/ultralytics/ultralytics/issues/19822 | [
"embedded",
"exports"
] | AntDX316 | 2 |
modin-project/modin | pandas | 7,435 | PERF: Look into reducing copies for native execution | Native execution uses a pandas dataframe to represent the data. For operations that act on just one dataframe, the sequence for [defaulting to pandas](https://github.com/modin-project/modin/blob/8a832de870243294c407dee6300d993647205ff3/modin/core/storage_formats/base/query_compiler.py#L202) is:
1) copy the pandas dataframe
2) call a pandas method on it (with some wrapping to ensure semantic correctness)
3) construct a new query compiler out of the result, which requires another copy
I have observed that the copies can take a very long time (on the order of seconds for GBs of data) whether or not pandas copy-on-write is enabled on a 2GB numerical dataset.
We do the copies to ensure correct semantics-- in the compiler constructor we do the copy so that mutating the original pandas dataframe doesn't affect the new dataframe, and we copy before doing the operation because `NativeQueryCompiler.to_pandas()` has to make a copy for the same reason. However, if we're careful not to ever return a mutated pandas dataframe or mutate an input pandas dataframe, we can skip the copies. OTOH, maybe we should be relying more on pandas copy on write.
Here is the little benchmark I was trying out on my mac (8 physical cores, 64 GB memory, MacBook Pro 16-inch 2023, macOS Sequoia 15.3, Apple M2 CPU)
```python
import modin.pandas as pd
import numpy as np
from modin import set_execution
set_execution("Native", "Native")
# make a df of about 8 GB
df = pd.DataFrame(np.random.randint(0,100,size=(2**22,2**8)))
%time repr(df.sort_values(0))
```
The modin sort takes about 15.4 seconds whereas pandas takes about 5.21 seconds to do the equivalent task. Copying the pandas frame takes about 6 seconds, whether or not I have copy on write enabled.
<details>
<summary> output of `modin.pandas.show_versions()` </summary>
```
INSTALLED VERSIONS
------------------
commit : 35275e1c2d8a420a0fd16e3fca6ae5383fbdbc55
python : 3.9.21
python-bits : 64
OS : Darwin
OS-release : 24.3.0
Version : Darwin Kernel Version 24.3.0: Thu Jan 2 20:24:23 PST 2025; root:xnu-11215.81.4~3/RELEASE_ARM64_T6020
machine : arm64
processor : arm
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
Modin dependencies
------------------
modin : 0.32.0+11.g35275e1c.dirty
ray : 2.40.0
dask : 2024.8.0
distributed : 2024.8.0
pandas dependencies
-------------------
pandas : 2.2.3
numpy : 2.0.2
pytz : 2024.2
dateutil : 2.8.2
pip : 24.2
Cython : None
sphinx : 7.4.7
IPython : 8.18.1
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.12.3
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : 2024.11.0
fsspec : 2024.12.0
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : 3.1.5
lxml.etree : 5.3.0
matplotlib : 3.9.4
numba : None
numexpr : 2.10.2
odfpy : None
openpyxl : 3.1.5
pandas_gbq : 0.26.1
psycopg2 : 2.9.10
pymysql : None
pyarrow : 19.0.0
pyreadstat : None
pytest : 8.3.4
python-calamine : None
pyxlsb : None
s3fs : 2024.12.0
scipy : 1.13.1
sqlalchemy : 2.0.37
tables : N/A
tabulate : 0.9.0
xarray : 2024.7.0
xlrd : 2.0.1
xlsxwriter : None
zstandard : None
tzdata : 2024.2
qtpy : None
pyqt5 : None
```
</details> | open | 2025-02-08T01:37:22Z | 2025-02-11T20:30:04Z | https://github.com/modin-project/modin/issues/7435 | [
"Performance 🚀",
"P1"
] | sfc-gh-mvashishtha | 1 |
erdewit/ib_insync | asyncio | 426 | whatiforder is not working | how can work with whatiforder ? dosent give me corect data
if is it blocking how to get margin of trade?
thank you | closed | 2022-01-10T14:21:47Z | 2022-01-30T09:40:51Z | https://github.com/erdewit/ib_insync/issues/426 | [] | alisiroos | 0 |
dunossauro/fastapi-do-zero | sqlalchemy | 211 | FastAPI do Zero! | | Link do projeto | Seu @ no git | Comentário (opcional) |
|-------------|:-------------:|:-------------:|
|[FastAPI do Zero](https://github.com/Hiroowtf/FastAPI-Dunossauro) | [ Hiroowtf ](https://github.com/Hiroowtf) | Iniciando o Curso | | closed | 2024-07-20T21:15:04Z | 2024-07-21T19:52:42Z | https://github.com/dunossauro/fastapi-do-zero/issues/211 | [] | Hiroowtf | 1 |
thp/urlwatch | automation | 770 | Best practice for a bunch of keywords for several urls | Hello,
I need some hints, how to realize my workflow.
I want to check multiple sites (shops) for changes when searching keywords (products). In general, I would use an array of keywords and loop through it and make requests to each site/shop with placeholders in the url for the searched keyword. Something like [shop1.com?search={{product}}, shop2.com?productsearch={{product}}] and [productname1, productname2]
In my workflow-idea, I want to maintain these two objects, maybe in seperate files, and then led urlmagic do it's magic. Is it possible to do this with hooks? Is a hook able to edit the url before doing the real request? Or is a hook able to edit the list of urls that watches urlwatch? | open | 2023-10-23T11:04:46Z | 2023-10-23T13:08:37Z | https://github.com/thp/urlwatch/issues/770 | [] | Hans-Maulwurf | 1 |
nteract/papermill | jupyter | 709 | ImportError: attempted relative import with no known parent package | I'm trying to turn into a pip package a notebook which runs using Papermill. There are python file local to the package that I want to import into the notebook but it's giving me an error:
```py
# SCRIPT_DIR = os.path.dirname(os.path.abspath(__file__))
# sys.path.append(os.path.dirname(SCRIPT_DIR))
from . import utils, preprocessing as prep
ImportError: attempted relative import with no known parent package
```
This way of importing gives this error whether I try to run the notebook manually and through Papermill.
I've also tried importing it directly:
```py
import utils, preprocessing as prep
ModuleNotFoundError: No module named 'utils'
```
The direct import runs fine if I execute the notebook manually, but when I try to run them through Papermill it doesn't work.
In my package parent I have
```py
import os, sys; sys.path.append(os.path.dirname(os.path.realpath(__file__)))
from . import preprocessing
from . import utils
``` | open | 2022-12-27T03:07:32Z | 2023-01-03T16:08:53Z | https://github.com/nteract/papermill/issues/709 | [
"bug",
"help wanted"
] | u3Izx9ql7vW4 | 1 |
sqlalchemy/sqlalchemy | sqlalchemy | 12,357 | propagate_attrs not accommodated by _construct_for_list |
### Discussed in https://github.com/sqlalchemy/sqlalchemy/discussions/12356
```py
from __future__ import annotations
from sqlalchemy.orm import DeclarativeBase
from sqlalchemy.orm import Mapped
from sqlalchemy.orm import mapped_column
class Base(DeclarativeBase):
pass
class A(Base):
__tablename__ = "a"
id: Mapped[int] = mapped_column(primary_key=True)
data: Mapped[str]
expr1 = A.data
expr2 = A.data + A.data
expr3 = A.data + A.data + A.data
# also test non-ORM elements in the list to make sure the list is scanned
assert expr1._propagate_attrs.get("compile_state_plugin") == "orm"
assert expr2._propagate_attrs.get("compile_state_plugin") == "orm"
assert expr3._propagate_attrs.get("compile_state_plugin") == "orm"
```
```diff
diff --git a/lib/sqlalchemy/sql/elements.py b/lib/sqlalchemy/sql/elements.py
index 6f20d7efa..fde503aaf 100644
--- a/lib/sqlalchemy/sql/elements.py
+++ b/lib/sqlalchemy/sql/elements.py
@@ -2977,6 +2977,10 @@ class ExpressionClauseList(OperatorExpression[_T]):
self.clauses = clauses
self.operator = operator
self.type = type_
+ for c in clauses:
+ if c._propagate_attrs:
+ self._propagate_attrs = c._propagate_attrs
+ break
return self
def _negate(self) -> Any:
``` | closed | 2025-02-18T15:13:24Z | 2025-02-18T17:57:29Z | https://github.com/sqlalchemy/sqlalchemy/issues/12357 | [
"bug",
"orm",
"near-term release"
] | zzzeek | 2 |
modelscope/data-juicer | data-visualization | 564 | [Bug]: Test failed with no language_id_score_filter | ### Before Reporting 报告之前
- [x] I have pulled the latest code of main branch to run again and the bug still existed. 我已经拉取了主分支上最新的代码,重新运行之后,问题仍不能解决。
- [x] I have read the [README](https://github.com/alibaba/data-juicer/blob/main/README.md) carefully and no error occurred during the installation process. (Otherwise, we recommend that you can ask a question using the Question template) 我已经仔细阅读了 [README](https://github.com/alibaba/data-juicer/blob/main/README_ZH.md) 上的操作指引,并且在安装过程中没有错误发生。(否则,我们建议您使用Question模板向我们进行提问)
### Search before reporting 先搜索,再报告
- [x] I have searched the Data-Juicer [issues](https://github.com/alibaba/data-juicer/issues) and found no similar bugs. 我已经在 [issue列表](https://github.com/alibaba/data-juicer/issues) 中搜索但是没有发现类似的bug报告。
### OS 系统
Ubuntu
### Installation Method 安装方式
docker
### Data-Juicer Version Data-Juicer版本
latest == v1.1.0
### Python Version Python版本
3.10
### Describe the bug 描述这个bug
Hi,
For the provided test config: https://github.com/modelscope/data-juicer/blob/main/configs/demo/dedup.yaml
When https://github.com/modelscope/data-juicer/blob/main/configs/demo/dedup.yaml#L15-L17 is commented out.
That is with no language_id_score_filter.
It reported an error:
`2025-01-29 11:07:31 | INFO | data_juicer.core.executor:206 - All OPs are done in 2.867s.
2025-01-29 11:07:31 | INFO | data_juicer.core.executor:209 - Exporting dataset to disk...
2025-01-29 11:07:31 | INFO | data_juicer.core.exporter:111 - Exporting computed stats into a single file...
Creating json from Arrow format: 0%| | 0/1 [00:00<?, ?ba/s]/usr/local/lib/python3.10/dist-packages/datasets/formatting/formatting.py:101: RuntimeWarning: divide by zero encountered in remainder
return table.fast_gather(key % table.num_rows)
Creating json from Arrow format: 0%| | 0/1 [00:00<?, ?ba/s]
2025-01-29 11:07:31 | ERROR | __main__:33 - An error has been caught in function '<module>', process 'MainProcess' (1), thread 'MainThread' (124647976726528):
Traceback (most recent call last):
> File "/usr/local/bin/dj-process", line 33, in <module>
sys.exit(load_entry_point('py-data-juicer', 'console_scripts', 'dj-process')())
│ │ └ <function importlib_load_entry_point at 0x715dde50bd90>
│ └ <built-in function exit>
└ <module 'sys' (built-in)>
File "/data-juicer/tools/process_data.py", line 15, in main
executor.run()
│ └ <function Executor.run at 0x715c8dbdcf70>
└ <data_juicer.core.executor.Executor object at 0x715c8d2a2e60>
File "/data-juicer/data_juicer/core/executor.py", line 210, in run
self.exporter.export(dataset)
│ │ │ └ Dataset({
│ │ │ features: ['text', 'meta'],
│ │ │ num_rows: 10
│ │ │ })
│ │ └ <function Exporter.export at 0x715c8dbabb50>
│ └ <data_juicer.core.exporter.Exporter object at 0x715c8d4b4580>
└ <data_juicer.core.executor.Executor object at 0x715c8d2a2e60>
File "/data-juicer/data_juicer/core/exporter.py", line 203, in export
self._export_impl(dataset, self.export_path, self.suffix,
│ │ │ │ │ │ └ 'jsonl'
│ │ │ │ │ └ <data_juicer.core.exporter.Exporter object at 0x715c8d4b4580>
│ │ │ │ └ '/cache/main/output_doc/demo-dedup-processed.jsonl'
│ │ │ └ <data_juicer.core.exporter.Exporter object at 0x715c8d4b4580>
│ │ └ Dataset({
│ │ features: ['text', 'meta'],
│ │ num_rows: 10
│ │ })
│ └ <function Exporter._export_impl at 0x715c8dbabac0>
└ <data_juicer.core.exporter.Exporter object at 0x715c8d4b4580>
File "/data-juicer/data_juicer/core/exporter.py", line 119, in _export_impl
Exporter.to_jsonl(
│ └ <staticmethod(<function Exporter.to_jsonl at 0x715c8dbabc70>)>
└ <class 'data_juicer.core.exporter.Exporter'>
File "/data-juicer/data_juicer/core/exporter.py", line 229, in to_jsonl
dataset.to_json(export_path, force_ascii=False, num_proc=num_proc)
│ │ │ └ 1
│ │ └ '/cache/main/output_doc/demo-dedup-processed_stats.jsonl'
│ └ <function Dataset.to_json at 0x715ce4fb2680>
└ Dataset({
features: [],
num_rows: 10
})
File "/usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py", line 4882, in to_json
).write()
File "/usr/local/lib/python3.10/dist-packages/datasets/io/json.py", line 116, in write
written = self._write(file_obj=buffer, orient=orient, lines=lines, **self.to_json_kwargs)
│ │ │ │ │ │ └ {'force_ascii': False}
│ │ │ │ │ └ <datasets.io.json.JsonDatasetWriter object at 0x715ce7eb66b0>
│ │ │ │ └ True
│ │ │ └ 'records'
│ │ └ <fsspec.implementations.local.LocalFileOpener object at 0x715dd9b5c940>
│ └ <function JsonDatasetWriter._write at 0x715dd9b8cca0>
└ <datasets.io.json.JsonDatasetWriter object at 0x715ce7eb66b0>
File "/usr/local/lib/python3.10/dist-packages/datasets/io/json.py", line 158, in _write
json_str = self._batch_json((offset, orient, lines, to_json_kwargs))
│ │ │ │ │ └ {'force_ascii': False}
│ │ │ │ └ True
│ │ │ └ 'records'
│ │ └ 0
│ └ <function JsonDatasetWriter._batch_json at 0x715dd9b8c1f0>
└ <datasets.io.json.JsonDatasetWriter object at 0x715ce7eb66b0>
File "/usr/local/lib/python3.10/dist-packages/datasets/io/json.py", line 129, in _batch_json
batch = query_table(
└ <function query_table at 0x715ce4f7c3a0>
File "/usr/local/lib/python3.10/dist-packages/datasets/formatting/formatting.py", line 598, in query_table
pa_subtable = _query_table_with_indices_mapping(table, key, indices=indices)
│ │ │ └ MemoryMappedTable
│ │ │ indices: uint64
│ │ │ ----
│ │ │ indices: [[0,1,2,3,4,5,6,7,9,12]]
│ │ └ slice(0, 1000, None)
│ └ MemoryMappedTable
│
│ ----
└ <function _query_table_with_indices_mapping at 0x715ce4f55bd0>
File "/usr/local/lib/python3.10/dist-packages/datasets/formatting/formatting.py", line 67, in _query_table_with_indices_mapping
return _query_table(
└ <function _query_table at 0x715ce4f55c60>
File "/usr/local/lib/python3.10/dist-packages/datasets/formatting/formatting.py", line 101, in _query_table
return table.fast_gather(key % table.num_rows)
│ │ │ │ └ <property object at 0x715ce5b8a430>
│ │ │ └ MemoryMappedTable
│ │ │
│ │ │ ----
│ │ └ array([ 0, 1, 2, 3, 4, 5, 6, 7, 9, 12])
│ └ <function IndexedTableMixin.fast_gather at 0x715ce5b67490>
└ MemoryMappedTable
----
File "/usr/local/lib/python3.10/dist-packages/datasets/table.py", line 122, in fast_gather
[
File "/usr/local/lib/python3.10/dist-packages/datasets/table.py", line 123, in <listcomp>
self._batches[batch_idx].slice(i - self._offsets[batch_idx], 1)
│ │ │ │ │ │ └ 0
│ │ │ │ │ └ array([0])
│ │ │ │ └ MemoryMappedTable
│ │ │ │
│ │ │ │ ----
│ │ │ └ 0
│ │ └ 0
│ └ []
└ MemoryMappedTable
----
IndexError: list index out of range
Traceback (most recent call last):
File "/usr/local/bin/dj-process", line 33, in <module>
sys.exit(load_entry_point('py-data-juicer', 'console_scripts', 'dj-process')())
File "/usr/local/lib/python3.10/dist-packages/loguru/_logger.py", line 1297, in catch_wrapper
return function(*args, **kwargs)
File "/data-juicer/tools/process_data.py", line 15, in main
executor.run()
File "/data-juicer/data_juicer/core/executor.py", line 210, in run
self.exporter.export(dataset)
File "/data-juicer/data_juicer/core/exporter.py", line 203, in export
self._export_impl(dataset, self.export_path, self.suffix,
File "/data-juicer/data_juicer/core/exporter.py", line 119, in _export_impl
Exporter.to_jsonl(
File "/data-juicer/data_juicer/core/exporter.py", line 229, in to_jsonl
dataset.to_json(export_path, force_ascii=False, num_proc=num_proc)
File "/usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py", line 4882, in to_json
).write()
File "/usr/local/lib/python3.10/dist-packages/datasets/io/json.py", line 116, in write
written = self._write(file_obj=buffer, orient=orient, lines=lines, **self.to_json_kwargs)
File "/usr/local/lib/python3.10/dist-packages/datasets/io/json.py", line 158, in _write
json_str = self._batch_json((offset, orient, lines, to_json_kwargs))
File "/usr/local/lib/python3.10/dist-packages/datasets/io/json.py", line 129, in _batch_json
batch = query_table(
File "/usr/local/lib/python3.10/dist-packages/datasets/formatting/formatting.py", line 598, in query_table
pa_subtable = _query_table_with_indices_mapping(table, key, indices=indices)
File "/usr/local/lib/python3.10/dist-packages/datasets/formatting/formatting.py", line 67, in _query_table_with_indices_mapping
return _query_table(
File "/usr/local/lib/python3.10/dist-packages/datasets/formatting/formatting.py", line 101, in _query_table
return table.fast_gather(key % table.num_rows)
File "/usr/local/lib/python3.10/dist-packages/datasets/table.py", line 122, in fast_gather
[
File "/usr/local/lib/python3.10/dist-packages/datasets/table.py", line 123, in <listcomp>
self._batches[batch_idx].slice(i - self._offsets[batch_idx], 1)
IndexError: list index out of range
`
I checked the value of self._batches, it's empty: []
but no idea how to fix.
Please helps. Thanks.
### To Reproduce 如何复现
Just comment L15-17 https://github.com/modelscope/data-juicer/blob/main/configs/demo/dedup.yaml#L15-L17
### Configs 配置信息
_No response_
### Logs 报错日志
_No response_
### Screenshots 截图
_No response_
### Additional 额外信息
_No response_ | closed | 2025-01-29T11:21:27Z | 2025-01-29T18:13:53Z | https://github.com/modelscope/data-juicer/issues/564 | [
"bug"
] | monsieurzhang | 1 |
gradio-app/gradio | data-visualization | 10,489 | HTML object has no attribute 'fill_height' | ### Describe the bug
gradio 5.14.0
When adding a gr.HTML element to gr.TabbedInterface, an error "AttributeError: 'HTML' object has no attribute 'fill_height'. Did you mean: 'min_height'?" occurs, in previous 5.9.1 releases everything worked fine.
### Have you searched existing issues? 🔎
- [x] I have searched and found no existing issues
### Reproduction
```python
import gradio as gr
hello_world = gr.Interface(lambda name: "Hello " + name, "text", "text")
bye_world = gr.Interface(lambda name: "Bye " + name, "text", "text")
chat = gr.ChatInterface(lambda *args: "Hello " + args[0])
big_block = gr.HTML("""
<div style='height: 800px; width: 100px; background-color: pink;'></div>
""")
demo = gr.TabbedInterface([hello_world, bye_world, chat, big_block], ["Hello World", "Bye World", "Chat", "html"])
if __name__ == "__main__":
demo.launch()
```
### Screenshot
_No response_
### Logs
```shell
Traceback (most recent call last): File "<exec>", line 149, in _run_code File "<string>", line 10, in <module> File "/lib/python3.12/site-packages/gradio/interface.py", line 987, in __init__ scale=1 if interface.fill_height else 0, ^^^^^^^^^^^^^^^^^^^^^ AttributeError: 'HTML' object has no attribute 'fill_height'. Did you mean: 'min_height'?
PythonError: Traceback (most recent call last):
File "<exec>", line 149, in _run_code
File "<string>", line 10, in <module>
File "/lib/python3.12/site-packages/gradio/interface.py", line 987, in __init__
scale=1 if interface.fill_height else 0,
^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'HTML' object has no attribute 'fill_height'. Did you mean: 'min_height'?
at new_error (https://cdn.jsdelivr.net/pyodide/v0.27.2/full/pyodide.asm.js:10:10009)
at https://cdn.jsdelivr.net/pyodide/v0.27.2/full/pyodide.asm.wasm:wasm-function[307]:0x16e356
at https://cdn.jsdelivr.net/pyodide/v0.27.2/full/pyodide.asm.wasm:wasm-function[504]:0x17849f
at https://cdn.jsdelivr.net/pyodide/v0.27.2/full/pyodide.asm.wasm:wasm-function[4583]:0x328567
at https://cdn.jsdelivr.net/pyodide/v0.27.2/full/pyodide.asm.wasm:wasm-function[1148]:0x1c441a
at https://cdn.jsdelivr.net/pyodide/v0.27.2/full/pyodide.asm.wasm:wasm-function[3619]:0x2ca2cc
at https://cdn.jsdelivr.net/pyodide/v0.27.2/full/pyodide.asm.wasm:wasm-function[2181]:0x20c217
at https://cdn.jsdelivr.net/pyodide/v0.27.2/full/pyodide.asm.wasm:wasm-function[1156]:0x1c4b01
at https://cdn.jsdelivr.net/pyodide/v0.27.2/full/pyodide.asm.wasm:wasm-function[1159]:0x1c4e10
at https://cdn.jsdelivr.net/pyodide/v0.27.2/full/pyodide.asm.wasm:wasm-function[1160]:0x1c4e8e
at https://cdn.jsdelivr.net/pyodide/v0.27.2/full/pyodide.asm.wasm:wasm-function[3425]:0x2a0ca0
at https://cdn.jsdelivr.net/pyodide/v0.27.2/full/pyodide.asm.wasm:wasm-function[3426]:0x2a71fe
at https://cdn.jsdelivr.net/pyodide/v0.27.2/full/pyodide.asm.wasm:wasm-function[1162]:0x1c4fce
at https://cdn.jsdelivr.net/pyodide/v0.27.2/full/pyodide.asm.wasm:wasm-function[1157]:0x1c4c37
at https://cdn.jsdelivr.net/pyodide/v0.27.2/full/pyodide.asm.wasm:wasm-function[491]:0x177aff
at callPyObjectKwargs (https://cdn.jsdelivr.net/pyodide/v0.27.2/full/pyodide.asm.js:10:62781)
at Module.callPyObjectMaybePromising (https://cdn.jsdelivr.net/pyodide/v0.27.2/full/pyodide.asm.js:10:63963)
at wrapper (https://cdn.jsdelivr.net/pyodide/v0.27.2/full/pyodide.asm.js:10:27341)
at Zn.e.port1.onmessage (https://cdn.jsdelivr.net/pyodide/v0.27.2/full/pyodide.asm.js:10
```
### System Info
```shell
gradio 5.14.0
```
### Severity
I can work around it | closed | 2025-02-03T07:59:37Z | 2025-02-03T08:01:46Z | https://github.com/gradio-app/gradio/issues/10489 | [
"bug"
] | AlexeyPoptsov | 1 |
litestar-org/litestar | asyncio | 3,129 | Bug: CacheControlHeader not applying on create_static_files_router | ### Description
When applying a `CacheControlHeader` to the [static files router](https://docs.litestar.dev/latest/reference/static_files.html#litestar.static_files.create_static_files_router), the cache headers are not applied to requests served from this router.
`CacheControlHeader` works correctly on regular routers.
### URL to code causing the issue
NA
### MCVE
```python
from litestar import Litestar, Router, get
from litestar.response import File
from litestar.static_files import create_static_files_router
from litestar.datastructures.headers import CacheControlHeader
import uvicorn
def create_static_router() -> Router:
return create_static_files_router(
path="/static",
directories=["./static"],
name="static",
cache_control=CacheControlHeader(max_age=3600)
)
def create_direct_route_handler() -> Router:
@get("/example.txt")
async def example_txt() -> File:
return File("static/example.txt", content_disposition_type="inline", media_type="text/plain", filename="example.txt")
return Router(
path="/static",
cache_control=CacheControlHeader(max_age=3600),
route_handlers=[example_txt],
)
app = Litestar(route_handlers=[create_static_router()]) # <- this does not work as expected
#app = Litestar(route_handlers=[create_direct_route_handler()]) # <- this works as expected
if __name__ == "__main__":
uvicorn.run(
app,
)
# Expects a "static" folder in the same directory as this file and an example.txt file in the "static" folder
```
### Steps to reproduce
```bash
1. Put the MVCE beside a folder called static with example.txt inside it
2. Run the server and make a request to /static/example.txt
3. Observe that there is no cache header applied to the request
4. Change the code to the create_direct_route_handler() version and repeat
5. Observe the cache header applied as expected
```
### Screenshots
```bash
NA
```
### Logs
_No response_
### Litestar Version
2.6.1
### Platform
- [ ] Linux
- [ ] Mac
- [X] Windows
- [ ] Other (Please specify in the description above) | closed | 2024-02-23T01:36:51Z | 2025-03-20T15:54:26Z | https://github.com/litestar-org/litestar/issues/3129 | [
"Bug :bug:"
] | ANIALLATOR114 | 3 |
Avaiga/taipy | automation | 1,996 | [🐛 BUG] Better rendering on Mobile | ### What went wrong? 🤔
One of our customers needs help getting a good-looking website on mobile. The application is available here:
https://develop-datauv.taipy.cloud/
On Desktop, the website looks fine:

On Mobile, the website is impossible to use:
<img src="https://github.com/user-attachments/assets/9c695dd8-7df6-4404-879d-c8ea397f906a" width="250">
How do I modify the code to make this look good on Mobile? @FredLL-Avaiga Can you take a look please?
The code is available here:
https://github.com/Avaiga/demo_datauv
Similar to: https://github.com/Avaiga/taipy/issues/473
### Steps to Reproduce Issue
1. A code fragment
2. And/or configuration files or code
3. And/or Taipy GUI Markdown or HTML files
### Acceptance Criteria
- [ ] Ensure new code is unit tested, and check code coverage is at least 90%.
- [ ] Create related issue in taipy-doc for documentation and Release Notes.
### Code of Conduct
- [X] I have checked the [existing issues](https://github.com/Avaiga/taipy/issues?q=is%3Aissue+).
- [ ] I am willing to work on this issue (optional) | closed | 2024-10-09T16:13:46Z | 2024-10-09T16:25:32Z | https://github.com/Avaiga/taipy/issues/1996 | [
"💥Malfunction",
"🟧 Priority: High",
"🔒 Staff only"
] | AlexandreSajus | 1 |
deepspeedai/DeepSpeed | deep-learning | 7,165 | Install DeepSpeed fail with setuptools-77.0.3 | ```bash
root@csi67c88jasm4-0:/workspace/LLaMA-Factory# pip list
Package Version Editable project location
--------------------------------- ------------- ---------------------------------
accelerate 1.2.1
aiofiles 23.2.1
aiohappyeyeballs 2.6.1
aiohttp 3.11.14
aiosignal 1.3.2
airportsdata 20250224
annotated-types 0.7.0
anyio 4.9.0
astor 0.8.1
async-timeout 5.0.1
attrs 25.3.0
audioread 3.0.1
av 14.2.0
blake3 1.0.4
cachetools 5.5.2
certifi 2025.1.31
cffi 1.17.1
charset-normalizer 3.4.1
click 8.1.8
cloudpickle 3.1.1
compressed-tensors 0.9.2
contourpy 1.3.1
cupy-cuda12x 13.4.1
cycler 0.12.1
datasets 3.2.0
dbus-python 1.2.18
decorator 5.2.1
depyf 0.18.0
dill 0.3.8
diskcache 5.6.3
distro 1.7.0
dnspython 2.7.0
docstring_parser 0.16
einops 0.8.1
email_validator 2.2.0
exceptiongroup 1.2.2
fastapi 0.115.11
fastapi-cli 0.0.7
fastrlock 0.8.3
ffmpy 0.5.0
filelock 3.18.0
fire 0.7.0
fonttools 4.56.0
frozenlist 1.5.0
fsspec 2024.9.0
gguf 0.10.0
gradio 5.21.0
gradio_client 1.7.2
groovy 0.1.2
h11 0.14.0
httpcore 1.0.7
httptools 0.6.4
httpx 0.28.1
huggingface-hub 0.29.3
idna 3.10
importlib_metadata 8.6.1
interegular 0.3.3
jieba 0.42.1
Jinja2 3.1.6
jiter 0.9.0
joblib 1.4.2
jsonschema 4.23.0
jsonschema-specifications 2024.10.1
kiwisolver 1.4.8
lark 1.2.2
lazy_loader 0.4
librosa 0.11.0
llamafactory 0.9.2 /workspace/LLM_test/LLaMA-Factory
llvmlite 0.43.0
lm-format-enforcer 0.10.11
markdown-it-py 3.0.0
MarkupSafe 2.1.5
matplotlib 3.10.1
mdurl 0.1.2
mistral_common 1.5.4
mpmath 1.3.0
msgpack 1.1.0
msgspec 0.19.0
multidict 6.2.0
multiprocess 0.70.16
nest-asyncio 1.6.0
networkx 3.4.2
ninja 1.11.1.3
nltk 3.9.1
numba 0.60.0
numpy 1.26.4
nvidia-cublas-cu12 12.4.5.8
nvidia-cuda-cupti-cu12 12.4.127
nvidia-cuda-nvrtc-cu12 12.4.127
nvidia-cuda-runtime-cu12 12.4.127
nvidia-cudnn-cu12 9.1.0.70
nvidia-cufft-cu12 11.2.1.3
nvidia-curand-cu12 10.3.5.147
nvidia-cusolver-cu12 11.6.1.9
nvidia-cusparse-cu12 12.3.1.170
nvidia-cusparselt-cu12 0.6.2
nvidia-nccl-cu12 2.21.5
nvidia-nvjitlink-cu12 12.4.127
nvidia-nvtx-cu12 12.4.127
openai 1.67.0
opencv-python-headless 4.11.0.86
orjson 3.10.15
outlines 0.1.11
outlines_core 0.1.26
packaging 24.2
pandas 2.2.3
partial-json-parser 0.2.1.1.post5
peft 0.12.0
pillow 11.1.0
pip 25.0.1
platformdirs 4.3.7
pooch 1.8.2
prometheus_client 0.21.1
prometheus-fastapi-instrumentator 7.1.0
propcache 0.3.0
protobuf 6.30.1
psutil 7.0.0
py-cpuinfo 9.0.0
pyarrow 19.0.1
pycountry 24.6.1
pycparser 2.22
pydantic 2.10.6
pydantic_core 2.27.2
pydub 0.25.1
Pygments 2.19.1
PyGObject 3.42.1
pyparsing 3.2.1
python-dateutil 2.9.0.post0
python-dotenv 1.0.1
python-json-logger 3.3.0
python-multipart 0.0.20
pytz 2025.1
PyYAML 6.0.2
pyzmq 26.3.0
ray 2.44.0
referencing 0.36.2
regex 2024.11.6
requests 2.32.3
rich 13.9.4
rich-toolkit 0.13.2
rouge-chinese 1.0.3
rpds-py 0.23.1
ruff 0.11.1
safehttpx 0.1.6
safetensors 0.5.3
scikit-learn 1.6.1
scipy 1.15.2
semantic-version 2.10.0
sentencepiece 0.2.0
setuptools 77.0.3
shellingham 1.5.4
shtab 1.7.1
six 1.17.0
sniffio 1.3.1
soundfile 0.13.1
soxr 0.5.0.post1
sse-starlette 2.2.1
ssh-import-id 5.11
starlette 0.46.1
sympy 1.13.1
termcolor 2.5.0
threadpoolctl 3.6.0
tiktoken 0.9.0
tokenizers 0.21.0
tomlkit 0.13.2
torch 2.6.0
torchaudio 2.6.0
torchvision 0.21.0
tqdm 4.67.1
transformers 4.49.0
triton 3.2.0
trl 0.9.6
typer 0.15.2
typing_extensions 4.12.2
tyro 0.8.14
tzdata 2025.1
urllib3 2.3.0
uvicorn 0.34.0
uvloop 0.21.0
vllm 0.8.1
watchfiles 1.0.4
websockets 15.0.1
wheel 0.45.1
xformers 0.0.29.post2
xgrammar 0.1.16
xxhash 3.5.0
yarl 1.18.3
zipp 3.21.0
```
When I install deepspeed with `pip install deepspeed`
I got this:
```error report
TypeError: Command.__init__() got an unexpected keyword argument 'use_ninja'
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for deepspeed
Running setup.py clean for deepspeed
Failed to build deepspeed
```
Then I downgrade setuptools with `pip install setuptools==58.0.4` and succeed to install deepspeed.
I will check if it works as well in practice as it seems.
| open | 2025-03-21T09:01:13Z | 2025-03-24T18:56:19Z | https://github.com/deepspeedai/DeepSpeed/issues/7165 | [] | kuailehaha | 6 |
AirtestProject/Airtest | automation | 1,008 | 用例执行的图像识别情况与Image Editor中的不一致 | **描述问题bug**
通过IDE截图的图片 内容 在用例执行的时候始终匹配不到,confidence只有0.20+ ,但是我通过Image Editor去调试的时候,confidence能够达到0.98
```
Try finding: Template(logo.png)
resize: (594, 186)->(624, 195), resolution: (1296, 759)=>(1312, 798)
try match with MultiScaleTemplateMatchingPre
[MSTemplatePre] threshold=0.7, result={'result': (220, 551), 'rectangle': ((114.24800000000005, 518.624), (114.24800000000005, 584.624), (327.24800000000005, 584.624), (327.24800000000005, 518.624)), 'confidence': 0.2666640877723694}
find_best_result() run time is 0.74 s.
try match with TemplateMatching
[Template] threshold=0.7, result={'result': (465, 98), 'rectangle': ((153, 1), (153, 196), (777, 196), (777, 1)), 'confidence': 0.19524629414081573}
find_best_result() run time is 0.03 s.
try match with SURFMatching
find_best_result() run time is 0.15 s.
try match with BRISKMatching
find_best_result() run time is 0.21 s.
match result: None
resize: (594, 186)->(624, 195), resolution: (1296, 759)=>(1312, 798)
try match with MultiScaleTemplateMatchingPre
[MSTemplatePre] threshold=0.7, result={'result': (220, 551), 'rectangle': ((114.24800000000005, 518.624), (114.24800000000005, 584.624), (327.24800000000005, 584.624), (327.24800000000005, 518.624)), 'confidence': 0.2666640877723694}
find_best_result() run time is 0.73 s.
try match with TemplateMatching
[Template] threshold=0.7, result={'result': (465, 98), 'rectangle': ((153, 1), (153, 196), (777, 196), (777, 1)), 'confidence': 0.19524629414081573}
find_best_result() run time is 0.03 s.
try match with SURFMatching
find_best_result() run time is 0.17 s.
try match with BRISKMatching
find_best_result() run time is 0.22 s.
match result: None
resize: (594, 186)->(624, 195), resolution: (1296, 759)=>(1312, 798)
try match with MultiScaleTemplateMatchingPre
[MSTemplatePre] threshold=0.7, result={'result': (220, 551), 'rectangle': ((114.24800000000005, 518.624), (114.24800000000005, 584.624), (327.24800000000005, 584.624), (327.24800000000005, 518.624)), 'confidence': 0.2666640877723694}
find_best_result() run time is 0.74 s.
try match with TemplateMatching
[Template] threshold=0.7, result={'result': (465, 98), 'rectangle': ((153, 1), (153, 196), (777, 196), (777, 1)), 'confidence': 0.19524629414081573}
find_best_result() run time is 0.03 s.
try match with SURFMatching
find_best_result() run time is 0.15 s.
try match with BRISKMatching
find_best_result() run time is 0.21 s.
match result: None
```
**相关截图**

**预期效果**
用例执行的结果跟Image Editor里的结果一致
**python 版本:** `python3.7`
Airtest IDE 1.2.12
airtest = "==1.2.3"
**设备:**
- 系统: Windows 10
| open | 2021-12-24T07:31:28Z | 2021-12-24T07:31:28Z | https://github.com/AirtestProject/Airtest/issues/1008 | [] | yili1992 | 0 |
aio-libs/aiomysql | asyncio | 488 | [Question] Are there anyways to set read_timeout or write_timeout like pymysql? | Regarding to this: https://pymysql.readthedocs.io/en/latest/modules/connections.html
When creating `pymysql` connection, we can set `read_timeout` or `write_timneout`. But I can't find anyway to specific the same arguments to `aiomysql` connection | open | 2020-05-09T03:36:36Z | 2022-01-13T00:35:54Z | https://github.com/aio-libs/aiomysql/issues/488 | [
"enhancement"
] | WindSoilder | 1 |
falconry/falcon | api | 2,352 | Support app `state` from ASGI Lifespan Protocol | The [ASGI Lifespan Protocol](https://asgi.readthedocs.io/en/latest/specs/lifespan.html) offers [Lifespan State](https://asgi.readthedocs.io/en/latest/specs/lifespan.html#lifespan-state) (if the app server supports it):
> Applications often want to persist data from the lifespan cycle to request/response handling. For example, a database connection can be established in the lifespan cycle and persisted to the request/response cycle. The `scope["state"]` namespace provides a place to store these sorts of things. The server will ensure that a _shallow copy_ of the namespace is passed into each subsequent request/response call into the application. Since the server manages the application lifespan and often the event loop as well this ensures that the application is always accessing the database connection (or other stored object) that corresponds to the right event loop and lifecycle, without using context variables, global mutable state or having to worry about references to stale/closed connections.
In Falcon apps, another alternative is simply storing data inside resource or middleware attributes, but it might be still nice to make use of this `state`, e.g., to interact with generic middleware etc. | open | 2024-10-03T16:03:07Z | 2024-10-04T05:50:34Z | https://github.com/falconry/falcon/issues/2352 | [
"enhancement"
] | vytas7 | 0 |
opengeos/leafmap | jupyter | 34 | get the coordinates of a user-drawn polygon | First of all, congratulations Dr. Qiusheng for your fantastic work !
I became a 100% geemap user and one of the very useful functions when working with feature exploration (1) is to manually draw a polygon in an area of interest, (2) extract the coordinates with Map.draw_last_feature and then (3) export this polygon as a shapefile or even a featureCollection in the case of ee.
I looked for something similar in the leafmap but I couldn't find it, although it is possible to easily draw the polygon and see its individual points, but it seems to me that there is a lack of a similar routine to manipulate the coordinates extracted from the map.
That would be of great value.
Best wishes
Andre
| closed | 2021-06-13T14:02:04Z | 2021-06-13T18:28:39Z | https://github.com/opengeos/leafmap/issues/34 | [
"Feature Request"
] | andrebelem | 8 |
ets-labs/python-dependency-injector | flask | 194 | How to inject something on a function argument? | First of all, great work!
I'm coming from the PHP world where I'm used to having method level injections so if something only used in 1-2 methods then it doesn't necessarily need to get injected on the class level. I trying to figure out how could I do this but I'm getting lost in the documentation.
I wish to do something like this:
```python
from dependency_injector.containers import DeclarativeContainer
from dependency_injector.providers import Factory
from django.http import HttpRequest, HttpResponse
from django.views.generic import View
class MyContainer(DeclarativeContainer):
response = Factory(HttpResponse)
class MyView(MyContainer, View):
def get(self, request: HttpRequest, response: HttpResponse)
return response.write('Hello, World!')
```
So something like upon calling a method the unprovided arguments get instances from the factories based on their name.
Does something similar exists in this framework? | closed | 2018-04-19T21:59:20Z | 2018-04-24T17:56:34Z | https://github.com/ets-labs/python-dependency-injector/issues/194 | [
"question"
] | adaliszk | 5 |
huggingface/transformers | python | 36,157 | Add functionality to save model when training unexpectedly terminates | ### Feature request
I'm thinking of implementing it like this:
```python
try:
trainer.train(resume_from_checkpoint=args.resume_from_checkpoint)
finally:
trainer._save_checkpoint(trainer.model, None)
```
I want to utilize the characteristics of 'finally' to ensure that the model is saved at least once at the end,
even if the training terminates unexpectedly.
### Motivation
Sometimes we need to terminate training unintentionally due to scheduling or various other issues.
If the model checkpoint hasn't been saved even after training has progressed to some extent,
all the training resources used until now are wasted.
### Your contribution
Therefore, I want to add functionality to save the model checkpoint unconditionally
even if the process is terminated by an error or kill signal unintentionally.
And I want to control this through train_args. | closed | 2025-02-13T07:37:29Z | 2025-02-14T11:30:52Z | https://github.com/huggingface/transformers/issues/36157 | [
"Feature request"
] | jp1924 | 3 |
simple-login/app | flask | 1,767 | PGP email backup | Why are emails encrypted only when they're forwarded, and not before they're stored on server in the 7 day backup? | open | 2023-06-05T10:54:39Z | 2023-06-05T11:40:39Z | https://github.com/simple-login/app/issues/1767 | [] | eligibleshield | 0 |
PokeAPI/pokeapi | api | 1,038 | Dual normal-types displaying normal-type as ability | I've been working on a simple JS-based Pokedex, and I'll just give a quick insight into how it works:
I fetch the data of an individual Pokemon using a variable set to 1 as an id endpoint, and display the data in the HTML after.
When the left / right arrow keys are pressed, the variable increments / decrements and the function runs again, the next / previous Pokemon's data is fetched and displayed in the markup.
This probably isn't the most elegant method but it works well enough, except that for any dual normal types (i.e. Pidgey, which is normal/flying), the type's name will be set to one of it's abilities. This is consistent across all of them. | closed | 2024-02-08T15:39:13Z | 2024-02-08T16:11:24Z | https://github.com/PokeAPI/pokeapi/issues/1038 | [] | Wither19 | 1 |
ageitgey/face_recognition | machine-learning | 600 | Optionally include confidence, when running face_locations | * face_recognition version: 1.2.3
* Python version: 3.7.0
* Operating System: OSX
### Description
`dlib` allows you to see the [confidence of a detected face](https://stackoverflow.com/questions/44648689/getting-probability-of-detected-face-in-dlib). However, this functionality is not exposed in `face_recognition`. Could there be an option to `face_locations`, or a different method, that includes the detection confidence?
### What I Did
Read the docs, read the code. | open | 2018-08-24T10:11:02Z | 2018-08-24T10:11:02Z | https://github.com/ageitgey/face_recognition/issues/600 | [] | turian | 0 |
graphdeco-inria/gaussian-splatting | computer-vision | 699 | On the issue that the mask constraint doesn't work for 3D Gaussian | Hello, I made a mask, the white part of the mask is the object, the black part is the background, I added the constraint of this mask to the loss, I hope the Gaussian ellipsoid does not rebuild the background part, but the reality is that the Gaussian ellipsoid will still rebuild the background part.
Do you know how to fix this? Or is it that the Gaussian ellipsoid itself can't fit into the constraints of the mask, I think it's because the Gaussian ellipsoid is too large for pixels to precisely control the pixel-level area. | open | 2024-03-08T08:25:29Z | 2024-05-08T11:55:03Z | https://github.com/graphdeco-inria/gaussian-splatting/issues/699 | [] | 520jz | 2 |
iperov/DeepFaceLab | machine-learning | 5,521 | Infinite loop when trying to use data_src/dst faceset pack util | ## Expected behavior
Lauching the script and getting a .pak file
## Actual behavior
After "performing faceset packing" message nothing happens, it can't even reach progress bar
## Steps to reproduce
laucnh any of faceset pack scripts
## Where exactly
The loop is at core.joblib.SubprocessorBase in Subprocessor.Cli.run() method
``` Python
#waiting subprocesses their success(or not) initialization
while True:
for cli in self.clis[:]:
cli_init_dispatcher(cli)
if all ([cli.state == 0 for cli in self.clis]):
break
io.process_messages(0.005)
if len(self.clis) == 0:
raise Exception ( "Unable to start subprocesses." )
```
I know nothing about multiprocessing and have no idea what this loop does but I tried putting cli_init_dispatcher() fuction outside run() method because I saw it sometimes helps on Windows but it didnt help as well
I also printed cli.state for every cli and they remain [1, 1, 0, 1, 1, 1, 1, 1] every step of the loop in my case
I also tried breaking loop after several steps by manually setting states to 0 and it reaches progress bar but it remains at zero and nothing happens
## Other relevant information
- **Operating system and version:** Windows 10 Pro, AMD Ryzen 7 3750H, Geforce GTX 1650 Ti
- DeepFaceLab_NVIDIA_up_to_RTX2080Ti | closed | 2022-05-20T09:22:09Z | 2022-05-26T08:55:30Z | https://github.com/iperov/DeepFaceLab/issues/5521 | [] | dispasha | 1 |
man-group/arctic | pandas | 965 | Library was not correctly initialized in Arctic | Hi guys,
I have been using the free mongodb version for over a year together with arctic and everything was working fine.
I recently upgraded my mongodb from a shared to a dedicated service.
From my personal machine and a server everything is running f.
I have a second server and I get the following error message
arctic.exceptions.LibraryNotFoundException: Library X was not correctly initialised in <Arctic at xxxxx...
.. the error message goes on talking about SSL: CERTIFICATE_VERIFY_FAILED
I have the server IP on the white list. The weird thing is the code is working from my laptop and the other server but not on the second server i.e. I can rule out something is wrong with the code or the databases in mondodb.
Any idea what I could do?
Many thanks
Tom | open | 2022-07-04T09:03:38Z | 2022-07-05T06:41:13Z | https://github.com/man-group/arctic/issues/965 | [] | tomnewg | 3 |
microsoft/nni | pytorch | 5,162 | Dispatcher stream error, tuner may have crashed | **Describe the issue**:
**Environment**:
- NNI version: 2.9
- Training service (local|remote|pai|aml|etc): Local
- Client OS: Windows 10
- Server OS (for remote mode only):
- Python version: 3.7.13 (also tried with 3.9.12)
- PyTorch/TensorFlow version: torch 1.12.1+cu116
- Is conda/virtualenv/venv used?: Yes
- Is running in Docker?: No
**Configuration**:
- Experiment config (remember to remove secrets!):
```json
{
"params": {
"experimentType": "hpo",
"trialCommand": "python model.py",
"trialCodeDirectory": "C:\\Users\\**removed**",
"trialConcurrency": 2,
"maxTrialNumber": 50,
"useAnnotation": false,
"debug": false,
"logLevel": "info",
"experimentWorkingDirectory": "C:\\Users\\sashah8\\nni-experiments",
"tuner": {
"name": "TPE",
"classArgs": {
"optimize_mode": "maximize"
}
},
"trainingService": {
"platform": "local",
"trialCommand": "python model.py",
"trialCodeDirectory": "C:\\Users\\**removed*",
"debug": false,
"maxTrialNumberPerGpu": 1,
"reuseMode": false
}
},
"execDuration": "9s",
"nextSequenceId": 2,
"revision": 4
}
```
- Search space:
```python
{
'features': {'_type': 'choice', '_value': [128, 256, 512, 1024]},
'lr': {'_type': 'loguniform', '_value': [0.0001, 0.1]},
'momentum': {'_type': 'uniform', '_value': [0, 1]},
}
```
**Log message**:
- command prompt output:
```
[2022-10-16 11:19:45] Creating experiment, Experiment ID: dhe2x64y
[2022-10-16 11:19:45] Starting web server...
[2022-10-16 11:19:45] Setting up...
[2022-10-16 11:19:45] Web portal URLs: http://10.76.56.148:8057 http://169.254.31.88:8057 http://169.254.194.255:8057 http://169.254.35.0:8057 http://127.0.0.1:8057
Error: Dispatcher stream error, tuner may have crashed.
at EventEmitter.<anonymous> (C:\Users\sashah8\Anaconda3\envs\nni_HPO\lib\site-packages\nni_node\core\nnimanager.js:647:32)
at EventEmitter.emit (node:events:526:28)
at WebSocketChannelImpl.handleError (C:\Users\sashah8\Anaconda3\envs\nni_HPO\lib\site-packages\nni_node\core\tuner_command_channel\websocket_channel.js:107:22)
at WebSocketChannelImpl.heartbeat (C:\Users\sashah8\Anaconda3\envs\nni_HPO\lib\site-packages\nni_node\core\tuner_command_channel\websocket_channel.js:91:18)
at listOnTimeout (node:internal/timers:559:17)
at processTimers (node:internal/timers:502:7)
[2022-10-16 11:20:32] Stopping experiment, please wait...
[2022-10-16 11:20:32] Experiment stopped
(nni_HPO) PS C:\Users\sashah8\CS - 791\HW3 - Hyperparameter Optimization> python main.py
[2022-10-16 11:20:56] Creating experiment, Experiment ID: xo1l8hi6
[2022-10-16 11:20:56] Starting web server...
[2022-10-16 11:20:57] Setting up...
[2022-10-16 11:20:59] Web portal URLs: http://10.76.56.148:8057 http://169.254.31.88:8057 http://169.254.194.255:8057 http://169.254.35.0:8057 http://127.0.0.1:8057
Error: Dispatcher stream error, tuner may have crashed.
at EventEmitter.<anonymous> (C:\Users\sashah8\Anaconda3\envs\nni_HPO\lib\site-packages\nni_node\core\nnimanager.js:647:32)
at EventEmitter.emit (node:events:526:28)
at WebSocketChannelImpl.handleError (C:\Users\sashah8\Anaconda3\envs\nni_HPO\lib\site-packages\nni_node\core\tuner_command_channel\websocket_channel.js:107:22)
at WebSocketChannelImpl.heartbeat (C:\Users\sashah8\Anaconda3\envs\nni_HPO\lib\site-packages\nni_node\core\tuner_command_channel\websocket_channel.js:91:18)
at listOnTimeout (node:internal/timers:559:17)
at processTimers (node:internal/timers:502:7)
AssertionError [ERR_ASSERTION]: Actual status: ERROR
at NNIManager.manageTrials (C:\Users\sashah8\Anaconda3\envs\nni_HPO\lib\site-packages\nni_node\core\nnimanager.js:561:29)
at async Promise.all (index 3)
at async NNIManager.run (C:\Users\sashah8\Anaconda3\envs\nni_HPO\lib\site-packages\nni_node\core\nnimanager.js:621:9) {
cause: AssertionError [ERR_ASSERTION]: Actual status: ERROR
at NNIManager.manageTrials (C:\Users\sashah8\Anaconda3\envs\nni_HPO\lib\site-packages\nni_node\core\nnimanager.js:561:29)
at async Promise.all (index 3)
at async NNIManager.run (C:\Users\sashah8\Anaconda3\envs\nni_HPO\lib\site-packages\nni_node\core\nnimanager.js:621:9) {
generatedMessage: false,
code: 'ERR_ASSERTION',
actual: false,
expected: true,
operator: '=='
}
}
[2022-10-16 11:25:29] Stopping experiment, please wait...
[2022-10-16 11:25:29] Experiment stopped
```
- nnimanager.log:
```
[2022-10-16 11:20:57] INFO (main) Start NNI manager
[2022-10-16 11:20:57] INFO (NNIDataStore) Datastore initialization done
[2022-10-16 11:20:57] INFO (RestServer) Starting REST server at port 8057, URL prefix: "/"
[2022-10-16 11:20:57] WARNING (NNITensorboardManager) Tensorboard may not installed, if you want to use tensorboard, please check if tensorboard installed.
[2022-10-16 11:20:57] INFO (RestServer) REST server started.
[2022-10-16 11:20:57] INFO (NNIManager) Starting experiment: xo1l8hi6
[2022-10-16 11:20:57] INFO (NNIManager) Setup training service...
[2022-10-16 11:20:57] INFO (LocalTrainingService) Construct local machine training service.
[2022-10-16 11:20:57] INFO (NNIManager) Setup tuner...
[2022-10-16 11:20:59] INFO (NNIManager) Change NNIManager status from: INITIALIZED to: RUNNING
[2022-10-16 11:21:00] INFO (NNIManager) Add event listeners
[2022-10-16 11:21:00] INFO (LocalTrainingService) Run local machine training service.
[2022-10-16 11:21:00] INFO (NNIManager) NNIManager received command from dispatcher: ID,
[2022-10-16 11:21:00] INFO (NNIManager) NNIManager received command from dispatcher: TR, {"parameter_id": 0, "parameter_source": "algorithm", "parameters": {"features": 512, "lr": 0.00011599497432385631, "momentum": 0.5394097683782016}, "parameter_index": 0}
[2022-10-16 11:21:00] INFO (NNIManager) NNIManager received command from dispatcher: TR, {"parameter_id": 1, "parameter_source": "algorithm", "parameters": {"features": 128, "lr": 0.010676342217290587, "momentum": 0.5847482630995584}, "parameter_index": 0}
[2022-10-16 11:21:05] INFO (NNIManager) submitTrialJob: form: {
sequenceId: 0,
hyperParameters: {
value: '{"parameter_id": 0, "parameter_source": "algorithm", "parameters": {"features": 512, "lr": 0.00011599497432385631, "momentum": 0.5394097683782016}, "parameter_index": 0}',
index: 0
},
placementConstraint: { type: 'None', gpus: [] }
}
[2022-10-16 11:21:05] INFO (NNIManager) submitTrialJob: form: {
sequenceId: 1,
hyperParameters: {
value: '{"parameter_id": 1, "parameter_source": "algorithm", "parameters": {"features": 128, "lr": 0.010676342217290587, "momentum": 0.5847482630995584}, "parameter_index": 0}',
index: 0
},
placementConstraint: { type: 'None', gpus: [] }
}
[2022-10-16 11:21:24] INFO (NNIManager) Trial job CDKdy status changed from WAITING to RUNNING
[2022-10-16 11:21:37] INFO (NNIManager) Trial job Y3Rh9 status changed from WAITING to RUNNING
[2022-10-16 11:22:22] INFO (NNIManager) Trial job CDKdy status changed from RUNNING to SUCCEEDED
[2022-10-16 11:22:25] INFO (NNIManager) Trial job Y3Rh9 status changed from RUNNING to SUCCEEDED
[2022-10-16 11:22:25] INFO (NNIManager) NNIManager received command from dispatcher: TR, {"parameter_id": 2, "parameter_source": "algorithm", "parameters": {"features": 128, "lr": 0.01999772623507946, "momentum": 0.4162149739537774}, "parameter_index": 0}
[2022-10-16 11:22:25] INFO (NNIManager) NNIManager received command from dispatcher: TR, {"parameter_id": 3, "parameter_source": "algorithm", "parameters": {"features": 128, "lr": 0.0007597819239704861, "momentum": 0.6596009006777764}, "parameter_index": 0}
[2022-10-16 11:22:30] INFO (NNIManager) submitTrialJob: form: {
sequenceId: 2,
hyperParameters: {
value: '{"parameter_id": 2, "parameter_source": "algorithm", "parameters": {"features": 128, "lr": 0.01999772623507946, "momentum": 0.4162149739537774}, "parameter_index": 0}',
index: 0
},
placementConstraint: { type: 'None', gpus: [] }
}
[2022-10-16 11:22:30] INFO (NNIManager) submitTrialJob: form: {
sequenceId: 3,
hyperParameters: {
value: '{"parameter_id": 3, "parameter_source": "algorithm", "parameters": {"features": 128, "lr": 0.0007597819239704861, "momentum": 0.6596009006777764}, "parameter_index": 0}',
index: 0
},
placementConstraint: { type: 'None', gpus: [] }
}
[2022-10-16 11:22:42] INFO (NNIManager) Trial job WTp3h status changed from WAITING to RUNNING
[2022-10-16 11:22:52] INFO (NNIManager) Trial job G7Gqp status changed from WAITING to RUNNING
[2022-10-16 11:23:36] INFO (NNIManager) Trial job WTp3h status changed from RUNNING to SUCCEEDED
[2022-10-16 11:23:39] INFO (NNIManager) NNIManager received command from dispatcher: TR, {"parameter_id": 4, "parameter_source": "algorithm", "parameters": {"features": 256, "lr": 0.0007749163910847291, "momentum": 0.426688977511853}, "parameter_index": 0}
[2022-10-16 11:23:46] INFO (NNIManager) Trial job G7Gqp status changed from RUNNING to SUCCEEDED
[2022-10-16 11:23:46] INFO (NNIManager) submitTrialJob: form: {
sequenceId: 4,
hyperParameters: {
value: '{"parameter_id": 4, "parameter_source": "algorithm", "parameters": {"features": 256, "lr": 0.0007749163910847291, "momentum": 0.426688977511853}, "parameter_index": 0}',
index: 0
},
placementConstraint: { type: 'None', gpus: [] }
}
[2022-10-16 11:23:46] INFO (NNIManager) NNIManager received command from dispatcher: TR, {"parameter_id": 5, "parameter_source": "algorithm", "parameters": {"features": 512, "lr": 0.0005541282701768431, "momentum": 0.7236941232867435}, "parameter_index": 0}
[2022-10-16 11:23:46] INFO (NNIManager) submitTrialJob: form: {
sequenceId: 5,
hyperParameters: {
value: '{"parameter_id": 5, "parameter_source": "algorithm", "parameters": {"features": 512, "lr": 0.0005541282701768431, "momentum": 0.7236941232867435}, "parameter_index": 0}',
index: 0
},
placementConstraint: { type: 'None', gpus: [] }
}
[2022-10-16 11:23:58] INFO (NNIManager) Trial job AnbTs status changed from WAITING to RUNNING
[2022-10-16 11:24:09] INFO (NNIManager) Trial job G21j9 status changed from WAITING to RUNNING
[2022-10-16 11:24:54] INFO (NNIManager) Trial job AnbTs status changed from RUNNING to SUCCEEDED
[2022-10-16 11:24:56] INFO (NNIManager) NNIManager received command from dispatcher: TR, {"parameter_id": 6, "parameter_source": "algorithm", "parameters": {"features": 256, "lr": 0.005231829898637109, "momentum": 0.3158966535554839}, "parameter_index": 0}
[2022-10-16 11:25:03] INFO (NNIManager) Trial job G21j9 status changed from RUNNING to SUCCEEDED
[2022-10-16 11:25:03] INFO (NNIManager) submitTrialJob: form: {
sequenceId: 6,
hyperParameters: {
value: '{"parameter_id": 6, "parameter_source": "algorithm", "parameters": {"features": 256, "lr": 0.005231829898637109, "momentum": 0.3158966535554839}, "parameter_index": 0}',
index: 0
},
placementConstraint: { type: 'None', gpus: [] }
}
[2022-10-16 11:25:03] INFO (NNIManager) NNIManager received command from dispatcher: TR, {"parameter_id": 7, "parameter_source": "algorithm", "parameters": {"features": 512, "lr": 0.015826850876395914, "momentum": 0.032269555491401314}, "parameter_index": 0}
[2022-10-16 11:25:03] INFO (NNIManager) submitTrialJob: form: {
sequenceId: 7,
hyperParameters: {
value: '{"parameter_id": 7, "parameter_source": "algorithm", "parameters": {"features": 512, "lr": 0.015826850876395914, "momentum": 0.032269555491401314}, "parameter_index": 0}',
index: 0
},
placementConstraint: { type: 'None', gpus: [] }
}
[2022-10-16 11:25:16] INFO (NNIManager) Trial job Idgwf status changed from WAITING to RUNNING
```
- dispatcher.log:
```
[2022-10-16 11:21:00] INFO (nni.tuner.tpe/MainThread) Using random seed 1981398779
[2022-10-16 11:21:00] INFO (nni.runtime.msg_dispatcher_base/MainThread) Dispatcher started
```
- nnictl stdout and stderr:
<!--
Where can you find the log files:
LOG: https://github.com/microsoft/nni/blob/master/docs/en_US/Tutorial/HowToDebug.md#experiment-root-director
STDOUT/STDERR: https://nni.readthedocs.io/en/stable/reference/nnictl.html#nnictl-log-stdout
-->
**How to reproduce it?**:
~~~
# Importing required libraries
from nni.experiment import Experiment
# Defining the hyperparameter search space
search_space = {
'features': {'_type': 'choice', '_value': [128, 256, 512, 1024]},
'lr': {'_type': 'loguniform', '_value': [0.0001, 0.1]},
'momentum': {'_type': 'uniform', '_value': [0, 1]},
}
# Setting up the NNI experiment on local machine
experiment = Experiment('local')
# Conducting NNI evaluation in trail mode
experiment.config.trial_command = 'python model.py'
# experiment.config.trial_code_directory = '.'
# Configuring the search space
experiment.config.search_space = search_space
# Configuring the tuning algorithm
experiment.config.tuner.name = 'TPE'
experiment.config.tuner.class_args['optimize_mode'] = 'maximize'
# Setting up number of trials to run -> Sets of hyperparameters and trial concurrency
experiment.config.max_trial_number = 50 # Change to a higher number
experiment.config.trial_concurrency = 2
# Running the experiment on portal
experiment.run(8057)
# Stopping the experiment
experiment.stop()
# For still viewing the experiment
# experiment.view()
~~~
Run this with model.py file from https://github.com/microsoft/nni/tree/dab51f799f77aa72c18774faffaedf8d0ee2c977/examples/tutorials/hpo_quickstart_pytorch | open | 2022-10-16T15:41:10Z | 2022-11-09T02:16:06Z | https://github.com/microsoft/nni/issues/5162 | [] | imsaumil | 7 |
itamarst/eliot | numpy | 15 | Turn eliot.filter into a binary, and document its existence | open | 2014-04-15T13:41:40Z | 2018-09-22T20:59:11Z | https://github.com/itamarst/eliot/issues/15 | [] | itamarst | 2 | |
davidsandberg/facenet | computer-vision | 619 | How to train nn4 model? | According to the source code. I found nn4.py in tmp directory. I would like to know how to train nn4 model instead of inception_resnet_v1? | closed | 2018-01-18T08:11:32Z | 2018-01-22T06:34:37Z | https://github.com/davidsandberg/facenet/issues/619 | [] | saypadith | 0 |
deezer/spleeter | tensorflow | 741 | Windows 10 - I can't run the tests with 'poetry run pytest tests/' | <!-- Please respect the title [Discussion] tag. -->
Hello all,
I've tried everything I can - different python versions, fresh environments, etc. but I can't get the tests to run from the command line. Can anyone give me some suggestions, based upon this output? I would super appreciate it.
(py3812) C:\Python\spleeter>poetry run pytest tests/
===================================================================== test session starts =====================================================================
platform win32 -- Python 3.8.12, pytest-6.2.5, py-1.11.0, pluggy-1.0.0 -- C:\Anaconda\envs\py3812\python.exe
cachedir: .pytest_cache
rootdir: C:\Python\spleeter, configfile: pyproject.toml
plugins: anyio-3.5.0, forked-1.4.0
collected 53 items
tests/test_command.py::test_version
INTERNALERROR> Traceback (most recent call last):
INTERNALERROR> File "C:\Anaconda\envs\py3812\lib\site-packages\_pytest\main.py", line 269, in wrap_session
INTERNALERROR> session.exitstatus = doit(config, session) or 0
INTERNALERROR> File "C:\Anaconda\envs\py3812\lib\site-packages\_pytest\main.py", line 323, in _main
INTERNALERROR> config.hook.pytest_runtestloop(session=session)
INTERNALERROR> File "C:\Anaconda\envs\py3812\lib\site-packages\pluggy\_hooks.py", line 265, in __call__
INTERNALERROR> return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult)
INTERNALERROR> File "C:\Anaconda\envs\py3812\lib\site-packages\pluggy\_manager.py", line 80, in _hookexec
INTERNALERROR> return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
INTERNALERROR> File "C:\Anaconda\envs\py3812\lib\site-packages\pluggy\_callers.py", line 60, in _multicall
INTERNALERROR> return outcome.get_result()
INTERNALERROR> File "C:\Anaconda\envs\py3812\lib\site-packages\pluggy\_result.py", line 60, in get_result
INTERNALERROR> raise ex[1].with_traceback(ex[2])
INTERNALERROR> File "C:\Anaconda\envs\py3812\lib\site-packages\pluggy\_callers.py", line 39, in _multicall
INTERNALERROR> res = hook_impl.function(*args)
INTERNALERROR> File "C:\Anaconda\envs\py3812\lib\site-packages\_pytest\main.py", line 348, in pytest_runtestloop
INTERNALERROR> item.config.hook.pytest_runtest_protocol(item=item, nextitem=nextitem)
INTERNALERROR> File "C:\Anaconda\envs\py3812\lib\site-packages\pluggy\_hooks.py", line 265, in __call__
INTERNALERROR> return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult)
INTERNALERROR> File "C:\Anaconda\envs\py3812\lib\site-packages\pluggy\_manager.py", line 80, in _hookexec
INTERNALERROR> return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
INTERNALERROR> File "C:\Anaconda\envs\py3812\lib\site-packages\pluggy\_callers.py", line 60, in _multicall
INTERNALERROR> return outcome.get_result()
INTERNALERROR> File "C:\Anaconda\envs\py3812\lib\site-packages\pluggy\_result.py", line 60, in get_result
INTERNALERROR> raise ex[1].with_traceback(ex[2])
INTERNALERROR> File "C:\Anaconda\envs\py3812\lib\site-packages\pluggy\_callers.py", line 39, in _multicall
INTERNALERROR> res = hook_impl.function(*args)
INTERNALERROR> File "C:\Anaconda\envs\py3812\lib\site-packages\pytest_forked\__init__.py", line 51, in pytest_runtest_protocol
INTERNALERROR> reports = forked_run_report(item)
INTERNALERROR> File "C:\Anaconda\envs\py3812\lib\site-packages\pytest_forked\__init__.py", line 73, in forked_run_report
INTERNALERROR> ff = py.process.ForkedFunc(runforked)
INTERNALERROR> File "C:\Anaconda\envs\py3812\lib\site-packages\py\_process\forkedfunc.py", line 45, in __init__
INTERNALERROR> pid = os.fork()
INTERNALERROR> AttributeError: module 'os' has no attribute 'fork'
=================================================================== no tests ran in 10.05s ====================================================================
Exception ignored in: <function ForkedFunc.__del__ at 0x000002123589CA60>
Traceback (most recent call last):
File "C:\Anaconda\envs\py3812\lib\site-packages\py\_process\forkedfunc.py", line 110, in __del__
if self.pid is not None: # only clean up in main process
AttributeError: 'ForkedFunc' object has no attribute 'pid'
(py3812) C:\Python\spleeter>
Any thoughts? Thanks! | open | 2022-03-18T07:02:50Z | 2022-03-18T10:59:16Z | https://github.com/deezer/spleeter/issues/741 | [
"question"
] | robertmckean | 1 |
microsoft/nlp-recipes | nlp | 257 | [BUG] Failure with nltk download sometimes due to utils_nlp/dataset/__init__.py | This file - https://github.com/microsoft/nlp/blob/master/utils_nlp/dataset/__init__.py causes failure on Azure cluster with `File exists error` when a previous run on an nc24 gpu cluster is cancelled and a new run is started I usually get this error. I'm wondering if we should handle this since a user might likely do this if he/she wants to change configs in the notebook immediately rather than waiting for entire run to complete.
### How do we replicate the bug?
Run the entailment_xnli_bert_azureml notebook on nc24 with 2 nodes.
**Expected behavior** :
The experiment failed. Finalizing run...
Traceback (most recent call last):
File "train.py", line 7, in <module>
from utils_nlp.dataset.xnli_torch_dataset import XnliDataset
File "/mnt/batch/tasks/shared/LS_root/jobs/maidaipbert-eastus/azureml/nlp-entailment-bert_1565318473_b939a491/mounts/workspaceblobstore/azureml/NLP-Entailment-BERT_1565318473_b939a491/utils_nlp/dataset/__init__.py", line 8, in <module>
nltk.download("stopwords", quiet=True)
File "/azureml-envs/azureml_462b384392302935cd8ea420bcf8170e/lib/python3.6/site-packages/nltk/downloader.py", line 670, in download
for msg in self.incr_download(info_or_id, download_dir, force):
File "/azureml-envs/azureml_462b384392302935cd8ea420bcf8170e/lib/python3.6/site-packages/nltk/downloader.py", line 555, in incr_download
for msg in self._download_package(info, download_dir, force):
File "/azureml-envs/azureml_462b384392302935cd8ea420bcf8170e/lib/python3.6/site-packages/nltk/downloader.py", line 612, in _download_package
os.mkdir(os.path.join(download_dir, info.subdir))
FileExistsError: [Errno 17] File exists: '/root/nltk_data/corpora'
<!--- For example: -->
<!--- * Create a conda environment for gpu -->
<!--- * Run unit test `test_timer.py` -->
<!--- * ... -->
### Expected behavior (i.e. solution)
<!--- For example: -->
<!--- * The tests for the timer should pass successfully. -->
### Other Comments
| closed | 2019-08-09T03:14:59Z | 2019-11-26T18:37:36Z | https://github.com/microsoft/nlp-recipes/issues/257 | [
"bug",
"release-blocker"
] | janhavi13 | 3 |
huggingface/pytorch-image-models | pytorch | 2,080 | model.config.id2label = {id: label for label, id in config.label2id.items()} | model.config.id2label = {id: label for label, id in config.label2id.items()} | closed | 2024-01-20T12:36:45Z | 2024-01-20T16:51:58Z | https://github.com/huggingface/pytorch-image-models/issues/2080 | [] | ghost | 0 |
gyli/PyWaffle | matplotlib | 31 | Add Font Awesome through Python package | Font Awesome v6 now provides Python package on Pypi https://pypi.org/project/fontawesomefree/#description
Add Font Awesome through Python package, so no need to add a copy of otf files in PyWaffle.
This solution might not reduce the content that need to be downloaded when installing, and is not able to use system's font first (https://github.com/gyli/PyWaffle/issues/25), while it does avoid some unnecessary downloads when there are other Python packages also using Font Awesome. | closed | 2022-06-05T20:56:20Z | 2022-06-07T17:56:57Z | https://github.com/gyli/PyWaffle/issues/31 | [] | gyli | 0 |
sgl-project/sglang | pytorch | 4,273 | [Help] where is the dockerfile for rocm/sgl-dev:20250305 | Want to compile for Amd 6700 xt
Or rocm/sgl-dev:vllm20250114 in https://github.com/sgl-project/sglang/blob/main/docker/Dockerfile.rocm
| open | 2025-03-10T18:35:35Z | 2025-03-18T17:26:07Z | https://github.com/sgl-project/sglang/issues/4273 | [] | inevity | 2 |
MagicStack/asyncpg | asyncio | 937 | Can't connect to default unix socket (tries to connect to localhost) | Following the documentation:
> The default behavior when host is not specified, or is empty, is to connect to a Unix-domain socket in /tmp (or whatever socket directory was specified when PostgreSQL was built).
https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-CONNECT-HOST
But, if I run:
`asyncio.run(asyncpg.connect("postgresql://rhymewave_com"))`
Then I get:
`OSError: Multiple exceptions: [Errno 111] Connect call failed ('::1', 5432, 0, 0), [Errno 111] Connect call failed ('127.0.0.1', 5432)`
Expected behaviour is for it to connect to unix socket, not localhost.
---
This problem becomes worse, as I can't even figure out an explicit host that works. If I use `postgresql:///rhymewave_com?host=/var/run/postgresql`, then I get:
`FileNotFoundError: [Errno 2] No such file or directory`
Even when I use `psql` it doesn't work with an explicit host:
```
> psql rhymewave_com
# Works fine
> psql rhymewave_com -h /var/run/postgresql
> psql: error: could not connect to server: No such file or directory
> Is the server running locally and accepting
> connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
```
Despite `psql --help` listing this:
> -h, --host=HOSTNAME database server host or socket directory (**default: "/var/run/postgresql"**)
* **asyncpg version**: 0.21 (from Debian) and 0.26 (from PyPI)
* **PostgreSQL version**: 13.7
* **Python version**: 3.9
* **Platform**: Debian | closed | 2022-07-17T15:34:30Z | 2022-07-18T19:51:42Z | https://github.com/MagicStack/asyncpg/issues/937 | [] | Dreamsorcerer | 2 |
ray-project/ray | machine-learning | 51,486 | CI test windows://python/ray/tests:test_state_api_log is consistently_failing | CI test **windows://python/ray/tests:test_state_api_log** is consistently_failing. Recent failures:
- https://buildkite.com/ray-project/postmerge/builds/8965#0195aad4-a541-45a9-b1ef-d27f9a1da383
- https://buildkite.com/ray-project/postmerge/builds/8965#0195aa03-5c4f-4168-a0da-6cbdc8cbd2df
DataCaseName-windows://python/ray/tests:test_state_api_log-END
Managed by OSS Test Policy | closed | 2025-03-18T23:08:07Z | 2025-03-19T21:55:01Z | https://github.com/ray-project/ray/issues/51486 | [
"bug",
"triage",
"core",
"flaky-tracker",
"ray-test-bot",
"ci-test",
"weekly-release-blocker",
"stability"
] | can-anyscale | 2 |
keras-team/autokeras | tensorflow | 1,698 | Using custom metrics with regression I am not able to predict and save the model. | ### Bug Description
Using custom metrics with regression I am not able to predict and save the model.
### Bug Reproduction
reg = ak.StructuredDataRegressor(
column_names = headers_for_training,
max_trials = GlobalVariables.MaxTrials,
loss='mean_squared_error',
metrics=[tf.keras.metrics.RootMeanSquaredError(), tf.keras.losses.MeanSquaredError(), tf.keras.metrics.MeanAbsolutePercentageError()],
overwrite = True,
tuner = 'random'
)
### Error
Traceback (most recent call last):
File "StartProcess.py", line 203, in <module>
StartApplication().parse_arguments()
File "StartProcess.py", line 193, in parse_arguments
pre_process_data.complete_trainning()
File "C:\Users\netcall\Desktop\Auto_AI_Model\PreprocessData.py", line 158, in complete_trainning
AutokerasSupportedTasks.structured_data_regression_selection(project_path, foldername, X_train, X_test, y_train, y_test, X, self.headers_for_training)
File "C:\Users\netcall\Desktop\Auto_AI_Model\ModelBuild.py", line 479, in structured_data_regression_selection
model = reg.export_model()
File "C:\Users\netcall\Desktop\Auto_AI_Model\.venv\lib\site-packages\autokeras\auto_model.py", line 502, in export_model
return self.tuner.get_best_model()
File "C:\Users\netcall\Desktop\Auto_AI_Model\.venv\lib\site-packages\autokeras\engine\tuner.py", line 64, in get_best_model
model = tf.keras.models.load_model(self.best_model_path, custom_objects=ak.CUSTOM_OBJECTS)
File "C:\Users\netcall\Desktop\Auto_AI_Model\.venv\lib\site-packages\keras\utils\traceback_utils.py", line 67, in error_handler
raise e.with_traceback(filtered_tb) from None
File "C:\Users\netcall\Desktop\Auto_AI_Model\.venv\lib\site-packages\keras\engine\base_layer.py", line 796, in from_config
return cls(**config)
TypeError: __init__() got an unexpected keyword argument 'reduction
| open | 2022-03-17T13:29:12Z | 2022-03-17T13:29:12Z | https://github.com/keras-team/autokeras/issues/1698 | [] | ENANick | 0 |
Yorko/mlcourse.ai | seaborn | 369 | Docker Image | The Docker image needs to be updated to the latest packages versioning, maybe... | closed | 2018-10-10T12:09:24Z | 2018-10-11T13:47:20Z | https://github.com/Yorko/mlcourse.ai/issues/369 | [
"enhancement"
] | AdityaSoni19031997 | 1 |
supabase/supabase-py | flask | 775 | Cannot order by multiple columns | # Bug report
## Describe the bug
When trying to order multiple columns by using multiple .order() modifiers on a select function, the result only orders by the first column specified.
## To Reproduce
Here is my code to get some rows from a table, which is then ordered first by last_chatted and then by created_at.
```
response = (
supabase.table("clones")
.select("*")
.eq("owner", uid)
.order("last_chatted", desc=True)
.order("created_at", desc=True)
.execute()
)
```
## Expected behavior
The response should be ordered by last_chatted, and then created_at. However, this is not the case and the result is only ordered by last_chatted. I have confirmed that when I only order by last_chatted and created_at separately the code works as expected. I have also confirmed using the sort by visualization on my Supabase dashboard that the chained ordering is different from the result I get.
## System information
- OS: macOS
- supabase: 2.4.2
## Additional context
The same issue was reported in Discord in December, but no GitHub issue was created.
| closed | 2024-04-17T23:04:42Z | 2024-08-19T18:37:05Z | https://github.com/supabase/supabase-py/issues/775 | [
"bug"
] | justinsunyt | 3 |
pywinauto/pywinauto | automation | 1,158 | ComboLBox not select function | ## Expected Behavior
Use select function to choose item from ComboLBox variable. (see green block)

## Actual Behavior
Can't find *select* function from ComboLBox variable.
## Steps to Reproduce the Problem
1.
2.
3.
## Short Example of Code to Demonstrate the Problem
## Specifications
- Pywinauto version: 0.68
- Python version and bitness: 3.7
- Platform and OS: Windows 10
| closed | 2021-12-20T09:38:11Z | 2021-12-21T02:39:18Z | https://github.com/pywinauto/pywinauto/issues/1158 | [] | cross-hello | 1 |
axnsan12/drf-yasg | rest-api | 684 | Cannot set SerializerMethodField type as nullable | Marking a SerializerMethodField method with a nullable return type does not result in the expected openapi format
```python
def get_profile(self, obj) -> Optional[str]:
if some_condition:
return "my profile"
return None
```
Results in:
```json
"profile": {
"title": "Profile",
"type": "string",
"readOnly": true
}
```
The expected result in OpenAPI 3.0 would be:
```json
"profile": {
"title": "Profile",
"type": "string",
"nullable": true,
"readOnly": true
}
```
And for cases where the method returns either a serializer or `None`, I don't know how to use the decorator `drf_yasg.utils.swagger_serializer_method` to specify that it is also nullable:
```python
@swagger_serializer_method(serializer_or_field=APIProfileSerializer)
def get_profile(self, obj):
if some_condition:
return APIProfileSerializer(obj.profile).data
return None
``` | open | 2020-12-29T12:28:09Z | 2025-03-07T12:13:24Z | https://github.com/axnsan12/drf-yasg/issues/684 | [
"triage"
] | OskarPersson | 0 |
gradio-app/gradio | deep-learning | 10,251 | stop btn click event with cancels do NOT execute then | ### Describe the bug
I have a long RAG app, sometimes the llm loop output, I have a button to cancel, and then save the looped output, something like:
```python
stop_btn.click(fn=None,inputs=None,cancels=[submit_event]).then(fn=store_chat,inputs=[chatbot])
```
but the store_chat never run
### Have you searched existing issues? 🔎
- [X] I have searched and found no existing issues
### Reproduction
```python
import gradio as gr
...
stop_btn.click(fn=None,inputs=None,cancels=[submit_event, query_event]).then(fn=store_chat,inputs=[chatbot])
...
```
### Screenshot
_No response_
### Logs
_No response_
### System Info
```shell
macOS gradio 5.6.0
```
### Severity
I can work around it | closed | 2024-12-25T00:40:51Z | 2024-12-29T17:47:10Z | https://github.com/gradio-app/gradio/issues/10251 | [
"bug",
"needs repro"
] | liaoweiguo | 2 |
Evil0ctal/Douyin_TikTok_Download_API | web-scraping | 233 | 接口报 500 错误 | https://api.douyin.wtf/tiktok_video_data?video_id=7257925577091386642
这个接口报 500 错误 Internal Server Error
| closed | 2023-08-04T03:17:01Z | 2023-08-04T09:28:09Z | https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/233 | [
"BUG"
] | 419606974 | 2 |
iperov/DeepFaceLab | machine-learning | 5,225 | data_src faceset extract error | Choose one or several GPU idxs (separated by comma).
[CPU] : CPU
[0] : GeForce GTX 1650
[0] Which GPU indexes to choose? :
0
[wf] Face type ( f/wf/head ?:help ) :
wf
[0] Max number of faces from image ( ?:help ) :
0
[512] Image size ( 256-2048 ?:help ) :
512
[90] Jpeg quality ( 1-100 ?:help ) :
90
[n] Write debug images to aligned_debug? ( y/n ) :
n
Extracting faces...
Running on GeForce GTX 1650
0%| | 0/655 [00:00<?, ?it/s]
Error while processing data: Traceback (most recent call last):
File "D:\downloads\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1375, in _do_call
return fn(*args)
File "D:\downloads\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1360, in _run_fn
target_list, run_metadata)
File "D:\downloads\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1453, in _call_tf_sessionrun
run_metadata)
tensorflow.python.framework.errors_impl.ResourceExhaustedError: 2 root error(s) found.
(0) Resource exhausted: OOM when allocating tensor with shape[64,642,362] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[{{node Pad_1}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
[[Add_29/_4049]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
(1) Resource exhausted: OOM when allocating tensor with shape[64,642,362] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[{{node Pad_1}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
0 successful operations.
0 derived errors ignored.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "D:\downloads\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\core\joblib\SubprocessorBase.py", line 71, in _subprocess_run
result = self.process_data (data)
File "D:\downloads\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\mainscripts\Extractor.py", line 107, in process_data
rects_extractor=self.rects_extractor,
File "D:\downloads\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\mainscripts\Extractor.py", line 150, in rects_stage
rects = data.rects = rects_extractor.extract (rotated_image, is_bgr=True)
File "D:\downloads\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\facelib\S3FDExtractor.py", line 193, in extract
olist = self.model.run ([ input_image[None,...] ] )
File "D:\downloads\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\core\leras\models\ModelBase.py", line 167, in run
return nn.tf_sess.run ( self.run_output, feed_dict=feed_dict)
File "D:\downloads\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 968, in run
run_metadata_ptr)
File "D:\downloads\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1191, in _run
feed_dict_tensor, options, run_metadata)
File "D:\downloads\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1369, in _do_run
run_metadata)
File "D:\downloads\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1394, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.ResourceExhaustedError: 2 root error(s) found.
(0) Resource exhausted: OOM when allocating tensor with shape[64,642,362] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[node Pad_1 (defined at D:\downloads\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\core\leras\layers\Conv2D.py:97) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
[[Add_29/_4049]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
(1) Resource exhausted: OOM when allocating tensor with shape[64,642,362] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[node Pad_1 (defined at D:\downloads\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\core\leras\layers\Conv2D.py:97) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
0 successful operations.
0 derived errors ignored.
Errors may have originated from an input operation.
Input Source operations connected to node Pad_1:
Relu (defined at D:\downloads\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\facelib\S3FDExtractor.py:93)
Input Source operations connected to node Pad_1:
Relu (defined at D:\downloads\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\facelib\S3FDExtractor.py:93)
Original stack trace for 'Pad_1':
File "<string>", line 1, in <module>
File "multiprocessing\spawn.py", line 105, in spawn_main
File "multiprocessing\spawn.py", line 118, in _main
File "multiprocessing\process.py", line 258, in _bootstrap
File "multiprocessing\process.py", line 93, in run
File "D:\downloads\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\core\joblib\SubprocessorBase.py", line 62, in _subprocess_run
self.on_initialize(client_dict)
File "D:\downloads\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\mainscripts\Extractor.py", line 73, in on_initialize
self.rects_extractor = facelib.S3FDExtractor(place_model_on_cpu=place_model_on_cpu)
File "D:\downloads\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\facelib\S3FDExtractor.py", line 170, in __init__
self.model.build_for_run ([ ( tf.float32, nn.get4Dshape (None,None,3) ) ])
File "D:\downloads\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\core\leras\models\ModelBase.py", line 154, in build_for_run
self.run_output = self.__call__(self.run_placeholders)
File "D:\downloads\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\core\leras\models\ModelBase.py", line 117, in __call__
return self.forward(*args, **kwargs)
File "D:\downloads\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\facelib\S3FDExtractor.py", line 94, in forward
x = tf.nn.relu(self.conv1_2(x))
File "D:\downloads\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\core\leras\layers\LayerBase.py", line 14, in __call__
return self.forward(*args, **kwargs)
File "D:\downloads\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\core\leras\layers\Conv2D.py", line 97, in forward
x = tf.pad (x, self.padding, mode='CONSTANT')
File "D:\downloads\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\util\dispatch.py", line 201, in wrapper
return target(*args, **kwargs)
File "D:\downloads\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\array_ops.py", line 3422, in pad
result = gen_array_ops.pad(tensor, paddings, name=name)
File "D:\downloads\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\gen_array_ops.py", line 6484, in pad
"Pad", input=input, paddings=paddings, name=name)
File "D:\downloads\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 750, in _apply_op_helper
attrs=attr_protos, op_def=op_def)
File "D:\downloads\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 3536, in _create_op_internal
op_def=op_def)
File "D:\downloads\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 1990, in __init__
self._traceback = tf_stack.extract_stack() | open | 2021-01-01T08:16:44Z | 2023-06-08T22:01:33Z | https://github.com/iperov/DeepFaceLab/issues/5225 | [] | a7medehab | 2 |
globaleaks/globaleaks-whistleblowing-software | sqlalchemy | 3,375 | Accessibility issue: "Report form stepper" | ### What version of GlobaLeaks are you using?
4.10.18 - development environment
### What browser(s) are you seeing the problem on?
All
### What operating system(s) are you seeing the problem on?
Windows, macOS, Android, iOS
### Describe the issue
This accessibility report is about the questionniare stepper/navigator. Please mark it with flag "Accessibility". Thanks.
| Index | Issue Description | Ref. Check UNI CEI EN 301549:2021 | conformity level |
| ---- | ---- | ---- | ---- |
|1 | The questionnaire stepper is not usable with keyboard: elements have not the "button" semantical value. The stepper functionality is entirely to review in accessibility terms.| C.9.2.1.1 | A |
|2 | the questionnaire stepper has not sufficient HTML base semantics and WAI-ARIA attributes to correctly communicate the stepper status (tab "active", correlation between tab and content, etc). The tab "active" has to be communicated for example with aria-current="page", or with supplementary text for screen reader only.| C.9.4.1.2 | A |
### Proposed solution
_No response_ | open | 2023-03-02T11:58:41Z | 2023-05-02T10:02:16Z | https://github.com/globaleaks/globaleaks-whistleblowing-software/issues/3375 | [
"T: Enhancement",
"C: Client",
"F: Accessibility"
] | larrykind | 0 |
scikit-optimize/scikit-optimize | scikit-learn | 461 | Test failures on master | ```
surrogate = 'gp', n_jobs = 1
@pytest.mark.parametrize("surrogate", ['gp', None])
@pytest.mark.parametrize("n_jobs", [1, -1]) # test sequential and parallel
def test_searchcv_runs(surrogate, n_jobs):
"""
Test whether the cross validation search wrapper around sklearn
models runs properly with available surrogates and with single
or multiple workers.
Parameters
----------
* `surrogate` [str or None]:
A class of the scikit-optimize surrogate used. None means
to use default surrogate.
* `n_jobs` [int]:
Number of parallel processes to use for computations.
"""
X, y = load_iris(True)
X_train, X_test, y_train, y_test = train_test_split(
X, y, train_size=0.75, random_state=0
)
# None search space is only supported when only `step` function is used
assert_raises(ValueError, BayesSearchCV(SVC(), None).fit, (X, y))
# check if invalid dimensions are raising errors
with pytest.raises(ValueError):
BayesSearchCV(SVC(), {'C': '1 ... 100.0'})
with pytest.raises(TypeError):
BayesSearchCV(SVC(), ['C', (1.0, 1)])
# create an instance of a surrogate if it is not a string
if surrogate is not None:
optimizer_kwargs = {'base_estimator': surrogate}
else:
optimizer_kwargs = None
opt = BayesSearchCV(
SVC(),
{
'C': Real(1e-6, 1e+6, prior='log-uniform'),
'gamma': Real(1e-6, 1e+1, prior='log-uniform'),
'degree': Integer(1, 8),
'kernel': Categorical(['linear', 'poly', 'rbf']),
},
n_jobs=n_jobs, n_iter=11,
optimizer_kwargs=optimizer_kwargs
)
> opt.fit(X_train, y_train)
skopt/tests/test_searchcv.py:70:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
skopt/searchcv.py:575: in fit
groups=groups, n_jobs=n_jobs_adjusted
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = BayesSearchCV(cv=None, error_score='raise',
estimator=SVC(C=1.0, cache_size=200, class_weight=None, coef0=0.0,
...s',
random_state=None, refit=False, return_train_score=True,
scoring=None, search_spaces=None, verbose=0)
X = array([[ 5.9, 3. , 4.2, 1.5],
[ 5.8, 2.6, 4. , 1.2],
[ 6.8, 3. , 5.5, 2.1],
[ 4.7, 3.2,... 2.9, 5.6, 1.8],
[ 5.8, 2.7, 4.1, 1. ],
[ 7.7, 3.8, 6.7, 2.2],
[ 4.6, 3.2, 1.4, 0.2]])
y = array([1, 1, 2, 0, 2, 0, 0, 1, 2, 2, 2, 2, 1, 2, 1, 1, 2, 2, 2, 2, 1, 2, 1,
0, 2, 1, 1, 1, 1, 2, 0, 0, 2, 1, 0,... 0, 2, 1, 2, 1, 0, 2, 0, 2, 0, 0, 2, 0, 2, 1, 1, 1,
2, 2, 1, 1, 0, 1, 2, 2, 0, 1, 1, 1, 1, 0, 0, 0, 2, 1, 2, 0])
space_id = range(0, 1), groups = None, n_jobs = 1
def step(self, X, y, space_id, groups=None, n_jobs=1):
"""Generate n_jobs parameters and evaluate them in parallel.
Having a separate function for a single step for search allows to
save easily checkpoints for the parameter search and restore from
possible failures.
Parameters
----------
X : array-like or sparse matrix, shape = [n_samples, n_features]
The training input samples. Internally, it will be converted to
``dtype=np.float32`` and if a sparse matrix is provided
to a sparse ``csc_matrix``.
y : array-like, shape = [n_samples] or [n_samples, n_outputs]
The target values (class labels) as integers or strings.
space_id : hashable
Identifier of parameter search space. Add search spaces with
groups : array-like, with shape (n_samples,), optional
Group labels for the samples used while splitting the dataset into
train/test set.
n_jobs : int, default=1
Number of parameters to evaluate in parallel.
Returns
-------
params_dict: dictionary with parameter values.
"""
# convert n_jobst to int > 0 if necessary
if n_jobs < 0:
n_jobs = max(1, cpu_count() + n_jobs + 1)
# use the cached optimizer for particular parameter space
if space_id not in self.search_spaces_:
raise ValueError("Unknown space %s" % space_id)
# get the search space for a step
search_space = self.search_spaces_[space_id]
if isinstance(search_space, tuple):
search_space, _ = search_space
# create optimizer if not created already
if space_id not in self.optimizer_:
self.optimizer_[space_id] = self._make_optimizer(search_space)
optimizer = self.optimizer_[space_id]
# get parameter values to evaluate
params = optimizer.ask(n_points=n_jobs)
params_dict = [point_asdict(search_space, p) for p in params]
# self.cv_results_ is reset at every call to _fit, keep current
all_cv_results = self.cv_results_
# record performances with different points
refit = self.refit
self.refit = False # do not fit yet - will be fit later
> self._fit(X, y, groups, params_dict)
E AttributeError: 'BayesSearchCV' object has no attribute '_fit'
skopt/searchcv.py:506: AttributeError
____________________________________________________________________________________________________________________ test_searchcv_runs[1-None] _____________________________________________________________________________________________________________________
surrogate = None, n_jobs = 1
@pytest.mark.parametrize("surrogate", ['gp', None])
@pytest.mark.parametrize("n_jobs", [1, -1]) # test sequential and parallel
def test_searchcv_runs(surrogate, n_jobs):
"""
Test whether the cross validation search wrapper around sklearn
models runs properly with available surrogates and with single
or multiple workers.
Parameters
----------
* `surrogate` [str or None]:
A class of the scikit-optimize surrogate used. None means
to use default surrogate.
* `n_jobs` [int]:
Number of parallel processes to use for computations.
"""
X, y = load_iris(True)
X_train, X_test, y_train, y_test = train_test_split(
X, y, train_size=0.75, random_state=0
)
# None search space is only supported when only `step` function is used
assert_raises(ValueError, BayesSearchCV(SVC(), None).fit, (X, y))
# check if invalid dimensions are raising errors
with pytest.raises(ValueError):
BayesSearchCV(SVC(), {'C': '1 ... 100.0'})
with pytest.raises(TypeError):
BayesSearchCV(SVC(), ['C', (1.0, 1)])
# create an instance of a surrogate if it is not a string
if surrogate is not None:
optimizer_kwargs = {'base_estimator': surrogate}
else:
optimizer_kwargs = None
opt = BayesSearchCV(
SVC(),
{
'C': Real(1e-6, 1e+6, prior='log-uniform'),
'gamma': Real(1e-6, 1e+1, prior='log-uniform'),
'degree': Integer(1, 8),
'kernel': Categorical(['linear', 'poly', 'rbf']),
},
n_jobs=n_jobs, n_iter=11,
optimizer_kwargs=optimizer_kwargs
)
> opt.fit(X_train, y_train)
skopt/tests/test_searchcv.py:70:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
skopt/searchcv.py:575: in fit
groups=groups, n_jobs=n_jobs_adjusted
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = BayesSearchCV(cv=None, error_score='raise',
estimator=SVC(C=1.0, cache_size=200, class_weight=None, coef0=0.0,
...s', random_state=None, refit=False,
return_train_score=True, scoring=None, search_spaces=None,
verbose=0)
X = array([[ 5.9, 3. , 4.2, 1.5],
[ 5.8, 2.6, 4. , 1.2],
[ 6.8, 3. , 5.5, 2.1],
[ 4.7, 3.2,... 2.9, 5.6, 1.8],
[ 5.8, 2.7, 4.1, 1. ],
[ 7.7, 3.8, 6.7, 2.2],
[ 4.6, 3.2, 1.4, 0.2]])
y = array([1, 1, 2, 0, 2, 0, 0, 1, 2, 2, 2, 2, 1, 2, 1, 1, 2, 2, 2, 2, 1, 2, 1,
0, 2, 1, 1, 1, 1, 2, 0, 0, 2, 1, 0,... 0, 2, 1, 2, 1, 0, 2, 0, 2, 0, 0, 2, 0, 2, 1, 1, 1,
2, 2, 1, 1, 0, 1, 2, 2, 0, 1, 1, 1, 1, 0, 0, 0, 2, 1, 2, 0])
space_id = range(0, 1), groups = None, n_jobs = 1
def step(self, X, y, space_id, groups=None, n_jobs=1):
"""Generate n_jobs parameters and evaluate them in parallel.
Having a separate function for a single step for search allows to
save easily checkpoints for the parameter search and restore from
possible failures.
Parameters
----------
X : array-like or sparse matrix, shape = [n_samples, n_features]
The training input samples. Internally, it will be converted to
``dtype=np.float32`` and if a sparse matrix is provided
to a sparse ``csc_matrix``.
y : array-like, shape = [n_samples] or [n_samples, n_outputs]
The target values (class labels) as integers or strings.
space_id : hashable
Identifier of parameter search space. Add search spaces with
groups : array-like, with shape (n_samples,), optional
Group labels for the samples used while splitting the dataset into
train/test set.
n_jobs : int, default=1
Number of parameters to evaluate in parallel.
Returns
-------
params_dict: dictionary with parameter values.
"""
# convert n_jobst to int > 0 if necessary
if n_jobs < 0:
n_jobs = max(1, cpu_count() + n_jobs + 1)
# use the cached optimizer for particular parameter space
if space_id not in self.search_spaces_:
raise ValueError("Unknown space %s" % space_id)
# get the search space for a step
search_space = self.search_spaces_[space_id]
if isinstance(search_space, tuple):
search_space, _ = search_space
# create optimizer if not created already
if space_id not in self.optimizer_:
self.optimizer_[space_id] = self._make_optimizer(search_space)
optimizer = self.optimizer_[space_id]
# get parameter values to evaluate
params = optimizer.ask(n_points=n_jobs)
params_dict = [point_asdict(search_space, p) for p in params]
# self.cv_results_ is reset at every call to _fit, keep current
all_cv_results = self.cv_results_
# record performances with different points
refit = self.refit
self.refit = False # do not fit yet - will be fit later
> self._fit(X, y, groups, params_dict)
E AttributeError: 'BayesSearchCV' object has no attribute '_fit'
skopt/searchcv.py:506: AttributeError
_____________________________________________________________________________________________________________________ test_searchcv_runs[-1-gp] _____________________________________________________________________________________________________________________
surrogate = 'gp', n_jobs = -1
@pytest.mark.parametrize("surrogate", ['gp', None])
@pytest.mark.parametrize("n_jobs", [1, -1]) # test sequential and parallel
def test_searchcv_runs(surrogate, n_jobs):
"""
Test whether the cross validation search wrapper around sklearn
models runs properly with available surrogates and with single
or multiple workers.
Parameters
----------
* `surrogate` [str or None]:
A class of the scikit-optimize surrogate used. None means
to use default surrogate.
* `n_jobs` [int]:
Number of parallel processes to use for computations.
"""
X, y = load_iris(True)
X_train, X_test, y_train, y_test = train_test_split(
X, y, train_size=0.75, random_state=0
)
# None search space is only supported when only `step` function is used
assert_raises(ValueError, BayesSearchCV(SVC(), None).fit, (X, y))
# check if invalid dimensions are raising errors
with pytest.raises(ValueError):
BayesSearchCV(SVC(), {'C': '1 ... 100.0'})
with pytest.raises(TypeError):
BayesSearchCV(SVC(), ['C', (1.0, 1)])
# create an instance of a surrogate if it is not a string
if surrogate is not None:
optimizer_kwargs = {'base_estimator': surrogate}
else:
optimizer_kwargs = None
opt = BayesSearchCV(
SVC(),
{
'C': Real(1e-6, 1e+6, prior='log-uniform'),
'gamma': Real(1e-6, 1e+1, prior='log-uniform'),
'degree': Integer(1, 8),
'kernel': Categorical(['linear', 'poly', 'rbf']),
},
n_jobs=n_jobs, n_iter=11,
optimizer_kwargs=optimizer_kwargs
)
> opt.fit(X_train, y_train)
skopt/tests/test_searchcv.py:70:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
skopt/searchcv.py:575: in fit
groups=groups, n_jobs=n_jobs_adjusted
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = BayesSearchCV(cv=None, error_score='raise',
estimator=SVC(C=1.0, cache_size=200, class_weight=None, coef0=0.0,
...s',
random_state=None, refit=False, return_train_score=True,
scoring=None, search_spaces=None, verbose=0)
X = array([[ 5.9, 3. , 4.2, 1.5],
[ 5.8, 2.6, 4. , 1.2],
[ 6.8, 3. , 5.5, 2.1],
[ 4.7, 3.2,... 2.9, 5.6, 1.8],
[ 5.8, 2.7, 4.1, 1. ],
[ 7.7, 3.8, 6.7, 2.2],
[ 4.6, 3.2, 1.4, 0.2]])
y = array([1, 1, 2, 0, 2, 0, 0, 1, 2, 2, 2, 2, 1, 2, 1, 1, 2, 2, 2, 2, 1, 2, 1,
0, 2, 1, 1, 1, 1, 2, 0, 0, 2, 1, 0,... 0, 2, 1, 2, 1, 0, 2, 0, 2, 0, 0, 2, 0, 2, 1, 1, 1,
2, 2, 1, 1, 0, 1, 2, 2, 0, 1, 1, 1, 1, 0, 0, 0, 2, 1, 2, 0])
space_id = range(0, 1), groups = None, n_jobs = 4
def step(self, X, y, space_id, groups=None, n_jobs=1):
"""Generate n_jobs parameters and evaluate them in parallel.
Having a separate function for a single step for search allows to
save easily checkpoints for the parameter search and restore from
possible failures.
Parameters
----------
X : array-like or sparse matrix, shape = [n_samples, n_features]
The training input samples. Internally, it will be converted to
``dtype=np.float32`` and if a sparse matrix is provided
to a sparse ``csc_matrix``.
y : array-like, shape = [n_samples] or [n_samples, n_outputs]
The target values (class labels) as integers or strings.
space_id : hashable
Identifier of parameter search space. Add search spaces with
groups : array-like, with shape (n_samples,), optional
Group labels for the samples used while splitting the dataset into
train/test set.
n_jobs : int, default=1
Number of parameters to evaluate in parallel.
Returns
-------
params_dict: dictionary with parameter values.
"""
# convert n_jobst to int > 0 if necessary
if n_jobs < 0:
n_jobs = max(1, cpu_count() + n_jobs + 1)
# use the cached optimizer for particular parameter space
if space_id not in self.search_spaces_:
raise ValueError("Unknown space %s" % space_id)
# get the search space for a step
search_space = self.search_spaces_[space_id]
if isinstance(search_space, tuple):
search_space, _ = search_space
# create optimizer if not created already
if space_id not in self.optimizer_:
self.optimizer_[space_id] = self._make_optimizer(search_space)
optimizer = self.optimizer_[space_id]
# get parameter values to evaluate
params = optimizer.ask(n_points=n_jobs)
params_dict = [point_asdict(search_space, p) for p in params]
# self.cv_results_ is reset at every call to _fit, keep current
all_cv_results = self.cv_results_
# record performances with different points
refit = self.refit
self.refit = False # do not fit yet - will be fit later
> self._fit(X, y, groups, params_dict)
E AttributeError: 'BayesSearchCV' object has no attribute '_fit'
skopt/searchcv.py:506: AttributeError
____________________________________________________________________________________________________________________ test_searchcv_runs[-1-None] ____________________________________________________________________________________________________________________
surrogate = None, n_jobs = -1
@pytest.mark.parametrize("surrogate", ['gp', None])
@pytest.mark.parametrize("n_jobs", [1, -1]) # test sequential and parallel
def test_searchcv_runs(surrogate, n_jobs):
"""
Test whether the cross validation search wrapper around sklearn
models runs properly with available surrogates and with single
or multiple workers.
Parameters
----------
* `surrogate` [str or None]:
A class of the scikit-optimize surrogate used. None means
to use default surrogate.
* `n_jobs` [int]:
Number of parallel processes to use for computations.
"""
X, y = load_iris(True)
X_train, X_test, y_train, y_test = train_test_split(
X, y, train_size=0.75, random_state=0
)
# None search space is only supported when only `step` function is used
assert_raises(ValueError, BayesSearchCV(SVC(), None).fit, (X, y))
# check if invalid dimensions are raising errors
with pytest.raises(ValueError):
BayesSearchCV(SVC(), {'C': '1 ... 100.0'})
with pytest.raises(TypeError):
BayesSearchCV(SVC(), ['C', (1.0, 1)])
# create an instance of a surrogate if it is not a string
if surrogate is not None:
optimizer_kwargs = {'base_estimator': surrogate}
else:
optimizer_kwargs = None
opt = BayesSearchCV(
SVC(),
{
'C': Real(1e-6, 1e+6, prior='log-uniform'),
'gamma': Real(1e-6, 1e+1, prior='log-uniform'),
'degree': Integer(1, 8),
'kernel': Categorical(['linear', 'poly', 'rbf']),
},
n_jobs=n_jobs, n_iter=11,
optimizer_kwargs=optimizer_kwargs
)
> opt.fit(X_train, y_train)
skopt/tests/test_searchcv.py:70:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
skopt/searchcv.py:575: in fit
groups=groups, n_jobs=n_jobs_adjusted
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = BayesSearchCV(cv=None, error_score='raise',
estimator=SVC(C=1.0, cache_size=200, class_weight=None, coef0=0.0,
...s', random_state=None,
refit=False, return_train_score=True, scoring=None,
search_spaces=None, verbose=0)
X = array([[ 5.9, 3. , 4.2, 1.5],
[ 5.8, 2.6, 4. , 1.2],
[ 6.8, 3. , 5.5, 2.1],
[ 4.7, 3.2,... 2.9, 5.6, 1.8],
[ 5.8, 2.7, 4.1, 1. ],
[ 7.7, 3.8, 6.7, 2.2],
[ 4.6, 3.2, 1.4, 0.2]])
y = array([1, 1, 2, 0, 2, 0, 0, 1, 2, 2, 2, 2, 1, 2, 1, 1, 2, 2, 2, 2, 1, 2, 1,
0, 2, 1, 1, 1, 1, 2, 0, 0, 2, 1, 0,... 0, 2, 1, 2, 1, 0, 2, 0, 2, 0, 0, 2, 0, 2, 1, 1, 1,
2, 2, 1, 1, 0, 1, 2, 2, 0, 1, 1, 1, 1, 0, 0, 0, 2, 1, 2, 0])
space_id = range(0, 1), groups = None, n_jobs = 4
def step(self, X, y, space_id, groups=None, n_jobs=1):
"""Generate n_jobs parameters and evaluate them in parallel.
Having a separate function for a single step for search allows to
save easily checkpoints for the parameter search and restore from
possible failures.
Parameters
----------
X : array-like or sparse matrix, shape = [n_samples, n_features]
The training input samples. Internally, it will be converted to
``dtype=np.float32`` and if a sparse matrix is provided
to a sparse ``csc_matrix``.
y : array-like, shape = [n_samples] or [n_samples, n_outputs]
The target values (class labels) as integers or strings.
space_id : hashable
Identifier of parameter search space. Add search spaces with
groups : array-like, with shape (n_samples,), optional
Group labels for the samples used while splitting the dataset into
train/test set.
n_jobs : int, default=1
Number of parameters to evaluate in parallel.
Returns
-------
params_dict: dictionary with parameter values.
"""
# convert n_jobst to int > 0 if necessary
if n_jobs < 0:
n_jobs = max(1, cpu_count() + n_jobs + 1)
# use the cached optimizer for particular parameter space
if space_id not in self.search_spaces_:
raise ValueError("Unknown space %s" % space_id)
# get the search space for a step
search_space = self.search_spaces_[space_id]
if isinstance(search_space, tuple):
search_space, _ = search_space
# create optimizer if not created already
if space_id not in self.optimizer_:
self.optimizer_[space_id] = self._make_optimizer(search_space)
optimizer = self.optimizer_[space_id]
# get parameter values to evaluate
params = optimizer.ask(n_points=n_jobs)
params_dict = [point_asdict(search_space, p) for p in params]
# self.cv_results_ is reset at every call to _fit, keep current
all_cv_results = self.cv_results_
# record performances with different points
refit = self.refit
self.refit = False # do not fit yet - will be fit later
> self._fit(X, y, groups, params_dict)
E AttributeError: 'BayesSearchCV' object has no attribute '_fit'
skopt/searchcv.py:506: AttributeError
_______________________________________________________________________________________________________________ test_searchcv_runs_multiple_subspaces _______________________________________________________________________________________________________________
def test_searchcv_runs_multiple_subspaces():
"""
Test whether the BayesSearchCV runs without exceptions when
multiple subspaces are given.
"""
X, y = load_iris(True)
X_train, X_test, y_train, y_test = train_test_split(
X, y, train_size=0.75, random_state=0
)
opt = BayesSearchCV(
SVC(),
[
({
'C': Real(1e-6, 1e+6, prior='log-uniform'),
'gamma': Real(1e-6, 1e+1, prior='log-uniform'),
}, 10),
{
'degree': Integer(1, 8),
'kernel': Categorical(['linear', 'poly', 'rbf']),
}
],
n_iter=10
)
> opt.fit(X_train, y_train)
skopt/tests/test_searchcv.py:103:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
skopt/searchcv.py:575: in fit
groups=groups, n_jobs=n_jobs_adjusted
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = BayesSearchCV(cv=None, error_score='raise',
estimator=SVC(C=1.0, cache_size=200, class_weight=None, coef0=0.0,
...s', random_state=None, refit=False,
return_train_score=True, scoring=None, search_spaces=None,
verbose=0)
X = array([[ 5.9, 3. , 4.2, 1.5],
[ 5.8, 2.6, 4. , 1.2],
[ 6.8, 3. , 5.5, 2.1],
[ 4.7, 3.2,... 2.9, 5.6, 1.8],
[ 5.8, 2.7, 4.1, 1. ],
[ 7.7, 3.8, 6.7, 2.2],
[ 4.6, 3.2, 1.4, 0.2]])
y = array([1, 1, 2, 0, 2, 0, 0, 1, 2, 2, 2, 2, 1, 2, 1, 1, 2, 2, 2, 2, 1, 2, 1,
0, 2, 1, 1, 1, 1, 2, 0, 0, 2, 1, 0,... 0, 2, 1, 2, 1, 0, 2, 0, 2, 0, 0, 2, 0, 2, 1, 1, 1,
2, 2, 1, 1, 0, 1, 2, 2, 0, 1, 1, 1, 1, 0, 0, 0, 2, 1, 2, 0])
space_id = range(0, 2), groups = None, n_jobs = 1
def step(self, X, y, space_id, groups=None, n_jobs=1):
"""Generate n_jobs parameters and evaluate them in parallel.
Having a separate function for a single step for search allows to
save easily checkpoints for the parameter search and restore from
possible failures.
Parameters
----------
X : array-like or sparse matrix, shape = [n_samples, n_features]
The training input samples. Internally, it will be converted to
``dtype=np.float32`` and if a sparse matrix is provided
to a sparse ``csc_matrix``.
y : array-like, shape = [n_samples] or [n_samples, n_outputs]
The target values (class labels) as integers or strings.
space_id : hashable
Identifier of parameter search space. Add search spaces with
groups : array-like, with shape (n_samples,), optional
Group labels for the samples used while splitting the dataset into
train/test set.
n_jobs : int, default=1
Number of parameters to evaluate in parallel.
Returns
-------
params_dict: dictionary with parameter values.
"""
# convert n_jobst to int > 0 if necessary
if n_jobs < 0:
n_jobs = max(1, cpu_count() + n_jobs + 1)
# use the cached optimizer for particular parameter space
if space_id not in self.search_spaces_:
raise ValueError("Unknown space %s" % space_id)
# get the search space for a step
search_space = self.search_spaces_[space_id]
if isinstance(search_space, tuple):
search_space, _ = search_space
# create optimizer if not created already
if space_id not in self.optimizer_:
self.optimizer_[space_id] = self._make_optimizer(search_space)
optimizer = self.optimizer_[space_id]
# get parameter values to evaluate
params = optimizer.ask(n_points=n_jobs)
params_dict = [point_asdict(search_space, p) for p in params]
# self.cv_results_ is reset at every call to _fit, keep current
all_cv_results = self.cv_results_
# record performances with different points
refit = self.refit
self.refit = False # do not fit yet - will be fit later
> self._fit(X, y, groups, params_dict)
E AttributeError: 'BayesSearchCV' object has no attribute '_fit'
skopt/searchcv.py:506: AttributeError
________________________________________________________________________________________________________________________ test_dump_and_load _________________________________________________________________________________________________________________________
@pytest.mark.fast_test
def test_dump_and_load():
res = gp_minimize(bench3,
[(-2.0, 2.0)],
x0=[0.],
acq_func="LCB",
n_calls=2,
n_random_starts=0,
random_state=1)
# Test normal dumping and loading
with tempfile.TemporaryFile() as f:
dump(res, f)
> res_loaded = load(f)
skopt/tests/test_utils.py:48:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
skopt/utils.py:172: in load
return load_(filename, **kwargs)
../scikit-learn/sklearn/externals/joblib/numpy_pickle.py:568: in load
obj = _unpickle(fobj)
../scikit-learn/sklearn/externals/joblib/numpy_pickle.py:508: in _unpickle
obj = unpickler.load()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <sklearn.externals.joblib.numpy_pickle.NumpyUnpickler object at 0x7f012a1f2828>
def load(self):
"""Read a pickled object representation from the open file.
Return the reconstituted object hierarchy specified in the file.
"""
# Check whether Unpickler was initialized correctly. This is
# only needed to mimic the behavior of _pickle.Unpickler.dump().
if not hasattr(self, "_file_read"):
raise UnpicklingError("Unpickler.__init__() was not called by "
"%s.__init__()" % (self.__class__.__name__,))
self._unframer = _Unframer(self._file_read, self._file_readline)
self.read = self._unframer.read
self.readline = self._unframer.readline
self.metastack = []
self.stack = []
self.append = self.stack.append
self.proto = 0
read = self.read
dispatch = self.dispatch
try:
while True:
key = read(1)
if not key:
> raise EOFError
E EOFError
../../anaconda3/lib/python3.6/pickle.py:1048: EOFError
___________________________________________________________________________________________________________________ test_dump_and_load_optimizer ____________________________________________________________________________________________________________________
@pytest.mark.fast_test
def test_dump_and_load_optimizer():
base_estimator = ExtraTreesRegressor(random_state=2)
opt = Optimizer([(-2.0, 2.0)], base_estimator, n_random_starts=1,
acq_optimizer="sampling")
opt.run(bench1, n_iter=3)
with tempfile.TemporaryFile() as f:
dump(opt, f)
> load(f)
skopt/tests/test_utils.py:78:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
skopt/utils.py:172: in load
return load_(filename, **kwargs)
../scikit-learn/sklearn/externals/joblib/numpy_pickle.py:568: in load
obj = _unpickle(fobj)
../scikit-learn/sklearn/externals/joblib/numpy_pickle.py:508: in _unpickle
obj = unpickler.load()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <sklearn.externals.joblib.numpy_pickle.NumpyUnpickler object at 0x7f010f9d73c8>
def load(self):
"""Read a pickled object representation from the open file.
Return the reconstituted object hierarchy specified in the file.
"""
# Check whether Unpickler was initialized correctly. This is
# only needed to mimic the behavior of _pickle.Unpickler.dump().
if not hasattr(self, "_file_read"):
raise UnpicklingError("Unpickler.__init__() was not called by "
"%s.__init__()" % (self.__class__.__name__,))
self._unframer = _Unframer(self._file_read, self._file_readline)
self.read = self._unframer.read
self.readline = self._unframer.readline
self.metastack = []
self.stack = []
self.append = self.stack.append
self.proto = 0
read = self.read
dispatch = self.dispatch
try:
while True:
key = read(1)
if not key:
> raise EOFError
E EOFError
````
There's also many
```
/home/andy/checkout/scikit-learn/sklearn/utils/deprecation.py:75: DeprecationWarning: Function y_train_mean is deprecated; Attribute y_train_mean was deprecated in version 0.19 and will be removed in 0.21.
```
messages, that basically make the whole test output unreadable :-/
The BayesSearchCV (and the deprecation) is probably because I'm running scikit-learn master, not sure about the rest. (I'm surprised the other people here are not running scikit-learn master?) | closed | 2017-08-02T19:44:50Z | 2017-08-11T14:43:45Z | https://github.com/scikit-optimize/scikit-optimize/issues/461 | [
"Bug"
] | amueller | 20 |
huggingface/transformers | tensorflow | 36,054 | Issue with Trainer Training - AttributeError: 'NoneType' object has no attribute 'shape' | ### Description:
I am working on fine-tuning an image-text model using the Hugging Face AutoModelForImageTextToText and LlavaProcessor. While attempting to train the model using the SFTTrainer, I encountered an error related to a NoneType object during the training loop. The error occurs specifically in the _merge_input_ids_with_image_features method in the modeling_llava.py file.
### **Note:**
I have load the data(json) from my GDrive
### Error Details:
`AttributeError: 'NoneType' object has no attribute 'shape'`
<img width="912" alt="Image" src="https://github.com/user-attachments/assets/6bba17d1-195e-46f5-8397-b5a6eb089343" />
### Error Occurrence:
The error occurs after calling trainer.train(), and it seems that during the training, the image_features passed into the _merge_input_ids_with_image_features function is None, causing the AttributeError when the code tries to access its shape.
### Code Snippet Leading to the Error:
```
trainer = SFTTrainer(
model=model,
train_dataset=train_dataset,
peft_config=peft_config,
tokenizer=tokenizer,
args=sft_config,
)
Train model
trainer.train()
```
### Relevant Model Function:
**The error occurs within the following function in modeling_llava.py:**
```
def _merge_input_ids_with_image_features(self, image_features, inputs_embeds, input_ids, attention_mask, labels):
num_images, num_image_patches, embed_dim = image_features.shape # Error here
batch_size, sequence_length = input_ids.shape
# Further processing...
```
### Potential Causes:
- image_features might not be properly processed or passed to the model.
- The image preprocessing function might not return the correct features, or the dataset might not have the expected structure.
### Request:
1. Could you help me troubleshoot this issue and suggest how to fix the NoneType error?
2. What might cause the image_features variable to be None?
3. How can I ensure that image_features is properly populated and passed to the model?
### Code:
**Load the base model**
```
model = AutoModelForImageTextToText.from_pretrained(
model_name,
quantization_config=bnb_config,
device_map=device_map
)
model.config.use_cache = False
model.config.pretraining_tp = 1
```
**Load LLaMA tokenizer**
```
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "right" # Fix weird overflow issue with fp16 training
processor = LlavaProcessor.from_pretrained(model_name)
```
**Prompt Template**
```
conversation = [
{
"role": "user",
"content": [
{"type": "text", "text": "is there any fracture"},
{"type": "image"},
],
},
]
prompt = processor.apply_chat_template(conversation, add_generation_prompt=True)
```
**Reload the Dataset from Google Drive**
```
import json
with open("/content/drive/MyDrive/fineTune model1/sub_datset1.json", "r", encoding="utf-8") as f:
reloaded_dataset1 = json.load(f)
with open("/content/drive/MyDrive/fineTune model1/sub_datset2.json", "r", encoding="utf-8") as f:
reloaded_dataset2 = json.load(f)
with open("/content/drive/MyDrive/fineTune model1/sub_datset3.json", "r", encoding="utf-8") as f:
reloaded_dataset3 = json.load(f)
```
**Converting to Hugging Face Dataset**
```
from datasets import Dataset
Convert the reformatted data into a Hugging Face Dataset
hf_dataset1 = Dataset.from_dict({
"image": [item["image"] for item in reloaded_dataset1],
"question": [item["question"] for item in reloaded_dataset1],
"answer": [item["answer"] for item in reloaded_dataset1]
})
Convert the reformatted data into a Hugging Face Dataset
hf_dataset2 = Dataset.from_dict({
"image": [item["image"] for item in reloaded_dataset2],
"question": [item["question"] for item in reloaded_dataset2],
"answer": [item["answer"] for item in reloaded_dataset2]
})
Convert the reformatted data into a Hugging Face Dataset
hf_dataset3 = Dataset.from_dict({
"image": [item["image"] for item in reloaded_dataset3],
"question": [item["question"] for item in reloaded_dataset3],
"answer": [item["answer"] for item in reloaded_dataset3]
})
```
**Merge the dataset**
```
from datasets import concatenate_datasets
Concatenate the datasets
merged_dataset1 = concatenate_datasets([hf_dataset1, hf_dataset2, hf_dataset3])
Print the size of the merged dataset
print(len(merged_dataset1))
```
**Split the Dataset into Train, Validation, and Test**
```
Import the required function from the sklearn.model_selection module
from sklearn.model_selection import train_test_split
from datasets import DatasetDict
#Convert Hugging Face Dataset to Pandas DataFrame for splitting
df = merged_dataset1.to_pandas()
Split into train (80%) and temp (20%)
train_df, temp_df = train_test_split(df, test_size=0.2, random_state=42)
Split temp into validation (10%) and test (10%)
val_df, test_df = train_test_split(temp_df, test_size=0.5, random_state=42)
print(f"Train size: {len(train_df)}, Validation size: {len(val_df)}, Test size: {len(test_df)}")
```
**Convert back to Hugging Face Dataset**
```
train_dataset = Dataset.from_pandas(train_df)
val_dataset = Dataset.from_pandas(val_df)
test_dataset = Dataset.from_pandas(test_df)
```
**Create DatasetDict**
```
final_dataset = DatasetDict({
"train": train_dataset,
"validation": val_dataset,
"test": test_dataset
})
```
### Preprocessing Function
```
from PIL import Image
import base64
from io import BytesIO
import torch
Define your preprocess function
def preprocess_function(samples):
# # Debugging: Print the type and first image entry in the batch
# print(f"Type of samples['image']: {type(samples['image'])}")
# print(f"First image entry (base64): {samples['image'][0]}")
# Initialize an empty list for images
images = []
# Decode and process each image
for img_data in samples["image"]:
if isinstance(img_data, str): # Assuming base64 encoding
try:
# Decode the image from base64 and convert to RGB
img = Image.open(BytesIO(base64.b64decode(img_data))).convert("RGB")
except Exception as e:
print(f"Error loading base64 image: {e}")
img = None
elif isinstance(img_data, Image.Image): # If it's already a PIL Image object
img = img_data.convert("RGB")
else:
print(f"Unsupported image type: {type(img_data)}")
img = None
if img is not None:
images.append(img)
else:
print("Image could not be processed or is None.")
# Now, process the question and images using your processor
inputs = processor(
text=samples["question"],
images=images,
return_tensors="pt",
padding="max_length",
truncation=True,
max_length=512
)
# Ensure the processor tokenizes the answer correctly
labels = processor.tokenizer(
text=samples["answer"],
return_tensors="pt",
padding="max_length",
truncation=True,
max_length=512
)["input_ids"]
# Add labels to the input dictionary
inputs["labels"] = torch.tensor(labels)
# Debugging: Check if pixel_values is present and has the correct shape
print(f"Inputs dictionary: {inputs.keys()}")
if "pixel_values" in inputs:
print(f"Shape of pixel_values: {inputs['pixel_values'].shape}")
else:
print("pixel_values not found in inputs.")
return inputs
```
**Data preprocessing**
```
train_dataset = final_dataset['train']
test_dataset = final_dataset['test']
eval_dataset=final_dataset["validation"]
print(train_dataset.column_names)
```
`#output: ['image', 'question', 'answer', 'index_level_0']`
**Apply preprocessing function**
```
train_dataset = train_dataset.map(preprocess_function)
Now remove the unnecessary columns
train_dataset = train_dataset.remove_columns(["image", "question", "answer", "index_level_0"])
Set the format to PyTorch tensors
train_dataset.set_format(type="torch")
test_dataset = test_dataset.map(preprocess_function, remove_columns=["image", "question", "answer"])
test_dataset.set_format(type="torch")
eval_dataset=eval_dataset.map(preprocess_function, remove_columns=["image", "question", "answer"])
eval_dataset.set_format(type="torch")
print(train_dataset.column_names)
```
`#output: ['input_ids', 'attention_mask', 'pixel_values', 'labels']`
**Prepare for finetuning**
```
from trl import SFTConfig
from trl.trainer.utils import ConstantLengthDataset
#Check GPU compatibility with bfloat16
if compute_dtype == torch.float16 and use_4bit:
major, _ = torch.cuda.get_device_capability()
if major >= 8:
print("=" * 80)
print("Your GPU supports bfloat16: accelerate training with bf16=True")
print("=" * 80)
Load LoRA configuration
peft_config = LoraConfig(
lora_alpha=lora_alpha,
lora_dropout=lora_dropout,
r=lora_r,
bias="none",
task_type="CAUSAL_LM",
target_modules=["q_proj", "v_proj","k_proj", "o_proj"]
)
sft_config = SFTConfig(
# SFT-specific settings
max_seq_length=max_seq_length,
dataset_text_field="text",
output_dir=output_dir,
num_train_epochs=num_train_epochs,
per_device_train_batch_size=per_device_train_batch_size,
gradient_accumulation_steps=gradient_accumulation_steps,
optim=optim,
save_steps=save_steps,
logging_steps=logging_steps,
learning_rate=learning_rate,
weight_decay=weight_decay,
fp16=fp16,
bf16=bf16,
max_grad_norm=max_grad_norm,
max_steps=max_steps,
warmup_ratio=warmup_ratio,
group_by_length=False,
lr_scheduler_type=lr_scheduler_type,
report_to="tensorboard",
)
tokenizer.chat_template = "default"
def formatting_func(example):
if isinstance(example["input_ids"], torch.Tensor):
return example["input_ids"].squeeze().tolist()
elif isinstance(example["input_ids"], list): # Check if it's already a list
return example["input_ids"] # Return as is
elif isinstance(example["input_ids"], dict): # Check if it's a dictionary
return example["input_ids"].get("input_ids", []) # Attempt to extract input_ids if it's a dictionary
else:
return [] # Return an empty list in other cases
train_dataset = ConstantLengthDataset(
tokenizer,
train_dataset,
formatting_func=formatting_func,
seq_length=128,
)
trainer = SFTTrainer(
model=model,
train_dataset=train_dataset,
peft_config=peft_config,
tokenizer=tokenizer,
args=sft_config,
)
Train model
trainer.train()
``` | closed | 2025-02-05T17:45:32Z | 2025-02-06T10:07:25Z | https://github.com/huggingface/transformers/issues/36054 | [] | Md-Nasif03 | 7 |
graphql-python/graphene-sqlalchemy | graphql | 313 | using interface with polymorphic models | I have some polymorphic models
```
class A(Base):
name = Column()
class B(A):
for_b = Column()
class C(A):
for_c = Column()
```
I then made graphql object for them like this
```
class Ainterface(Interface):
name = String()
class Meta:
interfaces = (Node,)
class AType(SQLAlchemyObjectType):
class Meta:
interfaces = (Ainterface)
model = A
class BType(SQLAlchemyObjectType):
class Meta:
interfaces = (Ainterface)
model = B
class CType(SQLAlchemyObjectType):
class Meta:
interfaces = (Ainterface)
model = C
class AllObjects(Union):
@classmethod
def resolve_type(cls, instance, info):
if isinstance(instance, A):
return AType
if isinstance(instance, B):
return BType
if isinstance(instance, C):
return CType
class Meta:
types = (AType, BType,CType)
```
what hurts is the fact that even though name is shared in all the children of class A
i cannot do something like this
```
{
objects: {
name
... on BType {
for_b
}
... on CType {
for_c
}
}
}
```
to query the data, is there a way that I can work this to be able to query | closed | 2021-07-19T12:12:36Z | 2023-05-28T00:46:28Z | https://github.com/graphql-python/graphene-sqlalchemy/issues/313 | [
"enhancement"
] | kimutaiRop | 2 |
KevinMusgrave/pytorch-metric-learning | computer-vision | 180 | Distributed wrapper for loss functions | Try to make a DistributedLoss wrapper that makes loss functions work across multiple gpus, i.e. DistributedDataParallel.
@JohnGiorgi Maybe you have some thoughts. Is [this function in your library](https://github.com/JohnGiorgi/DeCLUTR/blob/1dda901fbfcfd9de5767daf06cbd5e8d1e81a86c/declutr/common/model_utils.py#L28) sufficient to make loss functions work with DistributedDataParallel? | closed | 2020-08-15T21:51:22Z | 2020-09-14T09:56:37Z | https://github.com/KevinMusgrave/pytorch-metric-learning/issues/180 | [
"enhancement",
"fixed in dev branch"
] | KevinMusgrave | 4 |
deezer/spleeter | tensorflow | 452 | [Bug] Some GPU Training related error | <!-- PLEASE READ THIS CAREFULLY :
- Any issue which does not respect following template or lack of information will be considered as invalid and automatically closed
- First check FAQ from wiki to see if your problem is not already known
-->
## Step to reproduce
1. Installed using `pip`
2. Ran `spleeter train -p configs\hi_config.json -d D:/hi`
3. Got an error see output
## Output
*new output*
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Apply unet for vocals_spectrogram
WARNING:tensorflow:From c:\program files\python37\lib\site-packages\tensorflow_core\python\ops\resource_variable_ops.py:1630: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version.
Instructions for updating:
If using Keras pass *_constraint arguments to layers.
INFO:tensorflow:Apply unet for piano_spectrogram
INFO:tensorflow:Apply unet for drums_spectrogram
INFO:tensorflow:Apply unet for bass_spectrogram
INFO:tensorflow:Apply unet for other_spectrogram
INFO:tensorflow:Apply unet for guitar_spectrogram
INFO:tensorflow:Apply unet for humperc_spectrogram
INFO:tensorflow:Apply unet for keys_spectrogram
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Create CheckpointSaverHook.
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Restoring parameters from hi\model.ckpt-0
Traceback (most recent call last):
File "c:\program files\python37\lib\site-packages\tensorflow_core\python\client\session.py", line 1365, in _do_call
return fn(*args)
File "c:\program files\python37\lib\site-packages\tensorflow_core\python\client\session.py", line 1350, in _run_fn
target_list, run_metadata)
File "c:\program files\python37\lib\site-packages\tensorflow_core\python\client\session.py", line 1443, in _call_tf_sessionrun
run_metadata)
tensorflow.python.framework.errors_impl.NotFoundError: Key batch_normalization_72/beta not found in checkpoint
[[{{node save/RestoreV2}}]]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "c:\program files\python37\lib\site-packages\tensorflow_core\python\training\saver.py", line 1290, in restore
{self.saver_def.filename_tensor_name: save_path})
File "c:\program files\python37\lib\site-packages\tensorflow_core\python\client\session.py", line 956, in run
run_metadata_ptr)
File "c:\program files\python37\lib\site-packages\tensorflow_core\python\client\session.py", line 1180, in _run
feed_dict_tensor, options, run_metadata)
File "c:\program files\python37\lib\site-packages\tensorflow_core\python\client\session.py", line 1359, in _do_run
run_metadata)
File "c:\program files\python37\lib\site-packages\tensorflow_core\python\client\session.py", line 1384, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.NotFoundError: Key batch_normalization_72/beta not found in checkpoint
[[node save/RestoreV2 (defined at c:\program files\python37\lib\site-packages\tensorflow_core\python\framework\ops.py:1748) ]]
Original stack trace for 'save/RestoreV2':
File "c:\program files\python37\lib\runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "c:\program files\python37\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:\Program Files\Python37\Scripts\spleeter.exe\__main__.py", line 9, in <module>
sys.exit(entrypoint())
File "c:\program files\python37\lib\site-packages\spleeter\__main__.py", line 54, in entrypoint
main(sys.argv)
File "c:\program files\python37\lib\site-packages\spleeter\__main__.py", line 46, in main
entrypoint(arguments, params)
File "c:\program files\python37\lib\site-packages\spleeter\commands\train.py", line 98, in entrypoint
evaluation_spec)
File "c:\program files\python37\lib\site-packages\tensorflow_estimator\python\estimator\training.py", line 473, in train_and_evaluate
return executor.run()
File "c:\program files\python37\lib\site-packages\tensorflow_estimator\python\estimator\training.py", line 613, in run
return self.run_local()
File "c:\program files\python37\lib\site-packages\tensorflow_estimator\python\estimator\training.py", line 714, in run_local
saving_listeners=saving_listeners)
File "c:\program files\python37\lib\site-packages\tensorflow_estimator\python\estimator\estimator.py", line 370, in train
loss = self._train_model(input_fn, hooks, saving_listeners)
File "c:\program files\python37\lib\site-packages\tensorflow_estimator\python\estimator\estimator.py", line 1161, in _train_model
return self._train_model_default(input_fn, hooks, saving_listeners)
File "c:\program files\python37\lib\site-packages\tensorflow_estimator\python\estimator\estimator.py", line 1195, in _train_model_default
saving_listeners)
File "c:\program files\python37\lib\site-packages\tensorflow_estimator\python\estimator\estimator.py", line 1490, in _train_with_estimator_spec
log_step_count_steps=log_step_count_steps) as mon_sess:
File "c:\program files\python37\lib\site-packages\tensorflow_core\python\training\monitored_session.py", line 584, in MonitoredTrainingSession
stop_grace_period_secs=stop_grace_period_secs)
File "c:\program files\python37\lib\site-packages\tensorflow_core\python\training\monitored_session.py", line 1014, in __init__
stop_grace_period_secs=stop_grace_period_secs)
File "c:\program files\python37\lib\site-packages\tensorflow_core\python\training\monitored_session.py", line 725, in __init__
self._sess = _RecoverableSession(self._coordinated_creator)
File "c:\program files\python37\lib\site-packages\tensorflow_core\python\training\monitored_session.py", line 1207, in __init__
_WrappedSession.__init__(self, self._create_session())
File "c:\program files\python37\lib\site-packages\tensorflow_core\python\training\monitored_session.py", line 1212, in _create_session
return self._sess_creator.create_session()
File "c:\program files\python37\lib\site-packages\tensorflow_core\python\training\monitored_session.py", line 878, in create_session
self.tf_sess = self._session_creator.create_session()
File "c:\program files\python37\lib\site-packages\tensorflow_core\python\training\monitored_session.py", line 638, in create_session
self._scaffold.finalize()
File "c:\program files\python37\lib\site-packages\tensorflow_core\python\training\monitored_session.py", line 237, in finalize
self._saver.build()
File "c:\program files\python37\lib\site-packages\tensorflow_core\python\training\saver.py", line 840, in build
self._build(self._filename, build_save=True, build_restore=True)
File "c:\program files\python37\lib\site-packages\tensorflow_core\python\training\saver.py", line 878, in _build
build_restore=build_restore)
File "c:\program files\python37\lib\site-packages\tensorflow_core\python\training\saver.py", line 502, in _build_internal
restore_sequentially, reshape)
File "c:\program files\python37\lib\site-packages\tensorflow_core\python\training\saver.py", line 381, in _AddShardedRestoreOps
name="restore_shard"))
File "c:\program files\python37\lib\site-packages\tensorflow_core\python\training\saver.py", line 328, in _AddRestoreOps
restore_sequentially)
File "c:\program files\python37\lib\site-packages\tensorflow_core\python\training\saver.py", line 575, in bulk_restore
return io_ops.restore_v2(filename_tensor, names, slices, dtypes)
File "c:\program files\python37\lib\site-packages\tensorflow_core\python\ops\gen_io_ops.py", line 1696, in restore_v2
name=name)
File "c:\program files\python37\lib\site-packages\tensorflow_core\python\framework\op_def_library.py", line 794, in _apply_op_helper
op_def=op_def)
File "c:\program files\python37\lib\site-packages\tensorflow_core\python\util\deprecation.py", line 507, in new_func
return func(*args, **kwargs)
File "c:\program files\python37\lib\site-packages\tensorflow_core\python\framework\ops.py", line 3357, in create_op
attrs, op_def, compute_device)
File "c:\program files\python37\lib\site-packages\tensorflow_core\python\framework\ops.py", line 3426, in _create_op_internal
op_def=op_def)
File "c:\program files\python37\lib\site-packages\tensorflow_core\python\framework\ops.py", line 1748, in __init__
self._traceback = tf_stack.extract_stack()
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "c:\program files\python37\lib\site-packages\tensorflow_core\python\training\saver.py", line 1300, in restore
names_to_keys = object_graph_key_mapping(save_path)
File "c:\program files\python37\lib\site-packages\tensorflow_core\python\training\saver.py", line 1618, in object_graph_key_mapping
object_graph_string = reader.get_tensor(trackable.OBJECT_GRAPH_PROTO_KEY)
File "c:\program files\python37\lib\site-packages\tensorflow_core\python\pywrap_tensorflow_internal.py", line 915, in get_tensor
return CheckpointReader_GetTensor(self, compat.as_bytes(tensor_str))
tensorflow.python.framework.errors_impl.NotFoundError: Key _CHECKPOINTABLE_OBJECT_GRAPH not found in checkpoint
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "c:\program files\python37\lib\runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "c:\program files\python37\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:\Program Files\Python37\Scripts\spleeter.exe\__main__.py", line 9, in <module>
File "c:\program files\python37\lib\site-packages\spleeter\__main__.py", line 54, in entrypoint
main(sys.argv)
File "c:\program files\python37\lib\site-packages\spleeter\__main__.py", line 46, in main
entrypoint(arguments, params)
File "c:\program files\python37\lib\site-packages\spleeter\commands\train.py", line 98, in entrypoint
evaluation_spec)
File "c:\program files\python37\lib\site-packages\tensorflow_estimator\python\estimator\training.py", line 473, in train_and_evaluate
return executor.run()
File "c:\program files\python37\lib\site-packages\tensorflow_estimator\python\estimator\training.py", line 613, in run
return self.run_local()
File "c:\program files\python37\lib\site-packages\tensorflow_estimator\python\estimator\training.py", line 714, in run_local
saving_listeners=saving_listeners)
File "c:\program files\python37\lib\site-packages\tensorflow_estimator\python\estimator\estimator.py", line 370, in train
loss = self._train_model(input_fn, hooks, saving_listeners)
File "c:\program files\python37\lib\site-packages\tensorflow_estimator\python\estimator\estimator.py", line 1161, in _train_model
return self._train_model_default(input_fn, hooks, saving_listeners)
File "c:\program files\python37\lib\site-packages\tensorflow_estimator\python\estimator\estimator.py", line 1195, in _train_model_default
saving_listeners)
File "c:\program files\python37\lib\site-packages\tensorflow_estimator\python\estimator\estimator.py", line 1490, in _train_with_estimator_spec
log_step_count_steps=log_step_count_steps) as mon_sess:
File "c:\program files\python37\lib\site-packages\tensorflow_core\python\training\monitored_session.py", line 584, in MonitoredTrainingSession
stop_grace_period_secs=stop_grace_period_secs)
File "c:\program files\python37\lib\site-packages\tensorflow_core\python\training\monitored_session.py", line 1014, in __init__
stop_grace_period_secs=stop_grace_period_secs)
File "c:\program files\python37\lib\site-packages\tensorflow_core\python\training\monitored_session.py", line 725, in __init__
self._sess = _RecoverableSession(self._coordinated_creator)
File "c:\program files\python37\lib\site-packages\tensorflow_core\python\training\monitored_session.py", line 1207, in __init__
_WrappedSession.__init__(self, self._create_session())
File "c:\program files\python37\lib\site-packages\tensorflow_core\python\training\monitored_session.py", line 1212, in _create_session
return self._sess_creator.create_session()
File "c:\program files\python37\lib\site-packages\tensorflow_core\python\training\monitored_session.py", line 878, in create_session
self.tf_sess = self._session_creator.create_session()
File "c:\program files\python37\lib\site-packages\tensorflow_core\python\training\monitored_session.py", line 647, in create_session
init_fn=self._scaffold.init_fn)
File "c:\program files\python37\lib\site-packages\tensorflow_core\python\training\session_manager.py", line 290, in prepare_session
config=config)
File "c:\program files\python37\lib\site-packages\tensorflow_core\python\training\session_manager.py", line 220, in _restore_checkpoint
saver.restore(sess, ckpt.model_checkpoint_path)
File "c:\program files\python37\lib\site-packages\tensorflow_core\python\training\saver.py", line 1306, in restore
err, "a Variable name or other graph key that is missing")
tensorflow.python.framework.errors_impl.NotFoundError: Restoring from checkpoint failed. This is most likely due to a Variable name or other graph key that is missing from the checkpoint. Please ensure that you have not altered the graph expected based on the checkpoint. Original error:
Key batch_normalization_72/beta not found in checkpoint
[[node save/RestoreV2 (defined at c:\program files\python37\lib\site-packages\tensorflow_core\python\framework\ops.py:1748) ]]
Original stack trace for 'save/RestoreV2':
File "c:\program files\python37\lib\runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "c:\program files\python37\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:\Program Files\Python37\Scripts\spleeter.exe\__main__.py", line 9, in <module>
sys.exit(entrypoint())
File "c:\program files\python37\lib\site-packages\spleeter\__main__.py", line 54, in entrypoint
main(sys.argv)
File "c:\program files\python37\lib\site-packages\spleeter\__main__.py", line 46, in main
entrypoint(arguments, params)
File "c:\program files\python37\lib\site-packages\spleeter\commands\train.py", line 98, in entrypoint
evaluation_spec)
File "c:\program files\python37\lib\site-packages\tensorflow_estimator\python\estimator\training.py", line 473, in train_and_evaluate
return executor.run()
File "c:\program files\python37\lib\site-packages\tensorflow_estimator\python\estimator\training.py", line 613, in run
return self.run_local()
File "c:\program files\python37\lib\site-packages\tensorflow_estimator\python\estimator\training.py", line 714, in run_local
saving_listeners=saving_listeners)
File "c:\program files\python37\lib\site-packages\tensorflow_estimator\python\estimator\estimator.py", line 370, in train
loss = self._train_model(input_fn, hooks, saving_listeners)
File "c:\program files\python37\lib\site-packages\tensorflow_estimator\python\estimator\estimator.py", line 1161, in _train_model
return self._train_model_default(input_fn, hooks, saving_listeners)
File "c:\program files\python37\lib\site-packages\tensorflow_estimator\python\estimator\estimator.py", line 1195, in _train_model_default
saving_listeners)
File "c:\program files\python37\lib\site-packages\tensorflow_estimator\python\estimator\estimator.py", line 1490, in _train_with_estimator_spec
log_step_count_steps=log_step_count_steps) as mon_sess:
File "c:\program files\python37\lib\site-packages\tensorflow_core\python\training\monitored_session.py", line 584, in MonitoredTrainingSession
stop_grace_period_secs=stop_grace_period_secs)
File "c:\program files\python37\lib\site-packages\tensorflow_core\python\training\monitored_session.py", line 1014, in __init__
stop_grace_period_secs=stop_grace_period_secs)
File "c:\program files\python37\lib\site-packages\tensorflow_core\python\training\monitored_session.py", line 725, in __init__
self._sess = _RecoverableSession(self._coordinated_creator)
File "c:\program files\python37\lib\site-packages\tensorflow_core\python\training\monitored_session.py", line 1207, in __init__
_WrappedSession.__init__(self, self._create_session())
File "c:\program files\python37\lib\site-packages\tensorflow_core\python\training\monitored_session.py", line 1212, in _create_session
return self._sess_creator.create_session()
File "c:\program files\python37\lib\site-packages\tensorflow_core\python\training\monitored_session.py", line 878, in create_session
self.tf_sess = self._session_creator.create_session()
File "c:\program files\python37\lib\site-packages\tensorflow_core\python\training\monitored_session.py", line 638, in create_session
self._scaffold.finalize()
File "c:\program files\python37\lib\site-packages\tensorflow_core\python\training\monitored_session.py", line 237, in finalize
self._saver.build()
File "c:\program files\python37\lib\site-packages\tensorflow_core\python\training\saver.py", line 840, in build
self._build(self._filename, build_save=True, build_restore=True)
File "c:\program files\python37\lib\site-packages\tensorflow_core\python\training\saver.py", line 878, in _build
build_restore=build_restore)
File "c:\program files\python37\lib\site-packages\tensorflow_core\python\training\saver.py", line 502, in _build_internal
restore_sequentially, reshape)
File "c:\program files\python37\lib\site-packages\tensorflow_core\python\training\saver.py", line 381, in _AddShardedRestoreOps
name="restore_shard"))
File "c:\program files\python37\lib\site-packages\tensorflow_core\python\training\saver.py", line 328, in _AddRestoreOps
restore_sequentially)
File "c:\program files\python37\lib\site-packages\tensorflow_core\python\training\saver.py", line 575, in bulk_restore
return io_ops.restore_v2(filename_tensor, names, slices, dtypes)
File "c:\program files\python37\lib\site-packages\tensorflow_core\python\ops\gen_io_ops.py", line 1696, in restore_v2
name=name)
File "c:\program files\python37\lib\site-packages\tensorflow_core\python\framework\op_def_library.py", line 794, in _apply_op_helper
op_def=op_def)
File "c:\program files\python37\lib\site-packages\tensorflow_core\python\util\deprecation.py", line 507, in new_func
return func(*args, **kwargs)
File "c:\program files\python37\lib\site-packages\tensorflow_core\python\framework\ops.py", line 3357, in create_op
attrs, op_def, compute_device)
File "c:\program files\python37\lib\site-packages\tensorflow_core\python\framework\ops.py", line 3426, in _create_op_internal
op_def=op_def)
File "c:\program files\python37\lib\site-packages\tensorflow_core\python\framework\ops.py", line 1748, in __init__
self._traceback = tf_stack.extract_stack()
## Environment
| ----------------- | ------------------------------- |
| OS | Windows 10 |
| Installation type| pip |
| RAM available | 32gb |
| Hardware spec | Ryzen 5 3600x/GTX 1060 6gb|
| open | 2020-07-10T13:17:14Z | 2020-08-03T05:22:40Z | https://github.com/deezer/spleeter/issues/452 | [
"bug",
"invalid"
] | wesleyr36 | 5 |
keras-team/keras | machine-learning | 21,080 | Will Keras welcome new backend contributions? | Suppose we could develop a brand-new backend for Keras, such as [Paddle](https://github.com/PaddlePaddle/Paddle). Would Keras welcome our new backend contributions?
In fact, I can understand that adding a new backend will increase the maintenance burden of the keras-team. Therefore, I would like to ask for keras-team's opinion. | closed | 2025-03-21T14:19:45Z | 2025-03-21T17:25:58Z | https://github.com/keras-team/keras/issues/21080 | [] | pass-lin | 1 |
ethanopp/fitly | dash | 8 | Strava activity stream has not attribute 'keys' - Cannot generate df_samples | First thing - I'm trying to understand the strava data scraping while I'm having issues with the below code.
```
def build_df_samples(self):
seconds = 1
streams = get_strava_client().get_activity_streams(self.id, types=types)
self.df_samples = pd.DataFrame(columns=types)
# Write each row to a dataframe
for item in types:
if item in streams.keys():
self.df_samples[item] = pd.Series(streams[item].data, index=None)
```
Which will return a "...has no attribute keys()" error when pulling strava data.
I can add a "if streams..." before the for loop as a workaround but then the df_sample is empty and the below line fails:
```
self.df_samples = self.df_samples.resample(str(seconds) + 'S').mean()
```
Throwing:
```Error pulling strava data: Only valid with DatetimeIndex, TimedeltaIndex or PeriodIndex, but got an instance of 'Float64Index'```
I'll keep digging and will revert back with my findings. I haven't finished investigating but it seems to be happening on all activities.
Second thing - As a side note, I have old data only with time and nothing else. That might trigger a problem down the line. Example:

| closed | 2020-08-13T00:43:45Z | 2020-08-13T23:58:26Z | https://github.com/ethanopp/fitly/issues/8 | [] | pierretamisier | 4 |
rthalley/dnspython | asyncio | 603 | dnspython on Android | I use kivy for creating Android apps. In my project I use exchangelib, it use dnspython. When I run project on Android, I see error : dns.resolver.NoResolverConfiguration: None. Can you check this issue? I see error only on Android, on unix/windows it works. | closed | 2020-11-17T20:03:03Z | 2020-11-19T16:59:02Z | https://github.com/rthalley/dnspython/issues/603 | [] | xx113355 | 1 |
huggingface/transformers | tensorflow | 36,100 | qwen2_5_vl processor padding side is wrong. | ### System Info



the padding side should be left as qwen2 vl do .
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
run conditional generation using qwen2_5_vl using flash attention 2 .
### Expected behavior

| closed | 2025-02-08T03:38:16Z | 2025-03-18T08:04:22Z | https://github.com/huggingface/transformers/issues/36100 | [
"bug"
] | habaohaba | 4 |
iperov/DeepFaceLab | deep-learning | 5,454 | SRC and DST swapped in latest build of the software (or perhaps... in every build?) | Not sure how or WHY this has not been mentioned to anyone (or how the developers didn't notice this or fix it)... but the SRC and DST files in the latest build of the software are SWAPPED.
Interestingly, the accompanying guide is completely CORRECT, describing the videos/pictures which are used to provide the data on the face TO BE SWAPPED onto the destination material, is called the SOURCE. However, in the latest release, it's called "DST".
Made me very confused when I followed a video guide and the uploader was describing the "DST VIDEO" as the one which provides all of the faces. I actually left a comment on the video telling that person that the descriptions he used were wrong, before I discovered that it is actually the software itself which is wrong.
-Rob | open | 2022-01-04T22:11:11Z | 2023-06-08T22:52:12Z | https://github.com/iperov/DeepFaceLab/issues/5454 | [] | FactsMachine-Rob | 2 |
saulpw/visidata | pandas | 2,248 | vdsql 'ibis' NameError | **Small description**
vdsql can't resolve `ibis`.
**Expected result**
vdsql is able to open a .ddb file.
**Actual result with screenshot**
```
saul.pw/VisiData v3.0.1
opening C:\Users\LeimgruberF\tmp\data.ddb as vdsql
Traceback (most recent call last):
File "C:\Users\LeimgruberF\.pyenv\pyenv-win\versions\3.10.11\lib\runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "C:\Users\LeimgruberF\.pyenv\pyenv-win\versions\3.10.11\lib\runpy.py", line 86, in _run_code
exec(code, run_globals)
File "C:\Users\LeimgruberF\lg\.venv\Scripts\visidata.exe\__main__.py", line 7, in <module>
File "C:\Users\LeimgruberF\lg\.venv\lib\site-packages\visidata\main.py", line 377, in vd_cli
rc = main_vd()
File "C:\Users\LeimgruberF\lg\.venv\lib\site-packages\visidata\main.py", line 299, in main_vd
vs = vd.openSource(p, create=True, **opts) or vd.fail(f'could not open {p}')
File "C:\Users\LeimgruberF\lg\.venv\lib\site-packages\visidata\_open.py", line 159, in openSource
vs = vd.openPath(Path(p), filetype=filetype, create=create) # convert to Path and recurse
File "C:\Users\LeimgruberF\lg\.venv\lib\site-packages\visidata\_open.py", line 139, in openPath
return openfunc(p)
File "C:\Users\LeimgruberF\lg\.venv\lib\site-packages\visidata\apps\vdsql\_ibis.py", line 68, in open_vdsql
setattr(ibis, p.ext, ext_aliases.get(p.ext))
NameError: name 'ibis' is not defined
```
**Steps to reproduce with sample data and a .vd**
Followed instructions from: https://github.com/saulpw/visidata/tree/develop/visidata/apps/vdsql#install-latest-release
Used Poetry instead of plain pip (poetry itself uses pip):
`pyproject.toml`:
```
[tool.poetry.dependencies]
...
vdsql = { path = "../dev/visidata/visidata/apps/vdsql", develop = false }
```
`visidata -f vdsql C:\Users\LeimgruberF\tmp\data.ddb`
**Additional context**
Python 3.10.11
VisiData 3.0.1 | closed | 2024-01-11T13:44:10Z | 2024-01-30T10:07:55Z | https://github.com/saulpw/visidata/issues/2248 | [
"bug",
"fixed"
] | fleimgruber | 4 |
ultralytics/ultralytics | python | 18,737 | How do you configure the hyperparameters with a `.yaml` file for finetuning a model? | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/ultralytics/ultralytics/discussions) and found no similar questions.
### Question
In the older `yolov5` repo, you can simply load a `scratch` yaml file to use as hyperparameters for finetuning a `yolov5` model. How do you do this with the `YOLO` class in the newer `ultralytics` repo? I only see the parameters defined as part of the method but no parameter to accept a hyperparameter `.yaml` file. Also, these default hyperparameters, do they change according to the model you choose? How do we know if they have to be changed from their default values or not depending on the model used?
### Additional
_No response_ | closed | 2025-01-17T12:20:40Z | 2025-01-20T18:53:24Z | https://github.com/ultralytics/ultralytics/issues/18737 | [
"question",
"detect"
] | mesllo-bc | 2 |
DistrictDataLabs/yellowbrick | matplotlib | 478 | Deprecate DecisionBoundaryVisualizer in 0.8 | Move `DecisionBoundaryVisualizer` to contrib for version 0.8 | closed | 2018-06-15T15:26:15Z | 2018-07-12T23:15:05Z | https://github.com/DistrictDataLabs/yellowbrick/issues/478 | [
"type: task",
"priority: high"
] | bbengfort | 1 |
litestar-org/litestar | asyncio | 3,520 | Enhancement: email service provider integration | ### Summary
In the Django world https://github.com/anymail/django-anymail is the kind of de-facto standard for integrating email service providers via a unified API. It would be nice to have an equivalent functionality in Litestar for the fullstack use case.
### Basic Example
_No response_
### Drawbacks and Impact
_No response_
### Unresolved questions
_No response_ | closed | 2024-05-24T22:48:19Z | 2025-03-20T15:54:43Z | https://github.com/litestar-org/litestar/issues/3520 | [
"Enhancement"
] | fkromer | 3 |
opengeos/leafmap | streamlit | 661 | NAIP STAC Item added to map as layer disappears on zoom out, needs a very close zoom level to appear. | <!-- Please search existing issues to avoid creating duplicates. -->
### Environment Information
- leafmap version: 0.30.1
- Python version: 3.10
- Operating System: Ubuntu
### Description
I want to zoom out and see my image on the map. But it disappears at far away zoom levels. Also the default zoom level that is set when the map opens won't show the image.
### What I Did
```python
import pystac_client
import planetary_computer
from shapely.geometry import Point
area_of_interest = Point((-121.034, 36.990)) # wright solar farm lon lat
catalog = pystac_client.Client.open(
"https://planetarycomputer.microsoft.com/api/stac/v1",
modifier=planetary_computer.sign_inplace,
)
range_old = "2010-01-01/2013-01-01"
range_new = "2020-01-01/2021-01-01"
search_old = catalog.search(
collections=["naip"], intersects=area_of_interest, datetime=range_old
)
search_new = catalog.search(
collections=["naip"], intersects=area_of_interest, datetime=range_new
)
items_old = search_old.item_collection()
items_new = search_new.item_collection()
print(f"{len(items_old)} Items found in the 'old' range")
print(f"{len(items_new)} Items found in the 'new' range")
map = leafmap.Map()
leafmap.stac_assets(collection="naip", item=items_old[0].id, titiler_endpoint="pc")
m = leafmap.Map()
m.add_stac_layer(
collection="naip",
item='ca_m_3612108_ne_10_1_20120622_20120904',
assets=["image"],
name="Old image 2012 before solar development",
)
m
```
| closed | 2024-01-16T22:29:36Z | 2024-02-06T15:32:44Z | https://github.com/opengeos/leafmap/issues/661 | [
"bug"
] | rbavery | 1 |
google/seq2seq | tensorflow | 379 | WMT 2016 En-De Download Link is broken | Hi,
the WMT'2016 en-de download link you provided before [seq2seq page](https://google.github.io/seq2seq/data/) is broken now, which is:
https://drive.google.com/open?id=0B_bZck-ksdkpM25jRUN2X2UxMm8
Appreciate it very much if you can update the link, thanks! | closed | 2021-12-03T12:19:35Z | 2022-02-22T03:48:30Z | https://github.com/google/seq2seq/issues/379 | [] | hemingkx | 1 |
plotly/dash-table | dash | 540 | Clean up requirements.txt | requirements.txt contains dependencies to a variety of packages - usage has already been update so that CI does not use this file anymore (ran into troubles vs. expected versions).
Thinking this file may be used in the Heroku deployment integrated in GitHub. Usage to be reviewed and cleaned up. | open | 2019-08-08T14:38:17Z | 2020-02-06T23:41:41Z | https://github.com/plotly/dash-table/issues/540 | [
"dash-type-bug",
"dash-stage-revision_needed",
"size: 1",
"dash-attribute-maintainability"
] | Marc-Andre-Rivet | 0 |
JaidedAI/EasyOCR | pytorch | 956 | Pytorch next() method no longer used | Hello,
Simple issue, I was trying to train a model using the custom trainer provided with EasyOCR but my torch version was too recent and the ".next()" method has been replaced with the ".__next__()" method witch caused trouble.
| open | 2023-02-28T15:02:41Z | 2023-03-05T12:33:12Z | https://github.com/JaidedAI/EasyOCR/issues/956 | [] | j4555 | 2 |
chiphuyen/stanford-tensorflow-tutorials | nlp | 106 | about using dataset for train/val | I have a question,
we use dataset api to run code in train and val preoces,
in every epoch we run the corresponding initializer
but in this way, every epoch the training set is the same(if I set shuffle seed to a constant)
but I want to shuffle them in every epoch, what should I do?(I set shuffle seed) | open | 2018-03-23T07:56:28Z | 2018-03-23T07:56:28Z | https://github.com/chiphuyen/stanford-tensorflow-tutorials/issues/106 | [] | ArchWing | 0 |
jupyter-book/jupyter-book | jupyter | 1,717 | Sections do not seem to build | ### Describe the bug
parts:
- caption: CAPTION
chapters:
- file: file1.ipynb
title: Section Heading
sections:
- file: file2.ipynb
title: Subsection
I expect the TOC to look like
Section Heading
Subsection
But only Section Heading appears. I can't find any arrow to open/close subsections.
### Reproduce the bug
parts:
- caption: CAPTION
chapters:
- file: file1.ipynb
title: Section Heading
sections:
- file: file2.ipynb
title: Subsection
### List your environment
jupyter-book 0.12.3 | open | 2022-04-27T18:42:07Z | 2023-08-27T02:06:11Z | https://github.com/jupyter-book/jupyter-book/issues/1717 | [
"bug",
":label: toc"
] | orifox | 2 |
deezer/spleeter | tensorflow | 171 | [Bug] Models downloaded but doesn't split the audio | <!-- PLEASE READ THIS CAREFULLY :
- Any issue which does not respect following template or lack of information will be considered as invalid and automatically closed
- First check FAQ from wiki to see if your problem is not already known
-->
## Description
Downloads the 4 stem model files but doesn't split the audio file
<!-- Give us a clear and concise description of the bug you are reporting. -->
## Step to reproduce
<!-- Indicates clearly steps to reproduce the behavior: -->
Wrote "spleeter separate -i 'strokes yolo.mp3' -p spleeter:4stems -o splits" in powershell
and it downloads the 4 stems model, but it doesn't proceed to split the track
## Output
INFO:spleeter:Downloading model archive https://github.com/deezer/spleeter/releases/download/v1.4.0/4stems.tar.gz
INFO:spleeter:Validating archive checksum
INFO:spleeter:Extracting downloaded 4stems archive
INFO:spleeter:4stems model file(s) extracted
Expected output:
INFO:spleeter:File splits\strokes yolo/guitar.wav written
INFO:spleeter:File splits\strokes yolo/vocals.wav written
INFO:spleeter:File splits\strokes yolo/bass.wav written
INFO:spleeter:File splits\strokes yolo/drums.wav written
## Environment
<!-- Fill the following table -->
| | |
| ----------------- | ------------------------------- |
| OS | Windows |
| Installation type | pip |
| RAM available | 8GB |
| Hardware spec | i5 3437U Integrated graphics |
## Additional context
Works fine when splitting with 2 stems
| closed | 2019-12-09T08:49:31Z | 2019-12-18T14:50:46Z | https://github.com/deezer/spleeter/issues/171 | [
"bug",
"invalid"
] | Nexius03 | 2 |
amidaware/tacticalrmm | django | 1,760 | Authentication Timeout in nats | **Server Info (please complete the following information):**
- OS: Ubuntu 22.04.3
- Browser: firefox
- RMM Version (as shown in top left of web UI): v0.17.5
**Installation Method:**
- [x] Standard
- [ ] Docker
**Describe the bug**:
1. this is fresh install to the new vps
2. client installs successfully, however i'm unable to control it
3. i can see constant timeouts in nats while in debug mode, here is http request trace between nginx and nat-server
T 127.0.0.1:55290 -> 127.0.0.1:9235 [AP] #147045
GET /natsws HTTP/1.1..Host: api.xxx.yyy..Upgrade: websocket..Connection: upgrade..X-Forwarded-Host: api.xx
x.yyy:443..X-Forwarded-For: a.b.c.d, e.f.g.h..X-Forwarded-Proto: https..X-Real-IP: a.b.c
.d..User-Agent: Go-http-client/1.1..Sec-WebSocket-Key: o+oHYyk8+8vjllySpL45Cg==..Sec-WebSocket-Version: 13..Sec-Web
socket-Extensions: permessage-deflate; server_no_context_takeover; client_no_context_takeover....
##
T 127.0.0.1:9235 -> 127.0.0.1:55290 [AP] #147047
HTTP/1.1 101 Switching Protocols..Upgrade: websocket..Connection: Upgrade..Sec-WebSocket-Accept: X8/o3ZdLtmvpkARZM9NP
IGbTaJc=....
##
T 127.0.0.1:9235 -> 127.0.0.1:55290 [AP] #147049
.~..INFO {"server_id":"NBC4ULHQ7VNMUFKYFPCUKB6CBYRSOKIQTPGDMJNS55MBAYULHAKB2OPZ","server_name":"NBC4ULHQ7VNMUFKYFPCUK
B6CBYRSOKIQTPGDMJNS55MBAYULHAKB2OPZ","version":"2.10.10","proto":1,"git_commit":"983a1d2","go":"go1.21.6","host":"127
.0.0.1","port":9235,"headers":true,"auth_required":true,"max_payload":67108864,"client_id":10345,"client_ip":"127.0.0
.1","xkey":"XBWN5XQCV3VEKQRH4BVQMJUHWF3OIBSICYW2DAPVTK7UVWUKEL7A3OUX"} ..
##
T 127.0.0.1:9235 -> 127.0.0.1:55290 [AP] #147051
..-ERR 'Authentication Timeout'......Authentication Timeout
| closed | 2024-02-21T15:15:02Z | 2024-02-21T15:26:49Z | https://github.com/amidaware/tacticalrmm/issues/1760 | [] | optiproplus | 1 |
comfyanonymous/ComfyUI | pytorch | 6,450 | problems with ComfyUI and TripoAPI | ### Your question
Following a tutorial, I a get the following error.
TripoAPI.image_to_3d() missing 8 required positional arguments: 'model_version', 'style', 'texture', 'pbr', 'model_seed', 'texture_seed', 'texture_quality', and 'texture_alignment'
my cumfyUI has worked on other workflows. But this simple Tripo workflow fails.
Any ideas?
Thanks
### Logs
```powershell
# ComfyUI Error Report
## Error Details
- **Node Type:** TripoAPI_Zho
- **Exception Type:** TypeError
- **Exception Message:** TripoAPI.text_to_3d() missing 7 required positional arguments: 'model_version', 'texture', 'pbr', 'image_seed', 'model_seed', 'texture_seed', and 'texture_quality'
## Stack Trace
File "C:\Users\Dell\OneDrive\Documents\AI\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\execution.py", line 317, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Dell\OneDrive\Documents\AI\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\execution.py", line 192, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Dell\OneDrive\Documents\AI\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\execution.py", line 169, in _map_node_over_list
process_inputs(input_dict, i)
File "C:\Users\Dell\OneDrive\Documents\AI\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\execution.py", line 158, in process_inputs
results.append(getattr(obj, func)(**inputs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Dell\OneDrive\Documents\AI\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\Tripo-API-ZHO-main\tripoapi.py", line 52, in generate_mesh
result = self.api.text_to_3d(prompt)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
```
## System Information
- **ComfyUI Version:** v0.2.2
- **Arguments:** ComfyUI\main.py --windows-standalone-build
- **OS:** nt
- **Python Version:** 3.11.9 (tags/v3.11.9:de54cf5, Apr 2 2024, 10:12:12) [MSC v.1938 64 bit (AMD64)]
- **Embedded Python:** true
- **PyTorch Version:** 2.4.1+cu124
## Devices
- **Name:** cuda:0 NVIDIA GeForce GTX 1050 : cudaMallocAsync
- **Type:** cuda
- **VRAM Total:** 4294836224
- **VRAM Free:** 3516871476
- **Torch VRAM Total:** 0
- **Torch VRAM Free:** 0
## Logs
```
2025-01-10 09:52:17,256 - root - INFO - Total VRAM 4096 MB, total RAM 16236 MB
2025-01-10 09:52:17,257 - root - INFO - pytorch version: 2.4.1+cu124
2025-01-10 09:52:17,288 - root - INFO - Set vram state to: NORMAL_VRAM
2025-01-10 09:52:17,288 - root - INFO - Device: cuda:0 NVIDIA GeForce GTX 1050 : cudaMallocAsync
2025-01-10 09:52:19,480 - root - INFO - Using pytorch cross attention
2025-01-10 09:52:25,426 - root - INFO - [Prompt Server] web root: C:\Users\Dell\OneDrive\Documents\AI\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\web
2025-01-10 09:52:27,615 - root - INFO -
Import times for custom nodes:
2025-01-10 09:52:27,615 - root - INFO - 0.0 seconds: C:\Users\Dell\OneDrive\Documents\AI\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\websocket_image_save.py
2025-01-10 09:52:27,624 - root - INFO - 0.0 seconds: C:\Users\Dell\OneDrive\Documents\AI\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\Tripo-API-ZHO-main
2025-01-10 09:52:27,624 - root - INFO - 0.0 seconds: C:\Users\Dell\OneDrive\Documents\AI\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\Tripo-API-ZHO
2025-01-10 09:52:27,625 - root - INFO - 0.0 seconds: C:\Users\Dell\OneDrive\Documents\AI\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Tripo
2025-01-10 09:52:27,626 - root - INFO - 0.5 seconds: C:\Users\Dell\OneDrive\Documents\AI\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager
2025-01-10 09:52:27,627 - root - INFO -
2025-01-10 09:52:27,646 - root - INFO - Starting server
2025-01-10 09:52:27,647 - root - INFO - To see the GUI go to: http://127.0.0.1:8188
2025-01-10 09:55:18,368 - root - INFO - got prompt
2025-01-10 09:55:18,646 - root - ERROR - !!! Exception during processing !!! TripoAPI.image_to_3d() missing 8 required positional arguments: 'model_version', 'style', 'texture', 'pbr', 'model_seed', 'texture_seed', 'texture_quality', and 'texture_alignment'
2025-01-10 09:55:18,669 - root - ERROR - Traceback (most recent call last):
File "C:\Users\Dell\OneDrive\Documents\AI\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\execution.py", line 317, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Dell\OneDrive\Documents\AI\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\execution.py", line 192, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Dell\OneDrive\Documents\AI\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\execution.py", line 169, in _map_node_over_list
process_inputs(input_dict, i)
File "C:\Users\Dell\OneDrive\Documents\AI\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\execution.py", line 158, in process_inputs
results.append(getattr(obj, func)(**inputs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Dell\OneDrive\Documents\AI\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\Tripo-API-ZHO-main\tripoapi.py", line 63, in generate_mesh
result = self.api.image_to_3d(image_data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: TripoAPI.image_to_3d() missing 8 required positional arguments: 'model_version', 'style', 'texture', 'pbr', 'model_seed', 'texture_seed', 'texture_quality', and 'texture_alignment'
2025-01-10 09:55:18,671 - root - INFO - Prompt executed in 0.30 seconds
2025-01-10 09:58:26,697 - root - INFO - got prompt
2025-01-10 09:58:26,764 - root - ERROR - !!! Exception during processing !!! TripoAPI.image_to_3d() missing 8 required positional arguments: 'model_version', 'style', 'texture', 'pbr', 'model_seed', 'texture_seed', 'texture_quality', and 'texture_alignment'
2025-01-10 09:58:26,766 - root - ERROR - Traceback (most recent call last):
File "C:\Users\Dell\OneDrive\Documents\AI\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\execution.py", line 317, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Dell\OneDrive\Documents\AI\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\execution.py", line 192, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Dell\OneDrive\Documents\AI\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\execution.py", line 169, in _map_node_over_list
process_inputs(input_dict, i)
File "C:\Users\Dell\OneDrive\Documents\AI\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\execution.py", line 158, in process_inputs
results.append(getattr(obj, func)(**inputs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Dell\OneDrive\Documents\AI\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\Tripo-API-ZHO-main\tripoapi.py", line 63, in generate_mesh
result = self.api.image_to_3d(image_data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: TripoAPI.image_to_3d() missing 8 required positional arguments: 'model_version', 'style', 'texture', 'pbr', 'model_seed', 'texture_seed', 'texture_quality', and 'texture_alignment'
2025-01-10 09:58:26,778 - root - INFO - Prompt executed in 0.08 seconds
2025-01-13 14:39:21,202 - root - INFO - got prompt
2025-01-13 14:39:21,590 - root - ERROR - !!! Exception during processing !!! TripoAPI.image_to_3d() missing 8 required positional arguments: 'model_version', 'style', 'texture', 'pbr', 'model_seed', 'texture_seed', 'texture_quality', and 'texture_alignment'
2025-01-13 14:39:21,601 - root - ERROR - Traceback (most recent call last):
File "C:\Users\Dell\OneDrive\Documents\AI\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\execution.py", line 317, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Dell\OneDrive\Documents\AI\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\execution.py", line 192, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Dell\OneDrive\Documents\AI\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\execution.py", line 169, in _map_node_over_list
process_inputs(input_dict, i)
File "C:\Users\Dell\OneDrive\Documents\AI\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\execution.py", line 158, in process_inputs
results.append(getattr(obj, func)(**inputs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Dell\OneDrive\Documents\AI\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\Tripo-API-ZHO-main\tripoapi.py", line 63, in generate_mesh
result = self.api.image_to_3d(image_data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: TripoAPI.image_to_3d() missing 8 required positional arguments: 'model_version', 'style', 'texture', 'pbr', 'model_seed', 'texture_seed', 'texture_quality', and 'texture_alignment'
2025-01-13 14:39:21,605 - root - INFO - Prompt executed in 0.30 seconds
2025-01-13 14:40:45,618 - root - INFO - got prompt
2025-01-13 14:40:45,693 - root - ERROR - !!! Exception during processing !!! TripoAPI.image_to_3d() missing 8 required positional arguments: 'model_version', 'style', 'texture', 'pbr', 'model_seed', 'texture_seed', 'texture_quality', and 'texture_alignment'
2025-01-13 14:40:45,698 - root - ERROR - Traceback (most recent call last):
File "C:\Users\Dell\OneDrive\Documents\AI\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\execution.py", line 317, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Dell\OneDrive\Documents\AI\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\execution.py", line 192, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Dell\OneDrive\Documents\AI\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\execution.py", line 169, in _map_node_over_list
process_inputs(input_dict, i)
File "C:\Users\Dell\OneDrive\Documents\AI\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\execution.py", line 158, in process_inputs
results.append(getattr(obj, func)(**inputs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Dell\OneDrive\Documents\AI\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\Tripo-API-ZHO-main\tripoapi.py", line 63, in generate_mesh
result = self.api.image_to_3d(image_data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: TripoAPI.image_to_3d() missing 8 required positional arguments: 'model_version', 'style', 'texture', 'pbr', 'model_seed', 'texture_seed', 'texture_quality', and 'texture_alignment'
2025-01-13 14:40:45,713 - root - INFO - Prompt executed in 0.09 seconds
2025-01-13 14:49:54,668 - root - INFO - got prompt
2025-01-13 14:49:54,676 - root - ERROR - !!! Exception during processing !!! TripoAPI.text_to_3d() missing 7 required positional arguments: 'model_version', 'texture', 'pbr', 'image_seed', 'model_seed', 'texture_seed', and 'texture_quality'
2025-01-13 14:49:54,682 - root - ERROR - Traceback (most recent call last):
File "C:\Users\Dell\OneDrive\Documents\AI\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\execution.py", line 317, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Dell\OneDrive\Documents\AI\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\execution.py", line 192, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Dell\OneDrive\Documents\AI\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\execution.py", line 169, in _map_node_over_list
process_inputs(input_dict, i)
File "C:\Users\Dell\OneDrive\Documents\AI\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\execution.py", line 158, in process_inputs
results.append(getattr(obj, func)(**inputs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Dell\OneDrive\Documents\AI\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\Tripo-API-ZHO-main\tripoapi.py", line 52, in generate_mesh
result = self.api.text_to_3d(prompt)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: TripoAPI.text_to_3d() missing 7 required positional arguments: 'model_version', 'texture', 'pbr', 'image_seed', 'model_seed', 'texture_seed', and 'texture_quality'
2025-01-13 14:49:54,706 - root - INFO - Prompt executed in 0.04 seconds
2025-01-13 14:49:55,100 - asyncio - ERROR - Task exception was never retrieved
future: <Task finished name='Task-173' coro=<RequestHandler.start() done, defined at C:\Users\Dell\OneDrive\Documents\AI\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\Lib\site-packages\aiohttp\web_protocol.py:493> exception=AssertionError()>
Traceback (most recent call last):
File "C:\Users\Dell\OneDrive\Documents\AI\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\Lib\site-packages\aiohttp\web_protocol.py", line 536, in start
request = self._request_factory(message, payload, self, writer, handler)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Dell\OneDrive\Documents\AI\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\Lib\site-packages\aiohttp\web_app.py", line 479, in _make_request
return _cls(
^^^^^
File "C:\Users\Dell\OneDrive\Documents\AI\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\Lib\site-packages\aiohttp\web_request.py", line 841, in __init__
super().__init__(*args, **kwargs)
File "C:\Users\Dell\OneDrive\Documents\AI\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\Lib\site-packages\aiohttp\web_request.py", line 196, in __init__
assert transport is not None
^^^^^^^^^^^^^^^^^^^^^
AssertionError
```
## Attached Workflow
Please make sure that workflow does not contain any sensitive information such as API keys or passwords.
```
{"last_node_id":3,"last_link_id":2,"nodes":[{"id":3,"type":"TripoGLBViewer_ZHO","pos":{"0":448.3926696777344,"1":416.67193603515625},"size":[600,500],"flags":{},"order":1,"mode":0,"inputs":[{"name":"mesh","type":"MESH_GLB","link":2}],"outputs":[],"properties":{"Node name for S&R":"TripoGLBViewer_ZHO"},"widgets_values":[null]},{"id":2,"type":"TripoAPI_Zho","pos":{"0":-15,"1":275},"size":{"0":400,"1":200},"flags":{},"order":0,"mode":0,"inputs":[{"name":"image","type":"IMAGE","link":null}],"outputs":[{"name":"MESH_GLB","type":"MESH_GLB","links":[2],"slot_index":0,"shape":3},{"name":"TASK_ID","type":"TASK_ID","links":null,"shape":3,"slot_index":1}],"properties":{"Node name for S&R":"TripoAPI_Zho"},"widgets_values":["text-to-3d","team mascot\n"]}],"links":[[2,2,0,3,0,"MESH_GLB"]],"groups":[],"config":{},"extra":{"ds":{"scale":1,"offset":[456.4000244140625,-79.60003662109375]}},"version":0.4}
```
## Additional Context
(Please add any additional context or steps to reproduce the error here)
```
### Other
_No response_ | closed | 2025-01-13T07:53:39Z | 2025-01-14T06:10:51Z | https://github.com/comfyanonymous/ComfyUI/issues/6450 | [
"User Support"
] | davidwkleber | 2 |
ydataai/ydata-profiling | jupyter | 1,425 | ZeroDivision Error executing ProfileReport | ### Current Behaviour
ZeroDivisionError while executing
```
report = ProfileReport(df, title='None', correlations=None)
```
with a spark dataframe.
```
---------------------------------------------------------------------------
ZeroDivisionError Traceback (most recent call last)
<command-4043649982577705> in <cell line: 37>()
35
36 # print(df.schema)# display(df)
---> 37 profile_data(spark, data_df=df, start_date=start_date)
38 # results = profile_data(spark, data_df=df, start_date=start_date)
<command-4043649982577705> in profile_data(spark, data_df, start_date)
12
13 # json_object = report.to_json()
---> 14 report_html = report.to_html()
15 displayHTML(report_html)
16 # jobj = json.loads(json_object)
/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.9/site-packages/typeguard/__init__.py in wrapper(*args, **kwargs)
1031 memo = _CallMemo(python_func, _localns, args=args, kwargs=kwargs)
1032 check_argument_types(memo)
-> 1033 retval = func(*args, **kwargs)
1034 try:
1035 check_return_type(retval, memo)
/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.9/site-packages/ydata_profiling/profile_report.py in to_html(self)
468
469 """
--> 470 return self.html
471
472 def to_json(self) -> str:
/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.9/site-packages/typeguard/__init__.py in wrapper(*args, **kwargs)
1031 memo = _CallMemo(python_func, _localns, args=args, kwargs=kwargs)
1032 check_argument_types(memo)
-> 1033 retval = func(*args, **kwargs)
1034 try:
1035 check_return_type(retval, memo)
/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.9/site-packages/ydata_profiling/profile_report.py in html(self)
275 def html(self) -> str:
276 if self._html is None:
--> 277 self._html = self._render_html()
278 return self._html
279
/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.9/site-packages/typeguard/__init__.py in wrapper(*args, **kwargs)
1031 memo = _CallMemo(python_func, _localns, args=args, kwargs=kwargs)
1032 check_argument_types(memo)
-> 1033 retval = func(*args, **kwargs)
1034 try:
1035 check_return_type(retval, memo)
/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.9/site-packages/ydata_profiling/profile_report.py in _render_html(self)
383 from ydata_profiling.report.presentation.flavours import HTMLReport
384
--> 385 report = self.report
386
387 with tqdm(
/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.9/site-packages/typeguard/__init__.py in wrapper(*args, **kwargs)
1031 memo = _CallMemo(python_func, _localns, args=args, kwargs=kwargs)
1032 check_argument_types(memo)
-> 1033 retval = func(*args, **kwargs)
1034 try:
1035 check_return_type(retval, memo)
/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.9/site-packages/ydata_profiling/profile_report.py in report(self)
269 def report(self) -> Root:
270 if self._report is None:
--> 271 self._report = get_report_structure(self.config, self.description_set)
272 return self._report
273
/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.9/site-packages/typeguard/__init__.py in wrapper(*args, **kwargs)
1031 memo = _CallMemo(python_func, _localns, args=args, kwargs=kwargs)
1032 check_argument_types(memo)
-> 1033 retval = func(*args, **kwargs)
1034 try:
1035 check_return_type(retval, memo)
/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.9/site-packages/ydata_profiling/profile_report.py in description_set(self)
251 def description_set(self) -> BaseDescription:
252 if self._description_set is None:
--> 253 self._description_set = describe_df(
254 self.config,
255 self.df,
/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.9/site-packages/ydata_profiling/model/describe.py in describe(config, df, summarizer, typeset, sample)
72 # Variable-specific
73 pbar.total += len(df.columns)
---> 74 series_description = get_series_descriptions(
75 config, df, summarizer, typeset, pbar
76 )
/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.9/site-packages/multimethod/__init__.py in __call__(self, *args, **kwargs)
313 func = self[tuple(func(arg) for func, arg in zip(self.type_checkers, args))]
314 try:
--> 315 return func(*args, **kwargs)
316 except TypeError as ex:
317 raise DispatchError(f"Function {func.__code__}") from ex
/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.9/site-packages/ydata_profiling/model/spark/summary_spark.py in spark_get_series_descriptions(config, df, summarizer, typeset, pbar)
90 args = [(name, df) for name in df.columns]
91 with multiprocessing.pool.ThreadPool(12) as executor:
---> 92 for i, (column, description) in enumerate(
93 executor.imap_unordered(multiprocess_1d, args)
94 ):
/usr/lib/python3.9/multiprocessing/pool.py in next(self, timeout)
868 if success:
869 return value
--> 870 raise value
871
872 __next__ = next # XXX
/usr/lib/python3.9/multiprocessing/pool.py in worker(inqueue, outqueue, initializer, initargs, maxtasks, wrap_exception)
123 job, i, func, args, kwds = task
124 try:
--> 125 result = (True, func(*args, **kwds))
126 except Exception as e:
127 if wrap_exception and func is not _helper_reraises_exception:
/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.9/site-packages/ydata_profiling/model/spark/summary_spark.py in multiprocess_1d(args)
86 """
87 column, df = args
---> 88 return column, describe_1d(config, df.select(column), summarizer, typeset)
89
90 args = [(name, df) for name in df.columns]
/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.9/site-packages/multimethod/__init__.py in __call__(self, *args, **kwargs)
313 func = self[tuple(func(arg) for func, arg in zip(self.type_checkers, args))]
314 try:
--> 315 return func(*args, **kwargs)
316 except TypeError as ex:
317 raise DispatchError(f"Function {func.__code__}") from ex
/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.9/site-packages/ydata_profiling/model/spark/summary_spark.py in spark_describe_1d(config, series, summarizer, typeset)
62 }[dtype]
63
---> 64 return summarizer.summarize(config, series, dtype=vtype)
65
66
/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.9/site-packages/ydata_profiling/model/summarizer.py in summarize(self, config, series, dtype)
40 object:
41 """
---> 42 _, _, summary = self.handle(str(dtype), config, series, {"type": str(dtype)})
43 return summary
44
/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.9/site-packages/ydata_profiling/model/handler.py in handle(self, dtype, *args, **kwargs)
60 funcs = self.mapping.get(dtype, [])
61 op = compose(funcs)
---> 62 return op(*args)
63
64
/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.9/site-packages/ydata_profiling/model/handler.py in func2(*x)
19 return f(*x)
20 else:
---> 21 return f(*res)
22
23 return func2
/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.9/site-packages/ydata_profiling/model/handler.py in func2(*x)
19 return f(*x)
20 else:
---> 21 return f(*res)
22
23 return func2
/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.9/site-packages/ydata_profiling/model/handler.py in func2(*x)
15 def func(f: Callable, g: Callable) -> Callable:
16 def func2(*x) -> Any:
---> 17 res = g(*x)
18 if type(res) == bool:
19 return f(*x)
/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.9/site-packages/multimethod/__init__.py in __call__(self, *args, **kwargs)
313 func = self[tuple(func(arg) for func, arg in zip(self.type_checkers, args))]
314 try:
--> 315 return func(*args, **kwargs)
316 except TypeError as ex:
317 raise DispatchError(f"Function {func.__code__}") from ex
/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.9/site-packages/ydata_profiling/model/spark/describe_supported_spark.py in describe_supported_spark(config, series, summary)
29 summary["is_unique"] = n_unique == count
30 summary["n_unique"] = n_unique
---> 31 summary["p_unique"] = n_unique / count
32
33 return config, series, summary
ZeroDivisionError: division by zero
```
### Expected Behaviour
Get a data profiling report in json or html
### Data Description

### Code that reproduces the bug
```Python
from ydata_profiling import ProfileReport
df = spark.read.format('delta').load(f"abfss://***@***.dfs.core.windows.net/***")
df = df.withColumn("FakeNum", lit(0.0)) # Suggestions by other users to workaround the error.
# Suggestion by other users to set correlation to None.
report = ProfileReport(data_df, title='None', correlations=None)
# Both methods below throw the same error.
json_object = report.to_json()
report_html = report.to_html()
```
### pandas-profiling version
v4.5.0
### Dependencies
```Text
arrow==1.2.3
json5==0.9.14
jsonpatch==1.33
jsonpointer==2.4
jsonschema==4.19.0
jsonschema-specifications==2023.7.1
makefun==1.15.1
MarkupSafe==2.0.1
matplotlib==3.4.3
matplotlib-inline==0.1.2
numpy==1.22.4
pandas==1.3.4
pandocfilters==1.4.3
ydata-profiling==4.5.0
zipp==3.16.2
```
### OS
Databricks
### Checklist
- [X] There is not yet another bug report for this issue in the [issue tracker](https://github.com/ydataai/pandas-profiling/issues)
- [X] The problem is reproducible from this bug report. [This guide](http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports) can help to craft a minimal bug report.
- [X] The issue has not been resolved by the entries listed under [Common Issues](https://pandas-profiling.ydata.ai/docs/master/pages/support_contrib/common_issues.html). | closed | 2023-08-11T15:44:31Z | 2023-08-24T15:41:48Z | https://github.com/ydataai/ydata-profiling/issues/1425 | [
"needs-triage"
] | mldhamacher | 2 |
dpgaspar/Flask-AppBuilder | rest-api | 1,463 | Arrays throw: jinja2.exceptions.UndefinedError: 'None' has no attribute 'label' | ### Environment
Flask-Appbuilder version: 3.0.1
Arch Linux with Python 3.8.5 (Python 2 not installed)
pip freeze output:
```
apispec==3.3.2
attrs==20.1.0
Babel==2.8.0
click==7.1.2
colorama==0.4.3
defusedxml==0.6.0
dnspython==2.0.0
email-validator==1.1.1
Flask==1.1.2
Flask-AppBuilder==3.0.1
Flask-Babel==1.0.0
Flask-JWT-Extended==3.24.1
Flask-Login==0.4.1
Flask-OpenID==1.2.5
Flask-SQLAlchemy==2.4.4
Flask-WTF==0.14.3
idna==2.10
itsdangerous==1.1.0
Jinja2==2.11.2
jsonschema==3.2.0
MarkupSafe==1.1.1
marshmallow==3.7.1
marshmallow-enum==1.5.1
marshmallow-sqlalchemy==0.23.1
prison==0.1.3
PyJWT==1.7.1
pyrsistent==0.16.0
python-dateutil==2.8.1
python3-openid==3.2.0
pytz==2020.1
PyYAML==5.3.1
six==1.15.0
SQLAlchemy==1.3.19
SQLAlchemy-Utils==0.36.8
Werkzeug==1.0.1
WTForms==2.3.3
```
### Describe the expected results
Users should be able to view and edit Arrays.
### Describe the actual results
Tell us what happens instead.
An exception is thrown whenever I use an array.
```pytb
2020-09-01 14:02:01,978:ERROR:app:Exception on /softwareproductview/add [GET]
Traceback (most recent call last):
File "/home/konrad/projekte/hito/database-frontend/venv/lib/python3.8/site-packages/flask/app.py", line 2447, in wsgi_app
response = self.full_dispatch_request()
File "/home/konrad/projekte/hito/database-frontend/venv/lib/python3.8/site-packages/flask/app.py", line 1952, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/home/konrad/projekte/hito/database-frontend/venv/lib/python3.8/site-packages/flask/app.py", line 1821, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/home/konrad/projekte/hito/database-frontend/venv/lib/python3.8/site-packages/flask/_compat.py", line 39, in reraise
raise value
File "/home/konrad/projekte/hito/database-frontend/venv/lib/python3.8/site-packages/flask/app.py", line 1950, in full_dispatch_request
rv = self.dispatch_request()
File "/home/konrad/projekte/hito/database-frontend/venv/lib/python3.8/site-packages/flask/app.py", line 1936, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/home/konrad/projekte/hito/database-frontend/venv/lib/python3.8/site-packages/flask_appbuilder/security/decorators.py", line 109, in wraps
return f(self, *args, **kwargs)
File "/home/konrad/projekte/hito/database-frontend/venv/lib/python3.8/site-packages/flask_appbuilder/views.py", line 588, in add
return self.render_template(
File "/home/konrad/projekte/hito/database-frontend/venv/lib/python3.8/site-packages/flask_appbuilder/baseviews.py", line 280, in render_template
return render_template(
File "/home/konrad/projekte/hito/database-frontend/venv/lib/python3.8/site-packages/flask/templating.py", line 137, in render_template
return _render(
File "/home/konrad/projekte/hito/database-frontend/venv/lib/python3.8/site-packages/flask/templating.py", line 120, in _render
rv = template.render(context)
File "/home/konrad/projekte/hito/database-frontend/venv/lib/python3.8/site-packages/jinja2/environment.py", line 1090, in render
self.environment.handle_exception()
File "/home/konrad/projekte/hito/database-frontend/venv/lib/python3.8/site-packages/jinja2/environment.py", line 832, in handle_exception
reraise(*rewrite_traceback_stack(source=source))
File "/home/konrad/projekte/hito/database-frontend/venv/lib/python3.8/site-packages/jinja2/_compat.py", line 28, in reraise
raise value.with_traceback(tb)
File "/home/konrad/projekte/hito/database-frontend/venv/lib/python3.8/site-packages/flask_appbuilder/templates/appbuilder/general/model/add.html", line 2, in top-level template code
{% import 'appbuilder/general/lib.html' as lib %}
File "/home/konrad/projekte/hito/database-frontend/venv/lib/python3.8/site-packages/flask_appbuilder/templates/appbuilder/base.html", line 1, in top-level template code
{% extends base_template %}
File "/home/konrad/projekte/hito/database-frontend/app/templates/base.html", line 1, in top-level template code
{% extends 'appbuilder/baselayout.html' %}
File "/home/konrad/projekte/hito/database-frontend/venv/lib/python3.8/site-packages/flask_appbuilder/templates/appbuilder/baselayout.html", line 2, in top-level template code
{% import 'appbuilder/baselib.html' as baselib %}
File "/home/konrad/projekte/hito/database-frontend/venv/lib/python3.8/site-packages/flask_appbuilder/templates/appbuilder/init.html", line 46, in top-level template code
{% block body %}
File "/home/konrad/projekte/hito/database-frontend/venv/lib/python3.8/site-packages/flask_appbuilder/templates/appbuilder/baselayout.html", line 19, in block "body"
{% block content %}
File "/home/konrad/projekte/hito/database-frontend/venv/lib/python3.8/site-packages/flask_appbuilder/templates/appbuilder/general/model/add.html", line 7, in block "content"
{% block add_form %}
File "/home/konrad/projekte/hito/database-frontend/venv/lib/python3.8/site-packages/flask_appbuilder/templates/appbuilder/general/model/add.html", line 8, in block "add_form"
{{ widgets.get('add')(form_action=form_action)|safe }}
File "/home/konrad/projekte/hito/database-frontend/venv/lib/python3.8/site-packages/flask_appbuilder/widgets.py", line 37, in __call__
return template.render(args)
File "/home/konrad/projekte/hito/database-frontend/venv/lib/python3.8/site-packages/jinja2/environment.py", line 1090, in render
self.environment.handle_exception()
File "/home/konrad/projekte/hito/database-frontend/venv/lib/python3.8/site-packages/jinja2/environment.py", line 832, in handle_exception
reraise(*rewrite_traceback_stack(source=source))
File "/home/konrad/projekte/hito/database-frontend/venv/lib/python3.8/site-packages/jinja2/_compat.py", line 28, in reraise
raise value.with_traceback(tb)
File "/home/konrad/projekte/hito/database-frontend/venv/lib/python3.8/site-packages/flask_appbuilder/templates/appbuilder/general/widgets/form.html", line 45, in top-level template code
{{ lib.render_field(field, begin_sep_label, end_sep_label, begin_sep_field, end_sep_field) }}
File "/home/konrad/projekte/hito/database-frontend/venv/lib/python3.8/site-packages/jinja2/runtime.py", line 679, in _invoke
rv = self._func(*arguments)
File "/home/konrad/projekte/hito/database-frontend/venv/lib/python3.8/site-packages/flask_appbuilder/templates/appbuilder/general/lib.html", line 230, in template
{{ field.label.text }}
File "/home/konrad/projekte/hito/database-frontend/venv/lib/python3.8/site-packages/jinja2/environment.py", line 471, in getattr
return getattr(obj, attribute)
jinja2.exceptions.UndefinedError: 'None' has no attribute 'label'
2020-09-01 14:02:01,981:INFO:werkzeug:127.0.0.1 - - [01/Sep/2020 14:02:01] "GET /softwareproductview/add HTTP/1.1" 500 -
```
### Steps to reproduce
#### views.py
```python
from flask_appbuilder import Model
from sqlalchemy import Column, String
import sqlalchemy.dialects.postgresql as pg
class Softwareproduct(Model):
suffix = Column(String(200), primary_key=True)
databasesystems = Column(pg.ARRAY(String),nullable=False)
```
#### models.py
```python
from flask import render_template
from flask_appbuilder.models.sqla.interface import SQLAInterface
from flask_appbuilder import ModelView, ModelRestApi
from . import appbuilder, db
from .models import Softwareproduct
class SoftwareproductView(ModelView):
datamodel = SQLAInterface(Softwareproduct)
@appbuilder.app.errorhandler(404)
def page_not_found(e):
return (
render_template(
"404.html", base_template=appbuilder.base_template, appbuilder=appbuilder
),
404,
)
appbuilder.add_view(
SoftwareproductView,
"Software Product",
icon = "fa-folder-open-o",
category = "Software Product",
category_icon = "fa-envelope"
)
```
Call `flask run` and then open `http://127.0.0.1:5000/softwareproductview/add` | closed | 2020-09-01T12:09:50Z | 2020-12-12T20:55:26Z | https://github.com/dpgaspar/Flask-AppBuilder/issues/1463 | [
"stale"
] | KonradHoeffner | 1 |
slackapi/python-slack-sdk | asyncio | 1,063 | Expose slack bookmarks api methods | Requesting support for the bookmarks.add and bookmarks.list endpoints.
I can see using a web browser developer tools sniffer the API call that is being made when adding a bookmark to a slack channel but it appears you do not currently support directly calling this API endpoint
```
>>> response = slack_client.api_call(api_method='bookmarks.add', json={'channel': my_channel, 'link': my_link, 'title': my_title})
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
.
.
.
.
File "/usr/local/lib/python3.9/site-packages/slack_sdk/web/slack_response.py", line 201, in validate
raise e.SlackApiError(message=msg, response=self)
slack_sdk.errors.SlackApiError: The request to the Slack API failed.
The server responded with: {'ok': False, 'error': 'not_allowed_token_type'}
```
### Category (place an `x` in each of the `[ ]`)
- [ x] **slack_sdk.web.WebClient (sync/async)** (Web API client)
- [ ] **slack_sdk.webhook.WebhookClient (sync/async)** (Incoming Webhook, response_url sender)
- [ ] **slack_sdk.models** (UI component builders)
- [ ] **slack_sdk.oauth** (OAuth Flow Utilities)
- [ ] **slack_sdk.socket_mode** (Socket Mode client)
- [ ] **slack_sdk.audit_logs** (Audit Logs API client)
- [ ] **slack_sdk.scim** (SCIM API client)
- [ ] **slack_sdk.rtm** (RTM client)
- [ ] **slack_sdk.signature** (Request Signature Verifier)
### Requirements
Please read the [Contributing guidelines](https://github.com/slackapi/python-slack-sdk/blob/main/.github/contributing.md) and [Code of Conduct](https://slackhq.github.io/code-of-conduct) before creating this issue or pull request. By submitting, you are agreeing to those rules.
| closed | 2021-07-14T18:25:38Z | 2021-11-24T18:08:12Z | https://github.com/slackapi/python-slack-sdk/issues/1063 | [
"question",
"web-client",
"Version: 3x"
] | johndrusso | 1 |
recommenders-team/recommenders | machine-learning | 1,512 | [FEATURE] Store datasets as sparse matrix instead of np to increase models' scalability | ### Description
I feel like np array is the common format in Python. Here are two simple examples:
1. In [AffinityMatrix implementation](https://github.com/microsoft/recommenders/blob/98d661edc6a9965c7f42b76dc5317af3ae74d5e0/recommenders/datasets/sparse.py#L136), the sparse matrix version of affinity matrix is converted to np before returning.
2. In [VAE models' implementation](https://github.com/microsoft/recommenders/blob/98d661edc6a9965c7f42b76dc5317af3ae74d5e0/recommenders/models/vae/multinomial_vae.py#L428), batches are generated from a dense np array.
One of the major downsides of using dense np array as the default format makes models incapable of dealing with large scale datasets. A better approach would be keeping the affinity matrix as a sparse matrix and only converting it to dense format if necessary. More concretely, would it be possible to do the following enhancements wherever it applies?
1. `gen_affinity_matrix(self)` function returns a sparse version of `self.AM`.
2. `x_train`, `x_valid`, `x_val_tr` and `x_val_te` should all be in sparse matrix format. Instead of generating batches from a dense format of `x_train`, we could slice the sparse matrix `x_train` then only densify batches before training.
### Expected behavior with the suggested feature
Instead of dealing with dense format of datasets most of time, working with sparse matrix format of datasets and only densifying them if needed would make models a lot more scalable.
| open | 2021-08-27T18:51:16Z | 2021-12-17T09:37:42Z | https://github.com/recommenders-team/recommenders/issues/1512 | [
"enhancement"
] | k9luo | 1 |
deepspeedai/DeepSpeed | machine-learning | 7,048 | nv-ds-chat CI test failure | The Nightly CI for https://github.com/deepspeedai/DeepSpeed/actions/runs/13402653435 failed.
| closed | 2025-02-18T00:44:50Z | 2025-02-19T17:14:11Z | https://github.com/deepspeedai/DeepSpeed/issues/7048 | [
"ci-failure"
] | github-actions[bot] | 0 |
waditu/tushare | pandas | 1,498 | 通过公众号获取的存储数据示例链接访问失败 | 通过公众号获取的存储数据示例链接访问失败:http://file.tushare.org/tsdata/wx/mysql.zip | open | 2021-01-20T14:25:56Z | 2021-01-20T14:25:56Z | https://github.com/waditu/tushare/issues/1498 | [] | 18565372664 | 0 |
ray-project/ray | tensorflow | 50,672 | [core] Split giant ray core C++ targets into small ones (raylet client) | This is a sub-issue of https://github.com/ray-project/ray/issues/50586 to split the raylet client bazel target.
- [x] Split out `raylet_client_connection_lib` from the `raylet_client_lib` target.
- [x] Flatten dependencies related to `src/ray/common` and `src/ray/util`.
| closed | 2025-02-17T23:33:00Z | 2025-02-18T13:26:40Z | https://github.com/ray-project/ray/issues/50672 | [] | rueian | 0 |
plotly/dash | data-science | 2,887 | [BUG] No module named _plotly_utils | I am using the typescript template to generate dash components and getting the following error when I run npm run build:
```
Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "C:\projects\dash-salt\.venv\Scripts\dash-generate-components.exe\__main__.py", line 4, in <module>
File "I:\dash-salt\.venv\Lib\site-packages\dash\__init__.py", line 22, in <module>
from ._callback import callback, clientside_callback # noqa: F401,E402
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "I:\projects\dash-salt\.venv\Lib\site-packages\dash\_callback.py", line 34, in <module>
from .long_callback.managers import BaseLongCallbackManager
File "I:\projects\dash-salt\.venv\Lib\site-packages\dash\long_callback\__init__.py", line 1, in <module>
from .managers.celery_manager import ( # noqa: F401,E402
File "I:\projects\dash-salt\.venv\Lib\site-packages\dash\long_callback\managers\celery_manager.py", line 5, in <module>
from _plotly_utils.utils import PlotlyJSONEncoder
ModuleNotFoundError: No module named '_plotly_utils'
```
- replace the result of `pip list | grep dash` below
```
$ poetry show
Warning: Found deprecated priority 'default' for source 'jpmc-internal' in pyproject.toml. You can achieve the same effect by changing the priority to 'primary' and putting the source first.
ansi2html 1.9.1 Convert text with ANSI color codes to HTML or to LaTeX
blinker 1.8.2 Fast, simple object-to-object and broadcast signaling
build 1.2.1 A simple, correct Python build frontend
certifi 2024.6.2 Python package for providing Mozilla's CA Bundle.
charset-normalizer 3.3.2 The Real First Universal Charset Detector. Open, modern and actively maintained alternative to Chardet.
click 8.1.7 Composable command line interface toolkit
colorama 0.4.6 Cross-platform colored terminal text.
coverage 7.5.3 Code coverage measurement for Python
dash 2.14.1 A Python framework for building reactive web-apps. Developed by Plotly.
dash-core-components 2.0.0 Core component suite for Dash
dash-html-components 2.0.0 Vanilla HTML components for Dash
dash-table 5.0.0 Dash table
flask 3.0.3 A simple framework for building complex web applications.
idna 3.7 Internationalized Domain Names in Applications (IDNA)
importlib-metadata 7.1.0 Read metadata from Python packages
iniconfig 2.0.0 brain-dead simple config-ini parsing
itsdangerous 2.2.0 Safely pass data to untrusted environments and back.
jinja2 3.1.4 A very fast and expressive template engine.
markupsafe 2.1.5 Safely add untrusted strings to HTML/XML markup.
nest-asyncio 1.6.0 Patch asyncio to allow nested event loops
packaging 24.1 Core utilities for Python packages
plotly 5.22.0 An open-source, interactive data visualization library for Python
pluggy 1.5.0 plugin and hook calling mechanisms for python
pyproject-hooks 1.1.0 Wrappers to call pyproject.toml-based build backend hooks.
pytest 8.2.2 pytest: simple powerful testing with Python
pytest-cov 5.0.0 Pytest plugin for measuring coverage.
requests 2.32.3 Python HTTP for Humans.
retrying 1.3.4 Retrying
setuptools 70.0.0 Easily download, build, install, upgrade, and uninstall Python packages
six 1.16.0 Python 2 and 3 compatibility utilities
tenacity 8.3.0 Retry code until it succeeds
typing-extensions 4.12.2 Backported and Experimental Type Hints for Python 3.8+
urllib3 2.2.1 HTTP library with thread-safe connection pooling, file post, and more.
werkzeug 3.0.3 The comprehensive WSGI web application library.
wheel 0.43.0 A built-package format for Python
zipp 3.19.2 Backport of pathlib-compatible object wrapper for zip files
```
This is running on Windows with Python 3.11.9.
When I run this outside of the poetry virtual environment, it runs fine, but I have the same pip package versions.
**Expected behavior**
Expecting npm run build to build successfully in my virtual environment.
| closed | 2024-06-15T08:14:12Z | 2024-06-15T08:39:43Z | https://github.com/plotly/dash/issues/2887 | [] | tsveti22 | 1 |
litestar-org/litestar | api | 3,019 | Enhancement: Pass the caught exception to `exception_logging_handler` for more flexible logging decision | ### Summary
Pass `Exception` instance to `LogConfig.exception_logging_handler` callable, like for exception handlers.
This would allow to judge if / how to log exception based on exception type:
- `HTTPException` (at least 4XX ones) - log as INFO, without stacktrace. These are client errors, and expected to happen every now and then. Absolutely no need to log them as ERROR and with stacktrace.
- Some `Timeout` error - log as WARNING, as it's bound to happen every now and then, but it's not a bug, really
- Any other `Exception` - log as ERROR with stacktrace, as yes it looks like a bug as it slipped in.
Currently, the `exception_logging_handler` is passed pre-formatted stacktrace, so it cannot judge what the exception actually was. But it would make a lot of sense to just pass the caught exception, instead of the stacktrace, and let the function decide when and how to represent the exception.
As a workaround, it's possible to grab the exception via `sys.exc_info()` (I believe). But it would be "cleaner" if the exception was passed in, and one wouldn't have to do that. It would also signal the developer that it's actually okay to determine logic based on exception type, now you have to kinda think outside the box.
### Basic Example
_No response_
### Drawbacks and Impact
_No response_
### Unresolved questions
_No response_ | closed | 2024-01-24T09:04:01Z | 2025-03-20T15:54:22Z | https://github.com/litestar-org/litestar/issues/3019 | [
"Enhancement"
] | tuukkamustonen | 3 |
seleniumbase/SeleniumBase | web-scraping | 3,326 | Integrating selenium base with dotnet | Actually, I am writing code in C# programming language using dotnet framework.
I am using undetected chrome-driver but its not enough to bypass detection.
Is there any possibility to integrate selenium base uc mode specifically for C#.
It would be more feasible rather than using sb from python and handle it in enterprise level. | closed | 2024-12-09T09:36:21Z | 2024-12-09T13:32:21Z | https://github.com/seleniumbase/SeleniumBase/issues/3326 | [
"won't change / not planned"
] | Anurag130899 | 1 |
jonaswinkler/paperless-ng | django | 440 | Pre-receive Hook: Wait for file - software duplex | Hi there,
this is more a question than an issue - maybe a feature request. In any case, thank you for the great work with paperless-ng!
I was using a self-cooked minimalistic paperless web-app which used folders (because it was easier to implement) successfully for quite some time now but decided to switch to paperless-ng because it is... well.. better in just about every way :)
With one exception to my usual workflow:
Our scanner has an ADF, but unfortunately scans only single-sided. So in my web-app, I used a python script that combines two PDFs from the front (odd page numbers) and backs (even page numbers) by interleaving the pages.
Obviously, this would not work with paperless-ng out of the box as the consumer would consume the first pdf before the second arrived.
So I was wondering if it was possible to write a pre-consumption script that if a file appears in the consumption directory, that contains the word "front", it waits in an infinite loop (with a timeout of some sort) until the second file arrives, then combines these two and exits afterwards, such that the consumption carries on as usual.
To prevent the second file to be consumed from a different worker, the pre-consumption script should also check for the word "back" and ignore this file altogether.
Do you see any problem with that approach? Or even better: do you have a better idea to solve this elegantly? Obviously, I could do all outside of paperless-ng with another service, but I would very much like to keep it all in one place as this is much easier to backup and transfer to other systems then having yet another service running.
thanks!
Cheers
Max
| closed | 2021-01-25T08:47:41Z | 2021-01-27T10:35:16Z | https://github.com/jonaswinkler/paperless-ng/issues/440 | [] | raumneun | 4 |
FactoryBoy/factory_boy | django | 401 | post_generation objects can't be overridden with a computed "pre" value | This should work:
```python
class BookFactory(factory.Factory):
@factory.post_generation
def category(self, create, override, extra):
if create and override:
category.register(self)
class TrilogyFactory(factory.Factory):
book1 = factory.SubFactory(BookFactory)
book2 = factory.SubFactory(BookFactory, category=factory.SelfAttribute('..book1.category'))
book3 = factory.SubFactory(BookFactory, category=factory.SelfAttribute('..book1.category'))
```
However, it fails: ``override`` receives the ``SelfAttribute`` declaration instead of receiving a resolved value.
| closed | 2017-08-01T13:33:35Z | 2018-05-05T15:06:23Z | https://github.com/FactoryBoy/factory_boy/issues/401 | [
"Bug"
] | rbarrois | 1 |
saulpw/visidata | pandas | 2,531 | Macro recordings that include a different macro include its invocation and its steps | The following shows that macro `test` will have **both** the invocation and the steps of the `r1` macro, leading to problems. Being able to nest macros would be more powerful (in case I change `r1` I don't have to change any other macros that depend on it) but I suspect simply not recording the invocation would be easier to implement.
r1
```
{"sheet": "", "col": "", "row": "", "longname": "columns-sheet", "input": "", "keystrokes": "Shift+C", "comment": "open Columns Sheet: edit column properties for current sheet"}
{"sheet": "", "col": "name", "row": "", "longname": "select-col-regex", "input": {"regex": "^(author|created_utc|selftext|title|url)$", "flags": "I"}, "keystrokes": "|", "comment": "select rows matching regex in current column"}
...
```
=======
test
```
{"sheet": "", "col": "", "row": "", "longname": "exec-longname", "input": "exec-r1", "keystrokes": "Space", "comment": "execute command by its longname", "replayable": true}
{"sheet": "", "col": "", "row": "", "longname": "exec-r1", "input": "", "comment": "", "replayable": true}
{"sheet": "", "col": "", "row": "", "longname": "columns-sheet", "input": "", "keystrokes": "Shift+C", "comment": "open Columns Sheet: edit column properties for current sheet", "replayable": true}
...
```
| closed | 2024-09-23T12:25:30Z | 2024-11-09T04:27:17Z | https://github.com/saulpw/visidata/issues/2531 | [
"bug"
] | reagle | 4 |
skypilot-org/skypilot | data-science | 4,050 | [Storage] `sky storage delete -a` aborted when deletion of one storage failed | <!-- Describe the bug report / feature request here -->
`sky storage delete -a` aborted for all storage when deletion of one storage failed. Expected behaviour: print out error for corresponding storage while still delete all other storages.
```bash
sky storage delete -a
Deleting 7 storages: skypilot-workdir-txia-60a9b7bc, skypilot-workdir-txia-f8997026, skypilot-workdir-txia-cf42367e, skypilot-workdir-txia-4e6768c9, skypilot_test_compress, comp-289a68bd-c3ed-42bd-93b8-ee3735cdb835-0, comp-3525982f-f669-4c43-b2e7-f5a1e9789a2b-1. Proceed? [Y/n]: ^[[A
Error: invalid input
Deleting 7 storages: skypilot-workdir-txia-60a9b7bc, skypilot-workdir-txia-f8997026, skypilot-workdir-txia-cf42367e, skypilot-workdir-txia-4e6768c9, skypilot_test_compress, comp-289a68bd-c3ed-42bd-93b8-ee3735cdb835-0, comp-3525982f-f669-4c43-b2e7-f5a1e9789a2b-1. Proceed? [Y/n]:
botocore.exceptions.ClientError: An error occurred (403) when calling the HeadBucket operation: Forbidden
The above exception was the direct cause of the following exception:
sky.exceptions.StorageBucketGetError: Failed to access existing bucket 'skypilot-workdir-txia-60a9b7bc'. This is likely because it is a private bucket you do not have access to.
To fix:
1. If you are trying to create a new bucket: use a different name.
2. If you are trying to connect to an existing bucket: make sure your cloud credentials have access to it. To debug, consider running `aws s3 ls skypilot-workdir-txia-60a9b7bc`.
```
| closed | 2024-10-08T18:42:57Z | 2025-01-17T00:40:28Z | https://github.com/skypilot-org/skypilot/issues/4050 | [
"good first issue"
] | cblmemo | 0 |
quokkaproject/quokka | flask | 40 | Error 500 while registering but account is created (because of missing SMTP config) | Thats the problem.
Here some log:
30.09 22:49:14 quokka ERROR Exception on /accounts/register [POST]
Traceback (most recent call last):
File "/Library/Python/2.7/site-packages/flask/app.py", line 1817, in wsgi_app
response = self.full_dispatch_request()
File "/Library/Python/2.7/site-packages/flask/app.py", line 1477, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/Library/Python/2.7/site-packages/flask/app.py", line 1381, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/Library/Python/2.7/site-packages/flask/app.py", line 1475, in full_dispatch_request
rv = self.dispatch_request()
File "/Library/Python/2.7/site-packages/flask/app.py", line 1461, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/flask_security/views.py", line 117, in register
user = register_user(**form.to_dict())
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/flask_security/registerable.py", line 41, in register_user
user=user, confirmation_link=confirmation_link)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/flask_security/utils.py", line 260, in send_mail
mail.send(msg)
File "/Library/Python/2.7/site-packages/flask_mail.py", line 415, in send
with self.connect() as connection:
File "/Library/Python/2.7/site-packages/flask_mail.py", line 123, in **enter**
self.host = self.configure_host()
File "/Library/Python/2.7/site-packages/flask_mail.py", line 137, in configure_host
host = smtplib.SMTP(self.mail.server, self.mail.port)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/smtplib.py", line 250, in **init**
(code, msg) = self.connect(host, port)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/smtplib.py", line 310, in connect
self.sock = self._get_socket(host, port, self.timeout)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/smtplib.py", line 285, in _get_socket
return socket.create_connection((host, port), timeout)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/socket.py", line 571, in create_connection
raise err
error: [Errno 61] Connection refused
30.09 22:49:14 werkzeug INFO 127.0.0.1 - - [30/Sep/2013 22:49:14] "POST /accounts/register HTTP/1.1" 500 -
| closed | 2013-09-30T20:56:40Z | 2015-07-16T02:56:48Z | https://github.com/quokkaproject/quokka/issues/40 | [
"bug"
] | yeradis | 2 |
dsdanielpark/Bard-API | nlp | 178 | Ensure the order of images in the response | **Solution you'd like**
I encountered this problem when I tried to match the image descriptions in the reponse and image urls.
Since this library is converting the image urls to set, not list, the order of the urls are lost.
I changed the code and the description slightly to address this issue.
| closed | 2023-09-04T08:18:01Z | 2023-09-04T17:10:54Z | https://github.com/dsdanielpark/Bard-API/issues/178 | [] | kota113 | 1 |
keras-team/keras | tensorflow | 20,147 | .keras model with base model trainable has poor performance in TensorFlow 2.17 (Keras 3) | I originally posted this bug in the [TensorFlow github issues section](https://github.com/tensorflow/tensorflow/issues/74170) since I believed it to be due to a higher TF version, but was asked to post here since it may instead be due to Keras 3. I am copying my post below:
I am training an EfficientNet model with a custom head using TensorFlow and Keras, saving the model to a .keras format. If the base model trainable flag is set to False, such that I only train the head, then when I later load the .keras model and evaluate it on a dataset, I get the expected good performance. When I set the trainable flag to True and train a model (which converges well), then when I later load the model and evaluate it on the same dataset the performance has degraded significantly. (I am evaluating the model on the same dataset using the same code both at the end of training, and later on in a separate notebook. It is in this separate notebook where the performance is bad, where again the same dataset is being used and the same code is being used in both evaluation places.)
Saving to a .h5 model does not have this issue, and the performance of the saved model is good. I have spent the day trying different trainable and training flag values in various places to no improvement, thinking originally that it was something to do with the BatchNorm layers in the model. Recompiling the model has not helped.
When I switch back to an older TensorFlow version (2.15.0.post1) with Keras 2 I do not see this issue. Both the trained .keras and .h5 models perform well when later loaded and evaluated on my dataset of interest.
This seems like a bug to me, though I also acknowledge that perhaps I have missed something in the TF/Keras updates. I have searched the TensorFlow API docs for the various methods to no success. If it is the latter I would be very grateful for any advice, thank you. | closed | 2024-08-22T12:45:14Z | 2024-08-28T04:11:55Z | https://github.com/keras-team/keras/issues/20147 | [
"keras-team-review-pending",
"type:Bug"
] | nkinnaird | 4 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.