| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
seleniumbase/SeleniumBase | pytest | 3,394 | SB doesn't work with auth proxy | ```
from seleniumbase import SB
from services.capthca_service import click_captcha
SIGH_UP_URL = "https://skynet.certik.com/signup"
def check_offset():
    try:
        with SB(uc=True, proxy="xaabdseh:211ks1inip4o@46.203.134.218:5842") as driver:
            driver.activate_cdp_mode(SIGH_UP_URL)
            driver.cdp.maximize()
            driver.press_keys("input[name='email']", "example@gmail.com", timeout=10)
            driver.press_keys("input[name='password']", "someapssworD1234", timeout=10)
            if driver.cdp.is_element_present("div[role='checkbox']"):
                driver.cdp.click("div[role='checkbox']")
            driver.disconnect()
            driver.connect()
            driver.cdp.click('form button[type="button"].w-full.h-10')
            driver.sleep(10)
            click_captcha(driver)
            driver.sleep(5)
    except KeyboardInterrupt:
        pass
    except Exception as e:
        check_offset()
```
No matter what proxy with auth I use, I get `ERR_TUNNEL_CONNECTION_FAILED`, although the proxy itself is working.

When I use the same proxy via IP whitelist instead of username/password auth, everything works correctly.
| closed | 2025-01-06T16:26:10Z | 2025-01-06T18:00:51Z | https://github.com/seleniumbase/SeleniumBase/issues/3394 | [
"can't reproduce",
"UC Mode / CDP Mode"
] | mkuchuman | 1 |
keras-team/keras | data-science | 20,113 | Conv2D is no longer supporting Masking in TF v2.17.0 | Dear Keras team,
Conv2D layer no longer supports Masking layer in TensorFlow v2.17.0. I've already raised this issue with TensorFlow. However, they requested that I raise the issue here.
Due to the dimensions of our input (i.e. (timesteps, width, channels)), the size of the input shape (i.e. (2048, 2000, 3)) and the size of the dataset (i.e. over 1 million samples), it is not practical to use LSTM, GRU, RNN or ConvLSTM1D layers; therefore, Conv2D layers worked sufficiently well in our applications. The gaps in the dataset were handled with the Masking layer, and the Masking layer was compatible with the Conv layers (among other layers, such as Cropping and Padding) in all TF versions up to (and including) TF v2.16. However, in TF v2.17.0, we get the following user warning: "Layer 'conv2d' (of type Conv2D) was passed an input with a mask attached to it. However, this layer does not support masking and will therefore destroy the mask information. Downstream layers will not see the mask".
Is this a bug in TF v2.17.0?
Or is this feature now deprecated in TF v2.17.0?
Would you be able to reintroduce this feature in future versions?
Best
Kav
**LINK TO THE CODE ON COLAB NOTEBOOK:**
https://colab.research.google.com/drive/102k6UNSKb-d03DcmcUtCxmV9Qz9bjZoD?usp=drive_link
**STANDALONE CODE:**
```python
from tensorflow.keras.layers import Conv2D, Masking, Flatten
from tensorflow.keras import Model, Input

batch = 1
timesteps = 10
width = 10
channels = 2
filters = 4
kernel_size = 3
mask_value = -1

x_input = Input(shape=(timesteps, width, channels))
x_masking = Masking(mask_value)(x_input)
x_conv2d = Conv2D(filters, kernel_size)(x_masking)
x_flatten = Flatten()(x_conv2d)
model = Model(x_input, x_flatten)
model.compile(loss='mse')
```
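A possible interim workaround, assuming that zeroing out masked positions is acceptable for the model (this is my sketch, not an official Keras fix): apply the mask manually before the Conv2D layer rather than relying on mask propagation. Shown on a small 3-D NumPy array for brevity; the analogous tensor ops exist in Keras/TF.

```python
import numpy as np

# Zero out "masked" positions by hand instead of relying on Masking -> Conv2D
# mask propagation. A position is masked when every channel equals mask_value.
mask_value = -1
x = np.array([[[1.0, 2.0], [-1.0, -1.0], [3.0, 4.0]]])   # (batch, steps, channels)
keep = np.any(x != mask_value, axis=-1, keepdims=True)   # False where all channels == mask_value
x_masked = x * keep                                       # masked positions become 0.0
```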
**RELEVANT LOG OUTPUT**
/usr/local/lib/python3.10/dist-packages/keras/src/layers/layer.py:915: UserWarning: Layer 'conv2d' (of type Conv2D) was passed an input with a mask attached to it. However, this layer does not support masking and will therefore destroy the mask information. Downstream layers will not see the mask.
warnings.warn(
**LINK TO THE ORIGINAL RAISED ISSUE ON TENSORFLOW REPO**
https://github.com/tensorflow/tensorflow/issues/73531 | closed | 2024-08-12T12:52:32Z | 2024-08-15T13:36:52Z | https://github.com/keras-team/keras/issues/20113 | [
"type:support",
"stat:awaiting keras-eng"
] | kavjayawardana | 4 |
OFA-Sys/Chinese-CLIP | computer-vision | 168 | Pretraining hyperparameters | Hello, could you please share the hyperparameters used for pretraining, ideally as a ready-to-run script? Thanks a lot! | open | 2023-07-20T11:52:30Z | 2023-07-27T10:24:58Z | https://github.com/OFA-Sys/Chinese-CLIP/issues/168 | [] | Hardcandies | 1 |
tensorflow/tensor2tensor | machine-learning | 1,191 | Universal Transformer appears to be buggy and not converging correctly | Summary
---------
Universal Transformer appears to be buggy and not converging correctly:
- Universal transformer does not converge on multi_nli as of the latest tensor2tensor master (9729521bc3cd4952c42dcfda53699e14bee7b409). See below for reproduction
- UT does converge on multi_nli as of August 3 2018 commit 5fff1cad2977f063b981e5d8b839bf9d7008e232 (we didn’t run this fully out, but it was making meaningful progress, unlike below, so we terminated it and considered it successful).
- To confirm this was not simply an odd issue with multi_nli, we tried UT on a number of other problems (exact repro not shown below), including 'lambada_rc' and 'stanford_nli' (run at commit ca628e4fcb04ff42ed21549a4f73e6dfa68a5f7a from around October 16 2018). All of these failed to converge.
Environment information
-------------------------
Docker image based off nvidia/cuda:9.0-devel-ubuntu16.04
Tf version: tensorflow-gpu=1.11.0
T2t version: Tensor2tensor master at commit 9729521bc3cd4952c42dcfda53699e14bee7b409 on Oct 30 2018.
We also saw this failed behavior on tf-nightly-gpu==1.13.0.dev20181022
Reproduce
-----------
Problem: multi_nli
Model: universal_transformer
Hparams_set: universal_transformer_tiny
python3 /usr/src/t2t/tensor2tensor/bin/t2t-trainer \
--data_dir="DATA_DIR" \
--eval_early_stopping_steps="10000" \
--eval_steps="10000" \
--generate_data="True" \
--hparams="" \
--hparams_set="universal_transformer_tiny" \
--iterations_per_loop="2000" \
--keep_checkpoint_max="80" \
--local_eval_frequency="2000" \
--model="universal_transformer" \
--output_dir="OUTPUT_DIR" \
--problem="multi_nli" \
--t2t_usr_dir="T2T_USR_DIR" \
--tmp_dir="T2T_TMP_DIR"
Run was stopped after 50000 steps due to lack of convergence as loss fluctuates between 1.098 and 1.099.
INFO:tensorflow:Saving dict for global step 50000: global_step = 50000, loss = 1.0991247, metrics-multi_nli/targets/accuracy = 0.31821653, metrics-multi_nli/targets/accuracy_per_sequence = 0.31821653, metrics-multi_nli/targets/accuracy_top5 = 1.0, metrics-multi_nli/targets/approx_bleu_score = 0.7479816, metrics-multi_nli/targets/neg_log_perplexity = -1.099124, metrics-multi_nli/targets/rouge_2_fscore = 0.0, metrics-multi_nli/targets/rouge_L_fscore = 0.31869644
| closed | 2018-10-31T21:02:06Z | 2018-11-20T23:37:07Z | https://github.com/tensorflow/tensor2tensor/issues/1191 | [] | rllin-fathom | 9 |
reloadware/reloadium | pandas | 207 | asyncio ? | asyncio not supported | open | 2024-12-08T14:41:20Z | 2024-12-08T14:41:20Z | https://github.com/reloadware/reloadium/issues/207 | [] | MaxmaxmaximusFree | 0 |
Kanaries/pygwalker | plotly | 502 | PygWalker will not display anything. | Using Windows 11; JupyterLab
Attempts to duplicate the JupyterLab tutorial have failed. Imports have no error.
These 3 are ok:
df = pd.read_csv("https://kanaries-app.s3.ap-northeast-1.amazonaws.com/public-datasets/bike_sharing_dc.csv")
walker = pyg.walk(df)
df.head()
pyg.table(df) =============> AttributeError: module 'pygwalker' has no attribute 'table'
pyg.render(df, spec="./gw_config.json") ==============> [what does this do?]
I have installed pygwalker twice via conda, today, 03/27/24 | closed | 2024-03-27T22:12:25Z | 2024-03-30T11:04:33Z | https://github.com/Kanaries/pygwalker/issues/502 | [] | timhockswender | 12 |
scikit-learn/scikit-learn | data-science | 30,615 | average_precision_score produces unexpected output when scoring a single sample | ### Describe the bug
When using `average_precision_score` and scoring a single sample, the metric ignores `y_score` and will always produce a score of 1.0 if `y_true = [1]` and otherwise will return a score of 0. I would have expected that it would instead raise an exception.
Potentially related to #30147, however I'm focusing on the minimal example with just a single sample.
### Steps/Code to Reproduce
```python
from sklearn.metrics import average_precision_score
y_score = [0]
y_true = [1]
score = average_precision_score(y_true=y_true, y_score=y_score)
print(score) # 1.0
y_score = [1]
y_true = [1]
score = average_precision_score(y_true=y_true, y_score=y_score)
print(score) # 1.0
y_score = [0.5]
y_true = [1]
score = average_precision_score(y_true=y_true, y_score=y_score)
print(score) # 1.0
y_score = [0]
y_true = [0]
score = average_precision_score(y_true=y_true, y_score=y_score)
print(score) # 0.0
y_score = [1]
y_true = [0]
score = average_precision_score(y_true=y_true, y_score=y_score)
print(score) # 0.0
y_score = [0.5]
y_true = [0]
score = average_precision_score(y_true=y_true, y_score=y_score)
print(score) # 0.0
```
Additionally, you can see that the average_precision_score returns a score opposite of what precision and recall return:
```python
from sklearn.metrics import average_precision_score, precision_score, recall_score
y_score = [0]
y_true = [1]
score = average_precision_score(y_true=y_true, y_score=y_score)
print(score) # 1.0
score = precision_score(y_true=y_true, y_pred=y_score)
print(score) # 0.0
score = recall_score(y_true=y_true, y_pred=y_score)
print(score) # 0.0
```
### Expected Results
I would have expected the metric to raise an exception, similar to what happens when ROC_AUC is called with a single sample:
```python
score = roc_auc_score(y_true=y_true, y_score=y_score)
print(score)
```
```
ValueError: Only one class present in y_true. ROC AUC score is not defined in that case.
```
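In the meantime, a caller-side guard can reproduce the ROC-AUC-style error described above; this is a sketch of the requested behavior, not existing scikit-learn functionality:

```python
import numpy as np
from sklearn.metrics import average_precision_score

def strict_average_precision(y_true, y_score):
    # Hypothetical wrapper: refuse degenerate inputs instead of silently
    # returning 0.0 or 1.0, mirroring roc_auc_score's behavior.
    if len(np.unique(y_true)) < 2:
        raise ValueError(
            "Only one class present in y_true. "
            "Average precision is not defined in that case."
        )
    return average_precision_score(y_true=y_true, y_score=y_score)
```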
### Actual Results
Refer to code snippets above.
### Versions
```shell
System:
python: 3.11.10 | packaged by conda-forge | (main, Sep 30 2024, 18:08:57) [GCC 13.3.0]
executable: /opt/conda/envs/ag-311/bin/python
machine: Linux-5.15.0-1056-aws-x86_64-with-glibc2.31
Python dependencies:
sklearn: 1.5.1
pip: 24.2
setuptools: 60.2.0
numpy: 1.26.4
scipy: 1.12.0
Cython: None
pandas: 2.2.3
matplotlib: 3.9.2
joblib: 1.4.2
threadpoolctl: 3.5.0
Built with OpenMP: True
threadpoolctl info:
user_api: blas
internal_api: openblas
num_threads: 128
prefix: libopenblas
filepath: /opt/conda/envs/ag-311/lib/libopenblasp-r0.3.28.so
version: 0.3.28
threading_layer: pthreads
architecture: SapphireRapids
user_api: blas
internal_api: openblas
num_threads: 64
prefix: libopenblas
filepath: /opt/conda/envs/ag-311/lib/python3.11/site-packages/scipy.libs/libopenblasp-r0-23e5df77.3.21.dev.so
version: 0.3.21.dev
threading_layer: pthreads
architecture: Cooperlake
user_api: openmp
internal_api: openmp
num_threads: 192
prefix: libgomp
filepath: /opt/conda/envs/ag-311/lib/python3.11/site-packages/scikit_learn.libs/libgomp-a34b3233.so.1.0.0
version: None
```
| open | 2025-01-09T00:41:41Z | 2025-01-15T04:25:45Z | https://github.com/scikit-learn/scikit-learn/issues/30615 | [
"Bug",
"Needs Investigation"
] | Innixma | 6 |
mwaskom/seaborn | data-science | 2,875 | Unable to figure out the plot name and parameters. | What could be the name of the plot shown in the attached figure? I have been trying to identify it for some time, but I was unable to figure it out.

| closed | 2022-06-23T14:48:00Z | 2022-06-23T14:54:26Z | https://github.com/mwaskom/seaborn/issues/2875 | [] | PrashantSinghSengar | 1 |
aleju/imgaug | deep-learning | 723 | imageio not compatible with Python2.7 | > Using cached imgaug-0.4.0-py2.py3-none-any.whl (948 kB)
> Collecting imageio
> Using cached imageio-2.9.0.tar.gz (3.3 MB)
> ERROR: Package 'imageio' requires a different Python: 2.7.17 not in '>=3.5'
Running `pip install imgaug` or `pip install imgaug==0.2.9` on python2 will cause this error to appear. | open | 2020-10-14T04:21:58Z | 2020-10-17T05:29:37Z | https://github.com/aleju/imgaug/issues/723 | [] | ariccspstk | 1 |
wkentaro/labelme | deep-learning | 986 | [BUG] appimage cannot launch | 
| open | 2022-02-12T09:55:02Z | 2022-09-26T14:47:22Z | https://github.com/wkentaro/labelme/issues/986 | [
"issue::bug",
"status: wip-by-author"
] | newyorkthink | 1 |
2noise/ChatTTS | python | 45 | What are the value ranges and meanings of these parameters? | params_infer_code = {
'spk_emb': rand_spk, # add sampled speaker
'temperature': .3, # using custom temperature
'top_P': 0.7, # top P decode
'top_K': 20, # top K decode
}
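For reference, these are standard autoregressive-sampling knobs; the semantics below follow the usual decoding conventions and are an assumption on my part, so ChatTTS's own docs should be checked for the exact ranges:

```python
# Usual meanings of these sampling parameters (general autoregressive-decoding
# conventions; not ChatTTS-specific documentation):
params_infer_code = {
    'temperature': 0.3,  # > 0; scales the logits, lower values = more deterministic output
    'top_P': 0.7,        # nucleus sampling, in (0, 1]: sample from the smallest token
                         # set whose cumulative probability reaches 0.7
    'top_K': 20,         # positive integer: restrict sampling to the 20 most likely tokens
}
```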
| closed | 2024-05-29T07:31:25Z | 2024-07-15T04:01:57Z | https://github.com/2noise/ChatTTS/issues/45 | [
"stale"
] | alanzhao0128 | 1 |
ultralytics/ultralytics | deep-learning | 18,700 | Cancel training when running in a script | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/ultralytics/ultralytics/discussions) and found no similar questions.
### Question
I am running Ultralytics model training on a Python gRPC server. Before training, the server spins up a new `multiprocessing.Process`, which starts the model training. The user might want to manually cancel the training run from the server side. Is there an equivalent of pressing `ctrl+c` when training is launched from a script?
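One generic pattern for this (a sketch of cooperative cancellation, not an Ultralytics API; shown with `threading` so it stays self-contained, but `multiprocessing.Event` works the same way across processes, and `Process.terminate()` remains the blunt fallback): have the training loop poll a cancel flag between iterations.

```python
import threading
import time

stop_requested = threading.Event()

def training_loop():
    # Stand-in for the epoch/batch loop: check the cancel flag between iterations.
    while not stop_requested.is_set():
        time.sleep(0.01)  # pretend to do one unit of training work

worker = threading.Thread(target=training_loop)
worker.start()
stop_requested.set()       # e.g. called from the server's "cancel" gRPC handler
worker.join(timeout=5)
```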
### Additional
_No response_ | closed | 2025-01-15T19:32:10Z | 2025-01-16T11:54:47Z | https://github.com/ultralytics/ultralytics/issues/18700 | [
"question"
] | mario-dg | 2 |
dask/dask | pandas | 11,101 | TypeError: can only concatenate str (not "traceback") to str | <!-- Please include a self-contained copy-pastable example that generates the issue if possible.
``````
Please be concise with code posted. See guidelines below on how to provide a good bug report:
- Craft Minimal Bug Reports http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports
- Minimal Complete Verifiable Examples https://stackoverflow.com/help/mcve
Bug reports that follow these guidelines are easier to diagnose, and so are often handled much more quickly.
-->
**Describe the issue**:
**Minimal Complete Verifiable Example**:
```python
# Put your MCVE code here
import pandas as pd
import dask
import dask.bag as db
import river
# Create a Dask bag from your data
df=pd.DataFrame([[0]*2],columns=['VendorID','fare_amount'])
data = db.from_sequence(df, npartitions=4)
# Define a function to process and train on each partition
def process_and_train(partition):
X_train,X_test,y_train,y_test=get_dask_train_test(partition)
model = river.linear_model.LinearRegression(optimizer=river.optim.SGD(0.01), l2=0.1)
# Stream learning from the DataFrame
for _,row in partition.iterrows():
y = row['fare_amount'] # Target
x = row.drop('fare_amount') # Features
model = model.learn_one(x, y)
print("done")
return model
# Use Dask to process and train in parallel
models = data.map(process_and_train).compute()
```
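A possible explanation for the failure (my assumption, not a confirmed diagnosis): iterating a pandas `DataFrame` yields its column labels, not its rows, so `db.from_sequence(df, ...)` builds a bag of column-name strings and the per-partition `.iterrows()` call then fails. A minimal check of the iteration behavior:

```python
import pandas as pd

df = pd.DataFrame([[0, 0]], columns=['VendorID', 'fare_amount'])
# Iterating a DataFrame yields column labels, not rows:
print(list(df))  # ['VendorID', 'fare_amount']
```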
**Anything else we need to know?**:

**Environment**:
- Dask version:
- Python version:3.10
- Operating System:
- Install method (conda, pip, source):pip
| open | 2024-05-06T14:19:27Z | 2024-05-06T14:19:41Z | https://github.com/dask/dask/issues/11101 | [
"needs triage"
] | sinsniwal | 0 |
Lightning-AI/pytorch-lightning | machine-learning | 19,625 | When I choose to save each epoch model, the previously saved model will be deleted. | ### Bug description

When I choose to save each epoch model, the previously saved model will be deleted.
`if previous != current: return False`
When the names of the passed variables `previous` and `current` are different, the previous checkpoint should be retained, but the comparison logic of the code here should be `!=`.
### What version are you seeing the problem on?
master
### How to reproduce the bug
```python
checkpoint_cb = pl.callbacks.ModelCheckpoint(
monitor=None,
dirpath="/data/ly/ckpt/ema/cache",
filename='checkpoint_{epoch}',
every_n_epochs=1,
save_last=True,
save_on_train_epoch_end=True,
)
After this setting, the result will only save the model named 'checkpoint_epoch=1.ckpt' and last.ckpt
```
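For what it is worth, a possible workaround, assuming the goal is to keep every epoch's checkpoint: Lightning's `ModelCheckpoint` documents that `save_top_k=-1` keeps all saved checkpoints instead of replacing the previous one. A sketch of the adjusted configuration (config fragment, not executed here):

```python
import pytorch_lightning as pl

checkpoint_cb = pl.callbacks.ModelCheckpoint(
    monitor=None,
    dirpath="/data/ly/ckpt/ema/cache",
    filename='checkpoint_{epoch}',
    every_n_epochs=1,
    save_top_k=-1,   # keep all checkpoints rather than deleting the previous one
    save_last=True,
    save_on_train_epoch_end=True,
)
```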
### Error messages and logs
```
# Error messages and logs here please
```
### Environment
<details>
<summary>Current environment</summary>
```
#- Lightning Component (e.g. Trainer, LightningModule, LightningApp, LightningWork, LightningFlow):
#- PyTorch Lightning Version (e.g., 1.5.0):
#- Lightning App Version (e.g., 0.5.2):
#- PyTorch Version (e.g., 2.0):
#- Python version (e.g., 3.9):
#- OS (e.g., Linux):
#- CUDA/cuDNN version:
#- GPU models and configuration:
#- How you installed Lightning(`conda`, `pip`, source):
#- Running environment of LightningApp (e.g. local, cloud):
```
</details>
### More info
_No response_ | closed | 2024-03-13T11:19:45Z | 2024-04-12T15:36:25Z | https://github.com/Lightning-AI/pytorch-lightning/issues/19625 | [
"bug",
"needs triage",
"ver: 2.2.x"
] | Li-Yun-star | 1 |
dask/dask | scikit-learn | 11,032 | Client() generate the error concurrent.futures._base.CancelledError: ('head-1-5-read-csv-19ebc21b0abac0313dd0e5004ea2fce7', 0) | **Code Origin**:
```python
import pandas as pd
import dask.dataframe as dd
from dask.distributed import Client

Client("192.168.11.10:8786")
housing = dd.read_csv('datasets/housing/housing.csv', lineterminator='\n')
housing.head()
```
**Error Info**:
```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/xieyuhan/Apps/anaconda3/lib/python3.11/site-packages/dask/dataframe/core.py", line 1540, in head
    return self._head(n=n, npartitions=npartitions, compute=compute, safe=safe)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/xieyuhan/Apps/anaconda3/lib/python3.11/site-packages/dask/dataframe/core.py", line 1574, in _head
    result = result.compute()
    ^^^^^^^^^^^^^^^^
  File "/home/xieyuhan/Apps/anaconda3/lib/python3.11/site-packages/dask/base.py", line 342, in compute
    (result,) = compute(self, traverse=False, **kwargs)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/xieyuhan/Apps/anaconda3/lib/python3.11/site-packages/dask/base.py", line 628, in compute
    results = schedule(dsk, keys, **kwargs)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/xieyuhan/Apps/anaconda3/lib/python3.11/site-packages/distributed/client.py", line 2245, in _gather
    raise exc
concurrent.futures._base.CancelledError: ('head-1-5-read-csv-19ebc21b0abac0313dd0e5004ea2fce7', 0)
```
**Environment**:
```json
{
  "Python": "3.11.7",
  "Platform": "Linux",
  "dask": "2023.11.0",
  "distributed": "2023.11.0",
  "numpy": "1.26.4",
  "pandas": "2.1.4",
  "cloudpickle": "2.2.1",
  "fsspec": "2023.10.0",
  "bokeh": "3.3.4",
  "fastparquet": null,
  "pyarrow": "14.0.2",
  "zarr": null
}
```
**Note**:
Once I removed the `Client(...)` call, which meant using a local cluster, the code worked fine.
Once I replaced the line with `housing = dd.read_csv('datasets/housing/housing.csv', lineterminator='\n', dtype='object')`, guess what? It also ran perfectly well.
Please help, this bug is really confusing me...
[housing.csv](https://github.com/dask/dask/files/14813223/housing.csv) | closed | 2024-03-30T20:06:02Z | 2024-03-31T03:01:22Z | https://github.com/dask/dask/issues/11032 | [
"needs triage"
] | MadadamXie | 1 |
amisadmin/fastapi-amis-admin | sqlalchemy | 34 | How to implement a dropdown select with dynamic content? | I would like to generate the dropdown options dynamically each time the page loads. I tried intercepting the current item in `get_form_item` and appending some new options to it, but the new options do not pass FastAPI's validation. I also tried modifying the enum, with no effect; FastAPI seems to always hold on to the original enum. The current code is as follows:
```Python
class NtpVersionEnum(Choices):
t = "t"
q = 'asd'
@site.register_admin
class 生成区域掩码(admin.FormAdmin):
page_schema = '生成'
form = Form(title='生成区域参数', submitText='提交')
class schema(BaseModel):
ntp_version: NtpVersionEnum = Field(NtpVersionEnum.t, title='NTP版本')
async def handle(self, request: Request, data: BaseModel, **kwargs) -> BaseApiOut[Any]:
return BaseApiOut(msg='登录成功!', data={'token': 'xxxxxx'})
async def get_form_item(self, request: Request, modelfield: ModelField) -> Form:
item = await super().get_form_item(request, modelfield)
if item.label == 'NTP版本':
global NtpVersionEnum
new_enum = Choices('NtpVersionEnum', {'t': 't', 'q': 'q', 'apple': 'apple'})
print(NtpVersionEnum._member_map_, NtpVersionEnum._member_names_)
NtpVersionEnum._member_map_ = new_enum._member_map_
NtpVersionEnum._member_names_ = new_enum._member_names_
NtpVersionEnum._member_type_ = new_enum._member_type_
objprint.objprint(NtpVersionEnum._member_map_, NtpVersionEnum._member_names_)
objprint.objprint(NtpVersionEnum.__dict__)
item.options.append(Select(value='apple', label='apple'))
return item
```
The error is:
```json
{"detail":[{"loc":["body","ntp_version"],"msg":"value is not a valid enumeration member; permitted: 't', 'q', 'apple'","type":"type_error.enum","ctx":{"enum_values":["t","q","apple"]}}]}
```
It says 'apple' is not allowed, yet 'apple' is exactly what was submitted.
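One general workaround while the enum stays frozen (my sketch, not an amis/FastAPI feature): declare the field as a plain `str` and check membership at request time against a mutable set, so options added dynamically in `get_form_item` remain acceptable:

```python
# Hypothetical pattern: runtime-checked options instead of a frozen Enum.
ALLOWED_NTP_VERSIONS = {'t', 'q'}       # can be extended when the form is built

def validate_ntp_version(value: str) -> str:
    if value not in ALLOWED_NTP_VERSIONS:
        raise ValueError(f"{value!r} is not a valid option")
    return value

ALLOWED_NTP_VERSIONS.add('apple')       # e.g. done inside get_form_item
```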
I have already searched for ways to dynamically define an enum for FastAPI, but it looks difficult. Could amis provide dynamic options without constraining them through an enum? If it is not too complex, you could also give me some pointers and I will open a PR | closed | 2022-07-20T10:11:59Z | 2023-03-22T14:24:09Z | https://github.com/amisadmin/fastapi-amis-admin/issues/34 | [] | myuanz | 2 |
jupyter/nbviewer | jupyter | 585 | "Open with Binder" button | Just discussed this with @freeman-lab and @andrewosh: the new version of binder should be queryable for "does this notebook/url have a binder".
I propose that when we render a notebook that can be started in binder, we show a binder button next to the "view on GitHub" button.
Thoughts ?
| open | 2016-03-17T19:33:48Z | 2018-07-27T02:24:10Z | https://github.com/jupyter/nbviewer/issues/585 | [
"type:Enhancement"
] | Carreau | 4 |
qubvel-org/segmentation_models.pytorch | computer-vision | 557 | how to get loss graph of train-validation | closed | 2022-02-07T13:45:26Z | 2022-04-16T02:05:03Z | https://github.com/qubvel-org/segmentation_models.pytorch/issues/557 | [
"Stale"
] | johnahjohn | 2 | |
microsoft/nni | deep-learning | 5,533 | RuntimeError: max(): Expected reduction dim to be specified for input.numel() == 0. Specify the reduction dim with the 'dim' argument | **Describe the bug**:
- scheduler: AGP
- pruner: TaylorFO
- mode: global
- using evaluator (new api)
- torchvision resnet 18 model
- iterations: 10
**Environment**:
- NNI version: 2.10
- Training service (local|remote|pai|aml|etc): local
- Python version: Python 3.9.12
- PyTorch version: 1.12.0 py3.9_cuda11.3_cudnn8.3.2_0 pytorch
- torchvision: 0.13.0 py39_cu113 pytorch
- Cpu or cuda version: cuda
**Reproduce the problem**
- Code|Example:
| closed | 2023-04-26T23:05:45Z | 2023-05-10T10:44:34Z | https://github.com/microsoft/nni/issues/5533 | [] | kriteshg | 9 |
recommenders-team/recommenders | deep-learning | 1,898 | [BUG] Error in AzureML Spark nightly test | ### Description
After we upgraded the CPU VM to a higher tier (see #1897), we got an error in the Spark nightly test:
```
E papermill.exceptions.PapermillExecutionError:
E ---------------------------------------------------------------------------
E Exception encountered at "In [16]":
E ---------------------------------------------------------------------------
E Py4JJavaError Traceback (most recent call last)
E File /azureml-envs/azureml_18102efe6b97bde44cf802374de73396/lib/python3.9/site-packages/IPython/core/magics/execution.py:1318, in ExecutionMagics.time(self, line, cell, local_ns)
E 1317 try:
E -> 1318 exec(code, glob, local_ns)
E 1319 out=None
E
E File <timed exec>:45
E
E Cell In[14], line 2, in <lambda>(test, predictions)
E 1 rating_evaluator = ***
E ----> 2 "als": lambda test, predictions: rating_metrics_pyspark(test, predictions),
E 3 "svd": lambda test, predictions: rating_metrics_python(test, predictions),
E 4 "fastai": lambda test, predictions: rating_metrics_python(test, predictions)
E 5 ***
E 8 ranking_evaluator = ***
E 9 "als": lambda test, predictions, k: ranking_metrics_pyspark(test, predictions, k),
E 10 "sar": lambda test, predictions, k: ranking_metrics_python(test, predictions, k),
E (...)
E 16 "lightgcn": lambda test, predictions, k: ranking_metrics_python(test, predictions, k),
E 17 ***
E
E File /mnt/azureml/cr/j/6abc2cf9fbcc4ce985da77dc3549f875/exe/wd/examples/06_benchmarks/benchmark_utils.py:372, in rating_metrics_pyspark(test, predictions)
E 371 def rating_metrics_pyspark(test, predictions):
E --> 372 rating_eval = SparkRatingEvaluation(test, predictions, **COL_DICT)
E 373 return ***
E 374 "RMSE": rating_eval.rmse(),
E 375 "MAE": rating_eval.mae(),
E 376 "R2": rating_eval.exp_var(),
E 377 "Explained Variance": rating_eval.rsquared(),
E 378 ***
E
E File /mnt/azureml/cr/j/6abc2cf9fbcc4ce985da77dc3549f875/exe/wd/recommenders/evaluation/spark_evaluation.py:82, in SparkRatingEvaluation.__init__(self, rating_true, rating_pred, col_user, col_item, col_rating, col_prediction)
E 81 raise ValueError("Empty input dataframe")
E ---> 82 if rating_pred.count() == 0:
E 83 raise ValueError("Empty input dataframe")
E
E File /azureml-envs/azureml_18102efe6b97bde44cf802374de73396/lib/python3.9/site-packages/pyspark/sql/dataframe.py:804, in DataFrame.count(self)
E 795 """Returns the number of rows in this :class:`DataFrame`.
E 796
E 797 .. versionadded:: 1.3.0
E (...)
E 802 2
E 803 """
E --> 804 return int(self._jdf.count())
E
E File /azureml-envs/azureml_18102efe6b97bde44cf802374de73396/lib/python3.9/site-packages/py4j/java_gateway.py:1321, in JavaMember.__call__(self, *args)
E 1320 answer = self.gateway_client.send_command(command)
E -> 1321 return_value = get_return_value(
E 1322 answer, self.gateway_client, self.target_id, self.name)
E 1324 for temp_arg in temp_args:
E
E File /azureml-envs/azureml_18102efe6b97bde44cf802374de73396/lib/python3.9/site-packages/pyspark/sql/utils.py:190, in capture_sql_exception.<locals>.deco(*a, **kw)
E 189 try:
E --> 190 return f(*a, **kw)
E 191 except Py4JJavaError as e:
E
E File /azureml-envs/azureml_18102efe6b97bde44cf802374de73396/lib/python3.9/site-packages/py4j/protocol.py:326, in get_return_value(answer, gateway_client, target_id, name)
E 325 if answer[1] == REFERENCE_TYPE:
E --> 326 raise Py4JJavaError(
E 327 "An error occurred while calling ***0***1***2***.\n".
E 328 format(target_id, ".", name), value)
E 329 else:
E
E <class 'str'>: (<class 'ConnectionRefusedError'>, ConnectionRefusedError(111, 'Connection refused'))
E
E During handling of the above exception, another exception occurred:
E
E ConnectionRefusedError Traceback (most recent call last)
E papermill.exceptions.PapermillExecutionError:
E ---------------------------------------------------------------------------
E Exception encountered at "In [16]":
E ---------------------------------------------------------------------------
E Py4JJavaError Traceback (most recent call last)
E File /azureml-envs/azureml_18102efe6b97bde44cf802374de73396/lib/python3.9/site-packages/IPython/core/magics/execution.py:1318, in ExecutionMagics.time(self, line, cell, local_ns)
E 1317 try:
E -> 1318 exec(code, glob, local_ns)
E 1319 out=None
E
E File <timed exec>:45
E
E Cell In[14], line 2, in <lambda>(test, predictions)
E 1 rating_evaluator = ***
E ----> 2 "als": lambda test, predictions: rating_metrics_pyspark(test, predictions),
E 3 "svd": lambda test, predictions: rating_metrics_python(test, predictions),
E 4 "fastai": lambda test, predictions: rating_metrics_python(test, predictions)
E 5 ***
E 8 ranking_evaluator = ***
E 9 "als": lambda test, predictions, k: ranking_metrics_pyspark(test, predictions, k),
E 10 "sar": lambda test, predictions, k: ranking_metrics_python(test, predictions, k),
E (...)
E 16 "lightgcn": lambda test, predictions, k: ranking_metrics_python(test, predictions, k),
E 17 ***
E
E File /mnt/azureml/cr/j/6abc2cf9fbcc4ce985da77dc3549f875/exe/wd/examples/06_benchmarks/benchmark_utils.py:372, in rating_metrics_pyspark(test, predictions)
E 371 def rating_metrics_pyspark(test, predictions):
E --> 372 rating_eval = SparkRatingEvaluation(test, predictions, **COL_DICT)
E 373 return ***
E 374 "RMSE": rating_eval.rmse(),
E 375 "MAE": rating_eval.mae(),
E 376 "R2": rating_eval.exp_var(),
E 377 "Explained Variance": rating_eval.rsquared(),
E 378 ***
E
E File /mnt/azureml/cr/j/6abc2cf9fbcc4ce985da77dc3549f875/exe/wd/recommenders/evaluation/spark_evaluation.py:82, in SparkRatingEvaluation.__init__(self, rating_true, rating_pred, col_user, col_item, col_rating, col_prediction)
E 81 raise ValueError("Empty input dataframe")
E ---> 82 if rating_pred.count() == 0:
E 83 raise ValueError("Empty input dataframe")
E
E File /azureml-envs/azureml_18102efe6b97bde44cf802374de73396/lib/python3.9/site-packages/pyspark/sql/dataframe.py:804, in DataFrame.count(self)
E 795 """Returns the number of rows in this :class:`DataFrame`.
E 796
E 797 .. versionadded:: 1.3.0
E (...)
E 802 2
E 803 """
E --> 804 return int(self._jdf.count())
E
E File /azureml-envs/azureml_18102efe6b97bde44cf802374de73396/lib/python3.9/site-packages/py4j/java_gateway.py:1321, in JavaMember.__call__(self, *args)
E 1320 answer = self.gateway_client.send_command(command)
E -> 1321 return_value = get_return_value(
E 1322 answer, self.gateway_client, self.target_id, self.name)
E 1324 for temp_arg in temp_args:
E
E File /azureml-envs/azureml_18102efe6b97bde44cf802374de73396/lib/python3.9/site-packages/pyspark/sql/utils.py:190, in capture_sql_exception.<locals>.deco(*a, **kw)
E 189 try:
E --> 190 return f(*a, **kw)
E 191 except Py4JJavaError as e:
E
E File /azureml-envs/azureml_18102efe6b97bde44cf802374de73396/lib/python3.9/site-packages/py4j/protocol.py:326, in get_return_value(answer, gateway_client, target_id, name)
E 325 if answer[1] == REFERENCE_TYPE:
E --> 326 raise Py4JJavaError(
E 327 "An error occurred while calling ***0***1***2***.\n".
E 328 format(target_id, ".", name), value)
E 329 else:
E
E <class 'str'>: (<class 'ConnectionRefusedError'>, ConnectionRefusedError(111, 'Connection refused'))
E
E During handling of the above exception, another exception occurred:
E
E ConnectionRefusedError Traceback (most recent call last)
at java.lang.reflect.Method.invoke(Method.java:498)
at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1058)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2122)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2013)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1535)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2231)
23/03/01 21:24:42 WARN DiskBlockObjectWriter: Error deleting /tmp/blockmgr-5e0c5609-34d6-4017-a8a5-b45bccf081d7/26/temp_shuffle_09251939-1973-40ac-bfbe-093c81f8b7b1
23/03/01 21:24:42 WARN DiskBlockObjectWriter: Error deleting /tmp/blockmgr-5e0c5609-34d6-4017-a8a5-b45bccf081d7/0e/temp_shuffle_caf293c6-d182-442b-a78d-3eeb783e2014
23/03/01 21:24:42 WARN DiskBlockObjectWriter: Error deleting /tmp/blockmgr-5e0c5609-34d6-4017-a8a5-b45bccf081d7/24/temp_shuffle_4f758882-c6c9-408b-a314-77aa39e26848
23/03/01 21:24:42 WARN DiskBlockObjectWriter: Error deleting /tmp/blockmgr-5e0c5609-34d6-4017-a8a5-b45bccf081d7/1a/temp_shuffle_971782ea-211f-4985-9432-a575c6ec8a2a
23/03/01 21:24:42 WARN DiskBlockObjectWriter: Error deleting /tmp/blockmgr-5e0c5609-34d6-4017-a8a5-b45bccf081d7/26/temp_shuffle_f56e696e-de2e-41c4-b77d-81cbf0439801
23/03/01 21:24:42 WARN DiskBlockObjectWriter: Error deleting /tmp/blockmgr-5e0c5609-34d6-4017-a8a5-b45bccf081d7/0c/temp_shuffle_3b78c690-f247-49e7-afe0-85777a2de104
23/03/01 21:24:42 WARN DiskBlockObjectWriter: Error deleting /tmp/blockmgr-5e0c5609-34d6-4017-a8a5-b45bccf081d7/28/temp_shuffle_d671fbef-ab20-4908-ae22-2e866f7ad08e
23/03/01 21:24:42 WARN DiskBlockObjectWriter: Error deleting /tmp/blockmgr-5e0c5609-34d6-4017-a8a5-b45bccf081d7/09/temp_shuffle_cffadd45-508c-4fd9-8d6b-bc6c6f227f03
```
See full stack: https://github.com/microsoft/recommenders/actions/runs/4307778700/jobs/7513250712#step:17:32694
### In which platform does it happen?
AzureML VM spark
### How do we replicate the issue?
Rerun https://github.com/microsoft/recommenders/actions/runs/4307778700/jobs/7513250712#step:17:32694
### Expected behavior (i.e. solution)
Tests in green
### Other Comments
| closed | 2023-03-02T12:22:51Z | 2023-03-30T19:56:53Z | https://github.com/recommenders-team/recommenders/issues/1898 | [
"bug"
] | miguelgfierro | 6 |
modoboa/modoboa | django | 2,886 | Aliases creation on new admin | Hi,
I want to create a catch-all mail alias (no-reply@domain.ltd), but when I create the alias I get this error in my web console:
```
POST https://mail.domain.ltd/api/v2/aliases/
Uncaught (in promise) Error: Request failed with status code 404
```
| closed | 2023-02-24T18:36:29Z | 2023-04-17T19:22:55Z | https://github.com/modoboa/modoboa/issues/2886 | [
"feedback-needed",
"new-ui"
] | FullGreenGN | 1 |
autogluon/autogluon | computer-vision | 3,862 | [timeseries] DirectTabular & RecursiveTabular models fail if static features contain column `unique_id` | **Bug Report Checklist**
<!-- Please ensure at least one of the following to help the developers troubleshoot the problem: -->
- [x] I provided code that demonstrates a minimal reproducible example. <!-- Ideal, especially via source install -->
- [x] I confirmed bug exists on the latest mainline of AutoGluon via source install. <!-- Preferred -->
**Describe the bug**
If column "unique_id" is present in static features, DirectTabular & RecursiveTabular fail during training.
**Expected behavior**
Models train normally. AutoGluon automatically renames the column names reserved for internal usage.
**Error message**
```
Warning: Exception caused DirectTabular to fail during training... Skipping this model.
Grouper for 'unique_id' not 1-dimensional
```
**To Reproduce**
```python
import pandas as pd

from autogluon.timeseries import TimeSeriesDataFrame, TimeSeriesPredictor
data = TimeSeriesDataFrame("https://autogluon.s3-us-west-2.amazonaws.com/datasets/timeseries/m4_hourly_tiny/train.csv")
data.static_features = pd.DataFrame({"unique_id": 2}, index=data.item_ids)
predictor = TimeSeriesPredictor().fit(data, hyperparameters={"DirectTabular": {}})
```
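Until the renaming is handled internally, a user-side workaround is to rename the clashing static feature column before fitting. The deconfliction logic could look like the sketch below; the set of reserved names is an assumption for illustration, not AutoGluon's actual list.

```python
# Sketch of reserved-name deconfliction for static feature columns.
# RESERVED_COLUMNS is an assumed, illustrative set, not AutoGluon's real list.
RESERVED_COLUMNS = {"unique_id", "ds", "y"}

def deconflict_columns(columns):
    """Map each reserved column name to a safe replacement not already taken."""
    taken = set(columns)
    mapping = {}
    for name in columns:
        if name in RESERVED_COLUMNS:
            candidate = "__" + name
            while candidate in taken:
                candidate = "_" + candidate
            mapping[name] = candidate
            taken.add(candidate)
    return mapping

print(deconflict_columns(["unique_id", "store_size"]))  # {'unique_id': '__unique_id'}
```

With such a mapping, `data.static_features = data.static_features.rename(columns=mapping)` sidesteps the crash until a proper fix lands.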
**Installed Versions**
AutoGluon v1.0 | open | 2024-01-17T12:33:44Z | 2024-01-17T12:33:53Z | https://github.com/autogluon/autogluon/issues/3862 | [
"bug",
"module: timeseries"
] | shchur | 0 |
ymcui/Chinese-LLaMA-Alpaca-2 | nlp | 545 | 'padding_value' (position 3) must be float, not NoneType | ### Items that must be checked before submitting
- [X] Please make sure you are using the latest code from the repository (git pull); some issues have already been resolved and fixed.
- [X] I have read the [project documentation](https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/wiki) and the [FAQ section](https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/wiki/常见问题), and I have searched the existing issues without finding a similar problem or solution.
- [X] Third-party plugin issues: e.g. [llama.cpp](https://github.com/ggerganov/llama.cpp), [LangChain](https://github.com/hwchase17/langchain), [text-generation-webui](https://github.com/oobabooga/text-generation-webui), etc.; for these, it is recommended to look for solutions in the corresponding projects.
### Issue type
None
### Base model
None
### Operating system
None
### Detailed problem description
```
# Please paste the code you ran here (inside this code block)
```
Shouldn't the tokenizer setting be the following?
chinese_tokenizer_path=/root/autodl-tmp/Chinese-LLaMA-Alpaca-2/scripts/tokenizer/tokenizer.model
### Dependencies (must be provided for code-related issues)
```
# Please paste your dependency information here (inside this code block)
```
### Run logs or screenshots
```
# Please paste your run logs here (inside this code block)
```
<img width="806" alt="image" src="https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/assets/68682858/70ff2c92-f0fa-4429-a35b-2c9c4db3be92">
| closed | 2024-03-16T08:59:45Z | 2024-04-09T00:11:56Z | https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/issues/545 | [
"stale"
] | liqinga | 3 |
OpenGeoscience/geonotebook | jupyter | 19 | Error on latest version (master as of 09/21/2016 1:52 PM) |
```
[I 13:50:59.147 NotebookApp] Kernel started: a2f9b6a4-bfa9-48a4-9c94-5cff01855bc8
[IPKernelApp] Loading IPython extension: storemagic
[IPKernelApp] Running file in user namespace: /home/chaudhary/.auto_completion_python.py
[IPKernelApp] ERROR | Exception opening comm with target: geonotebook
Traceback (most recent call last):
  File "/home/chaudhary/.virtualenvs/geonotebook/local/lib/python2.7/site-packages/ipykernel/comm/manager.py", line 90, in comm_open
    f(comm, msg)
  File "/home/chaudhary/tools/geonotebook/geonotebook/geonotebook/kernel.py", line 470, in handle_comm_open
    vis_url="http://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png")
  File "/home/chaudhary/tools/geonotebook/geonotebook/geonotebook/kernel.py", line 371, in add_layer
    cb = self._remote.add_osm_layer(layer.name, layer.vis_url, params)\
  File "/home/chaudhary/tools/geonotebook/geonotebook/geonotebook/kernel.py", line 137, in _protocol_closure
    raise e
AssertionError: Protocol add_osm_layer has an arity of 2. Called with 3
```
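For what it's worth, the arity check in the traceback can be modeled with a toy wrapper like the one below. This is purely illustrative and not geonotebook's actual code; such errors typically mean the client and the kernel are running mismatched versions of the remote protocol.

```python
# Toy model of a remote-protocol closure that validates the argument count
# before dispatching; the error text mirrors the traceback above.
def protocol_closure(name, arity):
    def call(*args):
        assert len(args) == arity, (
            "Protocol %s has an arity of %d. Called with %d"
            % (name, arity, len(args))
        )
        return (name, args)
    return call

add_osm_layer = protocol_closure("add_osm_layer", 2)
try:
    # Three arguments against a two-argument registration, as in the traceback.
    add_osm_layer("OSM", "http://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png", {})
except AssertionError as exc:
    print(exc)  # Protocol add_osm_layer has an arity of 2. Called with 3
```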
| closed | 2016-09-21T17:52:30Z | 2016-09-21T19:00:23Z | https://github.com/OpenGeoscience/geonotebook/issues/19 | [] | aashish24 | 3 |
SYSTRAN/faster-whisper | deep-learning | 699 | faster-whisper 0.10 pypi package has been overwritten with version 1.0.0 | ## Problem
Yesterday (2024/2/21) the faster-whisper 0.10 package on PyPI was overwritten with version 1.0.0.
See: https://pypi.org/project/faster-whisper/0.10.0/#files

It seems that a new version has been pushed to https://pypi.org with the 1.0.0 code but with the version number 0.10.0.
This has happened because https://github.com/SYSTRAN/faster-whisper/blob/master/faster_whisper/version.py has not been updated with a new version
## Proposed solution
### Fix 0.10 version
Version 0.10 needs to be rebuilt from the tag and pushed to PyPI.
### Release a new 1.0.1 version
Make a change like https://github.com/SYSTRAN/faster-whisper/pull/696 and publish a 1.0.1 release with the right version in the version.py file.
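In addition, a small automated check at release time could prevent a recurrence. A hedged sketch (it assumes `version.py` defines `__version__` as a double-quoted string, which may not match the file's exact layout):

```python
# Sketch: refuse to publish when the git tag disagrees with version.py.
import re

def version_matches_tag(version_py_text, git_tag):
    match = re.search(r'__version__\s*=\s*"([^"]+)"', version_py_text)
    return match is not None and git_tag.lstrip("v") == match.group(1)

print(version_matches_tag('__version__ = "0.10.0"\n', "v1.0.0"))  # False: mismatch
```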
| closed | 2024-02-21T14:37:19Z | 2024-02-22T12:27:42Z | https://github.com/SYSTRAN/faster-whisper/issues/699 | [] | jordimas | 6 |
pytest-dev/pytest-cov | pytest | 186 | xml report file is empty | Using the following command:
```commandline
pytest --cov-report xml --cov-report html --cov=. tests/
```
the html coverage report is generated under `htmlcov/` but the `coverage.xml` file is empty.
Can you reproduce this behaviour? I'm using:
```
pytest (3.3.2)
pytest-cov (2.5.1)
coverage (4.4.2)
```
Thanks in advance! | closed | 2018-01-11T21:43:24Z | 2018-01-12T17:07:28Z | https://github.com/pytest-dev/pytest-cov/issues/186 | [] | petobens | 9 |
autogluon/autogluon | scikit-learn | 4,497 | [tabular] Add `num_cpus` and `num_gpus` as init args to TabularPredictor | Add `num_cpus` and `num_gpus` as init args to TabularPredictor
If specified, these values are used as the defaults everywhere in the predictor where `num_cpus` and `num_gpus` can be specified.
Once #4496 is resolved, this logic should let the user specify the resource requirements only once, during init:
## Mainline
```python
predictor = TabularPredictor(...)
predictor.fit(..., num_cpus=5, num_gpus=2)
predictor.fit_extra(..., num_cpus=5, num_gpus=2)
predictions = predictor.predict(..., num_cpus=5, num_gpus=2)
```
## Proposed Solution
```python
predictor = TabularPredictor(..., num_cpus=5, num_gpus=2)
predictor.fit(...)
predictor.fit_extra(...)
predictions = predictor.predict(...)
```
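One possible shape for the default-resolution logic; the class and method names here are illustrative, not AutoGluon internals. Init-time values are stored once, and any per-call argument that is explicitly passed wins.

```python
# Illustrative sketch of init-time resource defaults with per-call overrides.
class ResourceDefaults:
    def __init__(self, num_cpus=None, num_gpus=None):
        self._defaults = {"num_cpus": num_cpus, "num_gpus": num_gpus}

    def resolve(self, **overrides):
        resolved = dict(self._defaults)
        for key, value in overrides.items():
            if value is not None:  # only explicit call-site values override
                resolved[key] = value
        return resolved

defaults = ResourceDefaults(num_cpus=5, num_gpus=2)
print(defaults.resolve())            # {'num_cpus': 5, 'num_gpus': 2}
print(defaults.resolve(num_gpus=0))  # {'num_cpus': 5, 'num_gpus': 0}
```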
## Open Questions
- What to do if the user loads the predictor on a new machine?
- What if the new machine doesn't have the specified resources? | open | 2024-09-27T03:15:48Z | 2024-12-27T08:41:47Z | https://github.com/autogluon/autogluon/issues/4497 | [
"enhancement",
"module: tabular",
"priority: 1"
] | Innixma | 2 |
pyeve/eve | flask | 596 | Flask supports i18n via Flask-Babel; will Eve support it? | Flask supports i18n via Flask-Babel; will Eve support it?
| closed | 2015-04-07T14:24:47Z | 2015-08-24T07:50:44Z | https://github.com/pyeve/eve/issues/596 | [] | gladuo | 1 |
developmentseed/lonboard | jupyter | 425 | Support `__arrow_c_array__` in viz() | It would be nice to be able to visualize any array. Note that this should be before `__geo_interface__` in the conversion steps.
You might want to do something like the following to ensure the field metadata isn't lost if extension types aren't installed.
```py
import pyarrow as pa

if hasattr(obj, "__arrow_c_array__"):
schema, _ = obj.__arrow_c_array__()
class SchemaHolder:
def __init__(self, capsule) -> None:
self.capsule = capsule
def __arrow_c_schema__(self):
return self.capsule
pyarrow_field = pa.field(SchemaHolder(schema))
pyarrow_array = pa.array(obj)
```
| closed | 2024-03-20T20:02:58Z | 2024-03-25T16:29:23Z | https://github.com/developmentseed/lonboard/issues/425 | [] | kylebarron | 0 |
cvat-ai/cvat | computer-vision | 8,897 | What values does CVAT put in the "event" class when calling a model | Hi,
I use some models I wrote myself in CVAT; they exist as functions in Nuclio. Here are the first lines of such a function:
````python
import base64
import io
import os

def __runTheModelCvat__(context, event):
    os.chdir('/app')
    data = event.body
    buf = io.BytesIO(base64.b64decode(data["image"]))
````
I want to write some informational entries to the system log (such as the project, task and picture names, and the user_id) every time CVAT calls the model. Which of these pieces of information can I find in the event class? Where can I find a description of the class and all the information that CVAT inserts here?
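One pragmatic way to answer this empirically is to log what CVAT actually sends. The helper below is a hypothetical sketch (the `threshold` key in the demo call is an assumption, not a documented guarantee); in a real Nuclio function you would call it on `event.body` and write the result via `context.logger`.

```python
# Hypothetical helper: summarize the request body CVAT sent to the function,
# replacing the large base64 image payload with a placeholder.
def summarize_request(body):
    return {key: ("<base64 image>" if key == "image" else body[key])
            for key in sorted(body)}

print(summarize_request({"image": "aGVsbG8=", "threshold": 0.5}))
# {'image': '<base64 image>', 'threshold': 0.5}
```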
Best Regards
Rose
| closed | 2025-01-03T12:30:01Z | 2025-01-07T13:01:07Z | https://github.com/cvat-ai/cvat/issues/8897 | [
"question"
] | RoseDeSable | 1 |
remsky/Kokoro-FastAPI | fastapi | 36 | Interruptible Server Stream from Client | closed | 2025-01-13T06:20:03Z | 2025-01-13T11:54:45Z | https://github.com/remsky/Kokoro-FastAPI/issues/36 | [] | remsky | 0 | |
akfamily/akshare | data-science | 5,399 | AKShare 接口问题报告 | AKShare Interface Issue Report | > 欢迎加入《数据科学实战》知识星球,交流财经数据与量化投资相关内容 |
> Welcome to join "Data Science in Practice" Knowledge
> Community for discussions on financial data and quantitative investment.
>
> 详细信息参考 | For detailed information, please visit: https://akshare.akfamily.xyz/learn.html
## 前提 | Prerequisites
遇到任何问题,请先将您的 AKShare 版本升级到**最新版**,可以通过如下命令升级 | Before reporting any issues, please upgrade
your AKShare to the **latest version** using the following command:
```
pip install akshare --upgrade # Python 版本需要大于等于 3.8 | Python version requirement ≥ 3.8
```
## 如何提交问题 | How to Submit an Issue
提交问题的同时,请提交以下相关信息,以更精准的解决问题。| Please provide the following information when
submitting an issue for more accurate problem resolution.
**不符合提交规范的 issues 会被关闭!** | **Issues that don't follow these guidelines will be closed!**
**详细问题描述** | Detailed Problem Description
1. 请先详细阅读文档对应接口的使用方式 | Please read the documentation thoroughly for the
relevant interface: https://akshare.akfamily.xyz
2. 操作系统版本,目前只支持 64 位操作系统 | Operating system version (64-bit only supported)
3. Python 版本,目前只支持 3.8 以上的版本 | Python version (must be 3.8 or above)
4. AKShare 版本,请升级到最新版 | AKShare version (please upgrade to latest)
5. 接口的名称和相应的调用代码 | Interface name and corresponding code
6. 接口报错的截图或描述 | Screenshot or description of the error
7. 期望获得的正确结果 | Expected correct results
| closed | 2024-12-03T06:33:54Z | 2024-12-03T10:45:19Z | https://github.com/akfamily/akshare/issues/5399 | [
"bug"
] | orange1949 | 0 |
ijl/orjson | numpy | 94 | Support custom bidirectional de(serialization) with currently supported types | Right now `orjson` serializes `datetime` to `str`, but cannot be configured to go in the reverse direction since the end result is just a `str`.
I would like to be able to configure `orjson` to work bidirectionally for numerous data types (including `datetime`), but A) don't have the appropriate hooks available, and B) can't override the way that `orjson` deals with types it supports automatically (like `datetime`).
For comparison, I can set up bidrectional (de)serialization of python `datetime` through `json` with the standard `json` module something like this:
```python
import datetime
import json
from typing import *
DT_KEY = ":__json__dt__:"
class MyEncoder(json.JSONEncoder):
def default(self, o: Any) -> Any:
if isinstance(o, datetime.datetime):
return {DT_KEY: o.isoformat()}
#return json.JSONEncoder.default(self, o)
return super().default(o)
def object_hook(obj: Dict[str, Any]) -> Any:
try:
iso_date = obj[DT_KEY]
return datetime.datetime.fromisoformat(iso_date)
except:
return obj
now = datetime.datetime.now()
dumped = json.dumps(now, cls = MyEncoder)
loaded = json.loads(dumped, object_hook = object_hook)
assert loaded == now
```
This is possible because of a few things:
1. `datetime.datetime` is obviously not supported by `json`, so a serialization hook is easy
2. The `object_hook` feature on `loads` allows for deserialization tricks like this for objects/dicts
Unless I'm missing something, the only option for partial satisfaction of what I want is to compose a wrapper class around my `datetime` objects and implement a default serializer for the wrapper class to get my custom "_datetime-in-a-dict_" storage encoding. However, I see no possible way to get `orjson` to deserialize this directly to `datetime`, since there is no deserialization hook comparable to the `json` `object_hook` interception.
Regarding tricking `orjson` to encode `datetime` instances in other ways by wrapping it, this is ok for single instances, but much trickier for instances nested inside containers since they would all need to be found and wrapped in advance (vs being able to catch them directly at all levels via the default hook). Having to dive through containers to wrap all desired `datetime` (or whatever) objects in advance of serialization is obviously not ideal.
There are multiple ways `orjson` could hopefully deal with this, but here are a few suggestions that would be awesome if `orjson` could implement:
1. Add options to disable native `orjson` serialization for currently handled types
* e.g. `OPT_DISABLE_DATETIME` to cause `datetime` objects to get routed through the default/custom serializer code (enabling custom serialization like my core `json` module code above)
* Options for disabling/reimplementing each native type (like `OPT_DISABLE_NDARRAY`) would be excellent for similar reasons
2. Add an `object_hook` option for object/dict decoding, similar to what the `json` module has
| closed | 2020-06-02T04:10:50Z | 2020-06-16T14:06:34Z | https://github.com/ijl/orjson/issues/94 | [] | rwarren | 1 |
streamlit/streamlit | streamlit | 10,754 | make does not install yarn, yet calls yarn, thus failing with "yarn: command not found" | ### Checklist
- [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar issues.
- [x] I added a very descriptive title to this issue.
- [x] I have provided sufficient information below to help reproduce this issue.
### Summary
Currently, when I run `make all`, `make all-devel`, `make mini-devel`, or (the unadvertised) `make build-deps`, it installs some python packages, then some apt packages (it doesn't mention yarn in the output about these), then it prints:
```
cd frontend/ ; yarn install --immutable
/bin/bash: line 1: yarn: command not found
make: *** [Makefile:296: react-init] Error 127
```
This is especially bad because it halts the script before the protobuf step runs, so even though I'm only working on the Python side, I don't get that Python part built.
Due to some other confusing problems in the build process, I also tried running `uv run make all-devel`, and eventually got the same `yarn: command not found` error.
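A guard in the Makefile would at least fail fast with a clear message. This is a hypothetical sketch, not the actual streamlit Makefile; the target name is taken from the error output, and recipe lines must be tab-indented:

```makefile
# Hypothetical guard: verify yarn exists before the react-init recipe runs.
react-init: check-yarn
	cd frontend/ ; yarn install --immutable

check-yarn:
	@command -v yarn >/dev/null 2>&1 || { \
		echo "error: yarn not found; install it (e.g. 'corepack enable') and retry"; \
		exit 1; }
```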
### Debug info
- Streamlit version: develop branch (1.43.2)
- Python version: 3.10.12
- Operating System: Windows 10 (latest)
- Browser: n/a | open | 2025-03-12T18:59:04Z | 2025-03-18T10:50:30Z | https://github.com/streamlit/streamlit/issues/10754 | [
"type:enhancement",
"area:contribution"
] | wyattscarpenter | 5 |
jupyterlab/jupyter-ai | jupyter | 802 | Render the example notebooks in the documentation? | ### Problem
There are quite a few features that are not documented in the docs but have reasonably good examples in the example notebooks.
### Proposed Solution
Should these examples be rendered in the user-facing docs so that users do not need to search GitHub to understand what the `%ai` commands do?
### Additional context
Many projects render notebooks with examples in their documentation; some use only notebooks.
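If Sphinx builds the docs, one common route is myst-nb (nbsphinx is an alternative). A hypothetical `conf.py` fragment, assuming the example notebooks are copied into the docs source tree; this is not jupyter-ai's actual configuration:

```python
# Hypothetical Sphinx conf.py fragment for rendering notebooks with myst-nb.
extensions = [
    "myst_nb",  # renders .ipynb files as documentation pages
]
nb_execution_mode = "off"  # ship the stored notebook outputs; do not re-run
```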
| closed | 2024-05-22T10:51:10Z | 2024-07-22T16:31:46Z | https://github.com/jupyterlab/jupyter-ai/issues/802 | [
"documentation",
"enhancement"
] | krassowski | 0 |
pydata/xarray | numpy | 9,351 | Add open_mfdatatree | ### What is your issue?
> Currently we have an `open_datatree` function which opens a single netcdf file (or zarr store). We could imagine an `open_mfdatatree` function which is analogous to `open_mfdataset`, which can open multiple files at once.
>
> As `DataTree` has a structure essentially the same as that of a filesystem, I'm imagining a use case where the user has a bunch of data files stored in nested directories, e.g.
>
> ```
> project
> /experimental
> data.nc
> /simulation
> /highres
> output.nc
> /lowres
> output.nc
> ```
>
> We could look through all of these folders recursively, open any files found of the correct format, and store them in a single tree.
>
> We could even allow for multiple data files in each folder if we called `open_mfdataset` on all the files found in each folder.
>
> EDIT: We could also save a tree out to multiple folders like this using a `save_mfdatatree` method.
>
> This might be particularly useful for users who want the benefit of a tree-like structure but are using a file format that doesn't support groups.
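The recursive discovery step described in the quote can be sketched with only the standard library. File-format handling (netCDF vs. zarr) is left out, and the `.nc` suffix filter is just an assumption for illustration; the demo rebuilds the directory layout from the quote.

```python
# Walk a directory tree and map each folder containing data files to a
# DataTree-style group path; the demo reproduces the layout above.
import os
import tempfile

def discover_tree(root, suffix=".nc"):
    groups = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        matches = sorted(f for f in filenames if f.endswith(suffix))
        if matches:
            rel = os.path.relpath(dirpath, root).replace(os.sep, "/")
            groups["/" if rel == "." else "/" + rel] = matches
    return groups

with tempfile.TemporaryDirectory() as root:
    for sub in ("experimental", "simulation/highres", "simulation/lowres"):
        os.makedirs(os.path.join(root, sub))
    for path in ("experimental/data.nc",
                 "simulation/highres/output.nc",
                 "simulation/lowres/output.nc"):
        open(os.path.join(root, path), "w").close()
    print(sorted(discover_tree(root).items()))
```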
_Originally posted by @TomNicholas in https://github.com/xarray-contrib/datatree/issues/51#issue-1082703410_ | open | 2024-08-13T16:50:50Z | 2024-08-13T21:14:09Z | https://github.com/pydata/xarray/issues/9351 | [
"enhancement",
"topic-DataTree"
] | keewis | 0 |
microsoft/nni | deep-learning | 5,806 | I have a question | When I run the NNI demo, I find that my trials always fail. Can anyone tell me why?

| open | 2024-09-08T14:01:01Z | 2024-09-08T14:01:01Z | https://github.com/microsoft/nni/issues/5806 | [] | coolcoolboy | 0 |
joerick/pyinstrument | django | 357 | Is it possible to profile statistically? | Hi
I need to integrate a profiler into production. We have a highly loaded web application, so profiling individual functions is not an option.
Can you tell me whether this profiler can work as a statistical profiler? For example, could it take a snapshot of the stack every, say, 20 seconds, so that later, based on this information, it would be possible to draw some conclusions and then do targeted, point-by-point profiling?
If not, maybe you know of other options that could be considered?
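For reference, the core loop of a statistical (sampling) profiler can be illustrated with only the standard library. This sketch uses the semi-private `sys._current_frames` and is illustrative, not production-ready; pyinstrument's own sampling machinery works differently.

```python
# Minimal stdlib illustration of statistical profiling: a background thread
# snapshots the main thread's call stack at a fixed interval and counts how
# often each function appears in the samples.
import collections
import sys
import threading
import time

def sample_main_thread(stop_event, interval_s, counts):
    main_ident = threading.main_thread().ident
    while not stop_event.is_set():
        frame = sys._current_frames().get(main_ident)
        while frame is not None:
            counts[frame.f_code.co_name] += 1
            frame = frame.f_back
        time.sleep(interval_s)

def busy_work(seconds):
    end = time.time() + seconds
    while time.time() < end:
        sum(i * i for i in range(1000))

counts = collections.Counter()
stop = threading.Event()
sampler = threading.Thread(target=sample_main_thread, args=(stop, 0.01, counts))
sampler.start()
busy_work(0.3)
stop.set()
sampler.join()
print("busy_work" in counts)  # the hot function shows up in the samples
```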
Thx | closed | 2025-01-13T13:38:36Z | 2025-01-13T14:52:30Z | https://github.com/joerick/pyinstrument/issues/357 | [] | JduMoment | 1 |
scikit-multilearn/scikit-multilearn | scikit-learn | 53 | fix a problem with predict_proba in BR | crashes when testing on some data sets, e.g. genbase | closed | 2017-03-10T23:35:08Z | 2020-05-21T16:50:01Z | https://github.com/scikit-multilearn/scikit-multilearn/issues/53 | [] | niedakh | 5 |
autogluon/autogluon | scikit-learn | 3,884 | The package cannot create holdout-based sub-fit folder | When I run the following code:
```
train = TabularDataset(train_df)
test = TabularDataset(test_df)
automl = TabularPredictor(label='net_payment_count', problem_type='regression', eval_metric='mean_absolute_error')
automl.fit(train, presets='best_quality')
```
It prints:
```
No path specified. Models will be saved in: "AutogluonModels\ag-20240125_211732"
Presets specified: ['best_quality']
Stack configuration (auto_stack=True): num_stack_levels=1, num_bag_folds=8, num_bag_sets=1
Dynamic stacking is enabled (dynamic_stacking=True). AutoGluon will try to determine whether the input data is affected by stacked overfitting and enable or disable stacking as a consequence.
Detecting stacked overfitting by sub-fitting AutoGluon on the input data. That is, copies of AutoGluon will be sub-fit on subset(s) of the data. Then, the holdout validation data is used to detect stacked overfitting.
Sub-fit(s) time limit is: 3600 seconds.
Starting holdout-based sub-fit for dynamic stacking. Context path is: AutogluonModels\ag-20240125_211732/ds_sub_fit/sub_fit_ho.
Running the sub-fit in a ray process to avoid memory leakage.
```
And then I get the following error after 15 minutes:
`FileNotFoundError: [WinError 3] The system cannot find the path specified: 'AutogluonModels\\ag-20240125_211732/ds_sub_fit'`
Note: version - 1.0.0 on Windows machine
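Note the mixed separators in the failing path (`AutogluonModels\ag-20240125_211732/ds_sub_fit`), which hints at `/`-string-concatenation onto an OS-native path. That is only a guess at the cause, but the difference is easy to show with `ntpath` (Windows path rules, importable on any OS):

```python
# Mixed separators from string concatenation vs. a native Windows join.
import ntpath

base = "AutogluonModels\\ag-20240125_211732"
mixed = base + "/ds_sub_fit"              # resembles the path in the traceback
joined = ntpath.join(base, "ds_sub_fit")  # consistent backslash separators
print(mixed)
print(joined)
```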
Full exception details:
```
FileNotFoundError Traceback (most recent call last)
Cell In[501], [line 5](vscode-notebook-cell:?execution_count=501&line=5)
[2](vscode-notebook-cell:?execution_count=501&line=2) test = TabularDataset(test_df)
[4](vscode-notebook-cell:?execution_count=501&line=4) automl = TabularPredictor(label='net_payment_count', problem_type='regression', eval_metric='mean_absolute_error')
----> [5](vscode-notebook-cell:?execution_count=501&line=5) automl.fit(train, presets='best_quality')
File [c:\Users\suleyman\anaconda3\envs\ml_apps\lib\site-packages\autogluon\core\utils\decorators.py:31](file:///C:/Users/suleyman/anaconda3/envs/ml_apps/lib/site-packages/autogluon/core/utils/decorators.py:31), in unpack.<locals>._unpack_inner.<locals>._call(*args, **kwargs)
[28](file:///C:/Users/suleyman/anaconda3/envs/ml_apps/lib/site-packages/autogluon/core/utils/decorators.py:28) @functools.wraps(f)
[29](file:///C:/Users/suleyman/anaconda3/envs/ml_apps/lib/site-packages/autogluon/core/utils/decorators.py:29) def _call(*args, **kwargs):
[30](file:///C:/Users/suleyman/anaconda3/envs/ml_apps/lib/site-packages/autogluon/core/utils/decorators.py:30) gargs, gkwargs = g(*other_args, *args, **kwargs)
---> [31](file:///C:/Users/suleyman/anaconda3/envs/ml_apps/lib/site-packages/autogluon/core/utils/decorators.py:31) return f(*gargs, **gkwargs)
File [c:\Users\suleyman\anaconda3\envs\ml_apps\lib\site-packages\autogluon\tabular\predictor\predictor.py:1099](file:///C:/Users/suleyman/anaconda3/envs/ml_apps/lib/site-packages/autogluon/tabular/predictor/predictor.py:1099), in TabularPredictor.fit(self, train_data, tuning_data, time_limit, presets, hyperparameters, feature_metadata, infer_limit, infer_limit_batch_size, fit_weighted_ensemble, fit_full_last_level_weighted_ensemble, full_weighted_ensemble_additionally, dynamic_stacking, calibrate_decision_threshold, num_cpus, num_gpus, **kwargs)
[1093](file:///C:/Users/suleyman/anaconda3/envs/ml_apps/lib/site-packages/autogluon/tabular/predictor/predictor.py:1093) if dynamic_stacking:
[1094](file:///C:/Users/suleyman/anaconda3/envs/ml_apps/lib/site-packages/autogluon/tabular/predictor/predictor.py:1094) logger.log(
[1095](file:///C:/Users/suleyman/anaconda3/envs/ml_apps/lib/site-packages/autogluon/tabular/predictor/predictor.py:1095) 20,
[1096](file:///C:/Users/suleyman/anaconda3/envs/ml_apps/lib/site-packages/autogluon/tabular/predictor/predictor.py:1096) f"Dynamic stacking is enabled (dynamic_stacking={dynamic_stacking}). "
[1097](file:///C:/Users/suleyman/anaconda3/envs/ml_apps/lib/site-packages/autogluon/tabular/predictor/predictor.py:1097) "AutoGluon will try to determine whether the input data is affected by stacked overfitting and enable or disable stacking as a consequence.",
[1098](file:///C:/Users/suleyman/anaconda3/envs/ml_apps/lib/site-packages/autogluon/tabular/predictor/predictor.py:1098) )
-> [1099](file:///C:/Users/suleyman/anaconda3/envs/ml_apps/lib/site-packages/autogluon/tabular/predictor/predictor.py:1099) num_stack_levels, time_limit = self._dynamic_stacking(**ds_args, ag_fit_kwargs=ag_fit_kwargs, ag_post_fit_kwargs=ag_post_fit_kwargs)
[1101](file:///C:/Users/suleyman/anaconda3/envs/ml_apps/lib/site-packages/autogluon/tabular/predictor/predictor.py:1101) if (time_limit is not None) and (time_limit <= 0):
[1102](file:///C:/Users/suleyman/anaconda3/envs/ml_apps/lib/site-packages/autogluon/tabular/predictor/predictor.py:1102) raise ValueError(
[1103](file:///C:/Users/suleyman/anaconda3/envs/ml_apps/lib/site-packages/autogluon/tabular/predictor/predictor.py:1103) f"Not enough time left to train models for the full fit. Consider specifying a larger time_limit. Time remaining: {time_limit}s"
[1104](file:///C:/Users/suleyman/anaconda3/envs/ml_apps/lib/site-packages/autogluon/tabular/predictor/predictor.py:1104) )
...
--> [598](file:///C:/Users/suleyman/anaconda3/envs/ml_apps/lib/shutil.py:598) with os.scandir(path) as scandir_it:
[599](file:///C:/Users/suleyman/anaconda3/envs/ml_apps/lib/shutil.py:599) entries = list(scandir_it)
[600](file:///C:/Users/suleyman/anaconda3/envs/ml_apps/lib/shutil.py:600) except OSError:
```
_Originally posted by @dataXcoder in https://github.com/autogluon/autogluon/discussions/3882_ | closed | 2024-01-26T07:44:04Z | 2024-04-05T18:46:01Z | https://github.com/autogluon/autogluon/issues/3884 | [
"bug",
"OS: Windows",
"module: tabular",
"Needs Triage",
"priority: 1"
] | shchur | 1 |
CorentinJ/Real-Time-Voice-Cloning | tensorflow | 1,233 | Error: metadata-generation-failed - related to scikit-learn? - python 3.11 | Arch Linux, kernel 6.4.2-arch1-1, python 3.11.3 (GCC 13.1.1), pip 23.1.2
Thank you for any help!
I followed these steps:
```
python -m venv .env
pip install ffmpeg
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu
pip install -r requirements.txt
```
Below is the log of `pip install -r requirements.txt`, with the relevant error:
```
Collecting inflect==5.3.0 (from -r requirements.txt (line 1))
Using cached inflect-5.3.0-py3-none-any.whl (32 kB)
Collecting librosa==0.8.1 (from -r requirements.txt (line 2))
Using cached librosa-0.8.1-py3-none-any.whl (203 kB)
Collecting matplotlib==3.5.1 (from -r requirements.txt (line 3))
Using cached matplotlib-3.5.1.tar.gz (35.3 MB)
Installing build dependencies: started
Installing build dependencies: finished with status 'done'
Getting requirements to build wheel: started
Getting requirements to build wheel: finished with status 'done'
Installing backend dependencies: started
Installing backend dependencies: finished with status 'done'
Preparing metadata (pyproject.toml): started
Preparing metadata (pyproject.toml): finished with status 'done'
Collecting numpy==1.20.3 (from -r requirements.txt (line 4))
Using cached numpy-1.20.3.zip (7.8 MB)
Installing build dependencies: started
Installing build dependencies: finished with status 'done'
Getting requirements to build wheel: started
Getting requirements to build wheel: finished with status 'done'
Preparing metadata (pyproject.toml): started
Preparing metadata (pyproject.toml): finished with status 'done'
Collecting Pillow==8.4.0 (from -r requirements.txt (line 5))
Using cached Pillow-8.4.0.tar.gz (49.4 MB)
Installing build dependencies: started
Installing build dependencies: finished with status 'done'
Getting requirements to build wheel: started
Getting requirements to build wheel: finished with status 'done'
Preparing metadata (pyproject.toml): started
Preparing metadata (pyproject.toml): finished with status 'done'
Collecting PyQt5==5.15.6 (from -r requirements.txt (line 6))
Using cached PyQt5-5.15.6-cp36-abi3-manylinux1_x86_64.whl (8.3 MB)
Collecting scikit-learn==1.0.2 (from -r requirements.txt (line 7))
Using cached scikit-learn-1.0.2.tar.gz (6.7 MB)
Installing build dependencies: started
Installing build dependencies: finished with status 'done'
Getting requirements to build wheel: started
Getting requirements to build wheel: finished with status 'done'
Preparing metadata (pyproject.toml): started
Preparing metadata (pyproject.toml): finished with status 'error'
error: subprocess-exited-with-error
× Preparing metadata (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [261 lines of output]
Partial import of sklearn during the build process.
setup.py:128: DeprecationWarning:
`numpy.distutils` is deprecated since NumPy 1.23.0, as a result
of the deprecation of `distutils` itself. It will be removed for
Python >= 3.12. For older Python versions it will remain present.
It is recommended to use `setuptools < 60.0` for those Python versions.
For more details, see:
https://numpy.org/devdocs/reference/distutils_status_migration.html
from numpy.distutils.command.build_ext import build_ext # noqa
INFO: C compiler: gcc -DNDEBUG -g -fwrapv -O3 -Wall -march=x86-64 -mtune=generic -O3 -pipe -fno-plt -fexceptions -Wp,-D_FORTIFY_SOURCE=2 -Wformat -Werror=format-security -fstack-clash-protection -fcf-protection -g -ffile-prefix-map=/build/python/src=/usr/src/debug/python -flto=auto -ffat-lto-objects -march=x86-64 -mtune=generic -O3 -pipe -fno-plt -fexceptions -Wp,-D_FORTIFY_SOURCE=2 -Wformat -Werror=format-security -fstack-clash-protection -fcf-protection -g -ffile-prefix-map=/build/python/src=/usr/src/debug/python -flto=auto -march=x86-64 -mtune=generic -O3 -pipe -fno-plt -fexceptions -Wp,-D_FORTIFY_SOURCE=2 -Wformat -Werror=format-security -fstack-clash-protection -fcf-protection -g -ffile-prefix-map=/build/python/src=/usr/src/debug/python -flto=auto -fPIC
INFO: compile options: '-c'
INFO: gcc: test_program.c
INFO: gcc objects/test_program.o -o test_program
INFO: C compiler: gcc -DNDEBUG -g -fwrapv -O3 -Wall -march=x86-64 -mtune=generic -O3 -pipe -fno-plt -fexceptions -Wp,-D_FORTIFY_SOURCE=2 -Wformat -Werror=format-security -fstack-clash-protection -fcf-protection -g -ffile-prefix-map=/build/python/src=/usr/src/debug/python -flto=auto -ffat-lto-objects -march=x86-64 -mtune=generic -O3 -pipe -fno-plt -fexceptions -Wp,-D_FORTIFY_SOURCE=2 -Wformat -Werror=format-security -fstack-clash-protection -fcf-protection -g -ffile-prefix-map=/build/python/src=/usr/src/debug/python -flto=auto -march=x86-64 -mtune=generic -O3 -pipe -fno-plt -fexceptions -Wp,-D_FORTIFY_SOURCE=2 -Wformat -Werror=format-security -fstack-clash-protection -fcf-protection -g -ffile-prefix-map=/build/python/src=/usr/src/debug/python -flto=auto -fPIC
INFO: compile options: '-c'
extra options: '-fopenmp'
INFO: gcc: test_program.c
INFO: gcc objects/test_program.o -o test_program -fopenmp
Compiling sklearn/__check_build/_check_build.pyx because it changed.
Compiling sklearn/preprocessing/_csr_polynomial_expansion.pyx because it changed.
Compiling sklearn/cluster/_dbscan_inner.pyx because it changed.
Compiling sklearn/cluster/_hierarchical_fast.pyx because it changed.
Compiling sklearn/cluster/_k_means_common.pyx because it changed.
Compiling sklearn/cluster/_k_means_lloyd.pyx because it changed.
Compiling sklearn/cluster/_k_means_elkan.pyx because it changed.
Compiling sklearn/cluster/_k_means_minibatch.pyx because it changed.
Compiling sklearn/datasets/_svmlight_format_fast.pyx because it changed.
Compiling sklearn/decomposition/_online_lda_fast.pyx because it changed.
Compiling sklearn/decomposition/_cdnmf_fast.pyx because it changed.
Compiling sklearn/ensemble/_gradient_boosting.pyx because it changed.
Compiling sklearn/ensemble/_hist_gradient_boosting/_gradient_boosting.pyx because it changed.
Compiling sklearn/ensemble/_hist_gradient_boosting/histogram.pyx because it changed.
Compiling sklearn/ensemble/_hist_gradient_boosting/splitting.pyx because it changed.
Compiling sklearn/ensemble/_hist_gradient_boosting/_binning.pyx because it changed.
Compiling sklearn/ensemble/_hist_gradient_boosting/_predictor.pyx because it changed.
Compiling sklearn/ensemble/_hist_gradient_boosting/_loss.pyx because it changed.
Compiling sklearn/ensemble/_hist_gradient_boosting/_bitset.pyx because it changed.
Compiling sklearn/ensemble/_hist_gradient_boosting/common.pyx because it changed.
Compiling sklearn/ensemble/_hist_gradient_boosting/utils.pyx because it changed.
Compiling sklearn/feature_extraction/_hashing_fast.pyx because it changed.
Compiling sklearn/manifold/_utils.pyx because it changed.
Compiling sklearn/manifold/_barnes_hut_tsne.pyx because it changed.
Compiling sklearn/metrics/cluster/_expected_mutual_info_fast.pyx because it changed.
Compiling sklearn/metrics/_pairwise_fast.pyx because it changed.
Compiling sklearn/metrics/_dist_metrics.pyx because it changed.
Compiling sklearn/neighbors/_ball_tree.pyx because it changed.
Compiling sklearn/neighbors/_kd_tree.pyx because it changed.
Compiling sklearn/neighbors/_partition_nodes.pyx because it changed.
Compiling sklearn/neighbors/_quad_tree.pyx because it changed.
Compiling sklearn/tree/_tree.pyx because it changed.
Compiling sklearn/tree/_splitter.pyx because it changed.
Compiling sklearn/tree/_criterion.pyx because it changed.
Compiling sklearn/tree/_utils.pyx because it changed.
Compiling sklearn/utils/sparsefuncs_fast.pyx because it changed.
Compiling sklearn/utils/_cython_blas.pyx because it changed.
Compiling sklearn/utils/arrayfuncs.pyx because it changed.
Compiling sklearn/utils/murmurhash.pyx because it changed.
Compiling sklearn/utils/_fast_dict.pyx because it changed.
Compiling sklearn/utils/_openmp_helpers.pyx because it changed.
Compiling sklearn/utils/_seq_dataset.pyx because it changed.
Compiling sklearn/utils/_weight_vector.pyx because it changed.
Compiling sklearn/utils/_random.pyx because it changed.
Compiling sklearn/utils/_logistic_sigmoid.pyx because it changed.
Compiling sklearn/utils/_readonly_array_wrapper.pyx because it changed.
Compiling sklearn/utils/_typedefs.pyx because it changed.
Compiling sklearn/svm/_newrand.pyx because it changed.
Compiling sklearn/svm/_libsvm.pyx because it changed.
Compiling sklearn/svm/_liblinear.pyx because it changed.
Compiling sklearn/svm/_libsvm_sparse.pyx because it changed.
Compiling sklearn/linear_model/_cd_fast.pyx because it changed.
Compiling sklearn/linear_model/_sgd_fast.pyx because it changed.
Compiling sklearn/linear_model/_sag_fast.pyx because it changed.
Compiling sklearn/_isotonic.pyx because it changed.
[ 1/55] Cythonizing sklearn/__check_build/_check_build.pyx
[ 2/55] Cythonizing sklearn/_isotonic.pyx
[ 3/55] Cythonizing sklearn/cluster/_dbscan_inner.pyx
[ 4/55] Cythonizing sklearn/cluster/_hierarchical_fast.pyx
[ 5/55] Cythonizing sklearn/cluster/_k_means_common.pyx
[ 6/55] Cythonizing sklearn/cluster/_k_means_elkan.pyx
[ 7/55] Cythonizing sklearn/cluster/_k_means_lloyd.pyx
[ 8/55] Cythonizing sklearn/cluster/_k_means_minibatch.pyx
[ 9/55] Cythonizing sklearn/datasets/_svmlight_format_fast.pyx
[10/55] Cythonizing sklearn/decomposition/_cdnmf_fast.pyx
[11/55] Cythonizing sklearn/decomposition/_online_lda_fast.pyx
[12/55] Cythonizing sklearn/ensemble/_gradient_boosting.pyx
[13/55] Cythonizing sklearn/ensemble/_hist_gradient_boosting/_binning.pyx
[14/55] Cythonizing sklearn/ensemble/_hist_gradient_boosting/_bitset.pyx
[15/55] Cythonizing sklearn/ensemble/_hist_gradient_boosting/_gradient_boosting.pyx
[16/55] Cythonizing sklearn/ensemble/_hist_gradient_boosting/_loss.pyx
[17/55] Cythonizing sklearn/ensemble/_hist_gradient_boosting/_predictor.pyx
[18/55] Cythonizing sklearn/ensemble/_hist_gradient_boosting/common.pyx
[19/55] Cythonizing sklearn/ensemble/_hist_gradient_boosting/histogram.pyx
[20/55] Cythonizing sklearn/ensemble/_hist_gradient_boosting/splitting.pyx
[21/55] Cythonizing sklearn/ensemble/_hist_gradient_boosting/utils.pyx
[22/55] Cythonizing sklearn/feature_extraction/_hashing_fast.pyx
[23/55] Cythonizing sklearn/linear_model/_cd_fast.pyx
[24/55] Cythonizing sklearn/linear_model/_sag_fast.pyx
[25/55] Cythonizing sklearn/linear_model/_sgd_fast.pyx
[26/55] Cythonizing sklearn/manifold/_barnes_hut_tsne.pyx
[27/55] Cythonizing sklearn/manifold/_utils.pyx
[28/55] Cythonizing sklearn/metrics/_dist_metrics.pyx
[29/55] Cythonizing sklearn/metrics/_pairwise_fast.pyx
[30/55] Cythonizing sklearn/metrics/cluster/_expected_mutual_info_fast.pyx
[31/55] Cythonizing sklearn/neighbors/_ball_tree.pyx
[32/55] Cythonizing sklearn/neighbors/_kd_tree.pyx
[33/55] Cythonizing sklearn/neighbors/_partition_nodes.pyx
[34/55] Cythonizing sklearn/neighbors/_quad_tree.pyx
[35/55] Cythonizing sklearn/preprocessing/_csr_polynomial_expansion.pyx
[36/55] Cythonizing sklearn/svm/_liblinear.pyx
[37/55] Cythonizing sklearn/svm/_libsvm.pyx
[38/55] Cythonizing sklearn/svm/_libsvm_sparse.pyx
[39/55] Cythonizing sklearn/svm/_newrand.pyx
[40/55] Cythonizing sklearn/tree/_criterion.pyx
[41/55] Cythonizing sklearn/tree/_splitter.pyx
[42/55] Cythonizing sklearn/tree/_tree.pyx
[43/55] Cythonizing sklearn/tree/_utils.pyx
[44/55] Cythonizing sklearn/utils/_cython_blas.pyx
[45/55] Cythonizing sklearn/utils/_fast_dict.pyx
[46/55] Cythonizing sklearn/utils/_logistic_sigmoid.pyx
[47/55] Cythonizing sklearn/utils/_openmp_helpers.pyx
[48/55] Cythonizing sklearn/utils/_random.pyx
[49/55] Cythonizing sklearn/utils/_readonly_array_wrapper.pyx
[50/55] Cythonizing sklearn/utils/_seq_dataset.pyx
[51/55] Cythonizing sklearn/utils/_typedefs.pyx
[52/55] Cythonizing sklearn/utils/_weight_vector.pyx
[53/55] Cythonizing sklearn/utils/arrayfuncs.pyx
[54/55] Cythonizing sklearn/utils/murmurhash.pyx
[55/55] Cythonizing sklearn/utils/sparsefuncs_fast.pyx
running dist_info
running build_src
INFO: build_src
INFO: building library "libsvm-skl" sources
INFO: building library "liblinear-skl" sources
INFO: building extension "sklearn.__check_build._check_build" sources
INFO: building extension "sklearn.preprocessing._csr_polynomial_expansion" sources
INFO: building extension "sklearn.cluster._dbscan_inner" sources
INFO: building extension "sklearn.cluster._hierarchical_fast" sources
INFO: building extension "sklearn.cluster._k_means_common" sources
INFO: building extension "sklearn.cluster._k_means_lloyd" sources
INFO: building extension "sklearn.cluster._k_means_elkan" sources
INFO: building extension "sklearn.cluster._k_means_minibatch" sources
INFO: building extension "sklearn.datasets._svmlight_format_fast" sources
INFO: building extension "sklearn.decomposition._online_lda_fast" sources
INFO: building extension "sklearn.decomposition._cdnmf_fast" sources
INFO: building extension "sklearn.ensemble._gradient_boosting" sources
INFO: building extension "sklearn.ensemble._hist_gradient_boosting._gradient_boosting" sources
INFO: building extension "sklearn.ensemble._hist_gradient_boosting.histogram" sources
INFO: building extension "sklearn.ensemble._hist_gradient_boosting.splitting" sources
INFO: building extension "sklearn.ensemble._hist_gradient_boosting._binning" sources
INFO: building extension "sklearn.ensemble._hist_gradient_boosting._predictor" sources
INFO: building extension "sklearn.ensemble._hist_gradient_boosting._loss" sources
INFO: building extension "sklearn.ensemble._hist_gradient_boosting._bitset" sources
INFO: building extension "sklearn.ensemble._hist_gradient_boosting.common" sources
INFO: building extension "sklearn.ensemble._hist_gradient_boosting.utils" sources
INFO: building extension "sklearn.feature_extraction._hashing_fast" sources
INFO: building extension "sklearn.manifold._utils" sources
INFO: building extension "sklearn.manifold._barnes_hut_tsne" sources
INFO: building extension "sklearn.metrics.cluster._expected_mutual_info_fast" sources
INFO: building extension "sklearn.metrics._pairwise_fast" sources
INFO: building extension "sklearn.metrics._dist_metrics" sources
INFO: building extension "sklearn.neighbors._ball_tree" sources
INFO: building extension "sklearn.neighbors._kd_tree" sources
INFO: building extension "sklearn.neighbors._partition_nodes" sources
INFO: building extension "sklearn.neighbors._quad_tree" sources
INFO: building extension "sklearn.tree._tree" sources
INFO: building extension "sklearn.tree._splitter" sources
INFO: building extension "sklearn.tree._criterion" sources
INFO: building extension "sklearn.tree._utils" sources
INFO: building extension "sklearn.utils.sparsefuncs_fast" sources
INFO: building extension "sklearn.utils._cython_blas" sources
INFO: building extension "sklearn.utils.arrayfuncs" sources
INFO: building extension "sklearn.utils.murmurhash" sources
INFO: building extension "sklearn.utils._fast_dict" sources
INFO: building extension "sklearn.utils._openmp_helpers" sources
INFO: building extension "sklearn.utils._seq_dataset" sources
INFO: building extension "sklearn.utils._weight_vector" sources
INFO: building extension "sklearn.utils._random" sources
INFO: building extension "sklearn.utils._logistic_sigmoid" sources
INFO: building extension "sklearn.utils._readonly_array_wrapper" sources
INFO: building extension "sklearn.utils._typedefs" sources
INFO: building extension "sklearn.svm._newrand" sources
INFO: building extension "sklearn.svm._libsvm" sources
INFO: building extension "sklearn.svm._liblinear" sources
INFO: building extension "sklearn.svm._libsvm_sparse" sources
INFO: building extension "sklearn.linear_model._cd_fast" sources
INFO: building extension "sklearn.linear_model._sgd_fast" sources
INFO: building extension "sklearn.linear_model._sag_fast" sources
INFO: building extension "sklearn._isotonic" sources
INFO: building data_files sources
INFO: build_src: building npy-pkg config files
/tmp/pip-build-env-6h00dobt/overlay/lib/python3.11/site-packages/setuptools/command/install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
warnings.warn(
Traceback (most recent call last):
File "/tmp/Real-Time-Voice-Cloning/.env/lib/python3.11/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 353, in <module>
main()
File "/tmp/Real-Time-Voice-Cloning/.env/lib/python3.11/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 335, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/tmp/Real-Time-Voice-Cloning/.env/lib/python3.11/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 149, in prepare_metadata_for_build_wheel
return hook(metadata_directory, config_settings)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/tmp/pip-build-env-6h00dobt/overlay/lib/python3.11/site-packages/setuptools/build_meta.py", line 174, in prepare_metadata_for_build_wheel
self.run_setup()
File "/tmp/pip-build-env-6h00dobt/overlay/lib/python3.11/site-packages/setuptools/build_meta.py", line 268, in run_setup
self).run_setup(setup_script=setup_script)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/tmp/pip-build-env-6h00dobt/overlay/lib/python3.11/site-packages/setuptools/build_meta.py", line 158, in run_setup
exec(compile(code, __file__, 'exec'), locals())
File "setup.py", line 319, in <module>
setup_package()
File "setup.py", line 315, in setup_package
setup(**metadata)
File "/tmp/pip-build-env-6h00dobt/overlay/lib/python3.11/site-packages/numpy/distutils/core.py", line 169, in setup
return old_setup(**new_attr)
^^^^^^^^^^^^^^^^^^^^^
File "/tmp/pip-build-env-6h00dobt/overlay/lib/python3.11/site-packages/setuptools/__init__.py", line 153, in setup
return distutils.core.setup(**attrs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/tmp/pip-build-env-6h00dobt/overlay/lib/python3.11/site-packages/setuptools/_distutils/core.py", line 148, in setup
return run_commands(dist)
^^^^^^^^^^^^^^^^^^
File "/tmp/pip-build-env-6h00dobt/overlay/lib/python3.11/site-packages/setuptools/_distutils/core.py", line 163, in run_commands
dist.run_commands()
File "/tmp/pip-build-env-6h00dobt/overlay/lib/python3.11/site-packages/setuptools/_distutils/dist.py", line 967, in run_commands
self.run_command(cmd)
File "/tmp/pip-build-env-6h00dobt/overlay/lib/python3.11/site-packages/setuptools/_distutils/dist.py", line 986, in run_command
cmd_obj.run()
File "/tmp/pip-build-env-6h00dobt/overlay/lib/python3.11/site-packages/setuptools/command/dist_info.py", line 31, in run
egg_info.run()
File "/tmp/pip-build-env-6h00dobt/overlay/lib/python3.11/site-packages/numpy/distutils/command/egg_info.py", line 24, in run
self.run_command("build_src")
File "/tmp/pip-build-env-6h00dobt/overlay/lib/python3.11/site-packages/setuptools/_distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/tmp/pip-build-env-6h00dobt/overlay/lib/python3.11/site-packages/setuptools/_distutils/dist.py", line 986, in run_command
cmd_obj.run()
File "/tmp/pip-build-env-6h00dobt/overlay/lib/python3.11/site-packages/numpy/distutils/command/build_src.py", line 144, in run
self.build_sources()
File "/tmp/pip-build-env-6h00dobt/overlay/lib/python3.11/site-packages/numpy/distutils/command/build_src.py", line 164, in build_sources
self.build_npy_pkg_config()
File "/tmp/pip-build-env-6h00dobt/overlay/lib/python3.11/site-packages/numpy/distutils/command/build_src.py", line 235, in build_npy_pkg_config
install_cmd.finalize_options()
File "/tmp/pip-build-env-6h00dobt/overlay/lib/python3.11/site-packages/numpy/distutils/command/install.py", line 21, in finalize_options
old_install.finalize_options(self)
File "/tmp/pip-build-env-6h00dobt/overlay/lib/python3.11/site-packages/setuptools/command/install.py", line 45, in finalize_options
orig.install.finalize_options(self)
File "/tmp/pip-build-env-6h00dobt/overlay/lib/python3.11/site-packages/setuptools/_distutils/command/install.py", line 325, in finalize_options
self.finalize_unix()
File "/tmp/pip-build-env-6h00dobt/overlay/lib/python3.11/site-packages/setuptools/_distutils/command/install.py", line 498, in finalize_unix
self.select_scheme("posix_prefix")
File "/tmp/pip-build-env-6h00dobt/overlay/lib/python3.11/site-packages/setuptools/_distutils/command/install.py", line 528, in select_scheme
return self._select_scheme(resolved)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/tmp/pip-build-env-6h00dobt/overlay/lib/python3.11/site-packages/setuptools/_distutils/command/install.py", line 537, in _select_scheme
setattr(self, attrname, scheme[key])
~~~~~~^^^^^
KeyError: 'headers'
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
``` | open | 2023-07-10T23:04:33Z | 2023-07-14T13:36:52Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1233 | [] | jessienab | 1 |
python-visualization/folium | data-visualization | 1,948 | No popup when clicking on specific icons (BeautifyIcon plugin) | Hi!
I was checking the BeautifyIcon plugin demo, and I don't understand why the popup doesn't show up when I click on the plane icon.
Here is the example: https://python-visualization.github.io/folium/latest/user_guide/plugins/beautify_icon.html
Is this a font-awesome related problem?
| open | 2024-05-13T11:16:59Z | 2024-07-25T23:01:44Z | https://github.com/python-visualization/folium/issues/1948 | [
"plugin",
"not our bug"
] | EmanuelCastanho | 5 |
huggingface/text-generation-inference | nlp | 2,671 | Distributed Inference failing for Llama-3.1-70b-Instruct | ### System Info
text-generation-inference docker: sha-5e0fb46 (latest)
OS: Ubuntu 22.04
Model: meta-llama/Llama-3.1-70B-Instruct
GPU Used: 4
`nvidia-smi`:
```
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 555.42.06 Driver Version: 555.42.06 CUDA Version: 12.5 |
|-----------------------------------------+------------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+========================+======================|
| 0 NVIDIA A10G Off | 00000000:00:1B.0 Off | 0 |
| 0% 25C P0 58W / 300W | 2880MiB / 23028MiB | 7% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
| 1 NVIDIA A10G Off | 00000000:00:1C.0 Off | 0 |
| 0% 19C P8 16W / 300W | 17MiB / 23028MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
| 2 NVIDIA A10G Off | 00000000:00:1D.0 Off | 0 |
| 0% 21C P8 16W / 300W | 17MiB / 23028MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
| 3 NVIDIA A10G Off | 00000000:00:1E.0 Off | 0 |
| 0% 21C P8 22W / 300W | 17MiB / 23028MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
+-----------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=========================================================================================|
| 0 N/A N/A 2395 G /usr/lib/xorg/Xorg 4MiB |
| 0 N/A N/A 2421 C /opt/conda/bin/python3.11 2858MiB |
| 1 N/A N/A 2395 G /usr/lib/xorg/Xorg 4MiB |
| 2 N/A N/A 2395 G /usr/lib/xorg/Xorg 4MiB |
| 3 N/A N/A 2395 G /usr/lib/xorg/Xorg 4MiB |
+-----------------------------------------------------------------------------------------+
ubuntu@ip-172-31-31-233:~$ docker stop main_llm
main_llm
ubuntu@ip-172-31-31-233:~$ nvidia-smi
Sun Oct 20 03:21:53 2024
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 555.42.06 Driver Version: 555.42.06 CUDA Version: 12.5 |
|-----------------------------------------+------------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+========================+======================|
| 0 NVIDIA A10G Off | 00000000:00:1B.0 Off | 0 |
| 0% 25C P0 42W / 300W | 14MiB / 23028MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
| 1 NVIDIA A10G Off | 00000000:00:1C.0 Off | 0 |
| 0% 19C P8 16W / 300W | 14MiB / 23028MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
| 2 NVIDIA A10G Off | 00000000:00:1D.0 Off | 0 |
| 0% 21C P8 16W / 300W | 14MiB / 23028MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
| 3 NVIDIA A10G Off | 00000000:00:1E.0 Off | 0 |
| 0% 21C P8 16W / 300W | 14MiB / 23028MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
```
### Information
- [X] Docker
- [ ] The CLI directly
### Tasks
- [X] An officially supported command
- [ ] My own modifications
### Reproduction
1. Run the command below
```
docker run --name main_llm_dist --gpus all --shm-size 1g -p 8010:80 -v $volume:/data ghcr.io/huggingface/text-generation-inference:latest --model-id $model --quantize eetq --max-total-tokens 6000 --sharded true --num-shard 4
```
2. In a few minutes, it raises the following error:
```
2024-10-20T03:04:31.701539Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=0
2024-10-20T03:04:31.703596Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=3
2024-10-20T03:04:31.708164Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=2
2024-10-20T03:04:31.708272Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=1
2024-10-20T03:04:41.709264Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=0
2024-10-20T03:04:41.711905Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=3
2024-10-20T03:04:41.715971Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=1
2024-10-20T03:04:41.716421Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=2
2024-10-20T03:04:48.821447Z ERROR shard-manager: text_generation_launcher: Shard complete standard error output:
2024-10-20 03:02:43.938 | INFO | text_generation_server.utils.import_utils:<module>:80 - Detected system cuda
/opt/conda/lib/python3.11/site-packages/text_generation_server/layers/gptq/cuda.py:242: FutureWarning: `torch.cuda.amp.custom_fwd(args...)` is deprecated. Please use `torch.amp.custom_fwd(args..., device_type='cuda')` instead.
@custom_fwd(cast_inputs=torch.float16)
/opt/conda/lib/python3.11/site-packages/mamba_ssm/ops/selective_scan_interface.py:158: FutureWarning: `torch.cuda.amp.custom_fwd(args...)` is deprecated. Please use `torch.amp.custom_fwd(args..., device_type='cuda')` instead.
@custom_fwd
/opt/conda/lib/python3.11/site-packages/mamba_ssm/ops/selective_scan_interface.py:231: FutureWarning: `torch.cuda.amp.custom_bwd(args...)` is deprecated. Please use `torch.amp.custom_bwd(args..., device_type='cuda')` instead.
@custom_bwd
/opt/conda/lib/python3.11/site-packages/mamba_ssm/ops/triton/layernorm.py:507: FutureWarning: `torch.cuda.amp.custom_fwd(args...)` is deprecated. Please use `torch.amp.custom_fwd(args..., device_type='cuda')` instead.
@custom_fwd
/opt/conda/lib/python3.11/site-packages/mamba_ssm/ops/triton/layernorm.py:566: FutureWarning: `torch.cuda.amp.custom_bwd(args...)` is deprecated. Please use `torch.amp.custom_bwd(args..., device_type='cuda')` instead.
@custom_bwd
[rank1]:[E1020 03:04:48.524815927 ProcessGroupNCCL.cpp:607] [Rank 1] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=1, OpType=ALLREDUCE, NumelIn=1, NumelOut=1, Timeout(ms)=120000) ran for 120010 milliseconds before timing out.
[rank1]:[E1020 03:04:48.531276913 ProcessGroupNCCL.cpp:1664] [PG 0 (default_pg) Rank 1] Exception (either an error or timeout) detected by watchdog at work: 1, last enqueued NCCL work: 1, last completed NCCL work: -1.
[rank1]:[E1020 03:04:48.531294224 ProcessGroupNCCL.cpp:1709] [PG 0 (default_pg) Rank 1] Timeout at NCCL work: 1, last enqueued NCCL work: 1, last completed NCCL work: -1.
[rank1]:[E1020 03:04:48.531301554 ProcessGroupNCCL.cpp:621] [Rank 1] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
[rank1]:[E1020 03:04:48.531306434 ProcessGroupNCCL.cpp:627] [Rank 1] To avoid data inconsistency, we are taking the entire process down.
[rank1]:[E1020 03:04:48.534540962 ProcessGroupNCCL.cpp:1515] [PG 0 (default_pg) Rank 1] Process group watchdog thread terminated with exception: [Rank 1] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=1, OpType=ALLREDUCE, NumelIn=1, NumelOut=1, Timeout(ms)=120000) ran for 120010 milliseconds before timing out.
Exception raised from checkTimeout at /opt/conda/conda-bld/pytorch_1720538435607/work/torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:609 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x70e066ba5f86 in /opt/conda/lib/python3.11/site-packages/torch/lib/libc10.so)
frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x1d2 (0x70e0167f00b2 in /opt/conda/lib/python3.11/site-packages/torch/lib/libtorch_cuda.so)
frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x233 (0x70e0167f6af3 in /opt/conda/lib/python3.11/site-packages/torch/lib/libtorch_cuda.so)
frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x10c (0x70e0167f8edc in /opt/conda/lib/python3.11/site-packages/torch/lib/libtorch_cuda.so)
frame #4: <unknown function> + 0xd3b75 (0x70e06fcc7b75 in /opt/conda/bin/../lib/libstdc++.so.6)
frame #5: <unknown function> + 0x94ac3 (0x70e06fe6bac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #6: clone + 0x44 (0x70e06fefca04 in /lib/x86_64-linux-gnu/libc.so.6)
terminate called after throwing an instance of 'c10::DistBackendError'
what(): [PG 0 (default_pg) Rank 1] Process group watchdog thread terminated with exception: [Rank 1] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=1, OpType=ALLREDUCE, NumelIn=1, NumelOut=1, Timeout(ms)=120000) ran for 120010 milliseconds before timing out.
Exception raised from checkTimeout at /opt/conda/conda-bld/pytorch_1720538435607/work/torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:609 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x70e066ba5f86 in /opt/conda/lib/python3.11/site-packages/torch/lib/libc10.so)
frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x1d2 (0x70e0167f00b2 in /opt/conda/lib/python3.11/site-packages/torch/lib/libtorch_cuda.so)
frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x233 (0x70e0167f6af3 in /opt/conda/lib/python3.11/site-packages/torch/lib/libtorch_cuda.so)
frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x10c (0x70e0167f8edc in /opt/conda/lib/python3.11/site-packages/torch/lib/libtorch_cuda.so)
frame #4: <unknown function> + 0xd3b75 (0x70e06fcc7b75 in /opt/conda/bin/../lib/libstdc++.so.6)
frame #5: <unknown function> + 0x94ac3 (0x70e06fe6bac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #6: clone + 0x44 (0x70e06fefca04 in /lib/x86_64-linux-gnu/libc.so.6)
Exception raised from ncclCommWatchdog at /opt/conda/conda-bld/pytorch_1720538435607/work/torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1521 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x70e066ba5f86 in /opt/conda/lib/python3.11/site-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0xe3ec34 (0x70e016478c34 in /opt/conda/lib/python3.11/site-packages/torch/lib/libtorch_cuda.so)
frame #2: <unknown function> + 0xd3b75 (0x70e06fcc7b75 in /opt/conda/bin/../lib/libstdc++.so.6)
frame #3: <unknown function> + 0x94ac3 (0x70e06fe6bac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #4: clone + 0x44 (0x70e06fefca04 in /lib/x86_64-linux-gnu/libc.so.6)
rank=1
2024-10-20T03:04:48.821482Z ERROR shard-manager: text_generation_launcher: Shard process was signaled to shutdown with signal 6 rank=1
2024-10-20T03:04:48.892636Z ERROR text_generation_launcher: Shard 1 failed to start
2024-10-20T03:04:48.892664Z INFO text_generation_launcher: Shutting down shards
2024-10-20T03:04:48.914829Z INFO shard-manager: text_generation_launcher: Terminating shard rank=0
2024-10-20T03:04:48.914871Z INFO shard-manager: text_generation_launcher: Waiting for shard to gracefully shutdown rank=0
2024-10-20T03:04:48.918051Z INFO shard-manager: text_generation_launcher: Terminating shard rank=3
2024-10-20T03:04:48.918081Z INFO shard-manager: text_generation_launcher: Waiting for shard to gracefully shutdown rank=3
2024-10-20T03:04:48.922333Z ERROR shard-manager: text_generation_launcher: Shard complete standard error output:
2024-10-20 03:02:43.901 | INFO | text_generation_server.utils.import_utils:<module>:80 - Detected system cuda
/opt/conda/lib/python3.11/site-packages/text_generation_server/layers/gptq/cuda.py:242: FutureWarning: `torch.cuda.amp.custom_fwd(args...)` is deprecated. Please use `torch.amp.custom_fwd(args..., device_type='cuda')` instead.
@custom_fwd(cast_inputs=torch.float16)
/opt/conda/lib/python3.11/site-packages/mamba_ssm/ops/selective_scan_interface.py:158: FutureWarning: `torch.cuda.amp.custom_fwd(args...)` is deprecated. Please use `torch.amp.custom_fwd(args..., device_type='cuda')` instead.
@custom_fwd
/opt/conda/lib/python3.11/site-packages/mamba_ssm/ops/selective_scan_interface.py:231: FutureWarning: `torch.cuda.amp.custom_bwd(args...)` is deprecated. Please use `torch.amp.custom_bwd(args..., device_type='cuda')` instead.
@custom_bwd
/opt/conda/lib/python3.11/site-packages/mamba_ssm/ops/triton/layernorm.py:507: FutureWarning: `torch.cuda.amp.custom_fwd(args...)` is deprecated. Please use `torch.amp.custom_fwd(args..., device_type='cuda')` instead.
@custom_fwd
/opt/conda/lib/python3.11/site-packages/mamba_ssm/ops/triton/layernorm.py:566: FutureWarning: `torch.cuda.amp.custom_bwd(args...)` is deprecated. Please use `torch.amp.custom_bwd(args..., device_type='cuda')` instead.
@custom_bwd
[rank2]:[E1020 03:04:48.524736665 ProcessGroupNCCL.cpp:607] [Rank 2] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=1, OpType=ALLREDUCE, NumelIn=1, NumelOut=1, Timeout(ms)=120000) ran for 120010 milliseconds before timing out.
[rank2]:[E1020 03:04:48.531283133 ProcessGroupNCCL.cpp:1664] [PG 0 (default_pg) Rank 2] Exception (either an error or timeout) detected by watchdog at work: 1, last enqueued NCCL work: 1, last completed NCCL work: -1.
[rank2]:[E1020 03:04:48.531301204 ProcessGroupNCCL.cpp:1709] [PG 0 (default_pg) Rank 2] Timeout at NCCL work: 1, last enqueued NCCL work: 1, last completed NCCL work: -1.
[rank2]:[E1020 03:04:48.531306904 ProcessGroupNCCL.cpp:621] [Rank 2] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
[rank2]:[E1020 03:04:48.531310364 ProcessGroupNCCL.cpp:627] [Rank 2] To avoid data inconsistency, we are taking the entire process down.
[rank2]:[E1020 03:04:48.534529352 ProcessGroupNCCL.cpp:1515] [PG 0 (default_pg) Rank 2] Process group watchdog thread terminated with exception: [Rank 2] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=1, OpType=ALLREDUCE, NumelIn=1, NumelOut=1, Timeout(ms)=120000) ran for 120010 milliseconds before timing out.
Exception raised from checkTimeout at /opt/conda/conda-bld/pytorch_1720538435607/work/torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:609 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x7e93bf776f86 in /opt/conda/lib/python3.11/site-packages/torch/lib/libc10.so)
frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x1d2 (0x7e936f7f00b2 in /opt/conda/lib/python3.11/site-packages/torch/lib/libtorch_cuda.so)
frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x233 (0x7e936f7f6af3 in /opt/conda/lib/python3.11/site-packages/torch/lib/libtorch_cuda.so)
frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x10c (0x7e936f7f8edc in /opt/conda/lib/python3.11/site-packages/torch/lib/libtorch_cuda.so)
frame #4: <unknown function> + 0xd3b75 (0x7e93c8cf0b75 in /opt/conda/bin/../lib/libstdc++.so.6)
frame #5: <unknown function> + 0x94ac3 (0x7e93d796cac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #6: clone + 0x44 (0x7e93d79fda04 in /lib/x86_64-linux-gnu/libc.so.6)
terminate called after throwing an instance of 'c10::DistBackendError'
what(): [PG 0 (default_pg) Rank 2] Process group watchdog thread terminated with exception: [Rank 2] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=1, OpType=ALLREDUCE, NumelIn=1, NumelOut=1, Timeout(ms)=120000) ran for 120010 milliseconds before timing out.
Exception raised from checkTimeout at /opt/conda/conda-bld/pytorch_1720538435607/work/torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:609 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x7e93bf776f86 in /opt/conda/lib/python3.11/site-packages/torch/lib/libc10.so)
frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x1d2 (0x7e936f7f00b2 in /opt/conda/lib/python3.11/site-packages/torch/lib/libtorch_cuda.so)
frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x233 (0x7e936f7f6af3 in /opt/conda/lib/python3.11/site-packages/torch/lib/libtorch_cuda.so)
frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x10c (0x7e936f7f8edc in /opt/conda/lib/python3.11/site-packages/torch/lib/libtorch_cuda.so)
frame #4: <unknown function> + 0xd3b75 (0x7e93c8cf0b75 in /opt/conda/bin/../lib/libstdc++.so.6)
frame #5: <unknown function> + 0x94ac3 (0x7e93d796cac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #6: clone + 0x44 (0x7e93d79fda04 in /lib/x86_64-linux-gnu/libc.so.6)
Exception raised from ncclCommWatchdog at /opt/conda/conda-bld/pytorch_1720538435607/work/torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1521 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x7e93bf776f86 in /opt/conda/lib/python3.11/site-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0xe3ec34 (0x7e936f478c34 in /opt/conda/lib/python3.11/site-packages/torch/lib/libtorch_cuda.so)
frame #2: <unknown function> + 0xd3b75 (0x7e93c8cf0b75 in /opt/conda/bin/../lib/libstdc++.so.6)
frame #3: <unknown function> + 0x94ac3 (0x7e93d796cac3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #4: clone + 0x44 (0x7e93d79fda04 in /lib/x86_64-linux-gnu/libc.so.6)
rank=2
2024-10-20T03:04:48.922371Z ERROR shard-manager: text_generation_launcher: Shard process was signaled to shutdown with signal 6 rank=2
2024-10-20T03:04:49.018264Z INFO shard-manager: text_generation_launcher: shard terminated rank=3
2024-10-20T03:04:49.115125Z INFO shard-manager: text_generation_launcher: shard terminated rank=0
Error: ShardCannotStart
```
### Expected behavior
Expecting TGI to be able to run distributed inference over 4xA10 GPUs. | closed | 2024-10-20T03:22:39Z | 2024-11-13T14:12:33Z | https://github.com/huggingface/text-generation-inference/issues/2671 | [] | SMAntony | 3 |
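For what it's worth, the failed allreduce is the very first collective in the run (SeqNum=1 in the trace), which suggests the shards never completed an initial NCCL handshake rather than timing out under load. A few standard NCCL environment knobs (generic NCCL variables, not TGI flags) help narrow that down; a minimal shell sketch, with the relaunch shown only as a comment:

```shell
#!/bin/sh
# Standard NCCL diagnostics/workarounds. Broken PCIe peer-to-peer is a
# common cause of a hang on the very first collective.
export NCCL_DEBUG=INFO        # log NCCL transport selection and errors
export NCCL_P2P_DISABLE=1     # rule out broken GPU peer-to-peer transfers
export NCCL_SHM_DISABLE=1     # rule out the shared-memory transport

# Then relaunch, e.g.:
# text-generation-launcher --model-id <model> --num-shard 4
env | grep '^NCCL_'
```

If the hang disappears with P2P disabled, the problem is in the GPU interconnect on the instance rather than in TGI itself.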
exaloop/codon | numpy | 400 | Trying to run the NVIDIA GPU example fails to find libdevice.10.bc | > codon run gpuEx.py
libdevice.10.bc: error: Could not open input file: No such file or directory
where gpuEx.py is:
```
import gpu
@gpu.kernel
def hello(a, b, c):
i = gpu.thread.x
c[i] = a[i] + b[i]
a = [i for i in range(16)]
b = [2*i for i in range(16)]
c = [0 for _ in range(16)]
hello(a, b, c, grid=1, block=16)
print(c)
```
OS: Ubuntu 22.04
where libdevice.10.bc is in
> ls /usr/lib/cuda/nvvm/libdevice
. .. libdevice.10.bc
I also tried adding /usr/lib/cuda/nvvm/libdevice to /etc/ld.so.conf, ran sudo ldconfig, and verified the directory was in the ldconfig path
using:
ldconfig -v | g nvvm
...
/usr/lib/cuda/nvvm/libdevice: (from /etc/ld.so.conf:2)
I also tried putting a symbolic link in /usr/lib/
> ll /usr/lib/libdevice.10.bc
lrwxrwxrwx 1 root root 44 Jun 6 21:14 /usr/lib/libdevice.10.bc -> /usr/lib/cuda/nvvm/libdevice/libdevice.10.bc
>
None of the above was successful. Any ideas?
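One note on the ld.so.conf attempts above: ldconfig only configures the dynamic loader's search path for ELF shared objects (.so files), and libdevice.10.bc is LLVM bitcode that the compiler opens as an ordinary file, so those entries cannot take effect. A sketch of locating the file and symlinking it into the working directory instead (whether Codon falls back to the current directory is an assumption; the CUDA layout below is simulated so the script is self-contained):

```shell
#!/bin/sh
set -eu

# Simulate the CUDA tree so the sketch runs anywhere; on a real
# system the file already exists under /usr/lib/cuda/nvvm/libdevice.
root="$(mktemp -d)"
mkdir -p "$root/usr/lib/cuda/nvvm/libdevice"
: > "$root/usr/lib/cuda/nvvm/libdevice/libdevice.10.bc"

# Locate the bitcode file (search "$root" here; use / on a real system).
src="$(find "$root" -name libdevice.10.bc | head -n 1)"

# Symlink it next to gpuEx.py (assumption: the compiler checks the cwd).
workdir="$root/project"
mkdir -p "$workdir"
ln -s "$src" "$workdir/libdevice.10.bc"
```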
| closed | 2023-06-07T01:28:11Z | 2024-11-10T06:10:25Z | https://github.com/exaloop/codon/issues/400 | [] | kwmartin | 3 |
graphql-python/graphene-sqlalchemy | sqlalchemy | 102 | Automatic conversion from Int to ID is problematic | Graphene-SQLAlchemy automatically converts columns of type SmallInteger or Integer to `ID!` fields if they are primary keys, but does not convert such columns to `ID` fields if they are foreign keys.
Take for example this schema:
```python
class Department(Base):
__tablename__ = 'department'
id = Column(Integer, primary_key=True)
name = Column(String)
class User(Base):
__tablename__ = 'users'
id = Column(Integer, primary_key=True)
name = Column(String)
department_id = Column(Integer, ForeignKey('department.id'))
department = relationship(Department)
class DepartmentType(SQLAlchemyObjectType):
class Meta:
model = Department
class UserType(SQLAlchemyObjectType):
class Meta:
model = User
class Query(ObjectType):
departments = List(DepartmentType)
users = List(UserType)
def resolve_departments(self, info):
return DepartmentType.get_query(info)
def resolve_users(self, info):
return UserType.get_query(info)
```
You can run the following query:
```graphql
query {
users {
id
name
departmentId
department {
id
}
}
}
```
As a result, you get something like:
```json
{
"data": {
"users": [
{
"id": "1",
"name": "Fred",
"departmentId": 1,
"department": {
"id": "1"
}
},
{
"id": "2",
"name": "Barnie",
"departmentId": 2,
"department": {
"id": "2"
}
}
]
}
}
```
As you see, `department.id` is a string (because IDs are returned as strings), while `departmentId` is a number. This turned out to be a huge problem and source of error in practice. Working with this inconsistent, fault-prone interface has bitten me many times. When storing ids in objects on the frontend, or using ids as filters, I never know whether I should use numbers or strings. Currently I have conversions from number to string and vice versa everywhere in my frontend code, and if I don't do it correctly, things stop working in hard to debug ways because you often don't recognize such type mismatches. On the server side, do I take ids used as filter parameters as IDs or Ints? If I do the former, I must then convert them to integer when using them as filter arguments for SQLAlchemy. So, really, this is no fun to work with and doesn't work in practice, because you always have this mental burden of thinking about whether your ids should be represented as strings or numbers and whether you need to convert them when passing them around.
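To make the pain concrete: every boundary where an id crosses between GraphQL and SQLAlchemy currently needs a shim like the following (the helper names are made up, but the coercions are exactly the ones described above):

```python
def to_db_id(value):
    """Coerce an id that may arrive as a GraphQL ID (string) or as an Int
    into the integer primary key that SQLAlchemy filters expect."""
    return int(value)


def to_graphql_id(value):
    """Coerce an id into the string form that the ID scalar serializes to,
    e.g. for comparing against `department.id` in a response."""
    return str(value)
```

With a consistent conversion in either direction, both shims (and the bugs that come from forgetting to apply them) would disappear.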
I suggest the conversions should be consistent. Either convert all keys, including foreign keys, to IDs, or do not make a special case conversion for primary keys. Actually I'd prefer the latter, since then I never need to think about the type and since storing numbers on the frontend uses less memory.
Now of course I know that there is the relay specification which assumes there is an `id` field with a type of `ID`. So when using the relay interface, things are different. In this case, I suggest converting to IDs everywhere (including foreign keys) - but here we need conversion of the values to global ids anyway, they are not just the row ids converted to strings.
| open | 2017-12-17T11:55:43Z | 2024-12-05T15:54:25Z | https://github.com/graphql-python/graphene-sqlalchemy/issues/102 | [] | Cito | 15 |
iperov/DeepFaceLab | deep-learning | 565 | wrong frame sequence when converting to mp4 |
when i convert to mp4 there are frames which skip sequence.
every after around 10 frames .... 1 frame will show up in the video which is not part of the numbered output framed in the merged .
for example if i have 20,000 frames. after 4,000 frames 7000th frame will come up on the video .then the video will continue from 4,000 to 4010 then 7001th frame will come .... then video will continue from 4011 then 7002 frame will show up and it goes on ...
using a 980ti i7 6th gen . | open | 2020-01-20T06:14:17Z | 2023-06-08T20:04:23Z | https://github.com/iperov/DeepFaceLab/issues/565 | [] | cabof42927 | 3 |
Esri/arcgis-python-api | jupyter | 1,819 | to_featureset() is not handling fields with null date values correctly | I am using to_featureset() to create a featureset from a spatially enabled dataframe and then using edit_features() to edit a feature layer on AGOL. I was having this issue https://github.com/Esri/arcgis-python-api/issues/1693 so I rolled back to a previous version of the API. Now that 2.3.0 is out, I was going to try my luck again.
It seems there is a new bug with to_featureset(). Only some of my features manage to successfully make it to the feature layer using edit_features(). And I am getting this message when they fail.
{'addResults': [{'objectId': 1,
'uniqueId': 1,
'globalId': None,
'success': False,
'error': {'code': 1000,
'description': 'The incoming tabular data stream (TDS) remote procedure call (RPC) protocol stream is incorrect. Parameter 21 ("@applicant_closeout_date"): The supplied value is not a valid instance of data type float. Check the source data for invalid values. An example of an invalid value is data of numeric type with scale greater than precision.'}}],
'updateResults': [],
'deleteResults': []}
That field is not a float. It is a date with null values mixed in. I did some troubleshooting and it seems to_featureset() is treating that field differently than other date fields in the data with no nulls.
Here is how it treats that field if I remove the null values, and these values will work in a bulk edit using edit_features(adds=featureset.features).

Here is how it treats that field if it has null values, and it won't work using edit_features(adds=featureset.features).

| closed | 2024-05-02T12:43:23Z | 2024-07-10T14:32:30Z | https://github.com/Esri/arcgis-python-api/issues/1819 | [
"bug"
] | tbrobin | 12 |
tensorpack/tensorpack | tensorflow | 1,265 | How to get results from predict.py --evaluate with Tensorpack? | Hi, everyone.
Problems occurred while using tensorpack.
So, I trained the FasterRCNN example (not ResNet; I used my own backbone network, named VoVNet). I want to evaluate the performance, so I tried the command below.
> ./predict.py --evaluate coco_minival2014-outputs225000.jn --load ./train_log/maskrcnn/model-230000.data-00000-of-00001 --config DATA.BASEDIR=COCO/DIR
and I saw this result:
```
2019-07-15 09:39:53.542112: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supportinstructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2019-07-15 09:39:54.005480: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1356] Found device with properties:
name: TITAN X (Pascal) major: 6 minor: 1 memoryClockRate(GHz): 1.531
pciBusID: 0000:04:00.0
totalMemory: 11.90GiB freeMemory: 7.38GiB
2019-07-15 09:39:54.648601: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1356] Found device with properties:
name: TITAN X (Pascal) major: 6 minor: 1 memoryClockRate(GHz): 1.531
pciBusID: 0000:05:00.0
totalMemory: 11.90GiB freeMemory: 7.38GiB
2019-07-15 09:39:55.184735: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1356] Found device with properties:
name: TITAN X (Pascal) major: 6 minor: 1 memoryClockRate(GHz): 1.531
pciBusID: 0000:08:00.0
totalMemory: 11.90GiB freeMemory: 7.38GiB
2019-07-15 09:39:55.650285: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1356] Found device with properties:
name: TITAN X (Pascal) major: 6 minor: 1 memoryClockRate(GHz): 1.531
pciBusID: 0000:09:00.0
totalMemory: 11.90GiB freeMemory: 7.38GiB
2019-07-15 09:39:56.412059: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1356] Found device with properties:
name: TITAN X (Pascal) major: 6 minor: 1 memoryClockRate(GHz): 1.531
pciBusID: 0000:83:00.0
totalMemory: 11.90GiB freeMemory: 7.38GiB
2019-07-15 09:39:56.934024: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1356] Found device with properties:
name: TITAN X (Pascal) major: 6 minor: 1 memoryClockRate(GHz): 1.531
pciBusID: 0000:84:00.0
totalMemory: 11.90GiB freeMemory: 7.38GiB
2019-07-15 09:39:57.630006: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1356] Found device with properties:
name: TITAN X (Pascal) major: 6 minor: 1 memoryClockRate(GHz): 1.531
pciBusID: 0000:87:00.0
totalMemory: 11.90GiB freeMemory: 7.38GiB
2019-07-15 09:39:58.285865: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1356] Found device with properties:
name: TITAN X (Pascal) major: 6 minor: 1 memoryClockRate(GHz): 1.531
pciBusID: 0000:88:00.0
totalMemory: 11.90GiB freeMemory: 7.38GiB
2019-07-15 09:39:58.303603: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1435] Adding visib gpu devices: 0, 1, 2, 3, 4, 5, 6, 7
2019-07-15 09:40:02.697575: I tensorflow/core/common_runtime/gpu/gpu_device.cc:923] Device intercnect StreamExecutor with strength 1 edge matrix:
2019-07-15 09:40:02.697677: I tensorflow/core/common_runtime/gpu/gpu_device.cc:929] 0 1 2 3 5 6 7
2019-07-15 09:40:02.697689: I tensorflow/core/common_runtime/gpu/gpu_device.cc:942] 0: N Y Y Y N N N
2019-07-15 09:40:02.697696: I tensorflow/core/common_runtime/gpu/gpu_device.cc:942] 1: Y N Y Y N N N
2019-07-15 09:40:02.697733: I tensorflow/core/common_runtime/gpu/gpu_device.cc:942] 2: Y Y N Y N N N
2019-07-15 09:40:02.697740: I tensorflow/core/common_runtime/gpu/gpu_device.cc:942] 3: Y Y Y N N N N
2019-07-15 09:40:02.697748: I tensorflow/core/common_runtime/gpu/gpu_device.cc:942] 4: N N N N Y Y Y
2019-07-15 09:40:02.697756: I tensorflow/core/common_runtime/gpu/gpu_device.cc:942] 5: N N N N N Y Y
2019-07-15 09:40:02.697764: I tensorflow/core/common_runtime/gpu/gpu_device.cc:942] 6: N N N N Y N Y
2019-07-15 09:40:02.697772: I tensorflow/core/common_runtime/gpu/gpu_device.cc:942] 7: N N N N Y Y N
2019-07-15 09:40:02.705121: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1053] Created TensFlow device (/device:GPU:0 with 7128 MB memory) -> physical GPU (device: 0, name: TITAN X (Pascal pci bus id: 0000:04:00.0, compute capability: 6.1)
2019-07-15 09:40:02.843159: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1053] Created TensFlow device (/device:GPU:1 with 7126 MB memory) -> physical GPU (device: 1, name: TITAN X (Pascal pci bus id: 0000:05:00.0, compute capability: 6.1)
2019-07-15 09:40:02.962598: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1053] Created TensFlow device (/device:GPU:2 with 7126 MB memory) -> physical GPU (device: 2, name: TITAN X (Pascal pci bus id: 0000:08:00.0, compute capability: 6.1)
2019-07-15 09:40:03.066917: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1053] Created TensFlow device (/device:GPU:3 with 7126 MB memory) -> physical GPU (device: 3, name: TITAN X (Pascal pci bus id: 0000:09:00.0, compute capability: 6.1)
2019-07-15 09:40:03.152312: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1053] Created TensFlow device (/device:GPU:4 with 7126 MB memory) -> physical GPU (device: 4, name: TITAN X (Pascal pci bus id: 0000:83:00.0, compute capability: 6.1)
2019-07-15 09:40:03.245517: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1053] Created TensFlow device (/device:GPU:5 with 7126 MB memory) -> physical GPU (device: 5, name: TITAN X (Pascal pci bus id: 0000:84:00.0, compute capability: 6.1)
2019-07-15 09:40:03.331755: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1053] Created TensFlow device (/device:GPU:6 with 7126 MB memory) -> physical GPU (device: 6, name: TITAN X (Pascal pci bus id: 0000:87:00.0, compute capability: 6.1)
2019-07-15 09:40:03.427539: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1053] Created TensFlow device (/device:GPU:7 with 7126 MB memory) -> physical GPU (device: 7, name: TITAN X (Pascal pci bus id: 0000:88:00.0, compute capability: 6.1)
fault
[0715 09:40:03 @config.py:287] Config: ------------------------------------------
{'BACKBONE': {'FREEZE_AFFINE': False,
'FREEZE_AT': 0,
'NORM': 'GN',
'RESNET_NUM_BLOCKS': [3, 4, 23, 3],
'STRIDE_1X1': False,
'TF_PAD_MODE': False,
'WEIGHTS': ''},
'CASCADE': {'BBOX_REG_WEIGHTS': [[10.0, 10.0, 5.0, 5.0], [20.0, 20.0, 10.0, 10.0],
[30.0, 30.0, 15.0, 15.0]],
'IOUS': [0.5, 0.6, 0.7]},
'DATA': {'ABSOLUTE_COORD': True,
'BASEDIR': 'COCO/DIR',
'CLASS_NAMES': ['BG', 'person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus',
'train', 'truck', 'boat', 'traffic light', 'fire hydrant', 'stop sign',
'parking meter', 'bench', 'bird', 'cat', 'dog', 'horse', 'sheep', 'cow'
'elephant', 'bear', 'zebra', 'giraffe', 'backpack', 'umbrella', 'handba,
'tie', 'suitcase', 'frisbee', 'skis', 'snowboard', 'sports ball', 'kite
'baseball bat', 'baseball glove', 'skateboard', 'surfboard',
'tennis racket', 'bottle', 'wine glass', 'cup', 'fork', 'knife', 'spoon
'bowl', 'banana', 'apple', 'sandwich', 'orange', 'broccoli', 'carrot',
'hot dog', 'pizza', 'donut', 'cake', 'chair', 'couch', 'potted plant',
'bed', 'dining table', 'toilet', 'tv', 'laptop', 'mouse', 'remote',
'keyboard', 'cell phone', 'microwave', 'oven', 'toaster', 'sink',
'refrigerator', 'book', 'clock', 'vase', 'scissors', 'teddy bear',
'hair drier', 'toothbrush'],
'NUM_CATEGORY': 80,
'NUM_WORKERS': 10,
'TRAIN': ('coco_train2014', 'coco_valminusminival2014'),
'VAL': ('coco_minival2014',)},
'FPN': {'ANCHOR_STRIDES': (4, 8, 16, 32, 64),
'CASCADE': True,
'FRCNN_CONV_HEAD_DIM': 256,
'FRCNN_FC_HEAD_DIM': 1024,
'FRCNN_HEAD_FUNC': 'fastrcnn_4conv1fc_gn_head',
'MRCNN_HEAD_FUNC': 'maskrcnn_up4conv_gn_head',
'NORM': 'GN',
'NUM_CHANNEL': 256,
'PROPOSAL_MODE': 'Level',
'RESOLUTION_REQUIREMENT': 32},
'FRCNN': {'BATCH_PER_IM': 512,
'BBOX_REG_WEIGHTS': [10.0, 10.0, 5.0, 5.0],
'FG_RATIO': 0.25,
'FG_THRESH': 0.5},
'MODE_FPN': True,
'MODE_MASK': True,
'MRCNN': {'HEAD_DIM': 256},
'PREPROC': {'MAX_SIZE': 1344.0,
'PIXEL_MEAN': [123.675, 116.28, 103.53],
'PIXEL_STD': [58.395, 57.12, 57.375],
'TEST_SHORT_EDGE_SIZE': 800,
'TRAIN_SHORT_EDGE_SIZE': [640, 800]},
'RPN': {'ANCHOR_RATIOS': (0.5, 1.0, 2.0),
'ANCHOR_SIZES': (32, 64, 128, 256, 512),
'ANCHOR_STRIDE': 16,
'BATCH_PER_IM': 256,
'CROWD_OVERLAP_THRESH': 9.99,
'FG_RATIO': 0.5,
'HEAD_DIM': 1024,
'MIN_SIZE': 0,
'NEGATIVE_ANCHOR_THRESH': 0.3,
'NUM_ANCHOR': 15,
'POSITIVE_ANCHOR_THRESH': 0.7,
'PROPOSAL_NMS_THRESH': 0.7,
'TEST_PER_LEVEL_NMS_TOPK': 1000,
'TEST_POST_NMS_TOPK': 1000,
'TEST_PRE_NMS_TOPK': 6000,
'TRAIN_PER_LEVEL_NMS_TOPK': 2000,
'TRAIN_POST_NMS_TOPK': 2000,
'TRAIN_PRE_NMS_TOPK': 12000},
'TEST': {'FRCNN_NMS_THRESH': 0.5,
'RESULTS_PER_IM': 100,
'RESULT_SCORE_THRESH': 0.05,
'RESULT_SCORE_THRESH_VIS': 0.5},
'TRAIN': {'BASE_LR': 0.01,
'EVAL_PERIOD': 25,
'LR_SCHEDULE': [420000, 500000, 540000],
'NUM_GPUS': 8,
'STARTING_EPOCH': 1,
'STEPS_PER_EPOCH': 500,
'WARMUP': 1000,
'WARMUP_INIT_LR': 0.0033000000000000004,
'WEIGHT_DECAY': 0.0001},
'TRAINER': 'horovod'}
[0715 09:40:03 @varmanip.py:195] Checkpoint path ./train_log/maskrcnn/model-230000.data-00000-of-001 is auto-corrected to ./train_log/maskrcnn/model-230000.
[0715 09:40:03 @sesscreate.py:38] WRN User-provided custom session config may not work due to TF gs. See https://github.com/tensorpack/tensorpack/issues/497 for workarounds.
[0715 09:40:03 @multigpu.py:44] Building graph for predict tower 'tower0' on device /gpu:0 ...
[0715 09:40:03 @registry.py:135] stem1 input: [None, 3, None, None]
[0715 09:40:03 @registry.py:135] stem1/gn input: [None, 64, None, None]
[0715 09:40:03 @registry.py:143] stem1/gn output: [None, 64, None, None]
[0715 09:40:03 @registry.py:143] stem1 output: [None, 64, None, None]
[0715 09:40:03 @registry.py:135] stem2 input: [None, 64, None, None]
[0715 09:40:03 @registry.py:135] stem2/gn input: [None, 64, None, None]
[0715 09:40:03 @registry.py:143] stem2/gn output: [None, 64, None, None]
[0715 09:40:03 @registry.py:143] stem2 output: [None, 64, None, None]
[0715 09:40:03 @registry.py:135] stem3 input: [None, 64, None, None]
[0715 09:40:03 @registry.py:135] stem3/gn input: [None, 128, None, None]
[0715 09:40:04 @registry.py:143] stem3/gn output: [None, 128, None, None]
[0715 09:40:04 @registry.py:143] stem3 output: [None, 128, None, None]
[0715 09:40:04 @registry.py:135] OSA2/OSA2_1_conv1 input: [None, 128, None, None]
[0715 09:40:04 @registry.py:135] OSA2/OSA2_1_conv1/gn input: [None, 128, None, None]
[0715 09:40:04 @registry.py:143] OSA2/OSA2_1_conv1/gn output: [None, 128, None, None]
[0715 09:40:04 @registry.py:143] OSA2/OSA2_1_conv1 output: [None, 128, None, None]
[0715 09:40:04 @registry.py:135] OSA2/OSA2_1_conv2 input: [None, 128, None, None]
[0715 09:40:04 @registry.py:135] OSA2/OSA2_1_conv2/gn input: [None, 128, None, None]
[0715 09:40:04 @registry.py:143] OSA2/OSA2_1_conv2/gn output: [None, 128, None, None]
[0715 09:40:04 @registry.py:143] OSA2/OSA2_1_conv2 output: [None, 128, None, None]
[0715 09:40:04 @registry.py:135] OSA2/OSA2_1_conv3 input: [None, 128, None, None]
[0715 09:40:04 @registry.py:135] OSA2/OSA2_1_conv3/gn input: [None, 128, None, None]
[0715 09:40:04 @registry.py:143] OSA2/OSA2_1_conv3/gn output: [None, 128, None, None]
[0715 09:40:04 @registry.py:143] OSA2/OSA2_1_conv3 output: [None, 128, None, None]
[0715 09:40:04 @registry.py:135] OSA2/OSA2_1_conv4 input: [None, 128, None, None]
[0715 09:40:04 @registry.py:135] OSA2/OSA2_1_conv4/gn input: [None, 128, None, None]
[0715 09:40:04 @registry.py:143] OSA2/OSA2_1_conv4/gn output: [None, 128, None, None]
[0715 09:40:04 @registry.py:143] OSA2/OSA2_1_conv4 output: [None, 128, None, None]
[0715 09:40:04 @registry.py:135] OSA2/OSA2_1_conv5 input: [None, 128, None, None]
[0715 09:40:04 @registry.py:135] OSA2/OSA2_1_conv5/gn input: [None, 128, None, None]
[0715 09:40:04 @registry.py:143] OSA2/OSA2_1_conv5/gn output: [None, 128, None, None]
[0715 09:40:04 @registry.py:143] OSA2/OSA2_1_conv5 output: [None, 128, None, None]
[0715 09:40:04 @registry.py:135] OSA2/last input: [None, 768, None, None]
[0715 09:40:04 @registry.py:135] OSA2/last/gn input: [None, 256, None, None]
[0715 09:40:04 @registry.py:143] OSA2/last/gn output: [None, 256, None, None]
[0715 09:40:04 @registry.py:143] OSA2/last output: [None, 256, None, None]
[0715 09:40:04 @registry.py:135] OSA3/OSA3_Pooling input: [None, 256, None, None]
[0715 09:40:04 @registry.py:143] OSA3/OSA3_Pooling output: [None, 256, None, None]
[0715 09:40:04 @registry.py:135] OSA3/OSA3_1_conv1 input: [None, 256, None, None]
[0715 09:40:04 @registry.py:135] OSA3/OSA3_1_conv1/gn input: [None, 160, None, None]
[0715 09:40:04 @registry.py:143] OSA3/OSA3_1_conv1/gn output: [None, 160, None, None]
[0715 09:40:04 @registry.py:143] OSA3/OSA3_1_conv1 output: [None, 160, None, None]
[0715 09:40:04 @registry.py:135] OSA3/OSA3_1_conv2 input: [None, 160, None, None]
[0715 09:40:04 @registry.py:135] OSA3/OSA3_1_conv2/gn input: [None, 160, None, None]
[0715 09:40:04 @registry.py:143] OSA3/OSA3_1_conv2/gn output: [None, 160, None, None]
[0715 09:40:04 @registry.py:143] OSA3/OSA3_1_conv2 output: [None, 160, None, None]
[0715 09:40:04 @registry.py:135] OSA3/OSA3_1_conv3 input: [None, 160, None, None]
[0715 09:40:04 @registry.py:135] OSA3/OSA3_1_conv3/gn input: [None, 160, None, None]
[0715 09:40:04 @registry.py:143] OSA3/OSA3_1_conv3/gn output: [None, 160, None, None]
[0715 09:40:04 @registry.py:143] OSA3/OSA3_1_conv3 output: [None, 160, None, None]
[0715 09:40:04 @registry.py:135] OSA3/OSA3_1_conv4 input: [None, 160, None, None]
[0715 09:40:04 @registry.py:135] OSA3/OSA3_1_conv4/gn input: [None, 160, None, None]
[0715 09:40:04 @registry.py:143] OSA3/OSA3_1_conv4/gn output: [None, 160, None, None]
[0715 09:40:04 @registry.py:143] OSA3/OSA3_1_conv4 output: [None, 160, None, None]
[0715 09:40:04 @registry.py:135] OSA3/OSA3_1_conv5 input: [None, 160, None, None]
[0715 09:40:04 @registry.py:135] OSA3/OSA3_1_conv5/gn input: [None, 160, None, None]
[0715 09:40:04 @registry.py:143] OSA3/OSA3_1_conv5/gn output: [None, 160, None, None]
[0715 09:40:04 @registry.py:143] OSA3/OSA3_1_conv5 output: [None, 160, None, None]
[0715 09:40:04 @registry.py:135] OSA3/last input: [None, 1056, None, None]
[0715 09:40:04 @registry.py:135] OSA3/last/gn input: [None, 512, None, None]
[0715 09:40:04 @registry.py:143] OSA3/last/gn output: [None, 512, None, None]
[0715 09:40:04 @registry.py:143] OSA3/last output: [None, 512, None, None]
[0715 09:40:04 @registry.py:135] OSA4/OSA4_Pooling input: [None, 512, None, None]
[0715 09:40:04 @registry.py:143] OSA4/OSA4_Pooling output: [None, 512, None, None]
[0715 09:40:04 @registry.py:135] OSA4/OSA4_1_conv1 input: [None, 512, None, None]
[0715 09:40:04 @registry.py:135] OSA4/OSA4_1_conv1/gn input: [None, 192, None, None]
[0715 09:40:04 @registry.py:143] OSA4/OSA4_1_conv1/gn output: [None, 192, None, None]
[0715 09:40:04 @registry.py:143] OSA4/OSA4_1_conv1 output: [None, 192, None, None]
[0715 09:40:04 @registry.py:135] OSA4/OSA4_1_conv2 input: [None, 192, None, None]
[0715 09:40:04 @registry.py:135] OSA4/OSA4_1_conv2/gn input: [None, 192, None, None]
[0715 09:40:04 @registry.py:143] OSA4/OSA4_1_conv2/gn output: [None, 192, None, None]
[0715 09:40:04 @registry.py:143] OSA4/OSA4_1_conv2 output: [None, 192, None, None]
[0715 09:40:04 @registry.py:135] OSA4/OSA4_1_conv3 input: [None, 192, None, None]
[0715 09:40:04 @registry.py:135] OSA4/OSA4_1_conv3/gn input: [None, 192, None, None]
[0715 09:40:04 @registry.py:143] OSA4/OSA4_1_conv3/gn output: [None, 192, None, None]
[0715 09:40:04 @registry.py:143] OSA4/OSA4_1_conv3 output: [None, 192, None, None]
[0715 09:40:04 @registry.py:135] OSA4/OSA4_1_conv4 input: [None, 192, None, None]
[0715 09:40:04 @registry.py:135] OSA4/OSA4_1_conv4/gn input: [None, 192, None, None]
[0715 09:40:04 @registry.py:143] OSA4/OSA4_1_conv4/gn output: [None, 192, None, None]
[0715 09:40:04 @registry.py:143] OSA4/OSA4_1_conv4 output: [None, 192, None, None]
[0715 09:40:04 @registry.py:135] OSA4/OSA4_1_conv5 input: [None, 192, None, None]
[0715 09:40:04 @registry.py:135] OSA4/OSA4_1_conv5/gn input: [None, 192, None, None]
[0715 09:40:05 @registry.py:143] OSA4/OSA4_1_conv5/gn output: [None, 192, None, None]
[0715 09:40:05 @registry.py:143] OSA4/OSA4_1_conv5 output: [None, 192, None, None]
[0715 09:40:05 @registry.py:135] OSA4/last input: [None, 1472, None, None]
[0715 09:40:05 @registry.py:135] OSA4/last/gn input: [None, 768, None, None]
[0715 09:40:05 @registry.py:143] OSA4/last/gn output: [None, 768, None, None]
[0715 09:40:05 @registry.py:143] OSA4/last output: [None, 768, None, None]
[0715 09:40:05 @registry.py:135] OSA4/OSA4_2_conv1 input: [None, 768, None, None]
[0715 09:40:05 @registry.py:143] OSA4/OSA4_2_conv1 output: [None, 192, None, None]
[0715 09:40:05 @registry.py:135] OSA4/OSA4_2_conv2 input: [None, 192, None, None]
[0715 09:40:05 @registry.py:143] OSA4/OSA4_2_conv2 output: [None, 192, None, None]
[0715 09:40:05 @registry.py:135] OSA4/OSA4_2_conv3 input: [None, 192, None, None]
[0715 09:40:05 @registry.py:143] OSA4/OSA4_2_conv3 output: [None, 192, None, None]
[0715 09:40:05 @registry.py:135] OSA4/OSA4_2_conv4 input: [None, 192, None, None]
[0715 09:40:05 @registry.py:143] OSA4/OSA4_2_conv4 output: [None, 192, None, None]
[0715 09:40:05 @registry.py:135] OSA4/OSA4_2_conv5 input: [None, 192, None, None]
[0715 09:40:05 @registry.py:143] OSA4/OSA4_2_conv5 output: [None, 192, None, None]
[0715 09:40:05 @registry.py:135] OSA4/last2 input: [None, 1728, None, None]
[0715 09:40:05 @registry.py:135] OSA4/last2/gn input: [None, 768, None, None]
[0715 09:40:05 @registry.py:143] OSA4/last2/gn output: [None, 768, None, None]
[0715 09:40:05 @registry.py:143] OSA4/last2 output: [None, 768, None, None]
[0715 09:40:05 @registry.py:135] OSA5/OSA5_Pooling input: [None, 768, None, None]
[0715 09:40:05 @registry.py:143] OSA5/OSA5_Pooling output: [None, 768, None, None]
[0715 09:40:05 @registry.py:135] OSA5/OSA5_1_conv1 input: [None, 768, None, None]
[0715 09:40:05 @registry.py:135] OSA5/OSA5_1_conv1/gn input: [None, 224, None, None]
[0715 09:40:05 @registry.py:143] OSA5/OSA5_1_conv1/gn output: [None, 224, None, None]
[0715 09:40:05 @registry.py:143] OSA5/OSA5_1_conv1 output: [None, 224, None, None]
[0715 09:40:05 @registry.py:135] OSA5/OSA5_1_conv2 input: [None, 224, None, None]
[0715 09:40:05 @registry.py:135] OSA5/OSA5_1_conv2/gn input: [None, 224, None, None]
[0715 09:40:05 @registry.py:143] OSA5/OSA5_1_conv2/gn output: [None, 224, None, None]
[0715 09:40:05 @registry.py:143] OSA5/OSA5_1_conv2 output: [None, 224, None, None]
[0715 09:40:05 @registry.py:135] OSA5/OSA5_1_conv3 input: [None, 224, None, None]
[0715 09:40:05 @registry.py:135] OSA5/OSA5_1_conv3/gn input: [None, 224, None, None]
[0715 09:40:05 @registry.py:143] OSA5/OSA5_1_conv3/gn output: [None, 224, None, None]
[0715 09:40:05 @registry.py:143] OSA5/OSA5_1_conv3 output: [None, 224, None, None]
[0715 09:40:05 @registry.py:135] OSA5/OSA5_1_conv4 input: [None, 224, None, None]
[0715 09:40:05 @registry.py:135] OSA5/OSA5_1_conv4/gn input: [None, 224, None, None]
[0715 09:40:05 @registry.py:143] OSA5/OSA5_1_conv4/gn output: [None, 224, None, None]
[0715 09:40:05 @registry.py:143] OSA5/OSA5_1_conv4 output: [None, 224, None, None]
[0715 09:40:05 @registry.py:135] OSA5/OSA5_1_conv5 input: [None, 224, None, None]
[0715 09:40:05 @registry.py:135] OSA5/OSA5_1_conv5/gn input: [None, 224, None, None]
[0715 09:40:05 @registry.py:143] OSA5/OSA5_1_conv5/gn output: [None, 224, None, None]
[0715 09:40:05 @registry.py:143] OSA5/OSA5_1_conv5 output: [None, 224, None, None]
[0715 09:40:05 @registry.py:135] OSA5/last input: [None, 1888, None, None]
[0715 09:40:05 @registry.py:135] OSA5/last/gn input: [None, 1024, None, None]
[0715 09:40:05 @registry.py:143] OSA5/last/gn output: [None, 1024, None, None]
[0715 09:40:05 @registry.py:143] OSA5/last output: [None, 1024, None, None]
[0715 09:40:05 @registry.py:135] OSA5/OSA5_2_conv1 input: [None, 1024, None, None]
[0715 09:40:05 @registry.py:143] OSA5/OSA5_2_conv1 output: [None, 224, None, None]
[0715 09:40:05 @registry.py:135] OSA5/OSA5_2_conv2 input: [None, 224, None, None]
[0715 09:40:05 @registry.py:143] OSA5/OSA5_2_conv2 output: [None, 224, None, None]
[0715 09:40:05 @registry.py:135] OSA5/OSA5_2_conv3 input: [None, 224, None, None]
[0715 09:40:05 @registry.py:143] OSA5/OSA5_2_conv3 output: [None, 224, None, None]
[0715 09:40:05 @registry.py:135] OSA5/OSA5_2_conv4 input: [None, 224, None, None]
[0715 09:40:05 @registry.py:143] OSA5/OSA5_2_conv4 output: [None, 224, None, None]
[0715 09:40:05 @registry.py:135] OSA5/OSA5_2_conv5 input: [None, 224, None, None]
[0715 09:40:05 @registry.py:143] OSA5/OSA5_2_conv5 output: [None, 224, None, None]
[0715 09:40:05 @registry.py:135] OSA5/last2 input: [None, 2144, None, None]
[0715 09:40:05 @registry.py:135] OSA5/last2/gn input: [None, 1024, None, None]
[0715 09:40:05 @registry.py:143] OSA5/last2/gn output: [None, 1024, None, None]
[0715 09:40:05 @registry.py:143] OSA5/last2 output: [None, 1024, None, None]
[0715 09:40:05 @registry.py:135] fpn input: [None, 256, None, None],[None, 512, None, None],[None768, None, None],[None, 1024, None, None]
[0715 09:40:05 @registry.py:135] fpn/lateral_1x1_c2 input: [None, 256, None, None]
[0715 09:40:05 @registry.py:143] fpn/lateral_1x1_c2 output: [None, 256, None, None]
[0715 09:40:05 @registry.py:135] fpn/lateral_1x1_c3 input: [None, 512, None, None]
[0715 09:40:05 @registry.py:143] fpn/lateral_1x1_c3 output: [None, 256, None, None]
[0715 09:40:05 @registry.py:135] fpn/lateral_1x1_c4 input: [None, 768, None, None]
[0715 09:40:05 @registry.py:143] fpn/lateral_1x1_c4 output: [None, 256, None, None]
[0715 09:40:05 @registry.py:135] fpn/lateral_1x1_c5 input: [None, 1024, None, None]
[0715 09:40:05 @registry.py:143] fpn/lateral_1x1_c5 output: [None, 256, None, None]
[0715 09:40:05 @registry.py:135] fpn/gn_c2 input: [None, 256, None, None]
[0715 09:40:05 @registry.py:143] fpn/gn_c2 output: [None, 256, None, None]
[0715 09:40:05 @registry.py:135] fpn/gn_c3 input: [None, 256, None, None]
[0715 09:40:05 @registry.py:143] fpn/gn_c3 output: [None, 256, None, None]
[0715 09:40:05 @registry.py:135] fpn/gn_c4 input: [None, 256, None, None]
[0715 09:40:05 @registry.py:143] fpn/gn_c4 output: [None, 256, None, None]
[0715 09:40:05 @registry.py:135] fpn/gn_c5 input: [None, 256, None, None]
[0715 09:40:05 @registry.py:143] fpn/gn_c5 output: [None, 256, None, None]
[0715 09:40:05 @registry.py:135] fpn/upsample_lat5 input: [None, 256, None, None]
[0715 09:40:06 @registry.py:143] fpn/upsample_lat5 output: [None, 256, None, None]
[0715 09:40:06 @registry.py:135] fpn/upsample_lat4 input: [None, 256, None, None]
[0715 09:40:06 @registry.py:143] fpn/upsample_lat4 output: [None, 256, None, None]
[0715 09:40:06 @registry.py:135] fpn/upsample_lat3 input: [None, 256, None, None]
[0715 09:40:06 @registry.py:143] fpn/upsample_lat3 output: [None, 256, None, None]
[0715 09:40:06 @registry.py:135] fpn/posthoc_3x3_p2 input: [None, 256, None, None]
[0715 09:40:06 @registry.py:143] fpn/posthoc_3x3_p2 output: [None, 256, None, None]
[0715 09:40:06 @registry.py:135] fpn/posthoc_3x3_p3 input: [None, 256, None, None]
[0715 09:40:06 @registry.py:143] fpn/posthoc_3x3_p3 output: [None, 256, None, None]
[0715 09:40:06 @registry.py:135] fpn/posthoc_3x3_p4 input: [None, 256, None, None]
[0715 09:40:06 @registry.py:143] fpn/posthoc_3x3_p4 output: [None, 256, None, None]
[0715 09:40:06 @registry.py:135] fpn/posthoc_3x3_p5 input: [None, 256, None, None]
[0715 09:40:06 @registry.py:143] fpn/posthoc_3x3_p5 output: [None, 256, None, None]
[0715 09:40:06 @registry.py:135] fpn/gn_p2 input: [None, 256, None, None]
[0715 09:40:06 @registry.py:143] fpn/gn_p2 output: [None, 256, None, None]
[0715 09:40:06 @registry.py:135] fpn/gn_p3 input: [None, 256, None, None]
[0715 09:40:06 @registry.py:143] fpn/gn_p3 output: [None, 256, None, None]
[0715 09:40:06 @registry.py:135] fpn/gn_p4 input: [None, 256, None, None]
[0715 09:40:06 @registry.py:143] fpn/gn_p4 output: [None, 256, None, None]
[0715 09:40:06 @registry.py:135] fpn/gn_p5 input: [None, 256, None, None]
[0715 09:40:06 @registry.py:143] fpn/gn_p5 output: [None, 256, None, None]
[0715 09:40:06 @registry.py:135] fpn/maxpool_p6 input: [None, 256, None, None]
[0715 09:40:06 @registry.py:143] fpn/maxpool_p6 output: [None, 256, None, None]
[0715 09:40:06 @registry.py:143] fpn output: [None, 256, None, None],[None, 256, None, None],[Non 256, None, None],[None, 256, None, None],[None, 256, None, None]
[0715 09:40:06 @registry.py:135] rpn input: [None, 256, None, None]
[0715 09:40:06 @registry.py:135] rpn/conv0 input: [None, 256, None, None]
[0715 09:40:06 @registry.py:143] rpn/conv0 output: [None, 256, None, None]
[0715 09:40:06 @registry.py:135] rpn/class input: [None, 256, None, None]
[0715 09:40:06 @registry.py:143] rpn/class output: [None, 3, None, None]
[0715 09:40:06 @registry.py:135] rpn/box input: [None, 256, None, None]
[0715 09:40:06 @registry.py:143] rpn/box output: [None, 12, None, None]
[0715 09:40:06 @registry.py:143] rpn output: [None, None, 3],[None, None, 3, 4]
[0715 09:40:07 @registry.py:135] cascade_rcnn_stage1/head input: [None, 256, 7, 7]
[0715 09:40:07 @registry.py:135] cascade_rcnn_stage1/head/conv0 input: [None, 256, 7, 7]
[0715 09:40:07 @registry.py:143] cascade_rcnn_stage1/head/conv0 output: [None, 256, 7, 7]
[0715 09:40:07 @registry.py:135] cascade_rcnn_stage1/head/gn0 input: [None, 256, 7, 7]
[0715 09:40:07 @registry.py:143] cascade_rcnn_stage1/head/gn0 output: [None, 256, 7, 7]
[0715 09:40:07 @registry.py:135] cascade_rcnn_stage1/head/conv1 input: [None, 256, 7, 7]
[0715 09:40:07 @registry.py:143] cascade_rcnn_stage1/head/conv1 output: [None, 256, 7, 7]
[0715 09:40:07 @registry.py:135] cascade_rcnn_stage1/head/gn1 input: [None, 256, 7, 7]
[0715 09:40:07 @registry.py:143] cascade_rcnn_stage1/head/gn1 output: [None, 256, 7, 7]
[0715 09:40:07 @registry.py:135] cascade_rcnn_stage1/head/conv2 input: [None, 256, 7, 7]
[0715 09:40:07 @registry.py:143] cascade_rcnn_stage1/head/conv2 output: [None, 256, 7, 7]
[0715 09:40:07 @registry.py:135] cascade_rcnn_stage1/head/gn2 input: [None, 256, 7, 7]
[0715 09:40:07 @registry.py:143] cascade_rcnn_stage1/head/gn2 output: [None, 256, 7, 7]
[0715 09:40:07 @registry.py:135] cascade_rcnn_stage1/head/conv3 input: [None, 256, 7, 7]
[0715 09:40:07 @registry.py:143] cascade_rcnn_stage1/head/conv3 output: [None, 256, 7, 7]
[0715 09:40:07 @registry.py:135] cascade_rcnn_stage1/head/gn3 input: [None, 256, 7, 7]
[0715 09:40:07 @registry.py:143] cascade_rcnn_stage1/head/gn3 output: [None, 256, 7, 7]
[0715 09:40:07 @registry.py:135] cascade_rcnn_stage1/head/fc input: [None, 256, 7, 7]
[0715 09:40:08 @registry.py:143] cascade_rcnn_stage1/head/fc output: [None, 1024]
[0715 09:40:08 @registry.py:143] cascade_rcnn_stage1/head output: [None, 1024]
[0715 09:40:08 @registry.py:135] cascade_rcnn_stage1/outputs input: [None, 1024]
[0715 09:40:08 @registry.py:135] cascade_rcnn_stage1/outputs/class input: [None, 1024]
[0715 09:40:08 @registry.py:143] cascade_rcnn_stage1/outputs/class output: [None, 81]
[0715 09:40:08 @registry.py:135] cascade_rcnn_stage1/outputs/box input: [None, 1024]
[0715 09:40:08 @registry.py:143] cascade_rcnn_stage1/outputs/box output: [None, 4]
[0715 09:40:08 @registry.py:143] cascade_rcnn_stage1/outputs output: [None, 81],[None, 1, 4]
[0715 09:40:08 @registry.py:135] cascade_rcnn_stage2/head input: [None, 256, 7, 7]
[0715 09:40:08 @registry.py:135] cascade_rcnn_stage2/head/conv0 input: [None, 256, 7, 7]
[0715 09:40:08 @registry.py:143] cascade_rcnn_stage2/head/conv0 output: [None, 256, 7, 7]
[0715 09:40:08 @registry.py:135] cascade_rcnn_stage2/head/gn0 input: [None, 256, 7, 7]
[0715 09:40:08 @registry.py:143] cascade_rcnn_stage2/head/gn0 output: [None, 256, 7, 7]
[0715 09:40:08 @registry.py:135] cascade_rcnn_stage2/head/conv1 input: [None, 256, 7, 7]
[0715 09:40:08 @registry.py:143] cascade_rcnn_stage2/head/conv1 output: [None, 256, 7, 7]
[0715 09:40:08 @registry.py:135] cascade_rcnn_stage2/head/gn1 input: [None, 256, 7, 7]
[0715 09:40:08 @registry.py:143] cascade_rcnn_stage2/head/gn1 output: [None, 256, 7, 7]
[0715 09:40:08 @registry.py:135] cascade_rcnn_stage2/head/conv2 input: [None, 256, 7, 7]
[0715 09:40:08 @registry.py:143] cascade_rcnn_stage2/head/conv2 output: [None, 256, 7, 7]
[0715 09:40:08 @registry.py:135] cascade_rcnn_stage2/head/gn2 input: [None, 256, 7, 7]
[0715 09:40:08 @registry.py:143] cascade_rcnn_stage2/head/gn2 output: [None, 256, 7, 7]
[0715 09:40:08 @registry.py:135] cascade_rcnn_stage2/head/conv3 input: [None, 256, 7, 7]
[0715 09:40:08 @registry.py:143] cascade_rcnn_stage2/head/conv3 output: [None, 256, 7, 7]
[0715 09:40:08 @registry.py:135] cascade_rcnn_stage2/head/gn3 input: [None, 256, 7, 7]
[0715 09:40:08 @registry.py:143] cascade_rcnn_stage2/head/gn3 output: [None, 256, 7, 7]
[0715 09:40:08 @registry.py:135] cascade_rcnn_stage2/head/fc input: [None, 256, 7, 7]
[0715 09:40:08 @registry.py:143] cascade_rcnn_stage2/head/fc output: [None, 1024]
[0715 09:40:08 @registry.py:143] cascade_rcnn_stage2/head output: [None, 1024]
[0715 09:40:08 @registry.py:135] cascade_rcnn_stage2/outputs input: [None, 1024]
[0715 09:40:08 @registry.py:135] cascade_rcnn_stage2/outputs/class input: [None, 1024]
[0715 09:40:08 @registry.py:143] cascade_rcnn_stage2/outputs/class output: [None, 81]
[0715 09:40:08 @registry.py:135] cascade_rcnn_stage2/outputs/box input: [None, 1024]
[0715 09:40:08 @registry.py:143] cascade_rcnn_stage2/outputs/box output: [None, 4]
[0715 09:40:08 @registry.py:143] cascade_rcnn_stage2/outputs output: [None, 81],[None, 1, 4]
[0715 09:40:09 @registry.py:135] cascade_rcnn_stage3/head input: [None, 256, 7, 7]
[0715 09:40:09 @registry.py:135] cascade_rcnn_stage3/head/conv0 input: [None, 256, 7, 7]
[0715 09:40:09 @registry.py:143] cascade_rcnn_stage3/head/conv0 output: [None, 256, 7, 7]
[0715 09:40:09 @registry.py:135] cascade_rcnn_stage3/head/gn0 input: [None, 256, 7, 7]
[0715 09:40:09 @registry.py:143] cascade_rcnn_stage3/head/gn0 output: [None, 256, 7, 7]
[0715 09:40:09 @registry.py:135] cascade_rcnn_stage3/head/conv1 input: [None, 256, 7, 7]
[0715 09:40:09 @registry.py:143] cascade_rcnn_stage3/head/conv1 output: [None, 256, 7, 7]
[0715 09:40:09 @registry.py:135] cascade_rcnn_stage3/head/gn1 input: [None, 256, 7, 7]
[0715 09:40:09 @registry.py:143] cascade_rcnn_stage3/head/gn1 output: [None, 256, 7, 7]
[0715 09:40:09 @registry.py:135] cascade_rcnn_stage3/head/conv2 input: [None, 256, 7, 7]
[0715 09:40:09 @registry.py:143] cascade_rcnn_stage3/head/conv2 output: [None, 256, 7, 7]
[0715 09:40:09 @registry.py:135] cascade_rcnn_stage3/head/gn2 input: [None, 256, 7, 7]
[0715 09:40:09 @registry.py:143] cascade_rcnn_stage3/head/gn2 output: [None, 256, 7, 7]
[0715 09:40:09 @registry.py:135] cascade_rcnn_stage3/head/conv3 input: [None, 256, 7, 7]
[0715 09:40:09 @registry.py:143] cascade_rcnn_stage3/head/conv3 output: [None, 256, 7, 7]
[0715 09:40:09 @registry.py:135] cascade_rcnn_stage3/head/gn3 input: [None, 256, 7, 7]
[0715 09:40:09 @registry.py:143] cascade_rcnn_stage3/head/gn3 output: [None, 256, 7, 7]
[0715 09:40:09 @registry.py:135] cascade_rcnn_stage3/head/fc input: [None, 256, 7, 7]
[0715 09:40:09 @registry.py:143] cascade_rcnn_stage3/head/fc output: [None, 1024]
[0715 09:40:09 @registry.py:143] cascade_rcnn_stage3/head output: [None, 1024]
[0715 09:40:09 @registry.py:135] cascade_rcnn_stage3/outputs input: [None, 1024]
[0715 09:40:09 @registry.py:135] cascade_rcnn_stage3/outputs/class input: [None, 1024]
[0715 09:40:09 @registry.py:143] cascade_rcnn_stage3/outputs/class output: [None, 81]
[0715 09:40:09 @registry.py:135] cascade_rcnn_stage3/outputs/box input: [None, 1024]
[0715 09:40:09 @registry.py:143] cascade_rcnn_stage3/outputs/box output: [None, 4]
[0715 09:40:09 @registry.py:143] cascade_rcnn_stage3/outputs output: [None, 81],[None, 1, 4]
[0715 09:40:09 @registry.py:135] maskrcnn input: [None, 256, 14, 14]
[0715 09:40:09 @registry.py:135] maskrcnn/fcn0 input: [None, 256, 14, 14]
[0715 09:40:09 @registry.py:143] maskrcnn/fcn0 output: [None, 256, 14, 14]
[0715 09:40:09 @registry.py:135] maskrcnn/gn0 input: [None, 256, 14, 14]
[0715 09:40:09 @registry.py:143] maskrcnn/gn0 output: [None, 256, 14, 14]
[0715 09:40:09 @registry.py:135] maskrcnn/fcn1 input: [None, 256, 14, 14]
[0715 09:40:09 @registry.py:143] maskrcnn/fcn1 output: [None, 256, 14, 14]
[0715 09:40:09 @registry.py:135] maskrcnn/gn1 input: [None, 256, 14, 14]
[0715 09:40:10 @registry.py:143] maskrcnn/gn1 output: [None, 256, 14, 14]
[0715 09:40:10 @registry.py:135] maskrcnn/fcn2 input: [None, 256, 14, 14]
[0715 09:40:10 @registry.py:143] maskrcnn/fcn2 output: [None, 256, 14, 14]
[0715 09:40:10 @registry.py:135] maskrcnn/gn2 input: [None, 256, 14, 14]
[0715 09:40:10 @registry.py:143] maskrcnn/gn2 output: [None, 256, 14, 14]
[0715 09:40:10 @registry.py:135] maskrcnn/fcn3 input: [None, 256, 14, 14]
[0715 09:40:10 @registry.py:143] maskrcnn/fcn3 output: [None, 256, 14, 14]
[0715 09:40:10 @registry.py:135] maskrcnn/gn3 input: [None, 256, 14, 14]
[0715 09:40:10 @registry.py:143] maskrcnn/gn3 output: [None, 256, 14, 14]
[0715 09:40:10 @registry.py:135] maskrcnn/deconv input: [None, 256, 14, 14]
[0715 09:40:10 @registry.py:143] maskrcnn/deconv output: [None, 256, 28, 28]
[0715 09:40:10 @registry.py:135] maskrcnn/conv input: [None, 256, 28, 28]
[0715 09:40:10 @registry.py:143] maskrcnn/conv output: [None, 80, 28, 28]
[0715 09:40:10 @registry.py:143] maskrcnn output: [None, 80, 28, 28]
[0715 09:40:10 @multigpu.py:44] Building graph for predict tower 'tower1' on device /gpu:1 ...
[0715 09:40:15 @multigpu.py:44] Building graph for predict tower 'tower2' on device /gpu:2 ...
[0715 09:40:19 @multigpu.py:44] Building graph for predict tower 'tower3' on device /gpu:3 ...
[0715 09:40:24 @multigpu.py:44] Building graph for predict tower 'tower4' on device /gpu:4 ...
[0715 09:40:29 @multigpu.py:44] Building graph for predict tower 'tower5' on device /gpu:5 ...
[0715 09:40:34 @multigpu.py:44] Building graph for predict tower 'tower6' on device /gpu:6 ...
[0715 09:40:39 @multigpu.py:44] Building graph for predict tower 'tower7' on device /gpu:7 ...
[0715 09:40:44 @sessinit.py:87] WRN The following variables are in the checkpoint, but not found in the graph: global_step, learning_rate
2019-07-15 09:40:44.795837: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1435] Adding visible gpu devices: 0, 1, 2, 3, 4, 5, 6, 7
2019-07-15 09:40:44.799803: I tensorflow/core/common_runtime/gpu/gpu_device.cc:923] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-07-15 09:40:44.799839: I tensorflow/core/common_runtime/gpu/gpu_device.cc:929] 0 1 2 3 4 5 6 7
2019-07-15 09:40:44.799850: I tensorflow/core/common_runtime/gpu/gpu_device.cc:942] 0: N Y Y Y N N N N
2019-07-15 09:40:44.799857: I tensorflow/core/common_runtime/gpu/gpu_device.cc:942] 1: Y N Y Y N N N N
2019-07-15 09:40:44.799864: I tensorflow/core/common_runtime/gpu/gpu_device.cc:942] 2: Y Y N Y N N N N
2019-07-15 09:40:44.799871: I tensorflow/core/common_runtime/gpu/gpu_device.cc:942] 3: Y Y Y N N N N N
2019-07-15 09:40:44.799878: I tensorflow/core/common_runtime/gpu/gpu_device.cc:942] 4: N N N N N Y Y Y
2019-07-15 09:40:44.799884: I tensorflow/core/common_runtime/gpu/gpu_device.cc:942] 5: N N N N Y N Y Y
2019-07-15 09:40:44.799891: I tensorflow/core/common_runtime/gpu/gpu_device.cc:942] 6: N N N N Y Y N Y
2019-07-15 09:40:44.799898: I tensorflow/core/common_runtime/gpu/gpu_device.cc:942] 7: N N N N Y Y Y N
2019-07-15 09:40:44.803352: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1053] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 7128 MB memory) -> physical GPU (device: 0, name: TITAN X (Pascal), pci bus id: 0000:04:00.0, compute capability: 6.1)
2019-07-15 09:40:44.803691: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1053] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:1 with 7126 MB memory) -> physical GPU (device: 1, name: TITAN X (Pascal), pci bus id: 0000:05:00.0, compute capability: 6.1)
2019-07-15 09:40:44.803999: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1053] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:2 with 7126 MB memory) -> physical GPU (device: 2, name: TITAN X (Pascal), pci bus id: 0000:08:00.0, compute capability: 6.1)
2019-07-15 09:40:44.804320: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1053] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:3 with 7126 MB memory) -> physical GPU (device: 3, name: TITAN X (Pascal), pci bus id: 0000:09:00.0, compute capability: 6.1)
2019-07-15 09:40:44.804829: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1053] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:4 with 7126 MB memory) -> physical GPU (device: 4, name: TITAN X (Pascal), pci bus id: 0000:83:00.0, compute capability: 6.1)
2019-07-15 09:40:44.805267: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1053] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:5 with 7126 MB memory) -> physical GPU (device: 5, name: TITAN X (Pascal), pci bus id: 0000:84:00.0, compute capability: 6.1)
2019-07-15 09:40:44.805771: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1053] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:6 with 7126 MB memory) -> physical GPU (device: 6, name: TITAN X (Pascal), pci bus id: 0000:87:00.0, compute capability: 6.1)
2019-07-15 09:40:44.806304: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1053] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:7 with 7126 MB memory) -> physical GPU (device: 7, name: TITAN X (Pascal), pci bus id: 0000:88:00.0, compute capability: 6.1)
[0715 09:40:49 @sessinit.py:114] Restoring checkpoint from ./train_log/maskrcnn/model-230000 ...
[0715 09:40:50 @predict.py:87] Evaluating coco_minival2014 ...
loading annotations into memory...
Done (t=0.48s)
creating index...
index created!
[0715 09:40:51 @coco.py:68] Instances loaded from COCO/DIR/annotations/instances_minival2014.json.
100%|████████████████████████████████████████████████████| 5000/5000 [00:00<00:00, 122886.96it/s]
[0715 09:40:51 @timer.py:50] Load Groundtruth Boxes for instances_minival2014.json finished, time:0.0483 sec.
[0715 09:40:51 @data.py:412] Found 5000 images for inference.
loading annotations into memory...
Done (t=1.12s)
creating index...
index created!
[0715 09:40:52 @coco.py:68] Instances loaded from COCO/DIR/annotations/instances_minival2014.json.
100%|████████████████████████████████████████████████████| 5000/5000 [00:00<00:00, 152657.82it/s]
[0715 09:40:52 @timer.py:50] Load Groundtruth Boxes for instances_minival2014.json finished, time:0.0361 sec.
[0715 09:40:52 @data.py:412] Found 5000 images for inference.
loading annotations into memory...
Done (t=0.47s)
creating index...
index created!
[0715 09:40:53 @coco.py:68] Instances loaded from COCO/DIR/annotations/instances_minival2014.json.
100%|████████████████████████████████████████████████████| 5000/5000 [00:00<00:00, 135149.28it/s]
[0715 09:40:53 @timer.py:50] Load Groundtruth Boxes for instances_minival2014.json finished, time:0.0415 sec.
[0715 09:40:53 @data.py:412] Found 5000 images for inference.
loading annotations into memory...
Done (t=0.47s)
creating index...
index created!
[0715 09:40:53 @coco.py:68] Instances loaded from COCO/DIR/annotations/instances_minival2014.json.
100%|████████████████████████████████████████████████████| 5000/5000 [00:00<00:00, 163515.52it/s]
[0715 09:40:53 @timer.py:50] Load Groundtruth Boxes for instances_minival2014.json finished, time:0.0339 sec.
[0715 09:40:53 @data.py:412] Found 5000 images for inference.
loading annotations into memory...
Done (t=1.24s)
creating index...
index created!
[0715 09:40:55 @coco.py:68] Instances loaded from COCO/DIR/annotations/instances_minival2014.json.
100%|████████████████████████████████████████████████████| 5000/5000 [00:00<00:00, 135470.56it/s]
[0715 09:40:55 @timer.py:50] Load Groundtruth Boxes for instances_minival2014.json finished, time:0.0415 sec.
[0715 09:40:55 @data.py:412] Found 5000 images for inference.
loading annotations into memory...
Done (t=0.47s)
creating index...
index created!
[0715 09:40:55 @coco.py:68] Instances loaded from COCO/DIR/annotations/instances_minival2014.json.
100%|████████████████████████████████████████████████████| 5000/5000 [00:00<00:00, 170575.05it/s]
[0715 09:40:55 @timer.py:50] Load Groundtruth Boxes for instances_minival2014.json finished, time:0.0324 sec.
[0715 09:40:55 @data.py:412] Found 5000 images for inference.
loading annotations into memory...
Done (t=0.48s)
creating index...
index created!
[0715 09:40:56 @coco.py:68] Instances loaded from COCO/DIR/annotations/instances_minival2014.json.
100%|████████████████████████████████████████████████████| 5000/5000 [00:00<00:00, 148630.88it/s]
[0715 09:40:56 @timer.py:50] Load Groundtruth Boxes for instances_minival2014.json finished, time:0.0371 sec.
[0715 09:40:56 @data.py:412] Found 5000 images for inference.
loading annotations into memory...
Done (t=1.15s)
creating index...
index created!
[0715 09:40:57 @coco.py:68] Instances loaded from COCO/DIR/annotations/instances_minival2014.json.
100%|████████████████████████████████████████████████████| 5000/5000 [00:00<00:00, 164042.49it/s]
[0715 09:40:57 @timer.py:50] Load Groundtruth Boxes for instances_minival2014.json finished, time:0.0336 sec.
[0715 09:40:57 @data.py:412] Found 5000 images for inference.
100%|████████████████████████████████████████████████████████| 5000/5000 [08:06<00:00, 1.62it/s]
loading annotations into memory...
Done (t=0.59s)
creating index...
index created!
[0715 09:49:04 @coco.py:68] Instances loaded from COCO/DIR/annotations/instances_minival2014.json.
```
Also, I saw that **coco_minival2014-outputs225000.json-coco_minival2014** was created.
But when I opened this file, I could only see the contents below:
> []
Additionally, this file's size is just **2 bytes**.
what is the problem??
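For completeness, a quick stdlib check of the output file looks like this (the path is the filename shown above; on my file it returns `(2, 0)`, matching the empty `[]`):

```python
import json
import os


def inspect_results(path):
    """Return (file size in bytes, number of detection records in the JSON)."""
    size = os.path.getsize(path)
    with open(path) as f:
        detections = json.load(f)
    return size, len(detections)


# For my run this returns (2, 0): the file holds only "[]",
# i.e. the evaluation wrote no detections at all.
# inspect_results("coco_minival2014-outputs225000.json-coco_minival2014")
```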
Thank you!! | closed | 2019-07-15T01:06:35Z | 2019-07-15T05:07:19Z | https://github.com/tensorpack/tensorpack/issues/1265 | [] | Dev-HJYoo | 6 |
piccolo-orm/piccolo | fastapi | 991 | Pressing Enter in `Reverse 1 migration? [y/N]` still leads to reversing. | When undoing the migration, pressing `Enter` after the prompt `Reverse 1 migration? [y/N]` still reverses it, regardless of the default `N` behavior. | closed | 2024-05-20T22:22:58Z | 2024-05-21T11:00:08Z | https://github.com/piccolo-orm/piccolo/issues/991 | [] | metakot | 2
graphistry/pygraphistry | jupyter | 6 | Support Python 3 | closed | 2015-06-25T21:29:55Z | 2015-08-06T13:53:25Z | https://github.com/graphistry/pygraphistry/issues/6 | [
"enhancement"
] | thibaudh | 1 | |
MaartenGr/BERTopic | nlp | 1,512 | TypeError: Cannot use scipy.linalg.eigh for sparse A with k >= N. Use scipy.linalg.eigh(A.toarray()) or reduce k. | Hi!
When I try to run bertopic() I get the following error:
TypeError: Cannot use scipy.linalg.eigh for sparse A with k >= N. Use scipy.linalg.eigh(A.toarray()) or reduce k.
I increased the number of documents to 205504, which I think should be enough.
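In case it helps narrow things down, here is the workaround configuration I am considering: swapping UMAP's spectral initialisation (where the sparse `eigh`/`eigsh` call happens) for a random one. The parameter values below are my own guesses, not a confirmed fix:

```python
from bertopic import BERTopic
from umap import UMAP

# Spectral init solves an eigenproblem on the k-NN graph; "random" avoids it.
umap_model = UMAP(
    n_neighbors=15,
    n_components=5,
    min_dist=0.0,
    metric="cosine",
    init="random",   # guess: sidestep the sparse eigh call that raises here
)
topic_model = BERTopic(umap_model=umap_model)
```

If the random init makes the error go away, that would at least confirm the eigendecomposition step as the culprit.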
Does someone have any idea what could cause the problem? | open | 2023-09-07T14:28:18Z | 2024-05-12T18:56:41Z | https://github.com/MaartenGr/BERTopic/issues/1512 | [] | ElskeNijhof | 11 |
tensorflow/tensor2tensor | deep-learning | 1,074 | Transformer NaN loss for multiple GPUs for every dataset | ### Description
I got NaN loss for the transformer on all multi-GPU settings, for every dataset. I am not sure what is happening.
The error occurs immediately after step 0.
### Environment information
```
OS: Ubuntu 16.04
$ pip freeze | grep tensor
# your output here
tensor2tensor==1.9.0
tensorboard==1.10.0
tensorflow-gpu==1.10.1
tensorflow-hub==0.1.1
$ python -V
# your output here
Python 3.5.2
```
### For bugs: reproduction and error logs
```
# Steps to reproduce:
export CUDA_VISIBLE_DEVICES=0,1
export DATA_DIR=translate_ende_wmt_bpe32k
export HPARAMS=transformer_base
export MODEL=transformer
export TRAIN_DIR=transformer-transformer-base
# whenever worker_gpu >= 2, the error occurs
export WORKER_GPU=2
t2t-trainer \
--data_dir=$DATA_DIR \
--problems=$PROBLEM \
--model=$MODEL \
--hparams_set=$HPARAMS \
--output_dir=$TRAIN_DIR \
--worker_gpu=2
```
# Error logs:
```
INFO:tensorflow:Overriding hparams in transformer_base with batch_size=4096
WARNING:tensorflow:From /usr/local/lib/python3.5/dist-packages/tensor2tensor/utils/trainer_lib.py:165: RunConfig.__init__ (from tensorflow.contrib.learn.python.learn.estimators.run_config) is deprecated and will be removed in a future version.
Instructions for updating:
When switching to tf.estimator.Estimator, use tf.estimator.RunConfig instead.
INFO:tensorflow:schedule=train_and_evaluate
INFO:tensorflow:worker_gpu=2
INFO:tensorflow:sync=False
WARNING:tensorflow:Schedule=train_and_evaluate. Assuming that training is running on a single machine.
INFO:tensorflow:datashard_devices: ['gpu:0', 'gpu:1']
INFO:tensorflow:caching_devices: None
INFO:tensorflow:ps_devices: ['gpu:0', 'gpu:1']
INFO:tensorflow:Using config: {'_log_step_count_steps': 100, '_model_dir': '/projects/nmt/t2t_train/translate_ende_wmt_bpe32k/transformer-transformer_base-v12-b4096-gpu2-b4096', '_master': '', '_keep_checkpoint_max': 10, '_save_summary_steps': 100, '_is_chief': True, '_session_config': gpu_options {
per_process_gpu_memory_fraction: 0.95
}
allow_soft_placement: true
graph_options {
optimizer_options {
}
}
, '_tf_config': gpu_options {
per_process_gpu_memory_fraction: 1.0
}
, '_save_checkpoints_steps': 5000, '_keep_checkpoint_every_n_hours': 10000, 'data_parallelism': <tensor2tensor.utils.expert_utils.Parallelism object at 0x7faa4e818908>, '_task_id': 0, '_task_type': None, '_environment': 'local', '_num_ps_replicas': 0, '_cluster_spec': <tensorflow.python.training.server_lib.ClusterSpec object at 0x7fa9924a7550>, '_evaluation_master': '', 'use_tpu': False, '_save_checkpoints_secs': None, '_num_worker_replicas': 0, '_train_distribute': None, '_tf_random_seed': 100, 't2t_device_info': {'num_async_replicas': 1}}
WARNING:tensorflow:Estimator's model_fn (<function T2TModel.make_estimator_model_fn.<locals>.wrapping_model_fn at 0x7fa98b2ace18>) includes params argument, but params are not passed to Estimator.
INFO:tensorflow:Using ValidationMonitor
WARNING:tensorflow:From /usr/local/lib/python3.5/dist-packages/tensorflow/contrib/learn/python/learn/monitors.py:279: BaseMonitor.__init__ (from tensorflow.contrib.learn.python.learn.monitors) is deprecated and will be removed after 2016-12-05.
Instructions for updating:
Monitors are deprecated. Please use tf.train.SessionRunHook.
WARNING:tensorflow:EvalSpec not provided. Estimator will not manage model evaluation. Assuming ValidationMonitor present in train_hooks.
INFO:tensorflow:Reading data files from /projects/nmt/t2t_data/translate_ende_wmt_bpe32k/translate_ende_wmt_bpe32k-train*
INFO:tensorflow:partition: 0 num_data_files: 100
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Setting T2TModel mode to 'train'
INFO:tensorflow:Using variable initializer: uniform_unit_scaling
INFO:tensorflow:Transforming feature 'inputs' with symbol_modality_37007_512.bottom
INFO:tensorflow:Transforming 'targets' with symbol_modality_37007_512.targets_bottom
INFO:tensorflow:Building model body
INFO:tensorflow:Transforming body output with symbol_modality_37007_512.top
INFO:tensorflow:Transforming feature 'inputs' with symbol_modality_37007_512.bottom
INFO:tensorflow:Transforming 'targets' with symbol_modality_37007_512.targets_bottom
INFO:tensorflow:Building model body
INFO:tensorflow:Transforming body output with symbol_modality_37007_512.top
INFO:tensorflow:Base learning rate: 2.000000
INFO:tensorflow:Trainable Variables Total size: 63067648
INFO:tensorflow:Using optimizer Adam
/usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/gradients_impl.py:100: UserWarning: Converting sparse IndexedSlices to a dense Tensor of unknown shape. This may consume a large amount of memory.
"Converting sparse IndexedSlices to a dense Tensor of unknown shape. "
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Create CheckpointSaverHook.
INFO:tensorflow:Graph was finalized.
2018-09-18 17:13:25.167295: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1356] Found device 0 with properties:
name: GeForce GTX 1080 Ti major: 6 minor: 1 memoryClockRate(GHz): 1.582
pciBusID: 0000:1a:00.0
totalMemory: 10.92GiB freeMemory: 10.76GiB
2018-09-18 17:13:25.562325: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1356] Found device 1 with properties:
name: GeForce GTX 1080 Ti major: 6 minor: 1 memoryClockRate(GHz): 1.582
pciBusID: 0000:1b:00.0
totalMemory: 10.92GiB freeMemory: 10.76GiB
2018-09-18 17:13:25.564571: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1435] Adding visible gpu devices: 0, 1
2018-09-18 17:13:26.298675: I tensorflow/core/common_runtime/gpu/gpu_device.cc:923] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-09-18 17:13:26.298759: I tensorflow/core/common_runtime/gpu/gpu_device.cc:929] 0 1
2018-09-18 17:13:26.298775: I tensorflow/core/common_runtime/gpu/gpu_device.cc:942] 0: N Y
2018-09-18 17:13:26.298787: I tensorflow/core/common_runtime/gpu/gpu_device.cc:942] 1: Y N
2018-09-18 17:13:26.299308: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1053] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 10619 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1080 Ti, pci bus id: 0000:1a:00.0, compute capability: 6.1)
2018-09-18 17:13:26.536162: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1053] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:1 with 10619 MB memory) -> physical GPU (device: 1, name: GeForce GTX 1080 Ti, pci bus id: 0000:1b:00.0, compute capability: 6.1)
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
INFO:tensorflow:Saving checkpoints for 1 into /projects/nmt/t2t_train/translate_ende_wmt_bpe32k/transformer-transformer_base-v12-b4096-gpu2-b4096/model.ckpt.
INFO:tensorflow:step = 0, loss = 9.76205
ERROR:tensorflow:Model diverged with loss = NaN.
Traceback (most recent call last):
File "/usr/local/bin/t2t-trainer", line 32, in <module>
tf.app.run()
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/platform/app.py", line 126, in run
_sys.exit(main(argv))
File "/usr/local/bin/t2t-trainer", line 28, in main
t2t_trainer.main(argv)
File "/usr/local/lib/python3.5/dist-packages/tensor2tensor/bin/t2t_trainer.py", line 359, in main
execute_schedule(exp)
File "/usr/lib/python3.5/contextlib.py", line 77, in __exit__
self.gen.throw(type, value, traceback)
File "/usr/local/lib/python3.5/dist-packages/tensor2tensor/bin/t2t_trainer.py", line 313, in maybe_cloud_tpu
yield
File "/usr/local/lib/python3.5/dist-packages/tensor2tensor/bin/t2t_trainer.py", line 359, in main
execute_schedule(exp)
File "/usr/local/lib/python3.5/dist-packages/tensor2tensor/bin/t2t_trainer.py", line 306, in execute_schedule
getattr(exp, FLAGS.schedule)()
File "/usr/lib/python3.5/contextlib.py", line 77, in __exit__
self.gen.throw(type, value, traceback)
File "/usr/local/lib/python3.5/dist-packages/tensor2tensor/bin/t2t_trainer.py", line 253, in profile_context
yield
File "/usr/local/lib/python3.5/dist-packages/tensor2tensor/bin/t2t_trainer.py", line 306, in execute_schedule
getattr(exp, FLAGS.schedule)()
File "/usr/local/lib/python3.5/dist-packages/tensor2tensor/utils/trainer_lib.py", line 297, in train_and_evaluate
self.train()
File "/usr/local/lib/python3.5/dist-packages/tensor2tensor/utils/trainer_lib.py", line 303, in train
max_steps=self._train_spec.max_steps)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/estimator/estimator.py", line 363, in train
loss = self._train_model(input_fn, hooks, saving_listeners)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/estimator/estimator.py", line 843, in _train_model
return self._train_model_default(input_fn, hooks, saving_listeners)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/estimator/estimator.py", line 859, in _train_model_default
saving_listeners)
File "/usr/lib/python3.5/contextlib.py", line 77, in __exit__
self.gen.throw(type, value, traceback)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py", line 5265, in get_controller
yield g
File "/usr/lib/python3.5/contextlib.py", line 77, in __exit__
self.gen.throw(type, value, traceback)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py", line 5060, in get_controller
yield default
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py", line 5265, in get_controller
yield g
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/estimator/estimator.py", line 859, in _train_model_default
saving_listeners)
File "/usr/lib/python3.5/contextlib.py", line 77, in __exit__
self.gen.throw(type, value, traceback)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py", line 4338, in device
yield
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/estimator/estimator.py", line 859, in _train_model_default
saving_listeners)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/estimator/estimator.py", line 1059, in _train_with_estimator_spec
_, loss = mon_sess.run([estimator_spec.train_op, estimator_spec.loss])
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/training/monitored_session.py", line 567, in run
run_metadata=run_metadata)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/training/monitored_session.py", line 1043, in run
run_metadata=run_metadata)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/training/monitored_session.py", line 1134, in run
raise six.reraise(*original_exc_info)
File "/usr/local/lib/python3.5/dist-packages/six.py", line 693, in reraise
raise value
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/training/monitored_session.py", line 1119, in run
return self._sess.run(*args, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/training/monitored_session.py", line 1199, in run
run_metadata=run_metadata))
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/training/basic_session_run_hooks.py", line 623, in after_run
raise NanLossDuringTrainingError
tensorflow.python.training.basic_session_run_hooks.NanLossDuringTrainingError: NaN loss during training.
```
| open | 2018-09-18T17:17:21Z | 2018-09-18T17:17:21Z | https://github.com/tensorflow/tensor2tensor/issues/1074 | [] | nxphi47 | 0 |
deepset-ai/haystack | nlp | 8,212 | Update FilterRetriever docstrings | closed | 2024-08-13T11:48:19Z | 2024-10-08T08:54:49Z | https://github.com/deepset-ai/haystack/issues/8212 | [
"type:documentation",
"P1"
] | agnieszka-m | 0 | |
tableau/server-client-python | rest-api | 988 | Use defusedxml library for prevention of xml attacks | See https://github.com/tableau/tabcmd/issues/15
https://pypi.org/project/defusedxml/ | closed | 2022-02-11T04:27:59Z | 2022-03-11T21:14:23Z | https://github.com/tableau/server-client-python/issues/988 | [
"enhancement"
] | jacalata | 0 |
alirezamika/autoscraper | automation | 71 | How to scrape a dynamic website? | I am trying to export a localhost website that is generated with this project:
https://github.com/HBehrens/puncover
The project generates a localhost website, and each time the user interacts clicks a link the project receives a GET request and the website generates the HTML. This means that the HTML is generated each time the user access a link through their browser. At the moment the project does not export the website to html or pdf. For this reason I want to know how could I recursively get all the hyperlinks and then generate the HTML version. Would this be possible with autoscraper? | closed | 2022-02-04T11:54:19Z | 2024-10-11T02:01:14Z | https://github.com/alirezamika/autoscraper/issues/71 | [
"Stale"
] | vChavezB | 4 |
roboflow/supervision | computer-vision | 1,630 | Support for Korean Characters in Annotator Function Fonts | ### Search before asking
- [X] I have searched the Supervision [issues](https://github.com/roboflow/supervision/issues) and found no similar feature requests.
### Question
I'm using the Annotator function in Roboflow Supervision and would like to add annotations in Korean. However, it seems that the default font doesn't support Korean characters. Are there any workarounds or recommended methods to enable Korean text in annotations?
Thanks in advance for your help!
### Additional
_No response_ | closed | 2024-10-30T02:35:13Z | 2024-10-30T03:28:12Z | https://github.com/roboflow/supervision/issues/1630 | [
"question"
] | YoungjaeDev | 0 |
pyqtgraph/pyqtgraph | numpy | 2,651 | Error in color displayed with ImageItem and ImageView | <!-- In the following, please describe your issue in detail! -->
<!-- If some sections do not apply, just remove them. -->
### Short description
I want to display a .png in pyqtgraph, but the image produced doesn't seem to have the proper colors.
I think I'm missing something quite simple, but I can't figure out what it is.
Here is the file used for test :  (named `avatar.png` in the code example)
### Code to reproduce
<!-- Please provide a minimal working example that reproduces the issue in the code block below.
Ideally, this should be a full example someone else could run without additional setup. -->
```python
import sys
import pyqtgraph as pg
import numpy as np
from PyQt5.QtGui import QImage, QPixmap
from PyQt5.QtWidgets import QApplication, QMainWindow, QLabel, QHBoxLayout, QWidget, QSplitter
class MainWindow(QMainWindow):
def __init__(self, parent=None):
super(MainWindow, self).__init__(parent)
# Define central widget and add splitter
layout = QHBoxLayout()
self.setCentralWidget(QWidget())
self.centralWidget().setLayout(layout)
splitter = QSplitter(self)
layout.addWidget(splitter)
# Define used image
img = QImage("avatar.png")
imgArray = pg.imageToArray(img)
# Create PlotWidget for ImageItem
self.plot = pg.PlotWidget(name="Plot", title="test")
self.plot.setBackground('white')
imgitem = pg.ImageItem(imgArray)
self.plot.addItem(imgitem)
# Create label and set pixmap
self.label = QLabel(self)
self.label.setPixmap(QPixmap.fromImage(img))
# Create ImageView
self.img_view = pg.ImageView()
self.img_view.setImage(imgArray)
# Add to splitter
splitter.addWidget(self.plot)
splitter.addWidget(self.img_view)
splitter.addWidget(self.label)
self.showMaximized()
def main():
app = QApplication(sys.argv)
main_ = MainWindow()
main_.show()
sys.exit(app.exec_())
if __name__ == "__main__":
print(f"{pg.__version__=}")
print(f"{pg.Qt.VERSION_INFO=}")
print(f"{np.__version__}")
main()
```
### Expected behavior
I was expecting the image displayed in `self.img_view` and `pg.ImageItem` to be the same as the one displayed in `self.label`
### Real behavior

### Tested environment(s)
* PyQtGraph version: 0.13.2
* Qt Python binding: PyQt5 5.15.9 Qt 5.15.2
* Python version: 3.10.6
* NumPy version: 1.24.2
* Operating system: Ubuntu 22.04
* Installation method: pip
### Additional context
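One plausible explanation (an assumption on my part, not verified against this exact setup) is a channel-order mismatch: QImage's 32-bit formats store each pixel as a `0xAARRGGBB` word, which on a little-endian machine lays out in memory as B, G, R, A — so a naive RGBA view of the raw buffer swaps red and blue. A stdlib-only sketch of the effect:

```python
import struct

# An opaque red pixel in QImage's ARGB32 convention: 0xAARRGGBB.
pixel = 0xFFFF0000  # A=0xFF, R=0xFF, G=0x00, B=0x00

# Pack it the way a little-endian machine stores the 32-bit word.
raw = struct.pack("<I", pixel)

# Reading the bytes in memory order yields B, G, R, A — not R, G, B, A,
# so a buffer interpreted as RGBA shows red and blue exchanged.
print([hex(b) for b in raw])
```

If that is indeed the cause here, converting the QImage to an RGBA byte format (or reordering the array's channels) before handing it to `ImageItem` would be the direction to explore.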
| closed | 2023-03-15T12:34:23Z | 2023-03-16T16:27:28Z | https://github.com/pyqtgraph/pyqtgraph/issues/2651 | [] | jmkerloch | 8 |
tableau/server-client-python | rest-api | 620 | filter view with ID | In the API reference docs, the req_option parameter in the view section mentions that it is possible to filter view requests using an ID.
Maybe I am wrong, but it seems it is not possible to filter using the ID of a view.
**Parameters**
Name | Description
:--- | :---
`req_option` | (Optional) You can pass the method a request object that contains additional parameters to filter the request. For example, if you were searching for a specific view, you could specify the name of the view or **its ID**.
`usage` | (Optional) If true (`usage=True`) returns the usage statistics for the views. The default is `usage=False`. | closed | 2020-05-12T05:15:39Z | 2021-06-28T13:38:42Z | https://github.com/tableau/server-client-python/issues/620 | [] | rferraton | 2 |
wagtail/wagtail | django | 12,780 | Memory and Performance Issues with Page Reordering Interface for Large Number of Pages | ### Issue Summary
The page reordering interface becomes unresponsive and stays in a perpetual loading state when handling a large number of child pages (1000+). The interface attempts to load all pages simultaneously, causing performance issues in the browser.
### Steps to Reproduce
1. Create a Wagtail project with NewsPage model
2. Import or create 1000+ pages under a parent page (e.g., NewsLandingPage)
3. Navigate to the parent page in admin interface
4. Click on "Sort menu order" option
5. Observe that the interface remains in loading state indefinitely
Expected behavior: The reordering interface should load and be usable even with large numbers of pages.
Actual behavior: The interface stays in loading state and becomes unresponsive.
- I have confirmed that this issue can be reproduced as described on a fresh Wagtail project: yes
### Technical details
- Python version: 3.12.5
- Django version: 5.1.3
- Wagtail version: 6.0.3
### Working on this
The issue requires knowledge of:
- Wagtail's page ordering implementation
- Django/Python performance optimization
- Frontend performance optimization
- UI/UX considerations for handling large datasets
I'm not planning to work on this myself but happy to provide more information or test potential solutions. | open | 2025-01-16T07:01:08Z | 2025-02-01T13:46:49Z | https://github.com/wagtail/wagtail/issues/12780 | [
"type:Bug",
"🚀 Performance"
] | sreeharikodavalam | 7 |
AntonOsika/gpt-engineer | python | 331 | adding a custom api end point | I use a custom and free api endpoint so I would like to add a feature which is similar to:
```
export OPENAI_API_KEY=[your api key]
```
so I would like to have:
```
export OPENAI_API_BASE=[your custom api base url]
```
If you don't export it then it will have the default base url.
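A minimal sketch of the requested fallback behavior (the function name and default URL are my own illustrative assumptions, not gpt-engineer's actual code):

```python
import os

def resolve_api_base() -> str:
    # Use the custom endpoint when the variable is exported,
    # otherwise fall back to OpenAI's standard base URL.
    return os.environ.get("OPENAI_API_BASE", "https://api.openai.com/v1")

print(resolve_api_base())
```

The same one-liner pattern could be wired in wherever the client is configured, so exporting nothing keeps today's behavior unchanged.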
is it appropriate for me to work on this or if it is already implemented then please let me know. | closed | 2023-06-22T15:36:53Z | 2024-02-28T18:10:40Z | https://github.com/AntonOsika/gpt-engineer/issues/331 | [] | BhagatHarsh | 19 |
axnsan12/drf-yasg | django | 225 | Validator authentication | Hello I have a question,
I have session authentication with my Django project. When I go to the swagger url it generates the docs perfectly except it always throws an error:

With error message being:
`{"schemaValidationMessages":[{"level":"error","message":"Can't read from file https://my-example-url.com/swagger/?format=openapi"}]}`
When I visit the above URL (https://my-example-url.com/swagger/?format=openapi) directly, it works perfectly in the sense that it returns the JSON data, but when I try to view it in an incognito window it throws the same error. `TypeError: Expected a `openapi.Swagger` instance`
It seems that the validator is not authenticated to view my swagger-json and therefore says there is an error? Is there a way around this? Perhaps I have set it up incorrectly.
Thanks so much!
| closed | 2018-09-29T06:29:20Z | 2018-10-09T21:55:13Z | https://github.com/axnsan12/drf-yasg/issues/225 | [] | nicholasgcoles | 2 |
vipstone/faceai | tensorflow | 3 | Testing with a camera is extremely laggy and slow | It's a Dahua camera using the RTSP protocol | open | 2018-05-14T10:36:55Z | 2018-07-23T15:21:34Z | https://github.com/vipstone/faceai/issues/3 | [] | XinJiangQingMang | 3 |
fastapi/fastapi | python | 13,399 | Dependency Models created from Form input data are loosing metadata(field set) and are enforcing validation on default values. |
### Discussed in https://github.com/fastapi/fastapi/discussions/13380
<div type='discussions-op-text'>
<sup>Originally posted by **sneakers-the-rat** February 16, 2025</sup>
### First Check
- [X] I added a very descriptive title here.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I searched the FastAPI documentation, with the integrated search.
- [X] I already searched in Google "How to X in FastAPI" and didn't find any information.
- [X] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to FastAPI but to [Pydantic](https://github.com/pydantic/pydantic).
- [X] I already checked if it is not related to FastAPI but to [Swagger UI](https://github.com/swagger-api/swagger-ui).
- [X] I already checked if it is not related to FastAPI but to [ReDoc](https://github.com/Redocly/redoc).
### Commit to Help
- [X] I commit to help with one of those options 👆
### Example Code
File: fastapi_defaults_bug.py
```python
import uvicorn
from typing import Annotated
from pydantic import BaseModel, Field
from fastapi import FastAPI, Form
class ExampleJsonModel(BaseModel):
sample_field_1: Annotated[bool, Field(default=True)]
sample_field_2: Annotated[bool, Field(default=False)]
sample_field_3: Annotated[bool, Field(default=None)]
sample_field_4: Annotated[str, Field(default=0)] # This is dangerous but can be used with a validator
class ExampleFormModel(BaseModel):
sample_field_1: Annotated[bool, Form(default=True)]
sample_field_2: Annotated[bool, Form(default=False)]
sample_field_3: Annotated[bool, Form(default=None)]
sample_field_4: Annotated[str, Form(default=0)] # This is dangerous but can be used with a validator
class ResponseSampleModel(BaseModel):
fields_set: Annotated[list, Field(default_factory=list)]
dumped_fields_no_exclude: Annotated[dict, Field(default_factory=dict)]
dumped_fields_exclude_default: Annotated[dict, Field(default_factory=dict)]
dumped_fields_exclude_unset: Annotated[dict, Field(default_factory=dict)]
app = FastAPI()
@app.post("/form")
async def form_endpoint(model: Annotated[ExampleFormModel, Form()]) -> ResponseSampleModel:
return ResponseSampleModel(
fields_set=list(model.model_fields_set),
dumped_fields_no_exclude=model.model_dump(),
dumped_fields_exclude_default=model.model_dump(exclude_defaults=True),
dumped_fields_exclude_unset=model.model_dump(exclude_unset=True)
)
@app.post("/json")
async def form_endpoint(model: ExampleJsonModel) -> ResponseSampleModel:
return ResponseSampleModel(
fields_set=list(model.model_fields_set),
dumped_fields_no_exclude=model.model_dump(),
dumped_fields_exclude_default=model.model_dump(exclude_defaults=True),
dumped_fields_exclude_unset=model.model_dump(exclude_unset=True)
)
if __name__ == "__main__":
uvicorn.run(app, host="0.0.0.0", port=8000)
```
Test File: test_fastapi_defaults_bug.py
```python
import pytest
from fastapi.testclient import TestClient
from fastapi_defaults_bug import (
app,
ExampleFormModel,
ExampleJsonModel,
ResponseSampleModel
)
@pytest.fixture(scope="module")
def fastapi_client():
with TestClient(app) as test_client:
yield test_client
################
# Section 1: Tests on Form model -> no fastapi, pydantic model
################
def test_form_model_pydantic_only_defaults():
f_model = ExampleFormModel()
for field_name, field in f_model.model_fields.items():
assert getattr(f_model, field_name) == field.default
def test_form_model_pydantic_all_unset():
f_model = ExampleFormModel()
assert not f_model.model_fields_set
def test_form_model_pydantic_set_1():
f_model = ExampleFormModel(sample_field_1=True) # Those set have the same value of default
assert "sample_field_1" in f_model.model_fields_set
assert len(f_model.model_fields_set) == 1
def test_form_model_pydantic_set_2():
f_model = ExampleFormModel(sample_field_1=True, sample_field_2=False) # Those set have the same value of default
assert "sample_field_1" in f_model.model_fields_set
assert "sample_field_2" in f_model.model_fields_set
assert len(f_model.model_fields_set) == 2
def test_form_model_pydantic_set_all():
f_model = ExampleFormModel(
sample_field_1=True,
sample_field_2=False,
sample_field_3=True,
sample_field_4=""
) # Those set could have different values from default
assert not set(f_model.model_fields).difference(f_model.model_fields_set)
################
# Section 2: Same Tests of Form on Json model -> they are the same on different model
################
def test_json_model_pydantic_only_defaults():
j_model = ExampleJsonModel()
for field_name, field in j_model.model_fields.items():
assert getattr(j_model, field_name) == field.default
def test_json_model_pydantic_all_unset():
j_model = ExampleJsonModel()
assert not j_model.model_fields_set
def test_json_model_pydantic_set_1():
j_model = ExampleJsonModel(sample_field_1=True) # Those set have the same value of default
assert "sample_field_1" in j_model.model_fields_set
assert len(j_model.model_fields_set) == 1
def test_json_model_pydantic_set_2():
j_model = ExampleJsonModel(sample_field_1=True, sample_field_2=False) # Those set have the same value of default
assert "sample_field_1" in j_model.model_fields_set
assert "sample_field_2" in j_model.model_fields_set
assert len(j_model.model_fields_set) == 2
def test_json_model_pydantic_set_all():
j_model = ExampleJsonModel(
sample_field_1=True,
sample_field_2=False,
sample_field_3=True,
sample_field_4=""
) # Those set could have different values from default
assert not set(j_model.model_fields).difference(j_model.model_fields_set)
def test_form_json_model_share_same_default_behaviour():
f_model = ExampleFormModel()
j_model = ExampleJsonModel()
for field_name, field in f_model.model_fields.items():
assert getattr(f_model, field_name) == getattr(j_model, field_name)
################
# Section 3: Tests on Form model with fastapi
################
def test_submit_form_with_all_values(fastapi_client: TestClient):
form_content = {
"sample_field_1": "False",
"sample_field_2": "True",
"sample_field_3": "False",
"sample_field_4": "It's a random string"
}
response = fastapi_client.post("/form", data=form_content)
assert response.status_code == 200
response_model = ResponseSampleModel(**response.json())
assert len(response_model.fields_set) == 4
assert not set(form_content).symmetric_difference(set(response_model.fields_set))
def test_submit_form_with_not_all_values(fastapi_client: TestClient):
"""
This test should pass but fails because fastapi is preloading default and pass those values
on model creation, losing the ability to know if a field has been set.
:param fastapi_client:
:return:
"""
form_content = {
"sample_field_1": "False",
"sample_field_3": "False",
"sample_field_4": "It's a random string"
}
response = fastapi_client.post("/form", data=form_content)
assert response.status_code == 200
response_model = ResponseSampleModel(**response.json())
assert len(response_model.fields_set) == 3 # test will fail here and below
assert not set(form_content).symmetric_difference(set(response_model.fields_set))
def test_submit_form_with_no_values(fastapi_client: TestClient):
"""
This test should pass but fails because fastapi is preloading default and pass those values
on model creation, losing the ability to not have validation on default value.
:param fastapi_client:
:return:
"""
form_content = {}
response = fastapi_client.post("/form", data=form_content)
assert response.status_code == 200 # test will fail here and below -> will raise 422
response_model = ResponseSampleModel(**response.json())
assert len(response_model.fields_set) == 0
assert not set(form_content).symmetric_difference(set(response_model.fields_set))
################
# Section 4: Tests on Json model with fastapi
################
def test_submit_json_with_all_values(fastapi_client: TestClient):
json_content = {
"sample_field_1": False,
"sample_field_2": True,
"sample_field_3": False,
"sample_field_4": "It's a random string"
}
response = fastapi_client.post("/json", json=json_content)
assert response.status_code == 200
response_model = ResponseSampleModel(**response.json())
assert len(response_model.fields_set) == 4
assert not set(json_content).symmetric_difference(set(response_model.fields_set))
def test_submit_json_with_not_all_values(fastapi_client: TestClient):
"""
This test will pass but the same not happen with Form.
:param fastapi_client:
:return:
"""
json_content = {
"sample_field_1": False,
"sample_field_3": False,
"sample_field_4": "It's a random string"
}
response = fastapi_client.post("/json", json=json_content)
assert response.status_code == 200
response_model = ResponseSampleModel(**response.json())
assert len(response_model.fields_set) == 3 # This time will not fail
assert not set(json_content).symmetric_difference(set(response_model.fields_set))
def test_submit_json_with_no_values(fastapi_client: TestClient):
"""
This test will pass but the same not happen with Form.
:param fastapi_client:
:return:
"""
json_content = {}
response = fastapi_client.post("/json", json=json_content)
assert response.status_code == 200 # This time will not fail
response_model = ResponseSampleModel(**response.json())
assert len(response_model.fields_set) == 0
assert not set(json_content).symmetric_difference(set(response_model.fields_set))
```
### Description
This is a generalized version of the issue reported in https://github.com/fastapi/fastapi/discussions/13380 .
This issue does not affect JSON body data.
For models created from a form, during the parsing phase their default values are preloaded and passed to the validator to create the model.
1) This leads to a loss of information regarding which fields have been explicitly set, since default values are now considered as having been provided.
2) Consequently, validation is enforced on default values, which might not be the intended behavior, and in any case differs from the behavior for a JSON body.
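The loss of field-set information described in point 1 can be simulated in plain Python (no FastAPI or Pydantic involved — just the dictionary merge the report describes):

```python
# Defaults declared on the model.
defaults = {"sample_field_1": True, "sample_field_2": False}

# The user submits only one field in the form.
user_form = {"sample_field_1": False}

# If the framework pre-fills defaults before constructing the model,
# the model receives both keys and can no longer tell which fields
# the user actually provided.
merged = {**defaults, **user_form}
fields_seen_as_set = set(merged)

print(sorted(fields_seen_as_set))
```

With JSON bodies the unsubmitted key never reaches the validator, which is why `model_fields_set` behaves correctly there.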
### Operating System
macOS - Linux
### Operating System Details
_No response_
### FastAPI Version
0.115.8
### Pydantic Version
2.10.6
### Python Version
Python 3.11 - Python 3.13.1 | open | 2025-02-20T14:36:29Z | 2025-03-07T03:38:28Z | https://github.com/fastapi/fastapi/issues/13399 | [
"good first issue",
"question"
] | luzzodev | 9 |
RomelTorres/alpha_vantage | pandas | 174 | Inaccuracies with intraday data | From research I understand this may be a previously addressed issue.
I have noticed consistent inaccuracies with intraday price data of over 0.1% relative to data streaming services such as tradingview or my personal broker. Is this an issue that is currently being looked into?
Or is this inaccuracy at acceptable levels?
Thank you, | closed | 2020-01-09T22:04:30Z | 2020-01-10T17:24:39Z | https://github.com/RomelTorres/alpha_vantage/issues/174 | [] | Savarani | 1 |
fastapi/sqlmodel | fastapi | 219 | How to make a foreign key of a table be the primary key of another table that has a composite primary key ? | ### First Check
- [X] I added a very descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the SQLModel documentation, with the integrated search.
- [X] I already searched in Google "How to X in SQLModel" and didn't find any information.
- [X] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to SQLModel but to [Pydantic](https://github.com/samuelcolvin/pydantic).
- [X] I already checked if it is not related to SQLModel but to [SQLAlchemy](https://github.com/sqlalchemy/sqlalchemy).
### Commit to Help
- [X] I commit to help with one of those options 👆
### Example Code
```python
from typing import Optional, List
from sqlmodel import SQLModel, Field, Relationship, create_engine, Session
engine = create_engine("postgresql://username:password@localhost:5432/db_dynamic")
class Model1(SQLModel, table=True):
id: Optional[int] = Field(default=None, primary_key=True)
model2_id: Optional[int] = Field(default=None, foreign_key="model2.id")
model2: "Model2" = Relationship(back_populates="model1")
class Model2(SQLModel, table=True):
id: Optional[int] = Field(default=None, primary_key=True)
model3_id: Optional[int] = Field(default=None, foreign_key="model3.id", primary_key=True)
model4_id: Optional[int] = Field(default=None, foreign_key="model4.id", primary_key=True)
model1: List[Model1] = Relationship(back_populates="model2")
class Model3(SQLModel, table=True):
id: Optional[int] = Field(default=None, primary_key=True)
model4: List["Model4"] = Relationship(back_populates="model3", link_model=Model2)
class Model4(SQLModel, table=True):
id: Optional[int] = Field(default=None, primary_key=True)
model3: List[Model3] = Relationship(back_populates="model4", link_model=Model2)
if __name__ == "__main__":
SQLModel.metadata.create_all(engine)
with Session(engine) as session:
model3 = Model3()
model4 = Model4()
session.add(model3)
session.add(model4)
session.commit()
model2 = Model2(**{"model3_id": 1, "model4_id": 1})
session.add(model2)
session.commit()
model1 = Model1(**{"model2_id": 1})
session.add(model1)
session.commit()
```
### Description
I have a link table represented by `Model2`, which establishes a many-to-many relation between `Model3` and `Model4`.
Now I want to establish a many-to-one relation between `Model1` and the link table. To do that, I created a third primary key `id` in `Model2` and a foreign key in `Model1` which points to the `id` of `Model2`.
I'm using a PostgreSQL database created using a `docker-compose.yml` as follows:
```python
version: '3.7'
services:
db:
image: postgres:13.0-alpine
volumes:
- postgres_data:/var/lib/postgresql/data/
environment:
- POSTGRES_USER=username
- POSTGRES_PASSWORD=password
- POSTGRES_DB=db_dynamic
ports:
- "5432:5432"
volumes:
postgres_data:
```
However, I'm getting the following error:
```python
sqlalchemy.exc.ProgrammingError: (psycopg2.errors.InvalidForeignKey) there is no unique constraint matching given keys for referenced table "model2"
[SQL:
CREATE TABLE model1 (
id SERIAL,
model2_id INTEGER,
PRIMARY KEY (id),
FOREIGN KEY(model2_id) REFERENCES model2 (id)
)
```
However, it works well with a SQLite database. The problem seems to be in how SQLModel (or SQLAlchemy) generates the schema for the PostgreSQL database.
Does anyone know how to solve it?
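Not SQLModel-specific, but the constraint PostgreSQL is enforcing can be shown with the stdlib `sqlite3` module: a foreign key must reference a column that is unique on its own, so the link table's surrogate `id` would need its own UNIQUE constraint alongside the composite primary key. Whether and how SQLModel 0.0.6 exposes that (e.g. a `unique` flag on `Field`) would need checking — this is a sketch of the direction, not a verified fix:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")

# model2 keeps its composite primary key, but `id` now carries its
# own UNIQUE constraint so it can be the target of a foreign key.
con.execute("""
    CREATE TABLE model2 (
        id INTEGER UNIQUE,
        model3_id INTEGER,
        model4_id INTEGER,
        PRIMARY KEY (model3_id, model4_id)
    )
""")
con.execute("""
    CREATE TABLE model1 (
        id INTEGER PRIMARY KEY,
        model2_id INTEGER REFERENCES model2 (id)
    )
""")
con.execute("INSERT INTO model2 (id, model3_id, model4_id) VALUES (1, 1, 1)")
con.execute("INSERT INTO model1 (id, model2_id) VALUES (1, 1)")
print(con.execute("SELECT COUNT(*) FROM model1").fetchone()[0])
```

SQLite happens to accept the original schema, which would explain why it "works well with a SQLite database" while PostgreSQL rejects it.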
### Operating System
Linux
### Operating System Details
_No response_
### SQLModel Version
0.0.6
### Python Version
3.9.5
### Additional Context
_No response_ | closed | 2022-01-13T14:18:57Z | 2022-01-14T14:06:35Z | https://github.com/fastapi/sqlmodel/issues/219 | [
"question"
] | joaopfg | 0 |
xorbitsai/xorbits | numpy | 57 | DOC: Project index | closed | 2022-12-08T11:25:45Z | 2022-12-16T08:19:07Z | https://github.com/xorbitsai/xorbits/issues/57 | [] | UranusSeven | 1 | |
nltk/nltk | nlp | 2,950 | Panlex lite not working | Hi!
I'm trying to get the panlex_lite corpus working, but I'm not able to download it. Reading [this](https://github.com/nltk/nltk/issues/1253) I've tried to do it with the develop version of NLTK but still no success.
Am I missing something?
Thanks,
Best | open | 2022-02-21T17:19:12Z | 2022-08-18T16:49:14Z | https://github.com/nltk/nltk/issues/2950 | [] | geblanco | 2 |
keras-team/keras | deep-learning | 20,605 | Mixed-precision stateful LSTM/GRU training not working | Enabling mixed-precision mode when training a stateful LSTM or GRU using Keras v3.7.0 fails with error messages like this:
```
Traceback (most recent call last):
File "/home/lars.christensen/git/keras-io/examples/timeseries/timeseries_weather_forecasting.py", line 275, in <module>
lstm_out = keras.layers.LSTM(32, stateful=True)(inputs)
File "/home/lars.christensen/.local/lib/python3.10/site-packages/keras/src/utils/traceback_utils.py", line 122, in error_handler
raise e.with_traceback(filtered_tb) from None
File "/home/lars.christensen/.local/lib/python3.10/site-packages/optree/ops.py", line 747, in tree_map
return treespec.unflatten(map(func, *flat_args))
ValueError: initial_value: Tensor conversion requested dtype float32 for Tensor with dtype float16: <tf.Tensor: shape=(256, 32), dtype=float16, numpy=
array([[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
...,
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.]], dtype=float16)>
```
The issue can for example be recreated by modifying the weather forecasting example located at "https://github.com/keras-team/keras-io/blob/master/examples/timeseries/timeseries_weather_forecasting.py" to use a stateful LSTM as in the attached source code.
[timeseries_weather_forecasting.zip](https://github.com/user-attachments/files/18038089/timeseries_weather_forecasting.zip)
When mixed-precision mode is disabled, it works as expected. Hence, this is a problem only in mixed-precision mode. | closed | 2024-12-06T12:31:08Z | 2024-12-10T14:10:24Z | https://github.com/keras-team/keras/issues/20605 | [
"type:Bug"
] | larschristensen | 5 |
lorien/grab | web-scraping | 273 | GrabTimeoutError - Resolving timed out after 3495 milliseconds | I've run into a problem: I need to parse a list of URLs with 1k threads. Right now, when I set 300 threads or more, it starts throwing this error en masse:
`GrabTimeoutError(28, 'Resolving timed out after 3495 milliseconds')`
I'm working on Windows 7, Python 3, with plain Grab (not Spider).
How can this be fixed? | closed | 2017-07-17T01:51:07Z | 2017-07-19T14:20:09Z | https://github.com/lorien/grab/issues/273 | [] | InputError | 8 |
ipython/ipython | data-science | 14,106 | IPython.start_ipython can only be started once | <!-- This is the repository for IPython command line, if you can try to make sure this question/bug/feature belong here and not on one of the Jupyter repositories.
If it's a generic Python/Jupyter question, try other forums or discourse.jupyter.org.
If you are unsure, it's ok to post here, though, there are few maintainer so you might not get a fast response.
-->
I have an application with another kind of main loop, and I sometimes want to jump into an IPython session.
So I use `start_ipython(ns=some_dict)`. This blocks the main program while the user is using IPython on pre-sorted data.
When the user types quit, I'm back in the main program, which is nice, but any further attempt to launch the interpreter results in failure:
```
Loading history failed
Traceback (most recent call last):
File "/home/aurelien/.local/share/virtualenvs/pygris/lib/python3.11/site-packages/prompt_toolkit/buffer.py", line 422, in load_history_done
f.result()
File "/home/aurelien/.local/share/virtualenvs/pygris/lib/python3.11/site-packages/prompt_toolkit/buffer.py", line 410, in load_history
async for item in self.history.load():
File "/home/aurelien/.local/share/virtualenvs/pygris/lib/python3.11/site-packages/prompt_toolkit/history.py", line 59, in load
self._loaded_strings = list(self.load_history_strings())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/aurelien/.local/share/virtualenvs/pygris/lib/python3.11/site-packages/IPython/terminal/interactiveshell.py", line 175, in load_history_strings
for __, ___, cell in self.shell.history_manager.get_tail(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'get_tail'
Unhandled exception in event loop:
File "/home/aurelien/.local/share/virtualenvs/pygris/lib/python3.11/site-packages/prompt_toolkit/buffer.py", line 410, in load_history
async for item in self.history.load():
File "/home/aurelien/.local/share/virtualenvs/pygris/lib/python3.11/site-packages/prompt_toolkit/history.py", line 59, in load
self._loaded_strings = list(self.load_history_strings())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/aurelien/.local/share/virtualenvs/pygris/lib/python3.11/site-packages/IPython/terminal/interactiveshell.py", line 175, in load_history_strings
for __, ___, cell in self.shell.history_manager.get_tail(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Exception 'NoneType' object has no attribute 'get_tail'
Press ENTER to continue...
```
and after that the shell is unusable; it only repeats the same error over and over.
**Minimal code to reproduce**
```
import IPython
IPython.start_ipython()
IPython.start_ipython()
```
Maybe there is another way to "reconnect" to the last process (but I could not find anything in the docs), or something is not cleaned up and it's kind of a bug.
I would very much appreciate any help if I'm doing this wrong, and I'm happy to provide further info if this seems like a bug. | open | 2023-06-28T16:00:38Z | 2024-03-08T13:18:33Z | https://github.com/ipython/ipython/issues/14106 | [] | Yinameah | 1 |
NullArray/AutoSploit | automation | 450 | Unhandled Exception (9bd99fa4a) | Autosploit version: `3.0`
OS information: `Linux-4.19.0-kali1-amd64-x86_64-with-Kali-kali-rolling-kali-rolling`
Running context: `autosploit.py`
Error message: `global name 'Except' is not defined`
Error traceback:
```
Traceback (most recent call):
File "/root/Github/exploit/AutoSploit/autosploit/main.py", line 113, in main
loaded_exploits = load_exploits(EXPLOIT_FILES_PATH)
File "/root/Github/exploit/AutoSploit/lib/jsonize.py", line 61, in load_exploits
except Except:
NameError: global name 'Except' is not defined
```
Metasploit launched: `False`
| closed | 2019-02-10T07:10:05Z | 2019-02-19T04:22:45Z | https://github.com/NullArray/AutoSploit/issues/450 | [] | AutosploitReporter | 0 |
comfyanonymous/ComfyUI | pytorch | 6,410 | Image Representation | ### Feature Idea
Would it be possible to generate the image in a portrait representation, such as 512x768?
Thank you
### Existing Solutions
_No response_
### Other
_No response_ | closed | 2025-01-09T09:22:38Z | 2025-01-11T15:49:43Z | https://github.com/comfyanonymous/ComfyUI/issues/6410 | [
"Feature"
] | MrFries1111 | 1 |
vaexio/vaex | data-science | 1,579 | [FEATURE-REQUEST]Does Vaex support merge as of ? | I think this is a pretty important application | open | 2021-09-16T00:14:03Z | 2022-04-11T11:42:02Z | https://github.com/vaexio/vaex/issues/1579 | [
"feature-request"
] | enthusiastics | 3 |
pydantic/pydantic-ai | pydantic | 1,022 | Static system prompt is not replaced by dynamic system prompt if passing message history | ### Initial Checks
- [x] I confirm that I'm using the latest version of Pydantic AI
### Description
If you originally have a static system prompt, you can't replace it with a dynamic system prompt if you pass a message history.
```python
chatbot = Agent(
model='openai:gpt-4o',
result_type=str,
system_prompt='You are a highly intelligent chatbot'
)
```
Run this and get the history and save to JSON
Now if you add a dynamic system prompt and remove the static one.
```python
chatbot = Agent(
model='openai:gpt-4o',
result_type=str,
)
@chatbot.system_prompt(dynamic=True)
def dynamic_prompt() -> str:
return 'You are a helpful chatbot'
```
If you pass the old history, it still uses the old system prompt.
### Python, Pydantic AI & LLM client version
```Text
python 3.12
pydantic 0.0.30
openai 1.65.1
``` | open | 2025-03-01T05:19:05Z | 2025-03-01T05:27:26Z | https://github.com/pydantic/pydantic-ai/issues/1022 | [
"need confirmation"
] | vikigenius | 1 |
coqui-ai/TTS | python | 2,555 | [Bug] RuntimeError: min(): Expected reduction dim to be specified for input.numel() == 0. Specify the reduction dim with the 'dim' argument. | ### Describe the bug
I am training a voice cloning model using VITS. My dataset is in LJSpeech format. I am trying to train an Indian English model straight from characters with Phonemizer = False. The training runs for 35-40 epochs and then abruptly stops. Sometimes it runs for even longer, like 15k steps, and then stops. I can share the notebook I am using for training. I have successfully completed my training with this notebook several times, but recently I have been facing this error.
Also I am getting this warning at the beginning of the training.
/usr/local/lib/python3.10/dist-packages/torch/functional.py:641: UserWarning: stft with return_complex=False is deprecated. In a future pytorch release, stft will return complex tensors for all inputs, and return_complex=False will raise an error.
Note: you can still call torch.view_as_real on the complex output to recover the old return format. (Triggered internally at ../aten/src/ATen/native/SpectralOps.cpp:862.)
return _VF.stft(input, n_fft, hop_length, win_length, window, # type: ignore[attr-defined]
I am providing the screenshots of the error I encounter everytime.
<img width="1405" alt="Screenshot 2023-04-13 at 11 08 42 PM" src="https://user-images.githubusercontent.com/19510293/234267079-d8c5f39e-75eb-4f07-b6a0-da9e56499c03.png">
<img width="1362" alt="Screenshot 2023-04-13 at 11 09 00 PM" src="https://user-images.githubusercontent.com/19510293/234267091-102b5c5c-a881-4175-b1c5-b11b322af255.png">
<img width="1356" alt="Screenshot 2023-04-13 at 11 09 14 PM" src="https://user-images.githubusercontent.com/19510293/234267110-801c9e91-af08-47e6-b41f-3ac1b38cc71b.png">
### To Reproduce
https://colab.research.google.com/drive/1k8Fk5kfU_aZ2lM7Esih3Ud1fYtNlujOQ?authuser=0#scrollTo=A49iDwajBtu_
I am using this Colab notebook for training. Every training configuration can be found there. Note that training will go on for 35-40 epochs and then stop.
### Expected behavior
Training should continue.
### Logs
_No response_
### Environment
```shell
https://colab.research.google.com/drive/1k8Fk5kfU_aZ2lM7Esih3Ud1fYtNlujOQ?authuser=0#scrollTo=A49iDwajBtu_
```
### Additional context
I have tried to resolve both the warning and the error, as I think they are related.
I tried the following solutions to resolve the warning:
https://github.com/jaywalnut310/vits/issues/15
and the following to solve the error.
https://github.com/coqui-ai/TTS/discussions/1949
It looks like Torch version 1.8 is unstable and its distribution is not available. I also tried 1.9 because the GitHub issue above prescribed it, but that distribution is not available either. | closed | 2023-04-25T11:52:05Z | 2024-01-23T15:28:31Z | https://github.com/coqui-ai/TTS/issues/2555 | [
"bug",
"wontfix"
] | offside609 | 13 |
tensorly/tensorly | numpy | 246 | Complex Support | Here are 3 issues with complex support:
```python
import math

import torch  # the examples below use the PyTorch backend's complex dtypes

import tensorly as tl
from tensorly import testing
from tensorly.decomposition import parafac, tensor_train
```
1.) Issue with "norm" from backend
```python
def test_norm_complex():
"""The norm at a minimum does not work for complex tensors with order = 2, as
it should be tensor*conj(tensor), not tensor**2. I'm not sure about the other
orders, they too should be checked."""
tensor = tl.tensor([1, 1j, 0], dtype=torch.complex64)
norm = tl.norm(tensor)
testing.assert_array_almost_equal(norm, math.sqrt(2), decimal=6)
```
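The incorrect-versus-correct reduction is easy to see outside tensorly; a NumPy illustration (this is not tensorly's actual implementation):

```python
import numpy as np

t = np.array([1, 1j, 0], dtype=np.complex64)

# What an order-2 norm implemented as sum(t**2) does for complex input:
naive = np.sqrt(np.sum(t ** 2))                  # sqrt(1 + (1j)**2 + 0) == 0
# What it should do: sum t * conj(t), i.e. |t_i|**2:
correct = np.sqrt(np.sum(t * np.conj(t)).real)   # sqrt(2)

print(naive, correct)
```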
2.) Issue with CP tensor decomposition/recomposition with parafac and cp_to_tensor - fails in general, giving a very wrong answer
```python
def test_cp_complex():
"""Test that parafac decomposition and cp_to_tensor work for complex tensors.
Produces incorrect answers. Code needs debugging for complex tensors."""
shape = (2, 4, 3, 2)
tensor = tl.randn(shape) + tl.randn(shape)*1j
decomp = parafac(tensor, rank=100)
recomp = tl.cp_to_tensor(decomp)
testing.assert_array_almost_equal(tensor, recomp)
```
3.) Issue with TT tensor decomposition/recomposition with tensor_train and tt_to_tensor - fails for the TensorFlow and MXNet backends, but still works for the other 3 backends
```python
def test_tt_complex():
"""Test that tensor_train decomposition and tt_to_tensor work for complex tensors.
Works for some backends, but not for TensorFlow and MxNet."""
shape = (2, 4, 3, 2)
tensor = tl.randn(shape) + tl.randn(shape)*1j
decomp = tensor_train(tensor, rank=(1, 8, 24, 8, 1))
recomp = tl.tt_to_tensor(decomp)
testing.assert_array_almost_equal(tensor, recomp)
``` | closed | 2021-03-22T11:09:23Z | 2021-03-22T19:59:20Z | https://github.com/tensorly/tensorly/issues/246 | [] | taylorpatti | 2 |
sigmavirus24/github3.py | rest-api | 954 | Missing set permission for collaborators | Missing parameter to handle permission for users
https://developer.github.com/v3/repos/collaborators/#add-user-as-a-collaborator
## Version Information
Please provide:
- The version of Python you're using
3.6
- The version of pip you used to install github3.py
19.1.1
- The version of github3.py, requests, uritemplate, and dateutil installed
github3.py==1.3.0
- jwcrypto [required: >=0.5.0, installed: 0.6.0]
- cryptography [required: >=1.5, installed: 2.7]
- asn1crypto [required: >=0.21.0, installed: 0.24.0]
- cffi [required: >=1.8,!=1.11.3, installed: 1.12.3]
- pycparser [required: Any, installed: 2.19]
- six [required: >=1.4.1, installed: 1.12.0]
- python-dateutil [required: >=2.6.0, installed: 2.8.0]
- six [required: >=1.5, installed: 1.12.0]
- requests [required: >=2.18, installed: 2.22.0]
- certifi [required: >=2017.4.17, installed: 2019.6.16]
- chardet [required: >=3.0.2,<3.1.0, installed: 3.0.4]
- idna [required: >=2.5,<2.9, installed: 2.8]
- urllib3 [required: >=1.21.1,<1.26,!=1.25.1,!=1.25.0, installed: 1.25.3]
- uritemplate [required: >=3.0.0, installed: 3.0.0]
## Minimum Reproducible Example
The `permission` field is missing from the `PUT` payload:
```
def add_collaborator(self, username):
"""Add ``username`` as a collaborator to a repository.
:param username:
(required), username of the user
:type username:
str or :class:`~github3.users.User`
:returns:
True if successful, False otherwise
:rtype:
"""
if not username:
return False
url = self._build_url(
"collaborators", str(username), base_url=self._api
)
return self._boolean(self._put(url), 201, 404)
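
# --- Hypothetical sketch (NOT current github3.py code) -------------------
# The GitHub API also accepts a JSON body {"permission": ...} on this PUT.
# Assuming ``_put`` forwards ``data`` to requests (as elsewhere in
# github3.py), the missing payload could be built like this:
import json

def build_collaborator_payload(permission=None):
    """Serialize the optional ``permission`` field documented by GitHub."""
    return json.dumps({"permission": permission}) if permission else None

# e.g.: self._put(url, data=build_collaborator_payload("push"))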
``` | closed | 2019-07-02T12:30:13Z | 2022-08-15T16:38:38Z | https://github.com/sigmavirus24/github3.py/issues/954 | [] | NargiT | 1 |
SYSTRAN/faster-whisper | deep-learning | 603 | Bug with the prompt reset on temp! | ```
Processing segment at 00:52.160
DEBUG: Compression ratio threshold is not met with temperature 0.0 (8.777778 > 2.400000)
DEBUG: Compression ratio threshold is not met with temperature 0.2 (9.307692 > 2.400000)
DEBUG: Log probability threshold is not met with temperature 0.4 (-1.716158 < -1.000000)
DEBUG: Log probability threshold is not met with temperature 0.6 (-2.491989 < -1.000000)
DEBUG: Log probability threshold is not met with temperature 0.8 (-3.483112 < -1.000000)
DEBUG: Log probability threshold is not met with temperature 1.0 (-2.989720 < -1.000000)
[00:52.160 --> 00:54.160] I'm sorry.
DEBUG: TEMPERATURE at the end of generate_segments(): 0.4
Processing segment at 00:54.160
```
Why is the temperature only 0.4 at the end of `generate_segments()`?
| closed | 2023-12-04T18:19:31Z | 2023-12-13T12:25:13Z | https://github.com/SYSTRAN/faster-whisper/issues/603 | [] | Purfview | 8 |
modin-project/modin | pandas | 7,123 | Preserve shape_hint for dropna | This is to avoid index materialization in Series.columnarize.
| closed | 2024-03-26T09:34:08Z | 2024-03-26T12:14:25Z | https://github.com/modin-project/modin/issues/7123 | [
"Performance 🚀"
] | YarShev | 0 |
psf/requests | python | 6,695 | Session.verify ignored if REQUESTS_CA_BUNDLE is set; behaviour not documented. | ## Summary
Session-level CA overrides are ignored if either `REQUESTS_CA_BUNDLE` or `CURL_CA_BUNDLE` is set.
This is unintuitive as you'd expect a per-session override to take precedence over global state. It's also not mentioned in [the documentation](https://requests.readthedocs.io/en/latest/user/advanced/#ssl-cert-verification).
### Repro
The following script normally outputs '200', but instead fails with SSL verification errors if either of the above environment variables are set (because `session.verify = False` gets ignored).
import requests
session = requests.Session()
session.verify = False
r = session.get('https://self-signed.badssl.com')
print(r.status_code)
## Expected Result
I'd intuitively expect the repro above to return 200 regardless of the state of the environment variables.
## Actual Result
If `REQUESTS_CA_BUNDLE` or `CURL_CA_BUNDLE` is set, the script above fails with verification errors, even though `session.verify = False`.
## Reproduction Steps
1. Create a file `test.py` containing the script from the **Summary** section above.
2. Run `export REQUESTS_CA_BUNDLE=$(python3 -c "import certifi; print(certifi.where())")`
* This sets `REQUESTS_CA_BUNDLE` to the system default truststore.
* Any other valid value for `REQUESTS_CA_BUNDLE` would work here too.
3. Run `python3 test.py` and observe SSL validation errors.
## System Information
$ python -m requests.help
```json
{
"chardet": {
"version": "4.0.0"
},
"cryptography": {
"version": "3.4.8"
},
"idna": {
"version": "3.3"
},
"implementation": {
"name": "CPython",
"version": "3.10.12"
},
"platform": {
"release": "5.15.0-102-generic",
"system": "Linux"
},
"pyOpenSSL": {
"openssl_version": "30000020",
"version": "21.0.0"
},
"requests": {
"version": "2.25.1"
},
"system_ssl": {
"version": "30000020"
},
"urllib3": {
"version": "1.26.5"
},
"using_pyopenssl": true
}
```
I have also reproduced this on requests 2.31.0.
## Other Impact
This behaviour also impacts other uses of `session.verify`, such as `session.verify = 'my_custom_ca_bundle.pem'` – if the environment variables are present, the custom CA bundle will not be used.
## Proposed Fix
I'm happy to submit a docs PR to clarify this behaviour. In an ideal world I think `session.verify` would just take precedence over the environment variables, but making that change might break consumers that are inadvertently relying on the current semantics – so I think a docs change is the best we can do.
If you do want the behavioural change, I'm also happy to submit a PR for that instead of the docs fix.
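For anyone hitting this in the meantime: the precedence logic can be observed (and worked around) without any network calls via `Session.merge_environment_settings`. A sketch, using a made-up bundle path:

```python
import os
import requests

os.environ["REQUESTS_CA_BUNDLE"] = "/tmp/fake-ca.pem"  # hypothetical path

s = requests.Session()
s.verify = False

# Session-level verify=False loses to the environment variable:
merged = s.merge_environment_settings("https://example.com", {}, None, None, None)
print(merged["verify"])  # '/tmp/fake-ca.pem'

# Workaround 1: a per-request verify value wins over the environment:
merged = s.merge_environment_settings("https://example.com", {}, None, False, None)
print(merged["verify"])  # False

# Workaround 2: stop consulting the environment entirely:
s.trust_env = False
merged = s.merge_environment_settings("https://example.com", {}, None, None, None)
print(merged["verify"])  # False
```

So until a fix or docs change lands, `session.trust_env = False` or a per-request `verify=` argument are the reliable ways to beat the environment variables.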
## Thanks
I wanted to take this opportunity to say thanks for a fantastic library – I've been a happy user of `requests` for over a decade and I really appreciate the hard work of all involved :) | closed | 2024-05-08T10:34:33Z | 2024-05-08T13:23:58Z | https://github.com/psf/requests/issues/6695 | [] | StefanKopieczek | 4 |
hankcs/HanLP | nlp | 1,821 | On Python 3.10, the minimum supported TensorFlow is 2.8.0, but setup pins tensorflow==2.6.0 | <!--
Thanks for finding this bug; please fill in the form below carefully:
-->
**Describe the bug**
- On Python 3.10, when installing with `extras_require=tf`, the `tensorflow==2.6.0` requirement cannot be satisfied
https://github.com/hankcs/HanLP/blob/31c34ec86f71fe91f1fe6d86e7ca8575c80e2306/setup.py#L24
- because [PyPI](https://pypi.org/project/tensorflow/2.6.0/#files) only provides wheels up to Python 3.9

**Expected behavior**
- It would be good if the library's dependency check could relax the version constraint appropriately
**System information**
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux CentOS 7
- Python version: 3.10
- HanLP version: 2.1.0b49
* [x] I've completed this form and searched the web for solutions.
<!-- ⬆️ This box must be checked, or your issue will be automatically deleted by a bot! --> | closed | 2023-05-22T08:12:37Z | 2023-05-23T22:04:57Z | https://github.com/hankcs/HanLP/issues/1821 | [
"feature request"
] | SWHL | 3 |
benlubas/molten-nvim | jupyter | 173 | [Help] What am I doing wrong? | please include:
- _what you're trying to do_
Start the plugin
- _what you've tried (if anything)_
run :MoltenInit
- _questions you'd like answered_
Why it's returning "Not an editor command" instead of running
If you haven't already, please read the README and browse the `docs/` folder (Yep, did that)
Also, here's the output of `:UpdateRemotePlugins`:
```txt
function remote#host#UpdateRemotePlugins[6]..<SNR>69_RegistrationCommands[13]..remote#host#RegisterPlugin, line 5
Plugin "/home/bowmanpete/.local/share/nvim/lazy/molten-nvim/rplugin/python3/molten" is already registered
remote/host: generated rplugin manifest: /home/bowmanpete/.local/share/nvim/rplugin.vim
```
I also tried calling the molten healthcheck:
```
function remote#host#UpdateRemotePlugins[6]..<SNR>69_RegistrationCommands[15]..remote#host#Require[10]..provider#pythonx#Require[12]..provider#Poll, line 7
Vim(if):Error invoking 'poll' on channel 3:^@Invalid channel: 3
function remote#host#UpdateRemotePlugins[6]..<SNR>69_RegistrationCommands[15]..remote#host#Require[10]..provider#pythonx#Require[12]..provider#Poll, line 17
Failed to load python3 host. You can try to see what happened by starting nvim with $NVIM_PYTHON_LOG_FILE set and opening the generated log file. Also, the host stderr is available in messages.
remote/host: generated rplugin manifest: /home/bowmanpete/.local/share/nvim/rplugin.vim
```
Though it shows all OK once the command is run:

| closed | 2024-03-25T22:59:24Z | 2024-04-14T20:37:57Z | https://github.com/benlubas/molten-nvim/issues/173 | [
"config problem"
] | mesa123123 | 9 |
Lightning-AI/pytorch-lightning | deep-learning | 20,576 | chunkable datasets and dataloaders | ### Description & Motivation
Current large-model training requires a huge number of training samples, so traditional map-style dataloaders fail to load the training data because of limited memory. I think there should be chunkable datasets and dataloaders available, so that map-style logic can be used to load the first subset of the training data for training while the second subset is being loaded and prepared.
### Pitch
I have seen the CombinedLoader in Lightning, but I did not find many documents or examples about it, and it seems it cannot resolve the requirement of **"loading large training data into memory chunk by chunk"**. If there is an existing solution, please give me some help. Thanks!
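To make the request concrete, here is a framework-agnostic sketch of the idea (the class and callback names are illustrative, not Lightning API; in PyTorch this would typically be an `IterableDataset` whose chunks are loaded lazily, possibly prefetched in a background thread):

```python
from typing import Callable, Iterator, List, Sequence

class ChunkedDataset:
    """Iterate a huge dataset one chunk at a time, keeping only the
    current chunk in memory (illustrative sketch, not Lightning API)."""

    def __init__(self, chunk_ids: Sequence[int], load_chunk: Callable[[int], List]):
        self.chunk_ids = chunk_ids
        self.load_chunk = load_chunk  # e.g. reads one shard file from disk

    def __iter__(self) -> Iterator:
        for cid in self.chunk_ids:
            chunk = self.load_chunk(cid)  # only this subset resides in memory
            yield from chunk              # map-style access within the chunk
            del chunk                     # released before the next chunk loads

# Usage: pretend each chunk is a shard of 3 samples
dataset = ChunkedDataset(range(2), load_chunk=lambda cid: [cid * 3 + i for i in range(3)])
print(list(dataset))  # [0, 1, 2, 3, 4, 5]
```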
### Alternatives
_No response_
### Additional context
_No response_
cc @lantiga @borda | open | 2025-02-06T08:12:44Z | 2025-02-06T08:13:06Z | https://github.com/Lightning-AI/pytorch-lightning/issues/20576 | [
"feature",
"needs triage"
] | JohnHerry | 0 |
abhiTronix/vidgear | dash | 172 | Stablilizer Error: (-215:Assertion failed) count >= 0 && count2 == count in function 'cv::RANSACPointSetRegistrator::run' | <!--
Please note that your issue will be fixed much faster if you spend about
half an hour preparing it, including the exact reproduction steps and a demo.
If you're in a hurry or don't feel confident, it's fine to report bugs with
less details, but this makes it less likely they'll get fixed soon.
If the important info is missing we'll add the 'Needs more information' label
or may choose to close the issue until there is enough information provided.
-->
## Description
<!-- Provide a more detailed introduction to the issue itself, and why you consider it to be a bug -->
**Issue by @Nizcik_twitter**
I am trying to stabilize a live video feed, and I think this is an amazing tool for that. However, I am running into an issue where stabilization fails due to an OpenCV error, and I am not sure how to troubleshoot further. I am not experienced at all in Python or coding, so I apologize if there are obvious steps I should have taken. I am on a Ryzen 3600 CPU and Windows 10, with Python 3.9 and OpenCV 4.4.0. This happens every time, immediately when I block the view of the camera, resulting in a black screen.
### Acknowledgment
<!-- By posting an issue you acknowledge the following: (Put an `x` in all the boxes that apply(important)) -->
- [x] I have searched the [issues](https://github.com/abhiTronix/vidgear/issues) for my issue and found nothing related or helpful.
- [x] I have read the [Documentation](https://abhitronix.github.io/vidgear).
- [x] I have read the [Issue Guidelines](https://abhitronix.github.io/vidgear/contribution/issue/#submitting-an-issue-guidelines).
### Environment
<!-- Include as many relevant details about the environment you experienced the bug in -->
* VidGear version: <!-- Run command `python -c "import vidgear; print(vidgear.__version__)"` --> 0.1.9
* Branch: <!-- Select between: Master | Testing | Development | PyPi --> PyPi
* Python version: <!---Run command `python -V` --> 3.9
* PiP version: <!-- Run command `python -c "import pip; print(pip.__version__)"` --> latest
* Operating System and version: windows 10
### Expected Behavior
<!-- Tell us what should happen -->
No Error with blank frame.
### Actual Behavior
<!-- Tell us what happens instead -->
<!-- You can turn `logging=True` in parameters of the respective vidgear API for getting debug output -->
Here is the error:
```sh
CamGear :: DEBUG :: Enabling Threaded Queue Mode for the current video source!
Traceback (most recent call last):
File "C:\programs\Stabilizer\stabilizer.py", line 26, in <module>
stabilized_frame = stab.stabilize(frame)
File "C:\Programs\Python39\lib\site-packages\vidgear\gears\stabilizer.py", line 194, in stabilize
self.generate_transformations() # generate transformations
File "C:\Programs\Python39\lib\site-packages\vidgear\gears\stabilizer.py", line 233, in generate_transformations
transformation = cv2.estimateAffinePartial2D(
cv2.error: OpenCV(4.4.0) C:\Users\appveyor\AppData\Local\Temp\1\pip-req-build-qjdp5db9\opencv\modules\calib3d\src\ptsetreg.cpp:174: error: (-215:Assertion failed) count >= 0 && count2 == count in function 'cv::RANSACPointSetRegistrator::run'
```
### Possible Fix
<!-- Not obligatory, but suggest a fix or reason for the bug or remove this block-->
None
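The reporter proposed no fix, but one plausible guard (internal names here are hypothetical, not vidgear's actual code) would be to skip `cv2.estimateAffinePartial2D` when a blank frame yields too few tracked points, reusing the previous transform instead:

```python
def safe_estimate(prev_pts, curr_pts, last_transform):
    """Guard around cv2.estimateAffinePartial2D: a fully black frame yields
    no trackable points, which trips the RANSAC count assertion. With fewer
    than two matched pairs, just reuse the previous transform."""
    if prev_pts is None or curr_pts is None or min(len(prev_pts), len(curr_pts)) < 2:
        return last_transform
    import cv2  # deferred so the guard itself has no OpenCV dependency
    matrix, _inliers = cv2.estimateAffinePartial2D(prev_pts, curr_pts)
    return matrix if matrix is not None else last_transform

# Blank frame -> no keypoints -> previous transform is reused instead of crashing:
print(safe_estimate(None, None, "previous-transform"))
```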
### Steps to reproduce
<!--
How would you describe your issue to someone who doesn’t know you or your project?
Try to write a sequence of steps that anybody can repeat to see the issue.
-->
Try this video: https://raw.githubusercontent.com/abhiTronix/Imbakup/master/Images/example_empty_train_input.mp4 with Stabilizer class. | closed | 2020-12-08T03:59:26Z | 2020-12-08T17:41:55Z | https://github.com/abhiTronix/vidgear/issues/172 | [
"BUG :bug:",
"SOLVED :checkered_flag:"
] | abhiTronix | 1 |
yt-dlp/yt-dlp | python | 12,559 | yt-dlp has issues with flag parsing when using it via API | ### Checklist
- [x] I'm reporting a bug unrelated to a specific site
- [x] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [x] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766), [the FAQ](https://github.com/yt-dlp/yt-dlp/wiki/FAQ), and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=is%3Aissue%20-label%3Aspam%20%20) for similar issues **including closed ones**. DO NOT post duplicates
### Provide a description that is worded well enough to be understood
This problem arose while I was butting my head against #12558, but it seems unrelated to a specific site, so I opened a separate issue.
In the logs, you can see the options I've provided. I found some of them to be completely ignored for no discernible reason.
`--no-abort-on-error` doesn't work: The downloader fails completely if a video in the list I provide is unavailable
`-o` ignores anything besides the template entries: I get the appropriately named file in my project root directory instead of under any subpath I provide. I even tried a direct copy and paste from the examples and it failed (reflected in debug output)
`-f` seems to be ignored? Could be correct, not sure tbh
`-R` is ignored completely, but "retries" is so vague that I'm not sure about this either. It definitely fails completely on an SSL handshake error, which seems like the most obvious case for a retry.
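For context (treating the exact option names as something to double-check against yt-dlp's embedding docs for your version): the Python API does not parse CLI flags at all, so keys like `'-o'` or `'-R'` in the params dict are silently ignored. The intended equivalents use the API's own option names, roughly:

```python
# Sketch of API-style params replacing the CLI-flag keys from the log above.
ydl_opts = {
    "format": "bestaudio",                              # -f bestaudio
    "paths": {"home": "music"},                         # -P music
    "outtmpl": "%(upload_date>%Y)s/%(title)s.%(ext)s",  # -o ...
    "ignoreerrors": True,                               # --no-abort-on-error
    "retries": float("inf"),                            # -R infinite
    "verbose": True,
}

print(sorted(ydl_opts))
```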
### Provide verbose output that clearly demonstrates the problem
- [x] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [x] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [x] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version stable@2025.02.19 from yt-dlp/yt-dlp [4985a4041] (pip) API
[debug] params: {'-f': 'bestaudio', '-x': True, 'verbose': True, '--audio-quality': 0, '-P': 'music', '-o': '%(upload_date>%Y)s/%(title)s.%(ext)s', '--no-abort-on-error': True, '--ignore-no-format-error': True, '-R': 'infinite', 'compat_opts': set(), 'http_headers': {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/95.0.4638.50 Safari/537.36', 'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8', 'Accept-Language': 'en-us,en;q=0.5', 'Sec-Fetch-Mode': 'navigate'}}
[debug] Python 3.13.0 (CPython AMD64 64bit) - Windows-10-10.0.19044-SP0 (OpenSSL 3.0.15 3 Sep 2024)
[debug] exe versions: ffmpeg 7.1-full_build-www.gyan.dev (setts), ffprobe 7.1-full_build-www.gyan.dev
[debug] Optional libraries: sqlite3-3.45.3
[debug] Proxy map: {}
[debug] Request Handlers: urllib
[debug] Loaded 1841 extractors
[youtube] Extracting URL: https://youtu.be/cwZ1L_0QLjw?si=HjQvqiLF5VFrrH1v
[youtube] cwZ1L_0QLjw: Downloading webpage
[youtube] cwZ1L_0QLjw: Downloading tv client config
[youtube] cwZ1L_0QLjw: Downloading player f6e09c70
[youtube] cwZ1L_0QLjw: Downloading tv player API JSON
[youtube] cwZ1L_0QLjw: Downloading ios player API JSON
[debug] [youtube] Extracting signature function js_f6e09c70_101
[debug] Loading youtube-sigfuncs.js_f6e09c70_101 from cache
[debug] Loading youtube-nsig.f6e09c70 from cache
[debug] [youtube] Decrypted nsig NUZw5ZZPruZbbfwvkrs => PtRMQ9-H8qFraQ
[debug] [youtube] Extracting signature function js_f6e09c70_105
[debug] Loading youtube-sigfuncs.js_f6e09c70_105 from cache
[debug] Loading youtube-nsig.f6e09c70 from cache
[debug] [youtube] Decrypted nsig 8vhGIJYKrGxNz3m3Bdk => 5118Upcfntfb9Q
[debug] [youtube] cwZ1L_0QLjw: ios client https formats require a GVS PO Token which was not provided. They will be skipped as they may yield HTTP Error 403. You can manually pass a GVS PO Token for this client with --extractor-args "youtube:po_token=ios.gvs+XXX". For more information, refer to https://github.com/yt-dlp/yt-dlp/wiki/PO-Token-Guide . To enable these broken formats anyway, pass --extractor-args "youtube:formats=missing_pot"
[youtube] cwZ1L_0QLjw: Downloading m3u8 information
[debug] Sort order given by extractor: quality, res, fps, hdr:12, source, vcodec, channels, acodec, lang, proto
[debug] Formats sorted by: hasvid, ie_pref, quality, res, fps, hdr:12(7), source, vcodec, channels, acodec, lang, proto, size, br, asr, vext, aext, hasaud, id
[debug] Default format spec: bestvideo*+bestaudio/best
[info] cwZ1L_0QLjw: Downloading 1 format(s): 401+251
[debug] Invoking http downloader on "https://rr3---sn-5hneknes.googlevideo.com/videoplayback?expire=1741407740&ei=nHHLZ5r3NZTK6dsPnMmp8Qw&ip=38.180.168.46&id=o-ACAD_TKGjH6J7DDnxIw6KuO6Gi_XtbusdLa6yB2_pEqX&itag=401&aitags=133%2C134%2C135%2C136%2C137%2C160%2C242%2C243%2C244%2C247%2C248%2C271%2C278%2C313%2C394%2C395%2C396%2C397%2C398%2C399%2C400%2C401&source=youtube&requiressl=yes&xpc=EgVo2aDSNQ%3D%3D&met=1741386140%2C&mh=2M&mm=31%2C29&mn=sn-5hneknes%2Csn-5hne6nzs&ms=au%2Crdu&mv=m&mvi=3&pl=25&rms=au%2Cau&initcwndbps=2362500&bui=AUWDL3y9wG9jwlOurbT5HUQNm3rSSVX1IKnuX6mlo3fNg7mzUhPqzBsu1B3YqaVeVEOPa5M7I-SPGEOS&vprv=1&svpuc=1&mime=video%2Fmp4&ns=mxe-P5fSAfvchfF1jf-Q5vsQ&rqh=1&gir=yes&clen=312341980&dur=213.600&lmt=1726280259760755&mt=1741385833&fvip=2&keepalive=yes&lmw=1&fexp=51326932%2C51358317%2C51411872&c=TVHTML5&sefc=1&txp=5532434&n=5118Upcfntfb9Q&sparams=expire%2Cei%2Cip%2Cid%2Caitags%2Csource%2Crequiressl%2Cxpc%2Cbui%2Cvprv%2Csvpuc%2Cmime%2Cns%2Crqh%2Cgir%2Cclen%2Cdur%2Clmt&lsparams=met%2Cmh%2Cmm%2Cmn%2Cms%2Cmv%2Cmvi%2Cpl%2Crms%2Cinitcwndbps&lsig=AFVRHeAwRQIhAO7aUC74o1ID1RKhf4VIdP81fvV63t1HOHUdVpZiNqlOAiAeUwpAuXrZc_KMghUyxVrO08X5diQnsCtBOfJZzbsuKQ%3D%3D&sig=AJfQdSswRQIgPPIGyZ-Tz0w_TR3HXy0fXoHVwXP9NvceyxdXgQimaRsCIQD516c6VfhfmaJkRr0Q1tN_bmTHi-e5gyqrXkPF0bB4aQ%3D%3D"
[debug] File locking is not supported. Proceeding without locking
[download] Destination: Charli xcx - Von dutch (official video) [cwZ1L_0QLjw].f401.mp4
[download] 100% of 297.87MiB in 00:00:23 at 12.82MiB/s
[debug] Invoking http downloader on "https://rr3---sn-5hneknes.googlevideo.com/videoplayback?expire=1741407740&ei=nHHLZ5r3NZTK6dsPnMmp8Qw&ip=38.180.168.46&id=o-ACAD_TKGjH6J7DDnxIw6KuO6Gi_XtbusdLa6yB2_pEqX&itag=251&source=youtube&requiressl=yes&xpc=EgVo2aDSNQ%3D%3D&met=1741386140%2C&mh=2M&mm=31%2C29&mn=sn-5hneknes%2Csn-5hne6nzs&ms=au%2Crdu&mv=m&mvi=3&pl=25&rms=au%2Cau&initcwndbps=2362500&bui=AUWDL3y9wG9jwlOurbT5HUQNm3rSSVX1IKnuX6mlo3fNg7mzUhPqzBsu1B3YqaVeVEOPa5M7I-SPGEOS&vprv=1&svpuc=1&mime=audio%2Fwebm&ns=mxe-P5fSAfvchfF1jf-Q5vsQ&rqh=1&gir=yes&clen=3609304&dur=213.621&lmt=1726276040570009&mt=1741385833&fvip=2&keepalive=yes&lmw=1&fexp=51326932%2C51358317%2C51411872&c=TVHTML5&sefc=1&txp=5532434&n=5118Upcfntfb9Q&sparams=expire%2Cei%2Cip%2Cid%2Citag%2Csource%2Crequiressl%2Cxpc%2Cbui%2Cvprv%2Csvpuc%2Cmime%2Cns%2Crqh%2Cgir%2Cclen%2Cdur%2Clmt&lsparams=met%2Cmh%2Cmm%2Cmn%2Cms%2Cmv%2Cmvi%2Cpl%2Crms%2Cinitcwndbps&lsig=AFVRHeAwRQIhAO7aUC74o1ID1RKhf4VIdP81fvV63t1HOHUdVpZiNqlOAiAeUwpAuXrZc_KMghUyxVrO08X5diQnsCtBOfJZzbsuKQ%3D%3D&sig=AJfQdSswRAIgFSywVZPQIK9Ol96iX_D_BsgX3Cqkh0k7hTtK8CVb7cwCIFiOzL5lSJ7IRa-Cr6ySf0uzUC6Z2cgCdnB_XxfYtr5w"
[download] Destination: Charli xcx - Von dutch (official video) [cwZ1L_0QLjw].f251.webm
[download] 100% of 3.44MiB in 00:00:00 at 6.00MiB/s
[Merger] Merging formats into "Charli xcx - Von dutch (official video) [cwZ1L_0QLjw].webm"
[debug] ffmpeg command line: ffmpeg -y -loglevel repeat+info -i "file:Charli xcx - Von dutch (official video) [cwZ1L_0QLjw].f401.mp4" -i "file:Charli xcx - Von dutch (official video) [cwZ1L_0QLjw].f251.webm" -c copy -map 0:v:0 -map 1:a:0 -movflags +faststart "file:Charli xcx - Von dutch (official video) [cwZ1L_0QLjw].temp.webm"
Deleting original file Charli xcx - Von dutch (official video) [cwZ1L_0QLjw].f401.mp4 (pass -k to keep)
Deleting original file Charli xcx - Von dutch (official video) [cwZ1L_0QLjw].f251.webm (pass -k to keep)
[youtube] Extracting URL: https://youtu.be/LwbleaczS-A?si=-lJ6k5fIpvYQPmJ0
[youtube] LwbleaczS-A: Downloading webpage
[youtube] LwbleaczS-A: Downloading tv client config
[youtube] LwbleaczS-A: Downloading tv player API JSON
[youtube] LwbleaczS-A: Downloading ios player API JSON
ERROR: [youtube] LwbleaczS-A: Video unavailable. This video is not available
File "W:\Projects\Zarya\Playlist-DL\.venv\Lib\site-packages\yt_dlp\extractor\common.py", line 746, in extract
ie_result = self._real_extract(url)
File "W:\Projects\Zarya\Playlist-DL\.venv\Lib\site-packages\yt_dlp\extractor\youtube.py", line 4724, in _real_extract
self.raise_no_formats(reason, expected=True)
~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^
File "W:\Projects\Zarya\Playlist-DL\.venv\Lib\site-packages\yt_dlp\extractor\common.py", line 1267, in raise_no_formats
raise ExtractorError(msg, expected=expected, video_id=video_id)
Traceback (most recent call last):
File "W:\Projects\Zarya\Playlist-DL\.venv\Lib\site-packages\yt_dlp\YoutubeDL.py", line 1637, in wrapper
return func(self, *args, **kwargs)
File "W:\Projects\Zarya\Playlist-DL\.venv\Lib\site-packages\yt_dlp\YoutubeDL.py", line 1772, in __extract_info
ie_result = ie.extract(url)
File "W:\Projects\Zarya\Playlist-DL\.venv\Lib\site-packages\yt_dlp\extractor\common.py", line 746, in extract
ie_result = self._real_extract(url)
File "W:\Projects\Zarya\Playlist-DL\.venv\Lib\site-packages\yt_dlp\extractor\youtube.py", line 4724, in _real_extract
self.raise_no_formats(reason, expected=True)
~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^
File "W:\Projects\Zarya\Playlist-DL\.venv\Lib\site-packages\yt_dlp\extractor\common.py", line 1267, in raise_no_formats
raise ExtractorError(msg, expected=expected, video_id=video_id)
yt_dlp.utils.ExtractorError: [youtube] LwbleaczS-A: Video unavailable. This video is not available
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "W:\Projects\Zarya\Playlist-DL\main.py", line 40, in <module>
main()
~~~~^^
File "W:\Projects\Zarya\Playlist-DL\main.py", line 37, in main
download(links)
~~~~~~~~^^^^^^^
File "W:\Projects\Zarya\Playlist-DL\main.py", line 33, in download
ydl.download(links)
~~~~~~~~~~~~^^^^^^^
File "W:\Projects\Zarya\Playlist-DL\.venv\Lib\site-packages\yt_dlp\YoutubeDL.py", line 3618, in download
self.__download_wrapper(self.extract_info)(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
url, force_generic_extractor=self.params.get('force_generic_extractor', False))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "W:\Projects\Zarya\Playlist-DL\.venv\Lib\site-packages\yt_dlp\YoutubeDL.py", line 3591, in wrapper
res = func(*args, **kwargs)
File "W:\Projects\Zarya\Playlist-DL\.venv\Lib\site-packages\yt_dlp\YoutubeDL.py", line 1626, in extract_info
return self.__extract_info(url, self.get_info_extractor(key), download, extra_info, process)
~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "W:\Projects\Zarya\Playlist-DL\.venv\Lib\site-packages\yt_dlp\YoutubeDL.py", line 1655, in wrapper
self.report_error(str(e), e.format_traceback())
~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "W:\Projects\Zarya\Playlist-DL\.venv\Lib\site-packages\yt_dlp\YoutubeDL.py", line 1095, in report_error
self.trouble(f'{self._format_err("ERROR:", self.Styles.ERROR)} {message}', *args, **kwargs)
~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "W:\Projects\Zarya\Playlist-DL\.venv\Lib\site-packages\yt_dlp\YoutubeDL.py", line 1034, in trouble
raise DownloadError(message, exc_info)
yt_dlp.utils.DownloadError: ERROR: [youtube] LwbleaczS-A: Video unavailable. This video is not available
``` | closed | 2025-03-07T22:26:53Z | 2025-03-10T21:52:03Z | https://github.com/yt-dlp/yt-dlp/issues/12559 | [
"question"
] | bqback | 6 |
allenai/allennlp | data-science | 5,028 | Open Information Extraction Training fails with "status code 404" | <!--
Please fill this template entirely and do not erase any of it.
We reserve the right to close without a response bug reports which are incomplete.
If you have a question rather than a bug, please ask on [Stack Overflow](https://stackoverflow.com/questions/tagged/allennlp) rather than posting an issue here.
-->
## Checklist
<!-- To check an item on the list replace [ ] with [x]. -->
- [x] I have verified that the issue exists against the `master` branch of AllenNLP.
- [x] I have read the relevant section in the [contribution guide](https://github.com/allenai/allennlp/blob/master/CONTRIBUTING.md#bug-fixes-and-new-features) on reporting bugs.
- [x] I have checked the [issues list](https://github.com/allenai/allennlp/issues) for similar or identical bug reports.
- [x] I have checked the [pull requests list](https://github.com/allenai/allennlp/pulls) for existing proposed fixes.
- [x] I have checked the [CHANGELOG](https://github.com/allenai/allennlp/blob/master/CHANGELOG.md) and the [commit log](https://github.com/allenai/allennlp/commits/master) to find out if the bug was already fixed in the master branch.
- [x] I have included in the "Description" section below a traceback from any exceptions related to this bug.
- [x] I have included in the "Related issues or possible duplicates" section beloew all related issues and possible duplicate issues (If there are none, check this box anyway).
- [x] I have included in the "Environment" section below the name of the operating system and Python version that I was using when I discovered this bug.
- [x] I have included in the "Environment" section below the output of `pip freeze`.
- [x] I have included in the "Steps to reproduce" section below a minimally reproducible example.
## Description
<!-- Please provide a clear and concise description of what the bug is here. -->
running the command
$ allennlp train \\
https://raw.githubusercontent.com/allenai/allennlp-models/main/training_config/structured-prediction/srl.jsonnet \\
-s /path/to/output
from [Model Usage](https://demo.allennlp.org/open-information-extraction) results in the following error message
OSError: HEAD request failed for url https://raw.githubusercontent.com/allenai/allennlp-models/main/training_config/structured-prediction/srl.jsonnet with status code 404
<details>
<summary><b>Python traceback:</b></summary>
<p>
<!-- Paste the traceback from any exception (if there was one) in between the next two lines below -->
```
2021-02-28 17:49:16,852 - INFO - transformers.file_utils - PyTorch version 1.5.1 available.
Traceback (most recent call last):
File "/Users/mmir/Desktop/Explorations/allennlp_openie/venv/bin/allennlp", line 8, in <module>
sys.exit(run())
File "/Users/mmir/Desktop/Explorations/allennlp_openie/venv/lib/python3.7/site-packages/allennlp/__main__.py", line 19, in run
main(prog="allennlp")
File "/Users/mmir/Desktop/Explorations/allennlp_openie/venv/lib/python3.7/site-packages/allennlp/commands/__init__.py", line 92, in main
args.func(args)
File "/Users/mmir/Desktop/Explorations/allennlp_openie/venv/lib/python3.7/site-packages/allennlp/commands/train.py", line 112, in train_model_from_args
dry_run=args.dry_run,
File "/Users/mmir/Desktop/Explorations/allennlp_openie/venv/lib/python3.7/site-packages/allennlp/commands/train.py", line 162, in train_model_from_file
params = Params.from_file(parameter_filename, overrides)
File "/Users/mmir/Desktop/Explorations/allennlp_openie/venv/lib/python3.7/site-packages/allennlp/common/params.py", line 487, in from_file
params_file = cached_path(params_file)
File "/Users/mmir/Desktop/Explorations/allennlp_openie/venv/lib/python3.7/site-packages/allennlp/common/file_utils.py", line 105, in cached_path
return get_from_cache(url_or_filename, cache_dir)
File "/Users/mmir/Desktop/Explorations/allennlp_openie/venv/lib/python3.7/site-packages/allennlp/common/file_utils.py", line 301, in get_from_cache
etag = _http_etag(url)
File "/Users/mmir/Desktop/Explorations/allennlp_openie/venv/lib/python3.7/site-packages/allennlp/common/file_utils.py", line 211, in _http_etag
"HEAD request failed for url {} with status code {}".format(url, response.status_code)
OSError: HEAD request failed for url https://raw.githubusercontent.com/allenai/allennlp-models/main/training_config/structured-prediction/srl.jsonnet with status code 404
```
</p>
</details>
## Related issues or possible duplicates
- None
## Environment
<!-- Provide the name of operating system below (e.g. OS X, Linux) -->
OS: macOS 10.15.4
<!-- Provide the Python version you were using (e.g. 3.7.1) -->
Python version: 3.7.6
<details>
<summary><b>Output of <code>pip freeze</code>:</b></summary>
<p>
<!-- Paste the output of `pip freeze` in between the next two lines below -->
```
allennlp==1.0.0
allennlp-models==1.0.0
attrs==20.3.0
blis==0.4.1
boto3==1.17.14
botocore==1.20.14
cached-property==1.5.2
catalogue==1.0.0
certifi==2020.12.5
chardet==4.0.0
click==7.1.2
conllu==3.0
cymem==2.0.5
en-core-web-sm @ https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-2.2.5/en_core_web_sm-2.2.5.tar.gz
filelock==3.0.12
future==0.18.2
h5py==3.1.0
idna==2.10
importlib-metadata==3.5.0
iniconfig==1.1.1
jmespath==0.10.0
joblib==1.0.1
jsonnet==0.17.0
jsonpickle==2.0.0
murmurhash==1.0.5
nltk==3.5
numpy==1.20.1
overrides==3.0.0
packaging==20.9
plac==1.1.3
pluggy==0.13.1
preshed==3.0.5
protobuf==3.15.1
py==1.10.0
py-rouge==1.1
pyparsing==2.4.7
pytest==6.2.2
python-dateutil==2.8.1
regex==2020.11.13
requests==2.25.1
s3transfer==0.3.4
sacremoses==0.0.43
scikit-learn==0.24.1
scipy==1.6.1
sentencepiece==0.1.95
six==1.15.0
spacy==2.2.4
srsly==1.0.5
tensorboardX==2.1
thinc==7.4.0
threadpoolctl==2.1.0
tokenizers==0.7.0
toml==0.10.2
torch==1.5.1
tqdm==4.57.0
transformers==2.11.0
typing-extensions==3.7.4.3
urllib3==1.26.3
wasabi==0.8.2
word2number==1.1
zipp==3.4.0
```
</p>
</details>
## Steps to reproduce
<details>
<summary><b>Example source:</b></summary>
<p>
<!-- Add a fully runnable example in between the next two lines below that will reproduce the bug -->
```
$ pip install allennlp==1.0.0 allennlp-models==1.0.0
$ allennlp train \
https://raw.githubusercontent.com/allenai/allennlp-models/main/training_config/structured-prediction/srl.jsonnet \
-s /path/to/output
```
</p>
</details>
| closed | 2021-03-01T02:02:50Z | 2021-03-25T16:17:33Z | https://github.com/allenai/allennlp/issues/5028 | [
"bug",
"stale"
] | MM-Vianai | 7 |
GibbsConsulting/django-plotly-dash | plotly | 421 | Crashing if orjson is installed | Hi,
There seems to be an issue in [PR#408](https://github.com/GibbsConsulting/django-plotly-dash/pull/408/files )
In `_patches.py` we have:
```from plotly.io._json import config```
but then later in the code:
```JsonConfig.validate_orjson()```
which throws a `NameError`, because `JsonConfig` was never imported into that namespace.
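The failure pattern can be reproduced without plotly at all: importing a module-level *instance* does not bring the *class name* into scope. The sketch below only mirrors the names from the report (`JsonConfig`, `config`); the toy class body is an assumption, not plotly's real implementation.

```python
import types

# Stand-in for plotly.io._json: a module defining JsonConfig and a
# module-level instance named `config` (toy body, names from the report).
_json = types.ModuleType("_json")
exec(
    "class JsonConfig:\n"
    "    def validate_orjson(self):\n"
    "        return 'validated'\n"
    "\n"
    "config = JsonConfig()\n",
    _json.__dict__,
)

# What _patches.py does -- import only the instance:
config = _json.config

# Referencing the class name then fails, exactly as described above:
try:
    JsonConfig.validate_orjson  # noqa: F821 -- never imported here
    name_error = False
except NameError:
    name_error = True

# Calling the method through the imported instance works:
result = config.validate_orjson()
print(name_error, result)
```

In other words, the fix is either to call `config.validate_orjson()` or to import the class name explicitly.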
Looking at the plotly code, of course config = JsonConfig... | closed | 2022-10-30T18:36:23Z | 2022-11-10T23:19:28Z | https://github.com/GibbsConsulting/django-plotly-dash/issues/421 | [
"bug"
] | mfriedy | 1 |
ray-project/ray | deep-learning | 50,958 | [RayServe] Documentation is misleading about what services get created by a RayService | ### Description
I tried creating a sample RayService, following the documentation: https://docs.ray.io/en/latest/serve/production-guide/kubernetes.html#deploying-a-serve-application :
```
$ kubectl apply -f https://raw.githubusercontent.com/ray-project/kuberay/5b1a5a11f5df76db2d66ed332ff0802dc3bbff76/ray-operator/config/samples/ray-service.text-ml.yaml
rayservice.ray.io/rayservice-sample created
```
With KubeRay operator 1.3.0:
```
$ helm list
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
kuberay-operator default 2 2025-02-27 12:13:41.616923988 -0800 PST deployed kuberay-operator-1.3.0
```
Although the documentation implied that this would create services with names such as:
```
rayservice-sample-head-svc ClusterIP ... 8080/TCP,6379/TCP,8265/TCP,10001/TCP,8000/TCP,52365/TCP XXs
rayservice-sample-raycluster-454c4-dashboard-svc ClusterIP ... 52365/TCP XXs
rayservice-sample-raycluster-454c4-head-svc ClusterIP ... 8000/TCP,52365/TCP,8080/TCP,6379/TCP,8265/TCP,10001/TCP XXs
rayservice-sample-serve-svc ClusterIP ... 8000/TCP XXs
```
No service with a name `rayservice-sample-head-svc` was created:
```
$ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.100.0.1 <none> 443/TCP 2d18h
rayservice-sample-raycluster-47q27-head-svc ClusterIP None <none> 10001/TCP,8265/TCP,6379/TCP,8080/TCP,8000/TCP 4m10s
```
### Link
https://docs.ray.io/en/latest/serve/production-guide/kubernetes.html#deploying-a-serve-application | open | 2025-02-27T21:17:46Z | 2025-02-27T21:22:44Z | https://github.com/ray-project/ray/issues/50958 | [
"triage",
"docs"
] | boyleconnor | 0 |
redis/redis-om-python | pydantic | 295 | perf integrations | open | 2022-07-07T07:34:59Z | 2022-07-07T14:37:22Z | https://github.com/redis/redis-om-python/issues/295 | [
"maintenance"
] | chayim | 0 | |
aminalaee/sqladmin | fastapi | 502 | `action.name` should be unique per `ModelView`, not globally | ### Checklist
- [X] The bug is reproducible against the latest release or `master`.
- [X] There are no similar issues or pull requests to fix it yet.
### Describe the bug
Now that 0.11.0 has been released, I began working on the actual feature for my job. It works well, but with one unintuitive (to me) drawback I didn't discover earlier.
If you have:
```python
...
class FooAdmin(ModelView):
    ...
    @action(name="duplicate")
    async def duplicate(request: Request) -> Response:
        return Response("Foo Duplicate")
...
class BarAdmin(ModelView):
    ...
    @action(name="duplicate")
    async def duplicate(request: Request) -> Response:
        return Response("Bar Duplicate")
```
then hitting `GET /admin/foo/actions/duplicate` will return `Foo Duplicate` and hitting `GET /admin/bar/actions/duplicate` will return `Foo Duplicate` also. This is because the `identity` is a `path_param`, not an actual namespace. I recognize the action `name` needs to be unique, but I think it should be unique per `ModelView`.
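The collision can be sketched without sqladmin at all. This is an illustration of the namespacing problem, not the library's actual routing code (there, the *first* registered route wins rather than the last), but either way only one handler remains reachable when the key is the action name alone:

```python
# Handlers keyed by action name only: two ModelViews cannot both own "duplicate".
flat = {}
flat["duplicate"] = lambda: "Foo Duplicate"
flat["duplicate"] = lambda: "Bar Duplicate"   # silently replaces the first
print(flat["duplicate"]())                    # only one handler survives

# Namespacing the key by (identity, name) keeps both reachable:
namespaced = {
    ("foo", "duplicate"): lambda: "Foo Duplicate",
    ("bar", "duplicate"): lambda: "Bar Duplicate",
}
print(namespaced[("foo", "duplicate")]())
print(namespaced[("bar", "duplicate")]())
```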
### Steps to reproduce the bug
```python
from sqladmin import Admin, ModelView, action
from starlette.applications import Starlette
from starlette.responses import Response
from starlette.requests import Request
from sqlalchemy import create_engine
from sqlalchemy import (
    Column,
    Integer,
    String,
)
from sqlalchemy.ext.declarative import declarative_base

test_database_uri_sync = "sqlite:///test.db?check_same_thread=False"
engine = create_engine(test_database_uri_sync)
Base = declarative_base()
app = Starlette()
admin = Admin(app, engine)


class Foo(Base):
    __tablename__ = "foos"
    id = Column(Integer, primary_key=True)
    name = Column(String(length=16))


class Bar(Base):
    __tablename__ = "bars"
    id = Column(Integer, primary_key=True)
    name = Column(String(length=16))


class FooAdmin(ModelView, model=Foo):
    column_list = [Foo.id, Foo.name]

    @action(name="duplicate")
    async def duplicate(self, request: Request) -> Response:
        return Response("Foo Duplicate")


class BarAdmin(ModelView, model=Bar):
    column_list = [Bar.id, Bar.name]

    @action(name="duplicate")
    async def duplicate(self, request: Request) -> Response:
        return Response("Bar Duplicate")


# Base.metadata.drop_all(engine)
# Base.metadata.create_all(engine)
admin.add_view(FooAdmin)
admin.add_view(BarAdmin)
```
### Expected behavior
Hitting `GET /admin/foo/actions/duplicate` should return `Foo Duplicate` and hitting `GET /admin/bar/actions/duplicate` should return `Bar Duplicate`.
### Actual behavior
Hitting `GET /admin/foo/actions/duplicate` will return `Foo Duplicate` and hitting `GET /admin/bar/actions/duplicate` will return `Foo Duplicate` also.
### Debugging material
_No response_
### Environment
- Ubuntu 22.04
- Python 3.11
### Additional context
_No response_ | closed | 2023-05-23T19:40:42Z | 2023-05-24T07:06:52Z | https://github.com/aminalaee/sqladmin/issues/502 | [] | murrple-1 | 0 |
marimo-team/marimo | data-science | 3,912 | Create permalink doesn't copy to clipboard in firefox | ### Describe the bug
Clicking "Create Permalink" worked in chrome, not firefox.
In firefox, on the latest version, `undefined` was copied to my clipboard.
### Environment
This was run on marimo.app
<details>
```
Replace this line with the output of marimo env. Leave the backticks in place.
```
</details>
### Code to reproduce
N/A | closed | 2025-02-25T18:37:02Z | 2025-02-26T00:01:25Z | https://github.com/marimo-team/marimo/issues/3912 | [
"bug"
] | paddymul | 4 |
vipstone/faceai | tensorflow | 50 | Simplification of the "Install OpenCV" section | Install numpy
pip install numpy
Install wheel
pip install wheel
Install OpenCV
pip install opencv-python | open | 2020-04-16T02:25:16Z | 2020-04-16T03:26:09Z | https://github.com/vipstone/faceai/issues/50 | [] | BackMountainDevil | 1 |
JaidedAI/EasyOCR | pytorch | 963 | Pytorch 2.0 support | Does the model support Pytorch 2.0? | open | 2023-03-11T14:15:55Z | 2023-03-11T14:15:55Z | https://github.com/JaidedAI/EasyOCR/issues/963 | [] | arcb01 | 0 |
flairNLP/flair | nlp | 2,692 | OneHotEmbeddings - RecursionError: maximum recursion depth exceeded | **Describe the bug**
After the first epoch of training, when trying to save the model, the code crashes with the following error:
> 2022-03-29 14:42:36,561 EPOCH 1 done: loss 2.2471 - lr 0.1000000
2022-03-29 14:42:47,611 DEV : loss 0.8668829202651978 - f1-score (micro avg) 0.7597
2022-03-29 14:42:47,665 BAD EPOCHS (no improvement): 0
2022-03-29 14:42:47,666 saving best model
Traceback (most recent call last):
File "my_main.py", line 271, in <module>
train(tag=opts.tag, tag2=opts.tag2, corpus=opts.corpus)
File "my_main.py", line 164, in train
max_epochs=100)
File "/lib/python3.7/site-packages/flair/trainers/trainer.py", line 672, in train
self.model.save(base_path / "best-model.pt")
File "/lib/python3.7/site-packages/flair/nn/model.py", line 85, in save
torch.save(model_state, str(model_file), pickle_protocol=4)
File "/lib/python3.7/site-packages/torch/serialization.py", line 379, in save
_save(obj, opened_zipfile, pickle_module, pickle_protocol)
File "/lib/python3.7/site-packages/torch/serialization.py", line 484, in _save
pickler.dump(obj)
File "/lib/python3.7/site-packages/torch/serialization.py", line 467, in persistent_id
if torch.is_storage(obj):
RecursionError: maximum recursion depth exceeded
**The issue seems to be with OneHotEmbeddings, alone or stacked...both cause issues. When I remove OneHotEmbeddings, it works fine.**
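The shape of the failure — `torch.save` recursing until Python's limit is hit — is what serializing a deeply nested or self-referential object graph produces. A minimal, flair-free sketch of the same symptom (this mimics the error mode only; it is not flair's actual object graph):

```python
import pickle

# Serializing a deeply nested object graph makes pickle recurse once per
# level, so a chain far deeper than the recursion limit raises RecursionError,
# just like torch.save (which uses pickle) in the traceback above.
class Node:
    def __init__(self):
        self.child = None

head = Node()
cur = head
for _ in range(10_000):      # much deeper than the default limit of 1000
    cur.child = Node()
    cur = cur.child

try:
    pickle.dumps(head)
    hit_limit = False
except RecursionError:
    hit_limit = True
print(hit_limit)
```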
**To Reproduce**
My code is very similar to the tutorial on training a model:
```python
from flair.embeddings import TransformerWordEmbeddings, OneHotEmbeddings, StackedEmbeddings
from flair.models import SequenceTagger
from flair.trainers import ModelTrainer

# 4. initialize embeddings
embeddings_1 = TransformerWordEmbeddings('roberta-large')
embeddings_2 = OneHotEmbeddings(corpus=corpus, field='xml', embedding_length=8)
embedding_types = [
    embeddings_1,
    embeddings_2
]
embeddings = StackedEmbeddings(embeddings=embedding_types)

# 5. initialize sequence tagger
tagger = SequenceTagger(hidden_size=256,
                        embeddings=embeddings,
                        tag_dictionary=tag_dictionary,
                        tag_type=tag_type,
                        use_crf=True,
                        use_rnn=True)

# 6. initialize trainer
trainer = ModelTrainer(tagger, corpus)

# 7. start training
trainer.train('trained_taggers/v1-march28/',
              learning_rate=0.1,
              mini_batch_size=32,
              max_epochs=100)
```
**Expected behavior**
Save the checkpoint, continue training
**Environment:**
- OS: Linux
- python3.7
- Lib version:
flair: 0.9
torch: 1.10.0
transformers: 4.17.0
| closed | 2022-03-29T20:36:12Z | 2022-04-04T22:26:55Z | https://github.com/flairNLP/flair/issues/2692 | [
"bug"
] | shabnam-b | 6 |
modelscope/modelscope | nlp | 586 | Errors while using git lfs and code | 

As the title says, I got some errors while using git lfs and the code.
| closed | 2023-10-14T10:18:25Z | 2024-06-27T01:51:04Z | https://github.com/modelscope/modelscope/issues/586 | [
"Stale"
] | Ethan-Chen-plus | 5 |
SciTools/cartopy | matplotlib | 1,673 | Cannot make contour plot together with coastlines | ### Description
Hello lovely cartopy developers!
Thanks a lot for your great work here!
For some reason, using `GeoAxes.contour` and `GeoAxes.coastlines` together fails with `AttributeError: 'list' object has no attribute 'xy'` (raised inside shapely, though I suppose the error actually lies within cartopy). Excluding either the `ax.contour` call or the `ax.coastlines` call down below, or adding `ax.set_global()` before the `plt.savefig` call, works. So it seems like cartopy is unable to determine the extent with the two polygons plotted on the `GeoAxes`.
#### Code to reproduce
Take the file at [psyplot/psy-maps/tests/test-t2m-u-v.nc](https://github.com/psyplot/psy-maps/blob/master/tests/test-t2m-u-v.nc) for instance.
```python
import cartopy.crs as ccrs
import matplotlib.pyplot as plt
import xarray as xr
ds = xr.open_dataset('test-t2m-u-v.nc')
ax = plt.axes(projection=ccrs.Orthographic())
ax.coastlines()
ax.contour(ds.lon.values, ds.lat.values, ds.t2m[0, 0].values, transform=ccrs.PlateCarree())
plt.savefig("test.pdf")
```
#### Traceback
```python
Traceback (most recent call last):
File "error.py", line 9, in <module>
plt.savefig("test.pdf")
File "/home/psommer/miniconda3/envs/test/lib/python3.8/site-packages/matplotlib/pyplot.py", line 859, in savefig
res = fig.savefig(*args, **kwargs)
File "/home/psommer/miniconda3/envs/test/lib/python3.8/site-packages/matplotlib/figure.py", line 2311, in savefig
self.canvas.print_figure(fname, **kwargs)
File "/home/psommer/miniconda3/envs/test/lib/python3.8/site-packages/matplotlib/backend_bases.py", line 2210, in print_figure
result = print_method(
File "/home/psommer/miniconda3/envs/test/lib/python3.8/site-packages/matplotlib/backend_bases.py", line 1639, in wrapper
return func(*args, **kwargs)
File "/home/psommer/miniconda3/envs/test/lib/python3.8/site-packages/matplotlib/backends/backend_pdf.py", line 2593, in print_pdf
self.figure.draw(renderer)
File "/home/psommer/miniconda3/envs/test/lib/python3.8/site-packages/matplotlib/artist.py", line 41, in draw_wrapper
return draw(artist, renderer, *args, **kwargs)
File "/home/psommer/miniconda3/envs/test/lib/python3.8/site-packages/matplotlib/figure.py", line 1863, in draw
mimage._draw_list_compositing_images(
File "/home/psommer/miniconda3/envs/test/lib/python3.8/site-packages/matplotlib/image.py", line 131, in _draw_list_compositing_images
a.draw(renderer)
File "/home/psommer/miniconda3/envs/test/lib/python3.8/site-packages/matplotlib/artist.py", line 41, in draw_wrapper
return draw(artist, renderer, *args, **kwargs)
File "/home/psommer/miniconda3/envs/test/lib/python3.8/site-packages/cartopy/mpl/geoaxes.py", line 479, in draw
return matplotlib.axes.Axes.draw(self, renderer=renderer, **kwargs)
File "/home/psommer/miniconda3/envs/test/lib/python3.8/site-packages/matplotlib/artist.py", line 41, in draw_wrapper
return draw(artist, renderer, *args, **kwargs)
File "/home/psommer/miniconda3/envs/test/lib/python3.8/site-packages/matplotlib/cbook/deprecation.py", line 411, in wrapper
return func(*inner_args, **inner_kwargs)
File "/home/psommer/miniconda3/envs/test/lib/python3.8/site-packages/matplotlib/axes/_base.py", line 2747, in draw
mimage._draw_list_compositing_images(renderer, self, artists)
File "/home/psommer/miniconda3/envs/test/lib/python3.8/site-packages/matplotlib/image.py", line 131, in _draw_list_compositing_images
a.draw(renderer)
File "/home/psommer/miniconda3/envs/test/lib/python3.8/site-packages/matplotlib/artist.py", line 41, in draw_wrapper
return draw(artist, renderer, *args, **kwargs)
File "/home/psommer/miniconda3/envs/test/lib/python3.8/site-packages/cartopy/mpl/feature_artist.py", line 152, in draw
extent = ax.get_extent(feature_crs)
File "/home/psommer/miniconda3/envs/test/lib/python3.8/site-packages/cartopy/mpl/geoaxes.py", line 730, in get_extent
p = self._get_extent_geom(crs)
File "/home/psommer/miniconda3/envs/test/lib/python3.8/site-packages/cartopy/mpl/geoaxes.py", line 775, in _get_extent_geom
geom_in_crs = proj.project_geometry(geom_in_src_proj,
File "/home/psommer/miniconda3/envs/test/lib/python3.8/site-packages/cartopy/crs.py", line 218, in project_geometry
return getattr(self, method_name)(geometry, src_crs)
File "/home/psommer/miniconda3/envs/test/lib/python3.8/site-packages/cartopy/crs.py", line 354, in _project_polygon
is_ccw = polygon.exterior.is_ccw
File "/home/psommer/miniconda3/envs/test/lib/python3.8/site-packages/shapely/geometry/polygon.py", line 88, in is_ccw
return bool(self.impl['is_ccw'](self))
File "/home/psommer/miniconda3/envs/test/lib/python3.8/site-packages/shapely/algorithms/cga.py", line 14, in is_ccw_op
return signed_area(ring) >= 0.0
File "/home/psommer/miniconda3/envs/test/lib/python3.8/site-packages/shapely/algorithms/cga.py", line 6, in signed_area
xs, ys = ring.coords.xy
AttributeError: 'list' object has no attribute 'xy'
```
<details>
<summary>Full environment definition</summary>
I created a fresh conda environment via
```bash
conda create -c conda-forge --override-channels cartopy scipy matplotlib xarray netcdf4 -n test
```
### Operating system
Linux
### Cartopy version
`conda-forge/linux-64::cartopy-0.18.0-py38h88488af_4`
### conda list
```
# packages in environment at /home/psommer/miniconda3/envs/test:
#
# Name Version Build Channel
_libgcc_mutex 0.1 conda_forge conda-forge
_openmp_mutex 4.5 1_gnu conda-forge
bzip2 1.0.8 h516909a_3 conda-forge
c-ares 1.16.1 h516909a_3 conda-forge
ca-certificates 2020.6.20 hecda079_0 conda-forge
cartopy 0.18.0 py38h88488af_4 conda-forge
certifi 2020.6.20 py38h924ce5b_2 conda-forge
cftime 1.2.1 py38hab2c0dc_1 conda-forge
curl 7.71.1 he644dc0_8 conda-forge
cycler 0.10.0 py_2 conda-forge
freetype 2.10.4 he06d7ca_0 conda-forge
geos 3.8.1 he1b5a44_0 conda-forge
hdf4 4.2.13 hf30be14_1003 conda-forge
hdf5 1.10.6 nompi_h1022a3e_1110 conda-forge
jpeg 9d h516909a_0 conda-forge
kiwisolver 1.3.0 py38hbf85e49_0 conda-forge
krb5 1.17.1 hfafb76e_3 conda-forge
lcms2 2.11 hbd6801e_0 conda-forge
ld_impl_linux-64 2.35 h769bd43_9 conda-forge
libblas 3.9.0 2_openblas conda-forge
libcblas 3.9.0 2_openblas conda-forge
libcurl 7.71.1 hcdd3856_8 conda-forge
libedit 3.1.20191231 he28a2e2_2 conda-forge
libev 4.33 h516909a_1 conda-forge
libffi 3.2.1 he1b5a44_1007 conda-forge
libgcc-ng 9.3.0 h5dbcf3e_17 conda-forge
libgfortran-ng 9.3.0 he4bcb1c_17 conda-forge
libgfortran5 9.3.0 he4bcb1c_17 conda-forge
libgomp 9.3.0 h5dbcf3e_17 conda-forge
liblapack 3.9.0 2_openblas conda-forge
libnetcdf 4.7.4 nompi_hefab0ff_106 conda-forge
libnghttp2 1.41.0 h8cfc5f6_2 conda-forge
libopenblas 0.3.12 pthreads_h4812303_1 conda-forge
libpng 1.6.37 hed695b0_2 conda-forge
libssh2 1.9.0 hab1572f_5 conda-forge
libstdcxx-ng 9.3.0 h2ae2ef3_17 conda-forge
libtiff 4.1.0 hc7e4089_6 conda-forge
libwebp-base 1.1.0 h516909a_3 conda-forge
lz4-c 1.9.2 he1b5a44_3 conda-forge
matplotlib 3.3.2 0 conda-forge
matplotlib-base 3.3.2 py38h4d1ce4f_1 conda-forge
ncurses 6.2 he1b5a44_2 conda-forge
netcdf4 1.5.4 nompi_py38hec8b9af_103 conda-forge
numpy 1.19.2 py38hf89b668_1 conda-forge
olefile 0.46 pyh9f0ad1d_1 conda-forge
openssl 1.1.1h h516909a_0 conda-forge
pandas 1.1.3 py38hddd6c8b_2 conda-forge
pillow 8.0.1 py38h9776b28_0 conda-forge
pip 20.2.4 py_0 conda-forge
proj 7.1.1 h966b41f_3 conda-forge
pyparsing 2.4.7 pyh9f0ad1d_0 conda-forge
pyshp 2.1.2 pyh9f0ad1d_0 conda-forge
python 3.8.6 h852b56e_0_cpython conda-forge
python-dateutil 2.8.1 py_0 conda-forge
python_abi 3.8 1_cp38 conda-forge
pytz 2020.1 pyh9f0ad1d_0 conda-forge
readline 8.0 he28a2e2_2 conda-forge
scipy 1.5.2 py38hd9480d8_2 conda-forge
setuptools 49.6.0 py38h924ce5b_2 conda-forge
shapely 1.7.1 py38hc7361b7_1 conda-forge
six 1.15.0 pyh9f0ad1d_0 conda-forge
sqlite 3.33.0 h4cf870e_1 conda-forge
tk 8.6.10 hed695b0_1 conda-forge
tornado 6.0.4 py38h1e0a361_2 conda-forge
wheel 0.35.1 pyh9f0ad1d_0 conda-forge
xarray 0.16.1 py_0 conda-forge
xz 5.2.5 h516909a_1 conda-forge
zlib 1.2.11 h516909a_1010 conda-forge
zstd 1.4.5 h6597ccf_2 conda-forge
```
### pip list
```
Package Version
--------------- -------------------
Cartopy 0.18.0
certifi 2020.6.20
cftime 1.2.1
cycler 0.10.0
kiwisolver 1.3.0
matplotlib 3.3.2
netCDF4 1.5.4
numpy 1.19.2
olefile 0.46
pandas 1.1.3
Pillow 8.0.1
pip 20.2.4
pyparsing 2.4.7
pyshp 2.1.2
python-dateutil 2.8.1
pytz 2020.1
scipy 1.5.2
setuptools 49.6.0.post20201009
Shapely 1.7.1
six 1.15.0
tornado 6.0.4
wheel 0.35.1
xarray 0.16.1
```
</details>
| closed | 2020-10-30T22:29:37Z | 2021-01-18T21:55:14Z | https://github.com/SciTools/cartopy/issues/1673 | [] | Chilipp | 3 |
jina-ai/clip-as-service | pytorch | 578 | When started with Docker, BertClient sometimes connects and sometimes doesn't; after adding a timeout, requests time out | **Prerequisites**
> Please fill in by replacing `[ ]` with `[x]`.
* [ ] Are you running the latest `bert-as-service`?
* [ ] Did you follow [the installation](https://github.com/hanxiao/bert-as-service#install) and [the usage](https://github.com/hanxiao/bert-as-service#usage) instructions in `README.md`?
* [ ] Did you check the [FAQ list in `README.md`](https://github.com/hanxiao/bert-as-service#speech_balloon-faq)?
* [ ] Did you perform [a cursory search on existing issues](https://github.com/hanxiao/bert-as-service/issues)?
**System information**
> Some of this information can be collected via [this script](https://github.com/tensorflow/tensorflow/tree/master/tools/tf_env_collect.sh).
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04):
- TensorFlow installed from (source or binary):
- TensorFlow version:
- Python version:
- `bert-as-service` version:
- GPU model and memory:
- CPU model and memory:
---
### Description
> Please replace `YOUR_SERVER_ARGS` and `YOUR_CLIENT_ARGS` accordingly. You can also write your own description for reproducing the issue.
I'm using this command to start the server:
```bash
bert-serving-start YOUR_SERVER_ARGS
```
and calling the server via:
```python
bc = BertClient(YOUR_CLIENT_ARGS)
bc.encode()
```
Then this issue shows up:
... | open | 2020-07-23T06:34:25Z | 2020-07-23T06:34:56Z | https://github.com/jina-ai/clip-as-service/issues/578 | [] | youbingchenyoubing | 0 |
iterative/dvc | data-science | 10,506 | dvc update should consider "cache: false" setting of output in imported `.dvc` | On suggestion by @shcheklein, I added `cache: false` to the output in the `.dvc` file created by `dvc import` to be able to track the imported file with Git instead of DVC. However, `dvc update` still adds the output to `.gitignore`. Also, when I use `git add -f` to track the file despite it being ignored, then `dvc update` will complain that the output is already tracked by the SCM.
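For concreteness, this is roughly the kind of import stage being described — a hand-written sketch in the shape `dvc import` produces, where the hashes, URL and paths are all placeholders:

```yaml
# data.txt.dvc -- sketch only; hashes, URL and paths are placeholders
md5: 11111111111111111111111111111111
frozen: true
deps:
- path: data/data.txt
  repo:
    url: https://github.com/example/upstream
    rev_lock: 2222222222222222222222222222222222222222
outs:
- path: data.txt
  md5: 33333333333333333333333333333333
  cache: false   # intent: have Git, not the DVC cache, track this file
```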
Should `dvc update` take into account the `cache: false` flag of the output in the input `.dvc` file? | open | 2024-08-06T10:25:04Z | 2024-10-23T08:06:37Z | https://github.com/iterative/dvc/issues/10506 | [
"bug",
"A: data-sync"
] | aschuh-hf | 4 |
dinoperovic/django-salesman | rest-api | 47 | Remove address fields or make them optional. | For stores that sell something intangible (courses, images, videos and much more), indicating the delivery and purchase address is not required. I found a workaround and allow empty strings in the address validator, but it doesn't look very nice in the admin panel | closed | 2024-05-06T21:34:34Z | 2024-05-12T21:14:33Z | https://github.com/dinoperovic/django-salesman/issues/47 | [] | GvozdevLeonid | 1 |
Lightning-AI/pytorch-lightning | machine-learning | 19,664 | torch.hstack() will cause the loss of the gradient flow track | ### Bug description
Concatenating multiple `nn.Parameter`s with `torch.hstack` causes the loss of gradient flow, i.e. `grad_fn` is `None`.
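For what it's worth, in isolation `torch.hstack` does preserve the autograd graph; `grad_fn` comes out `None` when the call runs under a `no_grad` context — which is the state during the validation sanity check where the traceback below is raised. A minimal check, assuming only `torch` is installed:

```python
import torch

a = torch.nn.Parameter(torch.randn(3, 1))
b = torch.nn.Parameter(torch.randn(3, 1))

tracked = torch.hstack([a, b])
print(tracked.grad_fn)        # a CatBackward node -- gradient flow intact

with torch.no_grad():         # e.g. inside a sanity-check/validation loop
    untracked = torch.hstack([a, b])
print(untracked.grad_fn)      # None -- autograd disabled by the context,
                              # not by hstack itself
```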
### What version are you seeing the problem on?
v1.9
### How to reproduce the bug
```python
import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim
import lightning as L
from learner.train_utils import hinge_loss
from learner.mlp import projector
from torch.utils.data import Dataset, DataLoader
class RealDataset(Dataset):
    def __init__(self, ds_list):
        self.data = ds_list

    def __len__(self):
        return len(self.data)

    def __getitem__(self, idx):
        return self.data[idx]


class simpleModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.dim = 8
        self.num_protos = 5
        self.user_weights = nn.ParameterDict()
        self.protos = nn.Parameter(
            torch.tensor(np.random.multivariate_normal(np.zeros(self.dim), np.eye(self.dim), self.num_protos).T).float(),
            requires_grad=True
        )
        self.projector = projector(self.dim, 1, self.dim, False)
        self.softmax = nn.Softmax(dim=0)

    def init_param(self, user_id):
        new_weight = nn.Parameter(
            torch.tensor(np.random.randn(self.num_protos, 1)).float(),
            requires_grad=True
        )
        self.user_weights[str(user_id)] = new_weight

    def forward(self, x):
        user_ids, x0, x1 = x[0], x[1][0], x[1][1]
        x0, x1 = torch.Tensor(x0.T).float(), torch.Tensor(x1.T).float()
        user_weights = [self.user_weights[str(key.item())] for key in user_ids]
        print(user_weights)
        user_weights = torch.hstack(user_weights)
        print(user_weights.grad_fn)
        us_probs = self.softmax(user_weights)
        # print(self.protos.shape)
        # print(us_probs.shape)
        user_ideal_points = self.protos @ us_probs
        x_0_minus_us = (x0 - user_ideal_points).T
        x_1_minus_us = (x1 - user_ideal_points).T
        ele_1 = self.projector(x_0_minus_us)
        ele_2 = self.projector(x_1_minus_us)
        delta = torch.sum(ele_1 * ele_1, dim=1) - torch.sum(ele_2 * ele_2, dim=1)
        return delta


class simpleDataModule(L.LightningDataModule):
    def __init__(self, ds_list) -> None:
        super().__init__()
        self.ds_list = ds_list
        self.ds = RealDataset(self.ds_list)
        self.batch_size = 4

    def prepare_data(self):
        pass

    def setup(self, stage: str):
        if stage == "fit":
            self.ds_train = self.ds
            self.ds_val = self.ds
        if stage == "test":
            self.ds_test = self.ds
        if stage == "predict":
            self.ds_test = self.ds

    def train_dataloader(self):
        return DataLoader(self.ds_train, batch_size=self.batch_size, num_workers=4, persistent_workers=True, shuffle=True)

    def val_dataloader(self):
        return self.train_dataloader()

    def test_dataloader(self):
        return self.train_dataloader()

    def predict_dataloader(self):
        return self.train_dataloader()


class simpleModelWrapLightning(L.LightningModule):
    def __init__(self, simple_model) -> None:
        super().__init__()
        self.simple_model = simple_model
        self.automatic_optimization = False
        self.loss_fn = hinge_loss

    def _wrap_forward(self, batch, batch_idx):
        x, y = batch
        y_hat = self.simple_model(x)
        loss = self.loss_fn(y_hat, y)
        accu = torch.mean(((y_hat * y) > 0).to(torch.float))
        return loss, accu

    def training_step(self, batch, batch_idx):
        opt1, opt2, opt3 = self.optimizers()
        opt1.zero_grad()
        opt2.zero_grad()
        opt3.zero_grad()
        loss, accu = self._wrap_forward(batch, batch_idx)
        self.manual_backward(loss)
        opt1.step()
        opt2.step()
        opt3.step()
        return loss

    def validation_step(self, batch, batch_idx):
        return self.training_step(batch, batch_idx)

    def test_step(self, batch, batch_idx):
        return self.training_step(batch, batch_idx)

    def configure_optimizers(self):
        user_weights_params = []
        prototype_params = []
        projector_params = []
        for name, param in self.simple_model.named_parameters():
            if 'proto' in name:
                prototype_params.append(param)
            elif 'projector' in name:
                projector_params.append(param)
        for user_id in self.simple_model.user_weights:
            user_weights_params.append(self.simple_model.user_weights[user_id])
        optimizer1 = optim.AdamW(user_weights_params, lr=0.01)
        optimizer2 = optim.AdamW(prototype_params, lr=0.01)
        optimizer3 = optim.AdamW(projector_params, lr=0.01)
        return [optimizer1, optimizer2, optimizer3]


simple_m = simpleModel()
for user_id in range(5):
    simple_m.init_param(user_id)

simple_ds_list = [((i, (np.random.randn(8), np.random.randn(8))), np.random.choice([-1, 1])) for i in range(10)]
simpleDs = simpleDataModule(simple_ds_list)
user_ids = np.unique([sample[0][0] for sample in simple_ds_list])
for i in user_ids:
    simple_m.init_param(i)

simpleDs = simpleDataModule(simple_ds_list)
trainer = L.Trainer(max_epochs=10)
simple_m_L = simpleModelWrapLightning(simple_m)
trainer.fit(simple_m_L, simpleDs)
```
### Error messages and logs
```
GPU available: True (cuda), used: True
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
You are using a CUDA device ('NVIDIA GeForce RTX 4090') that has Tensor Cores. To properly utilize them, you should set `torch.set_float32_matmul_precision('medium' | 'high')` which will trade-off precision for performance. For more details, read https://pytorch.org/docs/stable/generated/torch.set_float32_matmul_precision.html#torch.set_float32_matmul_precision
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
| Name | Type | Params
---------------------------------------------
0 | simple_model | simpleModel | 154
---------------------------------------------
154 Trainable params
0 Non-trainable params
154 Total params
0.001 Total estimated model params size (MB)
Sanity Checking: | | 0/? [00:00<?, ?it/s]/home/daiwei/miniconda3/envs/rlhf/lib/python3.9/site-packages/lightning/pytorch/trainer/connectors/data_connector.py:492: Your `val_dataloader`'s sampler has shuffling enabled, it is strongly recommended that you turn shuffling off for val/test dataloaders.
Sanity Checking DataLoader 0: 0%| | 0/2 [00:00<?, ?it/s][Parameter containing:
tensor([[ 0.0235],
[ 1.2737],
[ 1.3439],
[-1.7743],
[-0.5568]], device='cuda:0', requires_grad=True), Parameter containing:
tensor([[ 0.9459],
[-0.9973],
[ 0.4710],
[ 2.1656],
[-1.6106]], device='cuda:0', requires_grad=True), Parameter containing:
tensor([[-0.2413],
[-0.4596],
[ 1.3030],
[-0.0996],
[ 0.7247]], device='cuda:0', requires_grad=True), Parameter containing:
tensor([[-0.8224],
[ 1.0738],
[ 1.5441],
[-1.0892],
[-0.3891]], device='cuda:0', requires_grad=True)]
None
{
"name": "RuntimeError",
"message": "element 0 of tensors does not require grad and does not have a grad_fn",
"stack": "---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Cell In[4], line 4
2 trainer = L.Trainer(max_epochs=10)
3 simple_m_L = simpleModelWrapLightning(simple_m)
----> 4 trainer.fit(simple_m_L, simpleDs)
File ~/miniconda3/envs/rlhf/lib/python3.9/site-packages/lightning/pytorch/trainer/trainer.py:544, in Trainer.fit(self, model, train_dataloaders, val_dataloaders, datamodule, ckpt_path)
542 self.state.status = TrainerStatus.RUNNING
543 self.training = True
--> 544 call._call_and_handle_interrupt(
545 self, self._fit_impl, model, train_dataloaders, val_dataloaders, datamodule, ckpt_path
546 )
File ~/miniconda3/envs/rlhf/lib/python3.9/site-packages/lightning/pytorch/trainer/call.py:44, in _call_and_handle_interrupt(trainer, trainer_fn, *args, **kwargs)
42 if trainer.strategy.launcher is not None:
43 return trainer.strategy.launcher.launch(trainer_fn, *args, trainer=trainer, **kwargs)
---> 44 return trainer_fn(*args, **kwargs)
46 except _TunerExitException:
47 _call_teardown_hook(trainer)
File ~/miniconda3/envs/rlhf/lib/python3.9/site-packages/lightning/pytorch/trainer/trainer.py:580, in Trainer._fit_impl(self, model, train_dataloaders, val_dataloaders, datamodule, ckpt_path)
573 assert self.state.fn is not None
574 ckpt_path = self._checkpoint_connector._select_ckpt_path(
575 self.state.fn,
576 ckpt_path,
577 model_provided=True,
578 model_connected=self.lightning_module is not None,
579 )
--> 580 self._run(model, ckpt_path=ckpt_path)
582 assert self.state.stopped
583 self.training = False
File ~/miniconda3/envs/rlhf/lib/python3.9/site-packages/lightning/pytorch/trainer/trainer.py:987, in Trainer._run(self, model, ckpt_path)
982 self._signal_connector.register_signal_handlers()
984 # ----------------------------
985 # RUN THE TRAINER
986 # ----------------------------
--> 987 results = self._run_stage()
989 # ----------------------------
990 # POST-Training CLEAN UP
991 # ----------------------------
992 log.debug(f\"{self.__class__.__name__}: trainer tearing down\")
File ~/miniconda3/envs/rlhf/lib/python3.9/site-packages/lightning/pytorch/trainer/trainer.py:1031, in Trainer._run_stage(self)
1029 if self.training:
1030 with isolate_rng():
-> 1031 self._run_sanity_check()
1032 with torch.autograd.set_detect_anomaly(self._detect_anomaly):
1033 self.fit_loop.run()
File ~/miniconda3/envs/rlhf/lib/python3.9/site-packages/lightning/pytorch/trainer/trainer.py:1060, in Trainer._run_sanity_check(self)
1057 call._call_callback_hooks(self, "on_sanity_check_start")
1059 # run eval step
-> 1060 val_loop.run()
1062 call._call_callback_hooks(self, \"on_sanity_check_end\")
1064 # reset logger connector
File ~/miniconda3/envs/rlhf/lib/python3.9/site-packages/lightning/pytorch/loops/utilities.py:182, in _no_grad_context.<locals>._decorator(self, *args, **kwargs)
180 context_manager = torch.no_grad
181 with context_manager():
--> 182 return loop_run(self, *args, **kwargs)
File ~/miniconda3/envs/rlhf/lib/python3.9/site-packages/lightning/pytorch/loops/evaluation_loop.py:135, in _EvaluationLoop.run(self)
133 self.batch_progress.is_last_batch = data_fetcher.done
134 # run step hooks
--> 135 self._evaluation_step(batch, batch_idx, dataloader_idx, dataloader_iter)
136 except StopIteration:
137 # this needs to wrap the `*_step` call too (not just `next`) for `dataloader_iter` support
138 break
File ~/miniconda3/envs/rlhf/lib/python3.9/site-packages/lightning/pytorch/loops/evaluation_loop.py:396, in _EvaluationLoop._evaluation_step(self, batch, batch_idx, dataloader_idx, dataloader_iter)
390 hook_name = "test_step" if trainer.testing else "validation_step"
391 step_args = (
392 self._build_step_args_from_hook_kwargs(hook_kwargs, hook_name)
393 if not using_dataloader_iter
394 else (dataloader_iter,)
395 )
--> 396 output = call._call_strategy_hook(trainer, hook_name, *step_args)
398 self.batch_progress.increment_processed()
400 if using_dataloader_iter:
401 # update the hook kwargs now that the step method might have consumed the iterator
File ~/miniconda3/envs/rlhf/lib/python3.9/site-packages/lightning/pytorch/trainer/call.py:309, in _call_strategy_hook(trainer, hook_name, *args, **kwargs)
306 return None
308 with trainer.profiler.profile(f"[Strategy]{trainer.strategy.__class__.__name__}.{hook_name}"):
--> 309 output = fn(*args, **kwargs)
311 # restore current_fx when nested context
312 pl_module._current_fx_name = prev_fx_name
File ~/miniconda3/envs/rlhf/lib/python3.9/site-packages/lightning/pytorch/strategies/strategy.py:412, in Strategy.validation_step(self, *args, **kwargs)
410 if self.model != self.lightning_module:
411 return self._forward_redirection(self.model, self.lightning_module, "validation_step", *args, **kwargs)
--> 412 return self.lightning_module.validation_step(*args, **kwargs)
Cell In[2], line 97, in simpleModelWrapLightning.validation_step(self, batch, batch_idx)
96 def validation_step(self, batch, batch_idx):
---> 97 return self.training_step(batch, batch_idx)
Cell In[2], line 91, in simpleModelWrapLightning.training_step(self, batch, batch_idx)
89 opt3.zero_grad()
90 loss,accu = self._wrap_forward(batch,batch_idx)
---> 91 self.manual_backward(loss)
92 opt1.step()
93 opt2.step()
File ~/miniconda3/envs/rlhf/lib/python3.9/site-packages/lightning/pytorch/core/module.py:1071, in LightningModule.manual_backward(self, loss, *args, **kwargs)
1069 else:
1070 self._verify_is_manual_optimization("manual_backward")
-> 1071 self.trainer.strategy.backward(loss, None, *args, **kwargs)
File ~/miniconda3/envs/rlhf/lib/python3.9/site-packages/lightning/pytorch/strategies/strategy.py:213, in Strategy.backward(self, closure_loss, optimizer, *args, **kwargs)
210 assert self.lightning_module is not None
211 closure_loss = self.precision_plugin.pre_backward(closure_loss, self.lightning_module)
--> 213 self.precision_plugin.backward(closure_loss, self.lightning_module, optimizer, *args, **kwargs)
215 closure_loss = self.precision_plugin.post_backward(closure_loss, self.lightning_module)
216 self.post_backward(closure_loss)
File ~/miniconda3/envs/rlhf/lib/python3.9/site-packages/lightning/pytorch/plugins/precision/precision.py:72, in Precision.backward(self, tensor, model, optimizer, *args, **kwargs)
52 @override
53 def backward( # type: ignore[override]
54 self,
(...)
59 **kwargs: Any,
60 ) -> None:
61 r"""Performs the actual backpropagation.
62
63 Args:
(...)
70
71 """
---> 72 model.backward(tensor, *args, **kwargs)
File ~/miniconda3/envs/rlhf/lib/python3.9/site-packages/lightning/pytorch/core/module.py:1090, in LightningModule.backward(self, loss, *args, **kwargs)
1088 self._fabric.backward(loss, *args, **kwargs)
1089 else:
-> 1090 loss.backward(*args, **kwargs)
File ~/miniconda3/envs/rlhf/lib/python3.9/site-packages/torch/_tensor.py:492, in Tensor.backward(self, gradient, retain_graph, create_graph, inputs)
482 if has_torch_function_unary(self):
483 return handle_torch_function(
484 Tensor.backward,
485 (self,),
(...)
490 inputs=inputs,
491 )
--> 492 torch.autograd.backward(
493 self, gradient, retain_graph, create_graph, inputs=inputs
494 )
File ~/miniconda3/envs/rlhf/lib/python3.9/site-packages/torch/autograd/__init__.py:251, in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables, inputs)
246 retain_graph = create_graph
248 # The reason we repeat the same comment below is that
249 # some Python versions print out the first line of a multi-line function
250 # calls in the traceback and some print out the last line
--> 251 Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
252 tensors,
253 grad_tensors_,
254 retain_graph,
255 create_graph,
256 inputs,
257 allow_unreachable=True,
258 accumulate_grad=True,
259 )
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
```
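For context, here is a minimal, self-contained sketch of how this `RuntimeError` can arise (the `Linear` layer and tensor shapes are illustrative assumptions, not the reporter's actual model): Lightning's validation sanity check runs the step under `torch.no_grad()`, so a `validation_step` that simply delegates to a `training_step` which calls `manual_backward()` ends up invoking `backward()` on a loss that has no `grad_fn`.

```python
import torch

model = torch.nn.Linear(4, 1)   # stand-in for the real model (assumption)
x = torch.randn(8, 4)

# Training path: parameters require grad, so the loss carries a grad_fn
# and backward() succeeds.
loss = model(x).sum()
loss.backward()

# Validation path: Lightning's evaluation loop (including the sanity check
# in the traceback above) wraps the step in torch.no_grad(), so the same
# forward pass now yields a loss with requires_grad == False. Calling
# backward() on it -- which is what manual_backward() ultimately does --
# raises the RuntimeError shown in the traceback.
with torch.no_grad():
    val_loss = model(x).sum()

caught = None
try:
    val_loss.backward()
except RuntimeError as e:
    caught = e
print(type(caught).__name__, "->", caught)
```

If that is indeed the cause here, guarding the backward call (e.g. only calling `self.manual_backward(loss)` when `self.training` is true) or having `validation_step` compute the loss without stepping the optimizers would avoid it — offered as a hypothesis, not a confirmed fix.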
### Environment
<details>
<summary>Current environment</summary>
```
#- Lightning Component (e.g. Trainer, LightningModule, LightningApp, LightningWork, LightningFlow): Trainer
#- PyTorch Lightning Version (e.g., 1.5.0): 1.9.5
#- PyTorch Version (e.g., 2.0):2.1.2
#- Python version (e.g., 3.9): 3.9.18
#- OS (e.g., Linux): Pop!_OS 22.04 LTS
#- CUDA/cuDNN version: 12.1.0
#- GPU models and configuration: RTX4090
#- How you installed Lightning(`conda`, `pip`, source): pip
```
</details>
### More info
_No response_

---

Issue: https://github.com/Lightning-AI/pytorch-lightning/issues/19664 (closed)
Opened: 2024-03-17 · Updated: 2024-03-18 · Author: ChenDaiwei-99 · Comments: 0
Labels: bug, needs triage, ver: 1.9.x