| repo_name (string) | topic (string) | issue_number (int64) | title (string) | body (string) | state (string) | created_at (string) | updated_at (string) | url (string) | labels (list) | user_login (string) | comments_count (int64) |
|---|---|---|---|---|---|---|---|---|---|---|---|
freqtrade/freqtrade | python | 11,364 | Lock all trading pairs after completing both entry and exit of a trade within the same candle to prevent any additional pairs from opening trades in the current candle. | ## Describe your environment
* Operating system: Linux
* Python Version: 3.10.12
* CCXT version:
* Freqtrade Version: 2024.12
## Your question
I know that currently freqtrade auto-locks a pair after exiting a trade, to prevent reopening new trades on that pair within the same candle. I'm wondering if there's a way to support auto-locking all pairs after a trade is both opened and exited within the same candle. As far as I know, in backtesting the result is that each candle can have at most max_open_trades trades, and I expect my live/dry_run results to be as close as possible to backtesting.
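A minimal, self-contained sketch of the requested behaviour (this is not freqtrade's actual PairLocks API; the class and method names here are hypothetical): when a trade both opens and closes inside one candle, a global lock blocks every pair until that candle ends.

```python
from datetime import datetime, timedelta, timezone


class GlobalCandleLock:
    """Toy model of an all-pairs lock that lasts until the current candle closes."""

    def __init__(self, timeframe_minutes):
        self.timeframe = timedelta(minutes=timeframe_minutes)
        self.locked_until = None

    def candle_end(self, now):
        # Round `now` up to the close of the candle it falls into.
        epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)
        elapsed = (now - epoch) % self.timeframe
        return now + (self.timeframe - elapsed)

    def on_trade_closed(self, opened_at, closed_at):
        # Entry and exit happened within the same candle:
        # lock every pair until that candle closes.
        if self.candle_end(opened_at) == self.candle_end(closed_at):
            self.locked_until = self.candle_end(closed_at)

    def can_open(self, pair, now):
        return self.locked_until is None or now >= self.locked_until
```

In a real strategy this check would hook into freqtrade's entry-confirmation callback, with freqtrade's own pair-lock API (wildcard `'*'` locks) doing the actual locking; the exact names there may differ from this sketch.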
| closed | 2025-02-10T10:27:33Z | 2025-02-10T15:12:17Z | https://github.com/freqtrade/freqtrade/issues/11364 | [
"Question"
] | RickConsss | 1 |
xonsh/xonsh | data-science | 4,913 | Current job is not updated in terminal window's title | When I run `xonsh` is using its default settings, the `$TITLE` format string (responsible for setting the terminal window's title) is
```
{current_job:{} | }{user}@{hostname}: {cwd} | xonsh
```
The `current_job` variable in `$TITLE` means that when a foreground job is running, the terminal's title should be updated with the job's command.
For example, suppose my terminal's title is `yaxollum@fedora: ~ | xonsh` when no jobs are running.
When I launch the `cat` command, my terminal's title should be updated to `cat | yaxollum@fedora: ~ | xonsh`. However, under the current `main` version of xonsh, my terminal's title stays unchanged. `git bisect` shows that this was introduced by #4697.
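The `{current_job:{} | }` piece of that template is a conditional field: when `current_job` is empty, the whole `… | ` section is dropped; otherwise the value is substituted for the inner `{}`. A minimal stand-alone illustration of that behaviour (not xonsh's actual formatter):

```python
import re

# Hypothetical re-implementation of the conditional title field:
# '{name:SPEC}' where SPEC contains a literal '{}' renders SPEC with the
# field value substituted, or nothing if the field is empty/missing.
FIELD = re.compile(r"\{(\w+)(?::((?:\{\}|[^{}])*))?\}")


def render_title(template, fields):
    def repl(match):
        name, spec = match.group(1), match.group(2)
        value = fields.get(name)
        if not value:
            return ''
        if spec is None:
            return str(value)
        return spec.replace('{}', str(value))
    return FIELD.sub(repl, template)
```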
Both this issue and #4034 appear to be related to setting the terminal's title, so I'll try to fix both of them in a PR.
## xonfig
<details>
```
+------------------+-------------------------+
| xonsh | 0.13.0 |
| Git SHA | f2ca59a2 |
| Commit Date | Aug 6 05:07:09 2022 |
| Python | 3.9.13 |
| PLY | 3.11 |
| have readline | True |
| prompt toolkit | 3.0.29 |
| shell type | prompt_toolkit |
| history backend | sqlite |
| pygments | 2.7.4 |
| on posix | True |
| on linux | True |
| distro | fedora |
| on wsl | False |
| on darwin | False |
| on windows | False |
| on cygwin | False |
| on msys2 | False |
| is superuser | False |
| default encoding | utf-8 |
| xonsh encoding | utf-8 |
| encoding errors | surrogateescape |
| xontrib | [] |
| RC file 1 | /home/yaxollum/.xonshrc |
+------------------+-------------------------+
```
</details>
## For community
⬇️ **Please click the 👍 reaction instead of leaving a `+1` or 👍 comment**
| closed | 2022-08-08T00:25:55Z | 2022-08-10T04:20:10Z | https://github.com/xonsh/xonsh/issues/4913 | [
"prompt"
] | yaxollum | 1 |
microsoft/nni | machine-learning | 5,383 | The following code block gives me an error | The following code block gives me an error
```
import bz2
import urllib.request
import numpy as np
from sklearn.datasets import load_svmlight_file
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.feature_selection import SelectFromModel
from nni.algorithms.feature_engineering.gradient_selector import FeatureGradientSelector
def test():
    url_zip_train = 'https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/binary/rcv1_train.binary.bz2'
    urllib.request.urlretrieve(url_zip_train, filename='train.bz2')

    f_svm = open('train.svm', 'wt')
    with bz2.open('train.bz2', 'rb') as f_zip:
        data = f_zip.read()
    f_svm.write(data.decode('utf-8'))
    f_svm.close()

    X, y = load_svmlight_file('train.svm')
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)

    pipeline = make_pipeline(FeatureGradientSelector(
        n_epochs=1, n_features=10), LogisticRegression())
    # pipeline = make_pipeline(SelectFromModel(ExtraTreesClassifier(n_estimators=50)), LogisticRegression())
    pipeline.fit(X_train, y_train)
    print("Pipeline Score: ", pipeline.score(X_train, y_train))


if __name__ == "__main__":
    test()
```

_Originally posted by @AbdelrahmanHamdy1996 in https://github.com/microsoft/nni/discussions/5382_
| closed | 2023-02-19T14:12:35Z | 2023-09-15T04:22:12Z | https://github.com/microsoft/nni/issues/5383 | [] | AbdelrahmanHamdy1996 | 3 |
aio-libs/aiomysql | sqlalchemy | 93 | How to select a specific column? | How to select a specific column?
`android_push.select(columns=[android_push.c.uid, ])` does not work
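For reference, in plain SQLAlchemy Core (which this integration wraps), specific columns are selected by passing them to `select()` directly rather than through a `columns=` keyword. A sketch with a hypothetical table definition (SQLAlchemy 1.4+/2.x style; older 1.x releases used a list, `sa.select([android_push.c.uid])`):

```python
import sqlalchemy as sa

metadata = sa.MetaData()
# Hypothetical definition mirroring the table in the question.
android_push = sa.Table(
    "android_push", metadata,
    sa.Column("id", sa.Integer, primary_key=True),
    sa.Column("uid", sa.Integer),
    sa.Column("payload", sa.String(255)),
)

# Pass the columns you want directly to select().
stmt = sa.select(android_push.c.uid)
print(stmt)
```

With aiomysql's `sa` engine the statement is then executed as usual against an established connection (the connection object itself is assumed here, not shown).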
| closed | 2016-08-04T11:20:36Z | 2016-08-05T09:42:08Z | https://github.com/aio-libs/aiomysql/issues/93 | [] | 631068264 | 1 |
laughingman7743/PyAthena | sqlalchemy | 309 | Support for reading in chunks with PandasCursor | closed | 2022-05-07T07:42:42Z | 2022-07-31T06:52:55Z | https://github.com/laughingman7743/PyAthena/issues/309 | [] | laughingman7743 | 0 | |
modin-project/modin | pandas | 6,767 | Provide the ability to use experimental functionality when experimental mode is not enabled globally via an environment variable. | Example where it can be useful:
```python
import modin.pandas as pd

df = pd.DataFrame([1, 2, 3, 4])
# [some code]
with modin.utils.enable_exp_mode():
    # this import has side effects that will need to be removed when leaving the context
    # for example:
    # 1. `IsExperimental.put(True)`
    # 2. `setattr(DataFrame, "to_pickle_distributed", to_pickle_distributed)`
    # 3. Modification of internal factory and IO classes
    from modin.experimental.pandas import read_pickle_distributed, to_pickle_distributed

    to_pickle_distributed(df, "test_file*.pkl")
    # [some code]
    new_df = read_pickle_distributed("test_file*.pkl")
``` | closed | 2023-11-23T16:07:58Z | 2023-12-08T16:31:15Z | https://github.com/modin-project/modin/issues/6767 | [
"new feature/request 💬",
"P1"
] | anmyachev | 0 |
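The save-and-restore behaviour such an `enable_exp_mode` context manager (described in the issue above) would need can be sketched with just the stdlib; all names below are hypothetical stand-ins for modin internals:

```python
import contextlib


class Config:
    # Stand-in for modin's global IsExperimental flag.
    is_experimental = False


class DataFrame:
    pass


def to_pickle_distributed(df, path):
    return f"wrote {path}"


@contextlib.contextmanager
def enable_exp_mode():
    """Flip the experimental flag and patch DataFrame, restoring both on exit."""
    Config.is_experimental = True
    setattr(DataFrame, "to_pickle_distributed", to_pickle_distributed)
    try:
        yield
    finally:
        Config.is_experimental = False
        delattr(DataFrame, "to_pickle_distributed")
```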
2noise/ChatTTS | python | 702 | Is there a good way to make the model read out numbers? | closed | 2024-08-20T06:32:55Z | 2024-10-05T04:01:28Z | https://github.com/2noise/ChatTTS/issues/702 | [
"duplicate",
"stale"
] | weichen11011 | 2 | |
Guovin/iptv-api | api | 784 | The generated file's timestamp is inconsistent with the time inside the Docker container | ### Don't skip these steps | 不要跳过这些步骤
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field | 我明白,如果我“故意”删除或跳过任何强制性的\*字段,我将被**封锁**
- [X] I have checked through the search that there are no similar issues that already exist | 我已经通过搜索仔细检查过没有存在已经创建的相似问题
- [X] I will not submit any issues that are not related to this project | 我不会提交任何与本项目无关的问题
### Occurrence environment | 触发环境
- [ ] Workflow | 工作流
- [ ] GUI | 软件
- [X] Docker
- [ ] Command line | 命令行
### Bug description | 具体描述
![Uploading screenshot_2025-01-03_09-40-39.png…]()
The times differ by 8 hours.
### Error log | 报错日志
[Uploading sort.log…]()
| closed | 2025-01-03T01:44:49Z | 2025-01-08T06:44:27Z | https://github.com/Guovin/iptv-api/issues/784 | [
"duplicate"
] | FRANKASEE | 2 |
brightmart/text_classification | nlp | 122 | TextGCN models | hello,
TextGCN is the latest article on text classification using GCN. Can you add this model to the model comparison?
github address: https://github.com/yao8839836/text_gcn
paper: "Graph Convolutional Networks for Text Classification."
Thanks. | open | 2019-05-30T01:49:51Z | 2019-05-30T01:49:51Z | https://github.com/brightmart/text_classification/issues/122 | [] | CigaLi | 0 |
OFA-Sys/Chinese-CLIP | computer-vision | 268 | Problems encountered when trying to use the CLIP model for text-to-text search | I see that the CLIP model also supports text embeddings, so I tried using only CLIP, including for text-to-text search. I found that insert and search both work fine with simple TXT documents I wrote myself, but when using a longer PDF document, vectorizing the document raises an error:
Token indices sequence length is longer than the specified maximum sequence length for this model (356 > 77). Running this sequence through the model will result in indexing errors
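The warning reflects the text encoder's fixed context length (77 tokens, as in CLIP). A common workaround is to split long documents into chunks under that limit and embed each chunk separately, then pool or compare per-chunk embeddings. A stdlib sketch of the chunking step (with a plain token list standing in for the real tokenizer output):

```python
def chunk_tokens(tokens, max_len=77, overlap=16):
    """Split a token sequence into overlapping chunks of at most max_len."""
    if max_len <= overlap:
        raise ValueError("max_len must be greater than overlap")
    chunks = []
    start = 0
    step = max_len - overlap
    while start < len(tokens):
        chunks.append(tokens[start:start + max_len])
        if start + max_len >= len(tokens):
            break
        start += step
    return chunks
```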
It looks like the model doesn't support documents this long? I'd like to ask whether there is a way to check the maximum document length the model supports. | closed | 2024-03-08T06:58:08Z | 2024-06-17T07:49:19Z | https://github.com/OFA-Sys/Chinese-CLIP/issues/268 | [] | Amphetaminewei | 1 |
matterport/Mask_RCNN | tensorflow | 2,342 | How to find the perimeter in Mask R-CNN or OpenCV? | Does anyone know a command that can extract the perimeter of an object in Mask R-CNN or OpenCV? | open | 2020-08-26T16:17:34Z | 2020-08-26T16:17:34Z | https://github.com/matterport/Mask_RCNN/issues/2342 | [] | Enolahlm | 0 |
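In OpenCV the usual route is `cv2.findContours` on the binary mask followed by `cv2.arcLength(contour, True)` for each contour. As a library-free illustration of a pixel-level perimeter, here is a NumPy-only sketch that counts boundary edges of a binary mask (note this axis-aligned edge count differs from `arcLength`'s polygonal length):

```python
import numpy as np


def mask_perimeter(mask):
    """Count the unit edges between foreground (True/1) and background pixels."""
    m = np.pad(np.asarray(mask).astype(bool), 1, constant_values=False)
    horiz = np.count_nonzero(m[:, 1:] != m[:, :-1])  # left/right boundary edges
    vert = np.count_nonzero(m[1:, :] != m[:-1, :])   # top/bottom boundary edges
    return horiz + vert
```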
junyanz/pytorch-CycleGAN-and-pix2pix | computer-vision | 1,557 | save output as '.npy' file | hello,
I'm using the CycleGAN model for street view image (.jpg) to mel-spectrogram (.npy, range 0 to 1) conversion. So, I created the trainA folder to store the jpg files and the trainB folder to store the npy files. I would like to know:
① If I want the output npy to have the same value range as the input npy ([0, 1]), what should I do?
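Assuming the generator ends in the default tanh activation (so raw outputs lie in [-1, 1] — an assumption, since the issue doesn't state the architecture), the conversion between the network's range and the npy's [0, 1] range is a linear rescale in both directions:

```python
import numpy as np


def tanh_to_unit(x):
    """Map generator output from [-1, 1] back to [0, 1]."""
    return (np.asarray(x) + 1.0) / 2.0


def unit_to_tanh(x):
    """Map [0, 1] data into [-1, 1] before feeding it to the network."""
    return np.asarray(x) * 2.0 - 1.0
```

On the input side, .npy data already in [0, 1] would be mapped with `unit_to_tanh` before training so the two rescales are consistent.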
| closed | 2023-03-27T03:54:20Z | 2024-03-29T03:34:09Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1557 | [] | Ivvvvvvvvvvy | 1 |
flairNLP/flair | nlp | 3,096 | [Question]: Error when loading Camembert Embeddings | ### Question
Hello,
I could not load the Camembert Embeddings.
```
----> 1 embedding = CamembertEmbeddings()

File ~/venv-3.8/lib/python3.8/site-packages/deprecated/classic.py:285, in deprecated.<locals>.wrapper_function(wrapped_, instance_, args_, kwargs_)
    283 else:
    284     warnings.warn(msg, category=category, stacklevel=_routine_stacklevel)
--> 285 return wrapped_(*args_, **kwargs_)

File ~/venv-3.8/lib/python3.8/site-packages/flair/embeddings/legacy.py:690, in CamembertEmbeddings.__init__(self, pretrained_model_name_or_path, layers, pooling_operation, use_scalar_mix)
    687 self.use_scalar_mix = use_scalar_mix
    688 self.static_embeddings = True
--> 690 dummy_sentence: Sentence = Sentence()
    691 dummy_sentence.add_token(Token("hello"))
    692 embedded_dummy = self.embed(dummy_sentence)

TypeError: __init__() missing 1 required positional argument: 'text'
``` | closed | 2023-02-09T13:55:23Z | 2023-07-12T21:56:19Z | https://github.com/flairNLP/flair/issues/3096 | [
"question"
] | ellzx | 1 |
miguelgrinberg/Flask-SocketIO | flask | 1,494 | SocketIO fails to start in a thread (ValueError: signal only works in main thread) | **Describe the bug**
SocketIO fails to start in a thread with error `ValueError: signal only works in main thread`
**To Reproduce**
```
import threading

import flask
import flask_socketio


class Web:
    def websocket(self):
        ws = flask.Flask('web')
        ws.config['SECRET_KEY'] = 'aaa'
        socketio = flask_socketio.SocketIO(ws)
        self.log.info("starting websocket server")
        socketio.run(ws, port=6543, debug=True)

    @flask_socketio.SocketIO.event
    def on_message(self, data):
        print(f"event: {data}")


if __name__ == "__main__":
    web = Web()
    # start the websocket
    threading.Thread(target=web.websocket).run()
```
**Expected behavior**
The websocket server starts on port `6543` and handles incoming messages via `on_message`
**Logs**
```
Exception in thread Thread-3:
Traceback (most recent call last):
File "C:\Python38\lib\threading.py", line 932, in _bootstrap_inner
self.run()
File "C:\Python38\lib\threading.py", line 870, in run
self._target(*self._args, **self._kwargs)
File "D:/Nextcloud/dev-perso/note/note_web.py", line 34, in websocket
socketio.run(ws, port=6543, debug=True)
File "C:\Python38\lib\site-packages\flask_socketio\__init__.py", line 619, in run
run_with_reloader(run_server, extra_files=extra_files)
File "C:\Python38\lib\site-packages\werkzeug\serving.py", line 1060, in run_with_reloader
return run_with_reloader(*args, **kwargs)
File "C:\Python38\lib\site-packages\werkzeug\_reloader.py", line 330, in run_with_reloader
signal.signal(signal.SIGTERM, lambda *args: sys.exit(0))
File "C:\Python38\lib\signal.py", line 47, in signal
handler = _signal.signal(_enum_to_int(signalnum), _enum_to_int(handler))
ValueError: signal only works in main thread
```
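The traceback shows Werkzeug's reloader (pulled in by `debug=True`) calling `signal.signal`, which Python only allows from the main thread. A likely workaround is passing `use_reloader=False` to `socketio.run()` (or dropping `debug=True`) when running in a thread. The underlying restriction itself can be demonstrated with the stdlib alone:

```python
import signal
import sys
import threading

errors = []


def install_handler():
    # signal.signal() is restricted to the main thread; calling it from a
    # worker thread raises ValueError, just like the reloader does here.
    signal.signal(signal.SIGTERM, lambda *args: sys.exit(0))


def worker():
    try:
        install_handler()
    except ValueError as exc:
        errors.append(exc)


t = threading.Thread(target=worker)
t.start()
t.join()
print(errors)  # one ValueError explaining the main-thread restriction
```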
| closed | 2021-03-04T14:30:51Z | 2021-03-04T20:41:13Z | https://github.com/miguelgrinberg/Flask-SocketIO/issues/1494 | [
"question"
] | wsw70 | 2 |
tqdm/tqdm | pandas | 1,115 | Progress bar re-appears after a short time despite calling `.clear()` before calling `input()` | I have an application which runs through a few thousand iterations, some of which require user interaction. The problem is that after a few iterations, when an input prompt appears, the progress bar renders over the prompt even though I call `.clear()` beforehand (as suggested on [SO](https://stackoverflow.com/questions/56791800/getting-user-input-within-tqdm-loops)).
Here's a short code example:
```python
import time

import tqdm

with tqdm.tqdm(total=5000) as pbar:
    for i in range(5000):
        pbar.update()
        if i < 2000:
            time.sleep(0.01)
        else:
            pbar.write('some text')
            pbar.clear()
            value = input('input something > ')
            pbar.refresh()
            pbar.write('user wrote {}'.format(value))
```
when the code in the `else` path is executed, I get the following:
```sh
user@machine: python test.py
some text
input something >
```
then, just by waiting a few seconds and *without doing anything*, tqdm will render the progress bar over the prompt:
```sh
user@machine: python test.py
some text
40%|████████████████████████████▊ | 2001/5000 [00:40<00:30, 98.70it/s]
```
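One plausible culprit for the delayed redraw is tqdm's background monitor thread, which wakes every `tqdm.tqdm.monitor_interval` seconds (default 10, matching the "few seconds" delay above) and may refresh bars it considers stale. Disabling it is a possible workaround to try, not a confirmed fix:

```python
import tqdm

# tqdm runs a background "monitor" thread that wakes every
# `monitor_interval` seconds (default: 10) and may redraw stale bars.
# Setting the interval to 0 before creating any bars disables it.
tqdm.tqdm.monitor_interval = 0

with tqdm.tqdm(total=10) as pbar:
    for _ in range(10):
        pbar.update()
```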
I am using `tqdm==4.56.0` on Debian 10, installed through `pip3`, with `zsh` as my shell.
Also, please note that I could not reproduce this bug with just a small number of iterations (e.g. if I change the line `if i < 2000:` to something like `if i < 2:`). | open | 2021-01-15T17:18:15Z | 2021-01-15T17:18:15Z | https://github.com/tqdm/tqdm/issues/1115 | [] | charlydelta | 0 |
ionelmc/pytest-benchmark | pytest | 40 | Make test suite fail if benchmark is unsatisfactory | Hello,
I read the documentation and hope I did not miss something completely obvious.
My use case: I would like to use `pytest-benchmark` for continuous integration. A part of my test suite is actually performance tests, i.e. making sure that my `toto` function does not take longer than, let's say, 20ms.
I would like the test suite to _fail_ if some modifications in the code make `toto()` exceed the 20ms. I am aware of the `--benchmark-compare-fail=EXPR` option, but I think what I am looking for is more specific.
I have no idea what would be the best way to describe this, perhaps:
``` python
@pytest.mark.benchmark(fail_at=0.02)
def test_toto(benchmark):
    benchmark(toto)
```
Or maybe:
``` python
def test_toto(benchmark):
    results = benchmark(toto, fail_at=0.02)
```
Or provide a way for the user to access the results of the benchmark? Before using `pytest-benchmark`, I would do something like this:
``` python
def test_toto(benchmark):
    results = benchmark(toto)
    if results.average_time > 0.02:
        pytest.fail('Exceeding 20ms!')
```
Would this make sense in `pytest-benchmark`?
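Until something like `fail_at` exists, the same guard can be written with the stdlib alone (this sketch deliberately avoids pytest-benchmark's stats API, so it works in any plain pytest test):

```python
import time


def assert_faster_than(func, limit_s, repeats=5):
    """Raise AssertionError if the best of `repeats` timed runs exceeds limit_s."""
    timings = []
    for _ in range(repeats):
        start = time.perf_counter()
        func()
        timings.append(time.perf_counter() - start)
    best = min(timings)
    assert best <= limit_s, f"best run took {best:.4f}s > {limit_s}s"
    return best


# In a test:
# def test_toto():
#     assert_faster_than(toto, 0.02)
```

For regression checks against a saved baseline, pytest-benchmark's own `--benchmark-compare-fail` remains the built-in route.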
| closed | 2015-12-10T13:45:22Z | 2025-01-29T07:52:37Z | https://github.com/ionelmc/pytest-benchmark/issues/40 | [] | alexprengere | 5 |
hankcs/HanLP | nlp | 1,134 | New-word extraction splits English words | <!--
The "Notes" and "Version" sections are required; issues without them will not get a reply. For a faster response, please fill in the template carefully. Thank you for your cooperation.
-->
## Notes
Please confirm the following:
* I have carefully read the documents below and did not find an answer:
- [Homepage documentation](https://github.com/hankcs/HanLP)
- [wiki](https://github.com/hankcs/HanLP/wiki)
- [FAQ](https://github.com/hankcs/HanLP/wiki/FAQ)
* I have searched for my question via [Google](https://www.google.com/#newwindow=1&q=HanLP) and the [issue search](https://github.com/hankcs/HanLP/issues) and did not find an answer either.
* I understand that the open-source community is a voluntary community built on shared interest and assumes no responsibility or obligation. I will be polite and thank everyone who helps me.
* [x] I have entered an x between the brackets to confirm all of the above.
## Version
<!-- For release versions, give the jar file name without the extension; for the GitHub repository version, state whether it is the master or portable branch -->
The current latest version is: 1.7.2
The version I am using is: 1.7.2
<!-- The items above are required; the rest is free-form -->
## My question
In new-word extraction, English words get split apart.
## Reproducing the problem
### Steps
### Triggering code
```
BufferedReader bufferedReader = new BufferedReader(new FileReader(fileName));
List<WordInfo> keywordList = HanLP.extractWords(bufferedReader, 100, true);
for (WordInfo info : keywordList) {
    System.out.println(info.text + "_" + info.frequency + "_" + info.entropy + "_" + info.aggregation);
    insertData(info.text, info.frequency);
}
```
### Expected output
```
make
```
### Actual output
```
ma
mak
make
```
## Other information
<!-- Any potentially useful information: screenshots, logs, config files, related issues, etc. -->

| closed | 2019-03-26T06:54:48Z | 2020-01-01T10:55:12Z | https://github.com/hankcs/HanLP/issues/1134 | [
"ignored"
] | liangzhimingcp3 | 3 |
davidsandberg/facenet | tensorflow | 725 | Preparing a custom trained model for inference by adding placeholder input | I've had an great experience using this library in order to train a custom model. I'm now wanting to optimize the model for inference. I've frozen the weights and am trying to make the model more portable. The issue is that because the model uses queues and such to load data, there isn't a simple way to feed images to the frozen model. Does anyone have a simple way to replace the FIFO queue, batch join, etc. with a simple image data placeholder? | open | 2018-04-26T12:59:55Z | 2018-05-30T01:03:09Z | https://github.com/davidsandberg/facenet/issues/725 | [] | tslater | 3 |
3b1b/manim | python | 1,622 | Have you run into the same problem? | ### Describe the error
<!-- A clear and concise description of what you want to make. -->
I have run into a problem and hope you can give me some advice on solving it. I have been trying hard but cannot work it out.
### Code and Error
**Code**:
<!-- The code you run -->
```python
vm = VMobject()
vm.start_new_path(UR)
vm.add_line_to(RIGHT)
vm.add_quadratic_bezier_curve_to(DOWN, LEFT)
self.add(vm)
```
**Error**:
<!-- The error traceback you get when run your code -->
I create a VMobject composed of two curves: one of degree 1 and the other of degree 2. When I run this code, I can see almost nothing. The problem seems to be related to the curves' degrees: if the curves have the same degree, everything works fine, but if the degrees differ, things go wrong. This also affects the pointwise_become_partial method, since some curves' degrees get changed to 0.
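One standard remedy for mixing degrees in a single path is Bezier degree elevation: a degree-n curve can be rewritten exactly as a degree-(n+1) curve with control points Q_0 = P_0, Q_{n+1} = P_n, and Q_i = (i/(n+1)) P_{i-1} + (1 - i/(n+1)) P_i. A NumPy sketch (not manim's API) that also verifies the curve is unchanged via de Casteljau evaluation:

```python
import numpy as np


def elevate_degree(points):
    """Degree-elevate a Bezier curve given its (n+1, dim) control points.

    Returns (n+2, dim) control points describing the *same* curve.
    """
    points = np.asarray(points, dtype=float)
    n = len(points) - 1
    out = np.empty((n + 2, points.shape[1]))
    out[0], out[-1] = points[0], points[-1]
    for i in range(1, n + 1):
        t = i / (n + 1)
        out[i] = t * points[i - 1] + (1 - t) * points[i]
    return out


def bezier_point(points, t):
    """Evaluate a Bezier curve by repeated linear interpolation (de Casteljau)."""
    pts = np.asarray(points, dtype=float)
    while len(pts) > 1:
        pts = (1 - t) * pts[:-1] + t * pts[1:]
    return pts[0]
```

Elevating the degree-1 segment to degree 2 before adding it to the path would give every sub-curve the same degree, which is the situation the report says works.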
### Environment
**OS System**: macOS Mojave 10.14.6
**manim version**: master <!-- make sure you are using the latest version of master branch -->
**python version**:3.9.0
<img width="638" alt="Screen Shot 2021-09-08 at 4 45 16 PM" src="https://user-images.githubusercontent.com/10624431/132477285-712d48d4-9fad-4acc-84af-2279025ed2d5.png">
| open | 2021-09-08T08:50:26Z | 2021-09-08T08:50:26Z | https://github.com/3b1b/manim/issues/1622 | [] | tjhdzxf | 0 |
deepinsight/insightface | pytorch | 1,980 | SCRFD face detection model ONNX-to-TensorRT (C++) conversion error: Assertion failed: !isDynamic(tensorPtr->getDimensions()) && "InstanceNormalization does not support dynamic inputs!" | Hi, I am converting the SCRFD ONNX model to TensorRT and hit this error; it looks like the operator does not support dynamic inputs.
My conversion command is ./trtexec --onnx=scrfd_10g.onnx --saveEngine=scrfd_10g_fp32.trt --workspace=4096 --minShapes=input:1x3x640x640 --optShapes=input:8x3x640x640 --maxShapes=input:32x3x640x640 --shapes=input:8x3x640x640. What could be causing this? | open | 2022-04-20T08:28:21Z | 2022-04-20T11:10:21Z | https://github.com/deepinsight/insightface/issues/1980 | [] | Lucky-BenXie | 1 |
RobertCraigie/prisma-client-py | pydantic | 4 | Add a generator option to signify that recursive types are supported | We should avoid the whole dance with generating our own pseudo-recursive types if the end user is using a type checker that supports recursive types. | closed | 2021-01-13T19:52:38Z | 2021-07-18T12:47:43Z | https://github.com/RobertCraigie/prisma-client-py/issues/4 | [
"topic: types",
"kind/feature"
] | RobertCraigie | 0 |
robotframework/robotframework | automation | 4,750 | jquery-tmpl JS library used in log.html is not maintained anymore | We hit an issue where, if a keyword prints about 10k lines, the Chrome/Edge browser (mainly on Windows) fails to load the content and simply crashes. We realized this is due to the JS library that loads the content. We also tried the --splitlog option, but the behaviour is the same with it too. The jquery-tmpl repo itself also suggests trying jsRender.
I'm creating this ticket to track the migration of the JS library.
I tried this with Robot Framework 4.2.1. Please let me know if this is already addressed in a newer version. | closed | 2023-04-25T03:58:23Z | 2023-04-25T09:04:56Z | https://github.com/robotframework/robotframework/issues/4750 | [
"duplicate"
] | tsrinivas7 | 1 |
horovod/horovod | tensorflow | 3,924 | fatal error: nccl.h: No such file or directory | **Environment:**
1. Framework: (TensorFlow, Keras, PyTorch, MXNet)
2. Framework version:PyTorch
3. Horovod version:0.28.0
4. MPI version:4.0.7
5. CUDA version:11.4
6. NCCL version:2.11.4
7. Python version:3.7
8. Spark / PySpark version:
9. Ray version:
10. OS and version:
11. GCC version:7.5
12. CMake version:3.26.3
**Checklist:**
1. Did you search issues to find if somebody asked this question before?
2. If your question is about hang, did you read [this doc](https://github.com/horovod/horovod/blob/master/docs/running.rst)?
3. If your question is about docker, did you read [this doc](https://github.com/horovod/horovod/blob/master/docs/docker.rst)?
4. Did you check if you question is answered in the [troubleshooting guide](https://github.com/horovod/horovod/blob/master/docs/troubleshooting.rst)?
**Bug report:**
Please describe erroneous behavior you're observing and steps to reproduce it.
I installed NCCL with the following commands (Network Installer for Ubuntu 20.04):
```
$ wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/cuda-ubuntu2004.pin
$ sudo mv cuda-ubuntu2004.pin /etc/apt/preferences.d/cuda-repository-pin-600
$ sudo apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/7fa2af80.pub
$ sudo add-apt-repository "deb https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/ /"
$ sudo apt-get updat
```
and installed Horovod with:
```
HOROVOD_WITH_PYTORCH=1 HOROVOD_NCCL_INCLUDE=/usr/include HOROVOD_NCCL_LIB=/usr/lib/x86_64-linux-gnu HOROVOD_CUDA_HOME=/usr/local/cuda-11.4 HOROVOD_CUDA_INCLUDE=/usr/local/cuda-11.4/include HOROVOD_GPU_OPERATIONS=NCCL HOROVOD_WITHOUT_MPI=1 HOROVOD_WITHOUT_GLOO=1 pip install --no-cache-dir horovod
```
The errors are as follows:
running install
/home/xcc/anaconda3/envs/lanegcn/lib/python3.7/site-packages/setuptools/command/install.py:37: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
setuptools.SetuptoolsDeprecationWarning,
running build
running build_py
creating build
creating build/lib.linux-x86_64-cpython-37
creating build/lib.linux-x86_64-cpython-37/horovod
copying horovod/__init__.py -> build/lib.linux-x86_64-cpython-37/horovod
creating build/lib.linux-x86_64-cpython-37/horovod/mxnet
copying horovod/mxnet/mpi_ops.py -> build/lib.linux-x86_64-cpython-37/horovod/mxnet
copying horovod/mxnet/functions.py -> build/lib.linux-x86_64-cpython-37/horovod/mxnet
copying horovod/mxnet/__init__.py -> build/lib.linux-x86_64-cpython-37/horovod/mxnet
copying horovod/mxnet/compression.py -> build/lib.linux-x86_64-cpython-37/horovod/mxnet
creating build/lib.linux-x86_64-cpython-37/horovod/data
copying horovod/data/__init__.py -> build/lib.linux-x86_64-cpython-37/horovod/data
copying horovod/data/data_loader_base.py -> build/lib.linux-x86_64-cpython-37/horovod/data
creating build/lib.linux-x86_64-cpython-37/horovod/keras
copying horovod/keras/elastic.py -> build/lib.linux-x86_64-cpython-37/horovod/keras
copying horovod/keras/callbacks.py -> build/lib.linux-x86_64-cpython-37/horovod/keras
copying horovod/keras/__init__.py -> build/lib.linux-x86_64-cpython-37/horovod/keras
creating build/lib.linux-x86_64-cpython-37/horovod/common
copying horovod/common/elastic.py -> build/lib.linux-x86_64-cpython-37/horovod/common
copying horovod/common/exceptions.py -> build/lib.linux-x86_64-cpython-37/horovod/common
copying horovod/common/util.py -> build/lib.linux-x86_64-cpython-37/horovod/common
copying horovod/common/basics.py -> build/lib.linux-x86_64-cpython-37/horovod/common
copying horovod/common/__init__.py -> build/lib.linux-x86_64-cpython-37/horovod/common
copying horovod/common/process_sets.py -> build/lib.linux-x86_64-cpython-37/horovod/common
creating build/lib.linux-x86_64-cpython-37/horovod/tensorflow
copying horovod/tensorflow/mpi_ops.py -> build/lib.linux-x86_64-cpython-37/horovod/tensorflow
copying horovod/tensorflow/elastic.py -> build/lib.linux-x86_64-cpython-37/horovod/tensorflow
copying horovod/tensorflow/gradient_aggregation.py -> build/lib.linux-x86_64-cpython-37/horovod/tensorflow
copying horovod/tensorflow/util.py -> build/lib.linux-x86_64-cpython-37/horovod/tensorflow
copying horovod/tensorflow/functions.py -> build/lib.linux-x86_64-cpython-37/horovod/tensorflow
copying horovod/tensorflow/__init__.py -> build/lib.linux-x86_64-cpython-37/horovod/tensorflow
copying horovod/tensorflow/sync_batch_norm.py -> build/lib.linux-x86_64-cpython-37/horovod/tensorflow
copying horovod/tensorflow/compression.py -> build/lib.linux-x86_64-cpython-37/horovod/tensorflow
copying horovod/tensorflow/gradient_aggregation_eager.py -> build/lib.linux-x86_64-cpython-37/horovod/tensorflow
creating build/lib.linux-x86_64-cpython-37/horovod/spark
copying horovod/spark/mpi_run.py -> build/lib.linux-x86_64-cpython-37/horovod/spark
copying horovod/spark/gloo_run.py -> build/lib.linux-x86_64-cpython-37/horovod/spark
copying horovod/spark/runner.py -> build/lib.linux-x86_64-cpython-37/horovod/spark
copying horovod/spark/__init__.py -> build/lib.linux-x86_64-cpython-37/horovod/spark
copying horovod/spark/conf.py -> build/lib.linux-x86_64-cpython-37/horovod/spark
creating build/lib.linux-x86_64-cpython-37/horovod/runner
copying horovod/runner/mpi_run.py -> build/lib.linux-x86_64-cpython-37/horovod/runner
copying horovod/runner/run_task.py -> build/lib.linux-x86_64-cpython-37/horovod/runner
copying horovod/runner/js_run.py -> build/lib.linux-x86_64-cpython-37/horovod/runner
copying horovod/runner/gloo_run.py -> build/lib.linux-x86_64-cpython-37/horovod/runner
copying horovod/runner/__init__.py -> build/lib.linux-x86_64-cpython-37/horovod/runner
copying horovod/runner/launch.py -> build/lib.linux-x86_64-cpython-37/horovod/runner
copying horovod/runner/task_fn.py -> build/lib.linux-x86_64-cpython-37/horovod/runner
creating build/lib.linux-x86_64-cpython-37/horovod/ray
copying horovod/ray/strategy.py -> build/lib.linux-x86_64-cpython-37/horovod/ray
copying horovod/ray/elastic.py -> build/lib.linux-x86_64-cpython-37/horovod/ray
copying horovod/ray/elastic_v2.py -> build/lib.linux-x86_64-cpython-37/horovod/ray
copying horovod/ray/runner.py -> build/lib.linux-x86_64-cpython-37/horovod/ray
copying horovod/ray/utils.py -> build/lib.linux-x86_64-cpython-37/horovod/ray
copying horovod/ray/worker.py -> build/lib.linux-x86_64-cpython-37/horovod/ray
copying horovod/ray/__init__.py -> build/lib.linux-x86_64-cpython-37/horovod/ray
copying horovod/ray/adapter.py -> build/lib.linux-x86_64-cpython-37/horovod/ray
copying horovod/ray/ray_logger.py -> build/lib.linux-x86_64-cpython-37/horovod/ray
copying horovod/ray/driver_service.py -> build/lib.linux-x86_64-cpython-37/horovod/ray
creating build/lib.linux-x86_64-cpython-37/horovod/torch
copying horovod/torch/mpi_ops.py -> build/lib.linux-x86_64-cpython-37/horovod/torch
copying horovod/torch/functions.py -> build/lib.linux-x86_64-cpython-37/horovod/torch
copying horovod/torch/__init__.py -> build/lib.linux-x86_64-cpython-37/horovod/torch
copying horovod/torch/sync_batch_norm.py -> build/lib.linux-x86_64-cpython-37/horovod/torch
copying horovod/torch/compression.py -> build/lib.linux-x86_64-cpython-37/horovod/torch
copying horovod/torch/optimizer.py -> build/lib.linux-x86_64-cpython-37/horovod/torch
creating build/lib.linux-x86_64-cpython-37/horovod/_keras
copying horovod/_keras/elastic.py -> build/lib.linux-x86_64-cpython-37/horovod/_keras
copying horovod/_keras/callbacks.py -> build/lib.linux-x86_64-cpython-37/horovod/_keras
copying horovod/_keras/__init__.py -> build/lib.linux-x86_64-cpython-37/horovod/_keras
creating build/lib.linux-x86_64-cpython-37/horovod/tensorflow/data
copying horovod/tensorflow/data/__init__.py -> build/lib.linux-x86_64-cpython-37/horovod/tensorflow/data
copying horovod/tensorflow/data/compute_worker.py -> build/lib.linux-x86_64-cpython-37/horovod/tensorflow/data
copying horovod/tensorflow/data/compute_service.py -> build/lib.linux-x86_64-cpython-37/horovod/tensorflow/data
creating build/lib.linux-x86_64-cpython-37/horovod/tensorflow/keras
copying horovod/tensorflow/keras/elastic.py -> build/lib.linux-x86_64-cpython-37/horovod/tensorflow/keras
copying horovod/tensorflow/keras/callbacks.py -> build/lib.linux-x86_64-cpython-37/horovod/tensorflow/keras
copying horovod/tensorflow/keras/__init__.py -> build/lib.linux-x86_64-cpython-37/horovod/tensorflow/keras
creating build/lib.linux-x86_64-cpython-37/horovod/spark/task
copying horovod/spark/task/mpirun_exec_fn.py -> build/lib.linux-x86_64-cpython-37/horovod/spark/task
copying horovod/spark/task/task_info.py -> build/lib.linux-x86_64-cpython-37/horovod/spark/task
copying horovod/spark/task/gloo_exec_fn.py -> build/lib.linux-x86_64-cpython-37/horovod/spark/task
copying horovod/spark/task/__init__.py -> build/lib.linux-x86_64-cpython-37/horovod/spark/task
copying horovod/spark/task/task_service.py -> build/lib.linux-x86_64-cpython-37/horovod/spark/task
creating build/lib.linux-x86_64-cpython-37/horovod/spark/data_loaders
copying horovod/spark/data_loaders/pytorch_data_loaders.py -> build/lib.linux-x86_64-cpython-37/horovod/spark/data_loaders
copying horovod/spark/data_loaders/__init__.py -> build/lib.linux-x86_64-cpython-37/horovod/spark/data_loaders
creating build/lib.linux-x86_64-cpython-37/horovod/spark/lightning
copying horovod/spark/lightning/util.py -> build/lib.linux-x86_64-cpython-37/horovod/spark/lightning
copying horovod/spark/lightning/__init__.py -> build/lib.linux-x86_64-cpython-37/horovod/spark/lightning
copying horovod/spark/lightning/datamodule.py -> build/lib.linux-x86_64-cpython-37/horovod/spark/lightning
copying horovod/spark/lightning/remote.py -> build/lib.linux-x86_64-cpython-37/horovod/spark/lightning
copying horovod/spark/lightning/legacy.py -> build/lib.linux-x86_64-cpython-37/horovod/spark/lightning
copying horovod/spark/lightning/estimator.py -> build/lib.linux-x86_64-cpython-37/horovod/spark/lightning
creating build/lib.linux-x86_64-cpython-37/horovod/spark/keras
copying horovod/spark/keras/tensorflow.py -> build/lib.linux-x86_64-cpython-37/horovod/spark/keras
copying horovod/spark/keras/util.py -> build/lib.linux-x86_64-cpython-37/horovod/spark/keras
copying horovod/spark/keras/__init__.py -> build/lib.linux-x86_64-cpython-37/horovod/spark/keras
copying horovod/spark/keras/datamodule.py -> build/lib.linux-x86_64-cpython-37/horovod/spark/keras
copying horovod/spark/keras/remote.py -> build/lib.linux-x86_64-cpython-37/horovod/spark/keras
copying horovod/spark/keras/bare.py -> build/lib.linux-x86_64-cpython-37/horovod/spark/keras
copying horovod/spark/keras/optimizer.py -> build/lib.linux-x86_64-cpython-37/horovod/spark/keras
copying horovod/spark/keras/estimator.py -> build/lib.linux-x86_64-cpython-37/horovod/spark/keras
creating build/lib.linux-x86_64-cpython-37/horovod/spark/common
copying horovod/spark/common/constants.py -> build/lib.linux-x86_64-cpython-37/horovod/spark/common
copying horovod/spark/common/util.py -> build/lib.linux-x86_64-cpython-37/horovod/spark/common
copying horovod/spark/common/__init__.py -> build/lib.linux-x86_64-cpython-37/horovod/spark/common
copying horovod/spark/common/_namedtuple_fix.py -> build/lib.linux-x86_64-cpython-37/horovod/spark/common
copying horovod/spark/common/backend.py -> build/lib.linux-x86_64-cpython-37/horovod/spark/common
copying horovod/spark/common/store.py -> build/lib.linux-x86_64-cpython-37/horovod/spark/common
copying horovod/spark/common/datamodule.py -> build/lib.linux-x86_64-cpython-37/horovod/spark/common
copying horovod/spark/common/cache.py -> build/lib.linux-x86_64-cpython-37/horovod/spark/common
copying horovod/spark/common/params.py -> build/lib.linux-x86_64-cpython-37/horovod/spark/common
copying horovod/spark/common/estimator.py -> build/lib.linux-x86_64-cpython-37/horovod/spark/common
copying horovod/spark/common/serialization.py -> build/lib.linux-x86_64-cpython-37/horovod/spark/common
creating build/lib.linux-x86_64-cpython-37/horovod/spark/tensorflow
copying horovod/spark/tensorflow/__init__.py -> build/lib.linux-x86_64-cpython-37/horovod/spark/tensorflow
copying horovod/spark/tensorflow/compute_worker.py -> build/lib.linux-x86_64-cpython-37/horovod/spark/tensorflow
creating build/lib.linux-x86_64-cpython-37/horovod/spark/driver
copying horovod/spark/driver/host_discovery.py -> build/lib.linux-x86_64-cpython-37/horovod/spark/driver
copying horovod/spark/driver/mpirun_rsh.py -> build/lib.linux-x86_64-cpython-37/horovod/spark/driver
copying horovod/spark/driver/__init__.py -> build/lib.linux-x86_64-cpython-37/horovod/spark/driver
copying horovod/spark/driver/rendezvous.py -> build/lib.linux-x86_64-cpython-37/horovod/spark/driver
copying horovod/spark/driver/driver_service.py -> build/lib.linux-x86_64-cpython-37/horovod/spark/driver
copying horovod/spark/driver/job_id.py -> build/lib.linux-x86_64-cpython-37/horovod/spark/driver
copying horovod/spark/driver/rsh.py -> build/lib.linux-x86_64-cpython-37/horovod/spark/driver
creating build/lib.linux-x86_64-cpython-37/horovod/spark/torch
copying horovod/spark/torch/util.py -> build/lib.linux-x86_64-cpython-37/horovod/spark/torch
copying horovod/spark/torch/__init__.py -> build/lib.linux-x86_64-cpython-37/horovod/spark/torch
copying horovod/spark/torch/datamodule.py -> build/lib.linux-x86_64-cpython-37/horovod/spark/torch
copying horovod/spark/torch/remote.py -> build/lib.linux-x86_64-cpython-37/horovod/spark/torch
copying horovod/spark/torch/estimator.py -> build/lib.linux-x86_64-cpython-37/horovod/spark/torch
creating build/lib.linux-x86_64-cpython-37/horovod/runner/task
copying horovod/runner/task/__init__.py -> build/lib.linux-x86_64-cpython-37/horovod/runner/task
copying horovod/runner/task/task_service.py -> build/lib.linux-x86_64-cpython-37/horovod/runner/task
creating build/lib.linux-x86_64-cpython-37/horovod/runner/util
copying horovod/runner/util/__init__.py -> build/lib.linux-x86_64-cpython-37/horovod/runner/util
copying horovod/runner/util/cache.py -> build/lib.linux-x86_64-cpython-37/horovod/runner/util
copying horovod/runner/util/remote.py -> build/lib.linux-x86_64-cpython-37/horovod/runner/util
copying horovod/runner/util/lsf.py -> build/lib.linux-x86_64-cpython-37/horovod/runner/util
copying horovod/runner/util/streams.py -> build/lib.linux-x86_64-cpython-37/horovod/runner/util
copying horovod/runner/util/network.py -> build/lib.linux-x86_64-cpython-37/horovod/runner/util
copying horovod/runner/util/threads.py -> build/lib.linux-x86_64-cpython-37/horovod/runner/util
creating build/lib.linux-x86_64-cpython-37/horovod/runner/http
copying horovod/runner/http/http_server.py -> build/lib.linux-x86_64-cpython-37/horovod/runner/http
copying horovod/runner/http/__init__.py -> build/lib.linux-x86_64-cpython-37/horovod/runner/http
copying horovod/runner/http/http_client.py -> build/lib.linux-x86_64-cpython-37/horovod/runner/http
creating build/lib.linux-x86_64-cpython-37/horovod/runner/common
copying horovod/runner/common/__init__.py -> build/lib.linux-x86_64-cpython-37/horovod/runner/common
creating build/lib.linux-x86_64-cpython-37/horovod/runner/driver
copying horovod/runner/driver/__init__.py -> build/lib.linux-x86_64-cpython-37/horovod/runner/driver
copying horovod/runner/driver/driver_service.py -> build/lib.linux-x86_64-cpython-37/horovod/runner/driver
creating build/lib.linux-x86_64-cpython-37/horovod/runner/elastic
copying horovod/runner/elastic/discovery.py -> build/lib.linux-x86_64-cpython-37/horovod/runner/elastic
copying horovod/runner/elastic/registration.py -> build/lib.linux-x86_64-cpython-37/horovod/runner/elastic
copying horovod/runner/elastic/settings.py -> build/lib.linux-x86_64-cpython-37/horovod/runner/elastic
copying horovod/runner/elastic/constants.py -> build/lib.linux-x86_64-cpython-37/horovod/runner/elastic
copying horovod/runner/elastic/worker.py -> build/lib.linux-x86_64-cpython-37/horovod/runner/elastic
copying horovod/runner/elastic/__init__.py -> build/lib.linux-x86_64-cpython-37/horovod/runner/elastic
copying horovod/runner/elastic/driver.py -> build/lib.linux-x86_64-cpython-37/horovod/runner/elastic
copying horovod/runner/elastic/rendezvous.py -> build/lib.linux-x86_64-cpython-37/horovod/runner/elastic
creating build/lib.linux-x86_64-cpython-37/horovod/runner/common/util
copying horovod/runner/common/util/settings.py -> build/lib.linux-x86_64-cpython-37/horovod/runner/common/util
copying horovod/runner/common/util/safe_shell_exec.py -> build/lib.linux-x86_64-cpython-37/horovod/runner/common/util
copying horovod/runner/common/util/tiny_shell_exec.py -> build/lib.linux-x86_64-cpython-37/horovod/runner/common/util
copying horovod/runner/common/util/__init__.py -> build/lib.linux-x86_64-cpython-37/horovod/runner/common/util
copying horovod/runner/common/util/env.py -> build/lib.linux-x86_64-cpython-37/horovod/runner/common/util
copying horovod/runner/common/util/timeout.py -> build/lib.linux-x86_64-cpython-37/horovod/runner/common/util
copying horovod/runner/common/util/host_hash.py -> build/lib.linux-x86_64-cpython-37/horovod/runner/common/util
copying horovod/runner/common/util/codec.py -> build/lib.linux-x86_64-cpython-37/horovod/runner/common/util
copying horovod/runner/common/util/config_parser.py -> build/lib.linux-x86_64-cpython-37/horovod/runner/common/util
copying horovod/runner/common/util/secret.py -> build/lib.linux-x86_64-cpython-37/horovod/runner/common/util
copying horovod/runner/common/util/hosts.py -> build/lib.linux-x86_64-cpython-37/horovod/runner/common/util
copying horovod/runner/common/util/network.py -> build/lib.linux-x86_64-cpython-37/horovod/runner/common/util
creating build/lib.linux-x86_64-cpython-37/horovod/runner/common/service
copying horovod/runner/common/service/__init__.py -> build/lib.linux-x86_64-cpython-37/horovod/runner/common/service
copying horovod/runner/common/service/task_service.py -> build/lib.linux-x86_64-cpython-37/horovod/runner/common/service
copying horovod/runner/common/service/compute_service.py -> build/lib.linux-x86_64-cpython-37/horovod/runner/common/service
copying horovod/runner/common/service/driver_service.py -> build/lib.linux-x86_64-cpython-37/horovod/runner/common/service
creating build/lib.linux-x86_64-cpython-37/horovod/torch/elastic
copying horovod/torch/elastic/sampler.py -> build/lib.linux-x86_64-cpython-37/horovod/torch/elastic
copying horovod/torch/elastic/state.py -> build/lib.linux-x86_64-cpython-37/horovod/torch/elastic
copying horovod/torch/elastic/__init__.py -> build/lib.linux-x86_64-cpython-37/horovod/torch/elastic
running build_ext
Running CMake in build/temp.linux-x86_64-cpython-37/RelWithDebInfo:
cmake /tmp/pip-install-j_are0ec/horovod_20a0d596553a4c759ad0de51f2163125 -DCMAKE_BUILD_TYPE=RelWithDebInfo -DCMAKE_LIBRARY_OUTPUT_DIRECTORY_RELWITHDEBINFO=/tmp/pip-install-j_are0ec/horovod_20a0d596553a4c759ad0de51f2163125/build/lib.linux-x86_64-cpython-37 -DPYTHON_EXECUTABLE:FILEPATH=/home/xcc/anaconda3/envs/lanegcn/bin/python
cmake --build . --config RelWithDebInfo -- -j8 VERBOSE=1
-- Could not find CCache. Consider installing CCache to speed up compilation.
-- The CXX compiler identification is GNU 7.3.0
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /home/xcc/anaconda3/envs/lanegcn/bin/x86_64-conda_cos6-linux-gnu-c++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Build architecture flags: -mf16c -mavx -mfma
-- Using command /home/xcc/anaconda3/envs/lanegcn/bin/python
-- The CUDA compiler identification is NVIDIA 11.4.152
-- Detecting CUDA compiler ABI info
-- Detecting CUDA compiler ABI info - done
-- Check for working CUDA compiler: /usr/local/cuda-11.4/bin/nvcc - skipped
-- Detecting CUDA compile features
-- Detecting CUDA compile features - done
-- Found CUDAToolkit: /usr/local/cuda-11.4/include (found version "11.4.152")
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success
-- Found Threads: TRUE
-- Linking against static NCCL library
-- Found NCCL: /usr/include
-- Determining NCCL version from the header file: /usr/include/nccl.h
-- NCCL_MAJOR_VERSION: 2
-- NCCL_VERSION_CODE: 21104
-- Found NCCL (include: /usr/include, library: /usr/lib/x86_64-linux-gnu/libnccl_static.a)
-- Found NVTX: /usr/local/cuda-11.4/include
-- Found NVTX (include: /usr/local/cuda-11.4/include, library: dl)
Traceback (most recent call last):
File "<string>", line 1, in <module>
ModuleNotFoundError: No module named 'tensorflow'
-- Could NOT find Tensorflow (missing: Tensorflow_LIBRARIES) (Required is at least version "1.15.0")
-- Found Pytorch: 1.5.1 (found suitable version "1.5.1", minimum required is "1.5.0")
Traceback (most recent call last):
File "<string>", line 1, in <module>
ModuleNotFoundError: No module named 'mxnet'
-- Could NOT find Mxnet (missing: Mxnet_LIBRARIES) (Required is at least version "1.4.1")
-- HVD_NVCC_COMPILE_FLAGS = -O3 -Xcompiler -fPIC -gencode arch=compute_35,code=sm_35 -gencode arch=compute_37,code=sm_37 -gencode arch=compute_50,code=sm_50 -gencode arch=compute_52,code=sm_52 -gencode arch=compute_53,code=sm_53 -gencode arch=compute_60,code=sm_60 -gencode arch=compute_61,code=sm_61 -gencode arch=compute_62,code=sm_62 -gencode arch=compute_70,code=sm_70 -gencode arch=compute_72,code=sm_72 -gencode arch=compute_75,code=sm_75 -gencode arch=compute_80,code=sm_80 -gencode arch=compute_86,code=sm_86 -gencode arch=compute_87,code=\"sm_87,compute_87\"
-- Configuring done (10.5s)
CMake Warning (dev) in horovod/common/ops/cuda/CMakeLists.txt:
Policy CMP0104 is not set: CMAKE_CUDA_ARCHITECTURES now detected for NVCC,
empty CUDA_ARCHITECTURES not allowed. Run "cmake --help-policy CMP0104"
for policy details. Use the cmake_policy command to set the policy and
suppress this warning.
CUDA_ARCHITECTURES is empty for target "compatible_horovod_cuda_kernels".
This warning is for project developers. Use -Wno-dev to suppress it.
CMake Warning (dev) in horovod/common/ops/cuda/CMakeLists.txt:
Policy CMP0104 is not set: CMAKE_CUDA_ARCHITECTURES now detected for NVCC,
empty CUDA_ARCHITECTURES not allowed. Run "cmake --help-policy CMP0104"
for policy details. Use the cmake_policy command to set the policy and
suppress this warning.
CUDA_ARCHITECTURES is empty for target "compatible_horovod_cuda_kernels".
This warning is for project developers. Use -Wno-dev to suppress it.
CMake Warning (dev) in horovod/common/ops/cuda/CMakeLists.txt:
Policy CMP0104 is not set: CMAKE_CUDA_ARCHITECTURES now detected for NVCC,
empty CUDA_ARCHITECTURES not allowed. Run "cmake --help-policy CMP0104"
for policy details. Use the cmake_policy command to set the policy and
suppress this warning.
CUDA_ARCHITECTURES is empty for target "horovod_cuda_kernels".
This warning is for project developers. Use -Wno-dev to suppress it.
CMake Warning (dev) in horovod/common/ops/cuda/CMakeLists.txt:
Policy CMP0104 is not set: CMAKE_CUDA_ARCHITECTURES now detected for NVCC,
empty CUDA_ARCHITECTURES not allowed. Run "cmake --help-policy CMP0104"
for policy details. Use the cmake_policy command to set the policy and
suppress this warning.
CUDA_ARCHITECTURES is empty for target "horovod_cuda_kernels".
This warning is for project developers. Use -Wno-dev to suppress it.
-- Generating done (0.0s)
.....
CMakeFiles/pytorch.dir/__/common/process_set.cc.o.d -o CMakeFiles/pytorch.dir/__/common/process_set.cc.o -c /tmp/pip-install-j_are0ec/horovod_20a0d596553a4c759ad0de51f2163125/horovod/common/process_set.cc
In file included from /tmp/pip-install-j_are0ec/horovod_20a0d596553a4c759ad0de51f2163125/horovod/common/operations.cc:63:0:
/tmp/pip-install-j_are0ec/horovod_20a0d596553a4c759ad0de51f2163125/horovod/common/ops/nccl_operations.h:22:10: fatal error: nccl.h: No such file or directory
#include <nccl.h>
^~~~~~~~
compilation terminated.
make[2]: *** [horovod/torch/CMakeFiles/pytorch.dir/build.make:174: horovod/torch/CMakeFiles/pytorch.dir/__/common/operations.cc.o] Error 1
make[2]: *** Waiting for unfinished jobs....
/tmp/pip-install-j_are0ec/horovod_20a0d596553a4c759ad0de51f2163125/horovod/common/process_set.cc: In constructor 'horovod::common::ProcessSetTable::ProcessSetTable()':
/tmp/pip-install-j_are0ec/horovod_20a0d596553a4c759ad0de51f2163125/horovod/common/process_set.cc:165:8: warning: unused variable 'process_set_id' [-Wunused-variable]
auto process_set_id = RegisterProcessSet();
^~~~~~~~~~~~~~
/tmp/pip-install-j_are0ec/horovod_20a0d596553a4c759ad0de51f2163125/horovod/common/process_set.cc: In member function 'void horovod::common::ProcessSetTable::Initialize_(const Context&)':
/tmp/pip-install-j_are0ec/horovod_20a0d596553a4c759ad0de51f2163125/horovod/common/process_set.cc:173:10: error: 'struct horovod::common::ProcessSet' has no member named 'Initialize'; did you mean 'Finalize'?
Get(0).Initialize(global_context);
^~~~~~~~~~
Finalize
/tmp/pip-install-j_are0ec/horovod_20a0d596553a4c759ad0de51f2163125/horovod/common/process_set.cc: In member function 'int32_t horovod::common::ProcessSetTable::InitializeRegisteredAndRemoveMarkedIfReady_(const Context&, const horovod::common::Status&)':
/tmp/pip-install-j_are0ec/horovod_20a0d596553a4c759ad0de51f2163125/horovod/common/process_set.cc:209:39: error: 'struct horovod::common::ProcessSet' has no member named 'Initialize'; did you mean 'Finalize'?
bool newly_registered = Get(id).Initialize(global_context);
^~~~~~~~~~
Finalize
/tmp/pip-install-j_are0ec/horovod_20a0d596553a4c759ad0de51f2163125/horovod/common/process_set.cc:212:13: error: 'TRACE' was not declared in this scope
LOG(TRACE, global_controller.GetRank())
^~~~~
/tmp/pip-install-j_are0ec/horovod_20a0d596553a4c759ad0de51f2163125/horovod/common/process_set.cc:212:13: note: suggested alternative: 'ERANGE'
LOG(TRACE, global_controller.GetRank())
^~~~~
ERANGE
/tmp/pip-install-j_are0ec/horovod_20a0d596553a4c759ad0de51f2163125/horovod/common/process_set.cc:212:9: error: there are no arguments to 'LOG' that depend on a template parameter, so a declaration of 'LOG' must be available [-fpermissive]
LOG(TRACE, global_controller.GetRank())
^~~
/tmp/pip-install-j_are0ec/horovod_20a0d596553a4c759ad0de51f2163125/horovod/common/process_set.cc:212:9: note: (if you use '-fpermissive', G++ will accept your code, but allowing the use of an undeclared name is deprecated)
/tmp/pip-install-j_are0ec/horovod_20a0d596553a4c759ad0de51f2163125/horovod/common/process_set.cc:226:11: error: 'TRACE' was not declared in this scope
LOG(TRACE, global_controller.GetRank())
^~~~~
/tmp/pip-install-j_are0ec/horovod_20a0d596553a4c759ad0de51f2163125/horovod/common/process_set.cc:226:11: note: suggested alternative: 'ERANGE'
LOG(TRACE, global_controller.GetRank())
^~~~~
ERANGE
/tmp/pip-install-j_are0ec/horovod_20a0d596553a4c759ad0de51f2163125/horovod/common/process_set.cc:226:7: error: there are no arguments to 'LOG' that depend on a template parameter, so a declaration of 'LOG' must be available [-fpermissive]
LOG(TRACE, global_controller.GetRank())
^~~
/tmp/pip-install-j_are0ec/horovod_20a0d596553a4c759ad0de51f2163125/horovod/common/process_set.cc: In member function 'void horovod::common::ProcessSetTable::Finalize_(const Context&, const horovod::common::Status&)':
/tmp/pip-install-j_are0ec/horovod_20a0d596553a4c759ad0de51f2163125/horovod/common/process_set.cc:247:9: error: 'TRACE' was not declared in this scope
LOG(TRACE, Get(0).controller->GetRank())
^~~~~
/tmp/pip-install-j_are0ec/horovod_20a0d596553a4c759ad0de51f2163125/horovod/common/process_set.cc:247:9: note: suggested alternative: 'ERANGE'
LOG(TRACE, Get(0).controller->GetRank())
^~~~~
ERANGE
/tmp/pip-install-j_are0ec/horovod_20a0d596553a4c759ad0de51f2163125/horovod/common/process_set.cc:247:5: error: there are no arguments to 'LOG' that depend on a template parameter, so a declaration of 'LOG' must be available [-fpermissive]
LOG(TRACE, Get(0).controller->GetRank())
^~~
/tmp/pip-install-j_are0ec/horovod_20a0d596553a4c759ad0de51f2163125/horovod/common/process_set.cc:265:7: error: 'TRACE' was not declared in this scope
LOG(TRACE, Get(0).controller->GetRank())
^~~~~
/tmp/pip-install-j_are0ec/horovod_20a0d596553a4c759ad0de51f2163125/horovod/common/process_set.cc:265:7: note: suggested alternative: 'ERANGE'
LOG(TRACE, Get(0).controller->GetRank())
^~~~~
ERANGE
/tmp/pip-install-j_are0ec/horovod_20a0d596553a4c759ad0de51f2163125/horovod/common/process_set.cc:265:3: error: there are no arguments to 'LOG' that depend on a template parameter, so a declaration of 'LOG' must be available [-fpermissive]
LOG(TRACE, Get(0).controller->GetRank())
^~~
/tmp/pip-install-j_are0ec/horovod_20a0d596553a4c759ad0de51f2163125/horovod/common/process_set.cc: At global scope:
/tmp/pip-install-j_are0ec/horovod_20a0d596553a4c759ad0de51f2163125/horovod/common/process_set.cc:26:13: warning: 'std::string horovod::common::{anonymous}::RanksString(const std::vector<int>&)' defined but not used [-Wunused-function]
std::string RanksString(const std::vector<int>& ranks) {
^~~~~~~~~~~
make[2]: *** [horovod/torch/CMakeFiles/pytorch.dir/build.make:202: horovod/torch/CMakeFiles/pytorch.dir/__/common/process_set.cc.o] Error 1
/tmp/pip-install-j_are0ec/horovod_20a0d596553a4c759ad0de51f2163125/horovod/common/controller.cc: In member function 'horovod::common::Response horovod::common::Controller::ConstructResponse(const string&, int)':
/tmp/pip-install-j_are0ec/horovod_20a0d596553a4c759ad0de51f2163125/horovod/common/controller.cc:811:27: warning: 'reduce_op' may be used uninitialized in this function [-Wmaybe-uninitialized]
response.set_reduce_op(reduce_op);
~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~
/tmp/pip-install-j_are0ec/horovod_20a0d596553a4c759ad0de51f2163125/horovod/common/controller.cc:831:34: warning: 'postscale_factor' may be used uninitialized in this function [-Wmaybe-uninitialized]
response.set_postscale_factor(postscale_factor);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~
/tmp/pip-install-j_are0ec/horovod_20a0d596553a4c759ad0de51f2163125/horovod/common/controller.cc:830:33: warning: 'prescale_factor' may be used uninitialized in this function [-Wmaybe-uninitialized]
response.set_prescale_factor(prescale_factor);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~
make[2]: Leaving directory '/tmp/pip-install-j_are0ec/horovod_20a0d596553a4c759ad0de51f2163125/build/temp.linux-x86_64-cpython-37/RelWithDebInfo'
make[1]: *** [CMakeFiles/Makefile2:154: horovod/torch/CMakeFiles/pytorch.dir/all] Error 2
make[1]: Leaving directory '/tmp/pip-install-j_are0ec/horovod_20a0d596553a4c759ad0de51f2163125/build/temp.linux-x86_64-cpython-37/RelWithDebInfo'
make: *** [Makefile:91: all] Error 2
note: This error originates from a subprocess, and is likely not a problem with pip.
error: legacy-install-failure
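Build failures like the missing `nccl.h` and the framework probes above are commonly steered with Horovod's documented build-time environment variables. A hedged sketch for this environment (the `/usr` NCCL prefix mirrors the CMake output above; the exact variable set is an assumption based on Horovod's install docs, not something verified against this machine):

```shell
# Sketch: build-time environment for reinstalling Horovod with NCCL support.
# The NCCL location below mirrors the CMake output above; verify it locally.
export HOROVOD_GPU_OPERATIONS=NCCL       # use NCCL for GPU collectives
export HOROVOD_NCCL_HOME=/usr            # prefix containing include/nccl.h and the NCCL lib
export HOROVOD_WITH_PYTORCH=1            # fail loudly if the PyTorch extension cannot build
export HOROVOD_WITHOUT_TENSORFLOW=1      # TensorFlow is not installed in this env
export HOROVOD_WITHOUT_MXNET=1           # MXNet is not installed in this env
```

With these exported, rerunning `pip install --no-cache-dir horovod` would rebuild the native extension against the declared NCCL instead of probing blindly.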
| open | 2023-05-13T02:44:33Z | 2023-05-15T16:12:43Z | https://github.com/horovod/horovod/issues/3924 | [
"bug"
] | CoconutSweet999 | 1 |
pallets/flask | python | 4,991 | Issues with flask Blueprint | ### Description of problem
I ran into an issue with the Flask Blueprint. When trying to register a blueprint using the code below (with the documentation as a reference):
```
## Flask imports
from flask import Flask, Blueprint
from blueprints import about_blueprint
## Declaration of Flask application
app = Flask(__name__)
## Registration of blueprint
@app.register_blueprint(about_blueprint)
```
The file blueprints/about_blueprint.py has the following content:
```
## Import Blueprint from Flask
from flask import Blueprint
## Declare blueprint
about_blueprint = Blueprint(__name__)
## Blueprint declared actions
@about_blueprint.route('/about?go=<go>')
def about(go):
if go is None:
return '<h4>About page</h4>'
else:
return f'<h4>About: {go}</h4>'
```
While attempting to run it, I receive the following error:
```
Traceback (most recent call last):
File "main.py", line 25, in <module>
@app.register_blueprint(about_blueprint)
File "/home/runner/Site/venv/lib/python3.10/site-packages/flask/scaffold.py", line 50, in wrapper_func
return f(self, *args, **kwargs)
File "/home/runner/Site/venv/lib/python3.10/site-packages/flask/app.py", line 1296, in register_blueprint
blueprint.register(self, options)
TypeError: register() takes 1 positional argument but 2 were given
```
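For comparison, the registration pattern described in the Flask documentation can be sketched as follows (a minimal sketch assuming Flask 2.x; the blueprint name "about" and the query-string handling via `request.args` are illustrative choices of mine, not code from this report):

```python
# Minimal sketch of the registration pattern from the Flask documentation.
# Assumptions: Flask 2.x; names and query handling are illustrative.
from flask import Blueprint, Flask, request

about_blueprint = Blueprint("about", __name__)  # needs a name AND an import name

@about_blueprint.route("/about")  # query strings never appear in the route rule
def about():
    go = request.args.get("go")
    if go is None:
        return "<h4>About page</h4>"
    return f"<h4>About: {go}</h4>"

app = Flask(__name__)
app.register_blueprint(about_blueprint)  # a plain method call, not a decorator
```

Note that `Blueprint` takes both a name and an import name, `register_blueprint` is called rather than used as a decorator, and query parameters such as `?go=docs` are read from `request.args` inside the view.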
### Environment:
- Python version: 3.10
- Flask version: latest
| closed | 2023-02-21T12:20:49Z | 2023-03-08T00:06:18Z | https://github.com/pallets/flask/issues/4991 | [] | mdziczkowski | 4 |
jumpserver/jumpserver | django | 14,675 | [Bug] No selectable accounts when choosing "Specified account" during ticket approval | ### Product Version
v3.10.16
### Product Edition
- [ ] Community Edition
- [X] Enterprise Edition
- [ ] Enterprise Trial Edition
### Installation Method
- [ ] Online Installation (One-click command installation)
- [X] Offline Package Installation
- [ ] All-in-One
- [ ] 1Panel
- [ ] Kubernetes
- [ ] Source Code
### Environment Information
Standalone deployment, latest Chrome
### 🐛 Bug Description
When an administrator approves a ticket and switches the account to "Specified account", the selectable list briefly flashes the account list and then no options remain. For the same asset, selectable accounts do appear when setting "Specified account" while creating an authorization rule.
### Recurrence Steps
1. Multiple organizations, e.g. A and B
2. A user requests asset authorization in organization A
3. The approver works in organization B; when they open the ticket from the in-site message to approve it, the problem described above occurs
### Expected Behavior
_No response_
### Additional Information
_No response_
### Attempted Solutions
_No response_ | closed | 2024-12-18T03:56:43Z | 2024-12-19T10:47:14Z | https://github.com/jumpserver/jumpserver/issues/14675 | [
"🐛 Bug",
"✅ Done",
"📦 z~release:v4.5.0",
"📦 z~release:v3.10.17"
] | gerry-f2c | 3 |
apachecn/ailearning | nlp | 430 | Chapter 7: Ensemble Methods - ApacheCN | http://ailearning.apachecn.org/ml/7.Ensemble/
ApacheCN, an open-source organization focused on maintaining excellent projects | closed | 2018-08-24T07:09:35Z | 2021-09-07T17:40:17Z | https://github.com/apachecn/ailearning/issues/430 | [
"Gitalk",
"15ffafa1fb605f895ec68fafb4757d58"
] | jiangzhonglian | 0 |
iterative/dvc | machine-learning | 10,258 | run-cache: cache stage runs with no dependencies? | ```yaml
stages:
stage1:
cmd: echo foo > foo
outs:
- foo
```
Let's say we have the above stage, with no dependencies and one output. When I run it and then rerun it, it says:
```console
$ dvc repro
Running stage 'stage1':
> echo foo > foo
Generating lock file 'dvc.lock'
Updating lock file 'dvc.lock'
To track the changes with git, run:
git add dvc.lock .gitignore
To enable auto staging, run:
dvc config core.autostage true
Use `dvc push` to send your updates to remote storage.
Stage 'stage1' didn't change, skipping
$ dvc repro
Stage 'stage1' didn't change, skipping
Data and pipelines are up to date.
```
But if the lock file is missing or the stage name has changed, it will force a rerun.
Ideally, run-cache is supposed to prevent this scenario, but it does not work for a stage without any dependencies. Should it cache those kinds of stages?
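To make the trade-off concrete, here is a toy sketch of a run-cache key derived from the command plus dependency hashes; it is my illustration of the idea, not DVC's actual implementation. With no dependencies, the key collapses to a function of the command alone, so a cached run would match on every invocation:

```python
# Toy sketch (not DVC's real code): a run-cache key from cmd + dep hashes.
import hashlib
import json

def run_cache_key(cmd, dep_hashes):
    payload = json.dumps({"cmd": cmd, "deps": dep_hashes}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

# A dependency-free stage produces the same key on every run:
assert run_cache_key("echo foo > foo", {}) == run_cache_key("echo foo > foo", {})
```

Under this model, caching a dependency-free stage would mean the cached result is always restorable, which is exactly the behavior the question is about.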
cc @efiop
Related: https://iterativeai.slack.com/archives/C044738NACC/p1706207735608469 | open | 2024-01-27T16:15:05Z | 2024-01-31T16:52:00Z | https://github.com/iterative/dvc/issues/10258 | [
"A: run-cache"
] | skshetry | 7 |
microsoft/unilm | nlp | 1,624 | Textdiffuser-2: runwayml/stable-diffusion-v1-5 model unavailable | Hello,
Since the runwayml/stable-diffusion-v1-5 model is not available, do you know which other model I could replace it with and still get good results?
Thanks! | open | 2024-09-17T08:26:21Z | 2024-09-18T01:46:44Z | https://github.com/microsoft/unilm/issues/1624 | [] | ibou810 | 1 |
ultralytics/yolov5 | deep-learning | 12,617 | Why are targets and labels inconsistent? | ### Search before asking
- [x] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
When I look at the targets in the code, why are they different from the labels?
I disabled mosaic; the batch size is 2.
Then I printed the targets.
The image paths are: path = ['./datasets/VOC/images/train2007/006128.jpg', './datasets/VOC/images/val2007/000109.jpg'].
The labels for these two images are:
6 0.592 0.7972136222910218 0.812 0.3993808049535604
6 0.28700000000000003 0.7074303405572756 0.322 0.21362229102167185
and
2 0.497 0.4822834645669291 0.93 0.8779527559055118.
but the corresponding targets are:
targets = tensor([[0.00000, 6.00000, 0.51329, 0.83264, 0.97342, 0.33472],
[0.00000, 6.00000, 0.19774, 0.76502, 0.39548, 0.19387],
[1.00000, 2.00000, 0.49563, 0.53170, 0.53806, 0.25794]])
I want to know why their coordinates, heights, and widths are not consistent.
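For what it's worth, one common source of such shifts (an assumption about this run, not a confirmed diagnosis; random affine augmentation can move the values further) is the letterbox resize-and-pad step, which re-expresses normalized xywh labels relative to the padded canvas. A toy sketch:

```python
# Toy sketch (not YOLOv5's actual code): re-normalize an xywh label after a
# letterbox resize-and-pad. w0, h0 are the original image size in pixels;
# all label values are normalized to [0, 1]; canvas is the square train size.

def letterbox_label(x, y, w, h, w0, h0, canvas=640):
    r = canvas / max(w0, h0)       # scale that fits the longer side
    new_w, new_h = w0 * r, h0 * r  # image size after resizing
    pad_x = (canvas - new_w) / 2   # symmetric letterbox padding
    pad_y = (canvas - new_h) / 2
    return ((x * new_w + pad_x) / canvas,
            (y * new_h + pad_y) / canvas,
            w * new_w / canvas,
            h * new_h / canvas)

# A wide 640x320 image: the x/y centers keep their relative position, but the
# normalized height halves because the padded canvas is twice as tall.
print(letterbox_label(0.25, 0.5, 0.2, 0.2, 640, 320))  # -> (0.25, 0.5, 0.2, 0.1)
```

Comparing the printed labels against a transform like this for the actual image sizes would show how much of the difference the resize explains.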
### Additional
_No response_ | closed | 2024-01-12T08:46:37Z | 2024-10-20T19:37:01Z | https://github.com/ultralytics/yolov5/issues/12617 | [
"question",
"Stale"
] | WX-yh | 3 |
2noise/ChatTTS | python | 606 | Why is the output still slow when streaming mode is selected? | 
| closed | 2024-07-20T05:48:47Z | 2024-07-24T08:54:28Z | https://github.com/2noise/ChatTTS/issues/606 | [
"invalid"
] | peak-coco | 2 |
pytest-dev/pytest-qt | pytest | 92 | assertSignal | I'd like to add a `qtbot.assertSignal` to pytest-qt
/cc @acogneau
# Motivation
Something I need to do regularly is to make sure a given code snippet emits a signal, or doesn't - but I don't want to wait for the signal.
The current way to do that works, but is unpythonic:
``` python
spy = QSignalSpy(foo.signal)
foo.do_stuff()
assert len(spy) == 1
assert spy[0] == ["signal_arg"]
```
``` python
spy = QSignalSpy(foo.error)
foo.do_stuff()
assert not spy
```
# API
After some thinking, I think a context manager with this signature would be best:
``` python
qtbot.assertSignal(signal, count=None) -> QSignalSpy
```
- `signal`: The signal it should listen to
- `count`: Either `None` (in which case it expects the signal to be emitted >= 1 times), or an int. By setting `count` to `0`, it can be ensured the signal is _not_ emitted.
- return value: A `QSignalSpy` which can be used to check the signal arguments.
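As a sketch of feasibility, the behaviour described above can be prototyped in a few lines; the `FakeSignal` stand-in below replaces real Qt signals purely for illustration, and none of these names are existing pytest-qt APIs:

```python
# Sketch of the proposed context manager, with a stdlib stand-in for Qt
# signals. Illustrative only, not the real pytest-qt or Qt API.
from contextlib import contextmanager

class FakeSignal:
    def __init__(self):
        self._slots = []
    def connect(self, slot):
        self._slots.append(slot)
    def emit(self, *args):
        for slot in self._slots:
            slot(*args)

@contextmanager
def assert_signal(signal, count=None):
    spy = []  # one args-tuple per emission, like a QSignalSpy
    signal.connect(lambda *args: spy.append(args))
    yield spy
    if count is None:
        assert len(spy) >= 1, "signal was never emitted"
    else:
        assert len(spy) == count, f"expected {count} emission(s), got {len(spy)}"
```

Checking the emission count on exit of the `with` block mirrors the semantics above: `count=None` means "at least once", while `count=0` asserts the signal was not emitted.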
# Examples
``` python
with qtbot.assertSignal(foo.signal, count=1) as spy:
foo.do_stuff()
assert spy[0] == ["signal_arg"]
```
``` python
with qtbot.assertSignal(foo.error, count=0):
foo.do_stuff()
```
# Alternatives
I came up with some other ideas while thinking about this, but I think all of them are worse:
- `qtbot.assertSignal(signal, emitted=True)`: Doesn't give you the possibility to check a signal was emitted exactly once, and isn't better in any way.
- `qtbot.assertSignal(signal, calls=[("signal arg 1", "signal arg 2")])`: Too complicated, and can be done easier by checking the `QSignalSpy`
- `qtbot.assertNotEmitted(signal)` - less functionality, and I think passing `count=0` is okay if you want that (that was the usecase I had in mind originally)
- Shoehorning this function into `waitSignal` - it's complex enough already, and `qtbot.waitSignal(signal, do_not_actually_wait=True)` just sounds wrong :wink:
- Adding a multi-signal variant (AND): Simply use `with qtbot.assertSignal(foo), qtbot.assertSignal(bar):` instead - that isn't an issue because it doesn't _wait_ for the signal.
- Adding a multi-signal variant (OR): I don't really see the usecase for it.
| closed | 2015-09-04T04:41:00Z | 2016-01-06T23:22:29Z | https://github.com/pytest-dev/pytest-qt/issues/92 | [
"enhancement :cool:"
] | The-Compiler | 6 |
mitmproxy/mitmproxy | python | 6,977 | CONTENT MISSING and PROTOCOL_ERROR | #### Problem Description
When fetching some larger files, errors such as **CONTENT MISSING** and **PROTOCOL_ERROR** will occur.
This is a log.
```
127.0.0.1:5760: GET https://game.maj-soul.com/1/v0.10.306.w/lang/scene/Assets/Resource/lobby/beijing_leyuanbaitian.jpg HTTP/2.0
sec-ch-ua: "Not/A)Brand";v="8", "Chromium";v="126", "Microsoft Edge";v="126"
accept: image/avif,image/webp,image/apng,image/svg+xml,image/*,*/*;q=0.8
dnt: 1
origin: https://game.maj-soul.com
sec-ch-ua-mobile: ?0
user-agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/126.0.0.0 Safari/537.36 Edg/126.0.0.0
sec-ch-ua-platform: "Windows"
sec-fetch-site: same-origin
sec-fetch-mode: cors
sec-fetch-dest: empty
referer: https://game.maj-soul.com/1/
accept-encoding: gzip, deflate, br, zstd
accept-language: zh-CN,zh;q=0.9,en;q=0.8,en-GB;q=0.7,en-US;q=0.6
priority: i
<< HTTP/2.0 200 OK (content missing)
content-type: image/jpeg
content-length: 774368
date: Fri, 07 Jun 2024 17:20:30 GMT
server: nginx
x-oss-request-id: 6663415E829A1834377E8370
accept-ranges: bytes
etag: "C8B584D27F26FC49D01C0EA0666E50DF"
last-modified: Wed, 29 Nov 2023 08:01:13 GMT
x-oss-object-type: Normal
x-oss-hash-crc64ecma: 16259342251819115523
x-oss-storage-class: Standard
content-md5: yLWE0n8m/EnQHA6gZm5Q3w==
x-oss-server-time: 118
x-via: 2.0 PS-000-046rA218 [HIT]
age: 1966216
x-ws-request-id: 668141e6_PS-000-046rA218_62771-7463
x-cache-status: HIT
<< stream reset by client (PROTOCOL_ERROR)
```
#### Steps to reproduce the behavior:
1. mitmdump.exe -p 12345 -v
2. Visit game.maj-soul.com and use SwitchOmega to proxy traffic to localhost:12345
#### System Information
```
Mitmproxy: 10.3.1
Python: 3.10.11
OpenSSL: OpenSSL 3.2.2 4 Jun 2024
Platform: Windows-10-10.0.22631-SP0
```
| closed | 2024-06-30T11:41:36Z | 2024-09-30T19:33:37Z | https://github.com/mitmproxy/mitmproxy/issues/6977 | [
"kind/triage"
] | void0red | 7 |
mirumee/ariadne | graphql | 1,001 | multipart.File has no MIME type information | Unfortunately, multipart's `File` class seems to have a serious regression compared to cgi's `FieldStorage` class: where `FieldStorage.type` contained the declared MIME type of the uploaded file (or `None` if not given), `File` does not seem to have this information. This makes it basically impossible to download uploaded files while keeping the file type intact. | open | 2022-12-21T15:46:30Z | 2023-07-21T10:31:59Z | https://github.com/mirumee/ariadne/issues/1001 | [
"help wanted"
] | srittau | 7 |
ResidentMario/geoplot | matplotlib | 291 | Exporting a KDEPlot as KML | I was wondering if you know of a way to export the KDEPlot as KML so I could view it via Google Earth...? | open | 2024-01-23T14:14:47Z | 2024-01-23T14:14:47Z | https://github.com/ResidentMario/geoplot/issues/291 | [] | eric-g-97477 | 0 |
zappa/Zappa | flask | 954 | (Discussion) Keep using kappa library | The kappa library used for the event sources seems to have been totally abandoned. The last release is from February 2017...
AWS and Boto3 have a lot of new functionality, and it seems that currently we just override the kappa main classes to add those new capabilities when they come up (e.g. the `ExtendedSnsEventSource`).
Wouldn't it be more maintainable to create a whole new class in the Zappa project for this purpose alone?
There is nothing really hard in this; it just needs some time to think about making it really generic, robust, and easily maintainable using only Boto3.
All of the main code for event sources is in `utilities.py`, so there are no huge changes across the project.
Any other ideas are welcome, and I can take some of my time to start this refactoring if you think it would be a good thing to do.
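To gauge the size of such a refactor, here is a rough sketch of what a kappa-free event source interface on top of a boto3-style session could look like; the class and method names are invented for illustration and are not existing Zappa or kappa APIs:

```python
# Rough sketch of a kappa-free event source interface driven by a
# boto3-style session. All names here are invented for illustration and
# are not existing Zappa or kappa APIs.
from abc import ABC, abstractmethod

class EventSource(ABC):
    def __init__(self, session, config):
        self.session = session      # a boto3.Session in real use
        self.arn = config["arn"]
        self.config = config

    @abstractmethod
    def add(self, function_arn):
        """Subscribe the Lambda function to this event source."""

    @abstractmethod
    def remove(self, function_arn):
        """Tear the subscription down."""

class SnsEventSource(EventSource):
    def add(self, function_arn):
        sns = self.session.client("sns")
        return sns.subscribe(TopicArn=self.arn, Protocol="lambda",
                             Endpoint=function_arn)

    def remove(self, function_arn):
        raise NotImplementedError("left out of the sketch")
```

Because the session is injected, tests can pass a fake session object, so none of this needs real AWS access to exercise.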
Related issues:
#413
#504 - #545
#512 - #668 - #809
#557
#718
#761 - #762
#817
#854
#861
#907
#958 | closed | 2021-03-11T12:29:44Z | 2024-08-17T08:18:48Z | https://github.com/zappa/Zappa/issues/954 | [
"no-activity",
"auto-closed"
] | Yaronn44 | 10 |
mirumee/ariadne | api | 253 | Is Python `multipart` module used anywhere? | I can't find it being imported anywhere in the code. I'm trying to get my Ariadne environment slimmed down as much as possible, so it would be useful to know if it is safe to remove that module. Thanks! | closed | 2019-10-03T19:01:36Z | 2019-11-25T11:29:08Z | https://github.com/mirumee/ariadne/issues/253 | [
"enhancement",
"roadmap",
"dependencies"
] | caladd | 4 |
capitalone/DataProfiler | pandas | 274 | Creating a customized column | Trying to add a customized column (e.g. driver_license) in the DataProfiler library so that the final profiled JSON contains the customized column (e.g. driver_license) with all the usual statistics. Would it be possible to include functionality that makes adding a customized column with all the required statistics easier? | open | 2021-06-21T15:49:36Z | 2021-08-31T21:23:41Z | https://github.com/capitalone/DataProfiler/issues/274 | [
"Medium Priority",
"New Feature"
] | simratbhandari2 | 8 |
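For the DataProfiler request above, until such a feature exists, one workaround is to compute the extra column's statistics by hand and splice them into the profiled report dict. The sketch below only mimics the general shape of a report; the key names are assumptions for illustration, not DataProfiler's actual schema:

```python
import statistics


def column_stats(name, values):
    """Compute a minimal stats block resembling one profiled column."""
    numeric = [v for v in values if isinstance(v, (int, float))]
    return {
        "column_name": name,
        "statistics": {
            "sample_size": len(values),
            "null_count": sum(1 for v in values if v is None),
            "min": min(numeric) if numeric else None,
            "max": max(numeric) if numeric else None,
            "mean": statistics.fmean(numeric) if numeric else None,
        },
    }


def add_custom_column(report, name, values):
    """Append a hand-built column entry to a profiler-style report dict."""
    report.setdefault("data_stats", []).append(column_stats(name, values))
    return report
```

The merged dict can then be serialized to JSON alongside the columns DataProfiler produced itself.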
plotly/dash-table | plotly | 726 | Drag to select range of cells | In the image below, when I select cell 1 and drag to cell 2, ranges B and C are also selected. I only want to select range A, as in Excel.

Also, when I copy and paste the cells, they all collapse into a single row, so data cannot be copied and pasted properly between Plotly and Excel.
| open | 2020-04-01T16:39:12Z | 2020-10-24T22:31:39Z | https://github.com/plotly/dash-table/issues/726 | [
"♥ NEEDS SPON$OR"
] | reza1615 | 1 |
sktime/sktime | scikit-learn | 7,523 | [ENH] move `_sk_visual_block` and `visual_block_kind` logic for meta-estimators to `scikit-base` | In https://github.com/sktime/sktime/pull/7233, @mateuszkasprowicz has introduced a neat developer feature where developers of meta-estimators can use the new tag `visual_block_kind` to select serial or parallel display, and has streamlined the extender pattern to ensure html display of meta-estimators.
This pattern would be nice to have already in `scikit-base`, so all dependent packages can use it.
As usual, the pattern for moving the logic one layer up is:
1. copy-paste with minor modifications, to `scikit-base`, ensure tests are also included
2. after a deprecation period, which can be silent in this case of private features, remove in `sktime`. Perhaps at the 1.0.0 mark, so we need to set release manager notes. | open | 2024-12-14T09:52:35Z | 2025-01-27T13:29:27Z | https://github.com/sktime/sktime/issues/7523 | [
"enhancement",
"module:base-framework"
] | fkiraly | 5 |
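The tag-driven display selection described in the sktime issue above could be hoisted into a shared base class along these lines. This is an illustrative sketch, not the actual scikit-base or sktime API:

```python
class BaseMetaEstimator:
    """Minimal base with tag lookup and a tag-driven visual block."""

    _tags = {"visual_block_kind": "serial"}

    def get_tag(self, name, default=None):
        # Walk the MRO so subclasses only override the tags they change.
        for klass in type(self).__mro__:
            tags = getattr(klass, "_tags", None)
            if tags and name in tags:
                return tags[name]
        return default

    def _sk_visual_block_(self):
        kind = self.get_tag("visual_block_kind", "serial")
        if kind not in ("serial", "parallel"):
            raise ValueError(f"unknown visual_block_kind: {kind!r}")
        # A real implementation would return the HTML-repr block here.
        return {"kind": kind, "estimators": getattr(self, "steps", [])}


class ParallelEnsemble(BaseMetaEstimator):
    """Meta-estimator whose components should render side by side."""

    _tags = {"visual_block_kind": "parallel"}
```

Downstream packages would then get consistent HTML display by inheriting and setting a single tag, which is the behavior worth preserving during the move.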
open-mmlab/mmdetection | pytorch | 11,794 | Assertion Error "assert isinstance(item, IndexType.__args__)" in mmengine in Tutorial | Hej,
I am using
M3 Pro
and have the packages
mmdetection: 3.3.0
mmcv: 2.1.0
mmengine: 0.10.4
torchvision: 0.18.1
torch: 2.3.1
I am running the Tutorial: "MMDet_Tutorial" and in during the training I get an assertion error. I have only changed parts of the tutorial to fit it for CPUs. Does anyone know what I do wrong?
```python
config = "configs/rtmdet/rtmdet_tiny_1xb4-20e_balloon.py"
python_executable = "/Users/lucabernecker/anaconda3/envs/mdetect2/bin/python"
script_path = "tools/train.py"
!{python_executable} "{script_path}" '{config}'
```
I get the following printout with error:
```
06/14 13:43:07 - mmengine - INFO -
------------------------------------------------------------
System environment:
    sys.platform: darwin
    Python: 3.9.19 (main, May 6 2024, 14:39:30) [Clang 14.0.6 ]
    CUDA available: False
    MUSA available: False
    numpy_random_seed: 602692056
    GCC: Apple clang version 15.0.0 (clang-1500.3.9.4)
    PyTorch: 2.3.1
    PyTorch compiling details: PyTorch built with:
  - GCC 4.2
  - C++ Version: 201703
  - clang 14.0.3
  - OpenMP 201811
  - LAPACK is enabled (usually provided by MKL)
  - NNPACK is enabled
  - CPU capability usage: NO AVX
  - Build settings: BLAS_INFO=accelerate, BUILD_TYPE=Release,

Traceback (most recent call last):
  File "/Users/lucabernecker/PHD-UiT/mmdetection/tools/train.py", line 121, in <module>
    main()
  File "/Users/lucabernecker/PHD-UiT/mmdetection/tools/train.py", line 117, in main
    runner.train()
  File "/Users/lucabernecker/anaconda3/envs/mdetect2/lib/python3.9/site-packages/mmengine/runner/runner.py", line 1777, in train
    model = self.train_loop.run()  # type: ignore
  File "/Users/lucabernecker/anaconda3/envs/mdetect2/lib/python3.9/site-packages/mmengine/runner/loops.py", line 103, in run
    self.runner.val_loop.run()
  File "/Users/lucabernecker/anaconda3/envs/mdetect2/lib/python3.9/site-packages/mmengine/runner/loops.py", line 373, in run
    self.run_iter(idx, data_batch)
  File "/Users/lucabernecker/anaconda3/envs/mdetect2/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/Users/lucabernecker/anaconda3/envs/mdetect2/lib/python3.9/site-packages/mmengine/runner/loops.py", line 393, in run_iter
    outputs = self.runner.model.val_step(data_batch)
  File "/Users/lucabernecker/anaconda3/envs/mdetect2/lib/python3.9/site-packages/mmengine/model/base_model/base_model.py", line 133, in val_step
    return self._run_forward(data, mode='predict')  # type: ignore
  File "/Users/lucabernecker/anaconda3/envs/mdetect2/lib/python3.9/site-packages/mmengine/model/base_model/base_model.py", line 361, in _run_forward
    results = self(**data, mode=mode)
  File "/Users/lucabernecker/anaconda3/envs/mdetect2/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/Users/lucabernecker/anaconda3/envs/mdetect2/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "/Users/lucabernecker/PHD-UiT/mmdetection/mmdet/models/detectors/base.py", line 94, in forward
    return self.predict(inputs, data_samples)
  File "/Users/lucabernecker/PHD-UiT/mmdetection/mmdet/models/detectors/single_stage.py", line 110, in predict
    results_list = self.bbox_head.predict(
  File "/Users/lucabernecker/PHD-UiT/mmdetection/mmdet/models/dense_heads/base_dense_head.py", line 197, in predict
    predictions = self.predict_by_feat(
  File "/Users/lucabernecker/PHD-UiT/mmdetection/mmdet/models/dense_heads/base_dense_head.py", line 279, in predict_by_feat
    results = self._predict_by_feat_single(
  File "/Users/lucabernecker/PHD-UiT/mmdetection/mmdet/models/dense_heads/base_dense_head.py", line 423, in _predict_by_feat_single
    return self._bbox_post_process(
  File "/Users/lucabernecker/PHD-UiT/mmdetection/mmdet/models/dense_heads/base_dense_head.py", line 480, in _bbox_post_process
    results = results[valid_mask]
  File "/Users/lucabernecker/anaconda3/envs/mdetect2/lib/python3.9/site-packages/mmengine/structures/instance_data.py", line 175, in __getitem__
    assert isinstance(item, IndexType.__args__)
AssertionError
```
| open | 2024-06-14T12:09:42Z | 2024-11-07T14:19:34Z | https://github.com/open-mmlab/mmdetection/issues/11794 | [] | LucaBernecker | 5 |
jupyter/nbgrader | jupyter | 1,883 | Validate button returns Validation failed Cannot check version: TypeError: undefined is not an object (evaluating 't.setAttribute') | JupyterHub is running on a k8s cluster, and user pods are based on a Docker image with `FROM quay.io/jupyter/minimal-notebook:hub-4.1.3`
The hub image in the Helm chart values.yaml is a Docker image with `FROM jupyterhub/k8s-hub:3.0.3`
### Operating system
Ubuntu 22.04.2 LTS
### `nbgrader --version`
0.9.1
### `jupyterhub --version` (if used with JupyterHub)
4.1.3
### `jupyter notebook --version`
7.1.2
### Expected behavior
When clicking the "Validate" button, validation should succeed.
### Actual behavior
When clicking the "Validate" button, I get "Validation failed" with this reason:
`Cannot check version: TypeError: undefined is not an object (evaluating 't.setAttribute')`
If I validate the notebook from the JupyterHub terminal with the command `nbgrader validate notebook_name.ipynb`, it finishes successfully.
I found the [code for this error dialog](https://github.com/jupyter/nbgrader/blame/5b3fe5ebfbffab6e14ffc92bf1eb30981c411af5/src/validate_assignment/index.ts#L125) and checked whether I have the validate_assignment server extension by running `jupyter server extension list` in a terminal, which printed:
```
...
nbgrader.server_extensions.validate_assignment enabled
- Validating nbgrader.server_extensions.validate_assignment...
Extension package nbgrader.server_extensions.validate_assignment took 0.6952s to import
nbgrader.server_extensions.validate_assignment OK
...
```
### Steps to reproduce the behavior
Click validate button in any notebook
| closed | 2024-05-08T04:58:25Z | 2024-05-13T20:14:28Z | https://github.com/jupyter/nbgrader/issues/1883 | [] | PolinaChubenko | 1 |
laughingman7743/PyAthena | sqlalchemy | 266 | Length-less VARCHAR aren't supported in DDL statements | When one defines a table as follows:
```python
table = Table(
"table_name",
MetaData(),
Column("c", String),
schema=SCHEMA,
awsathena_location="s3://bucket/prefix/table_name",
)
```
And then tries to add in to Athena's catalog with:
```python
table.create(bind=connection)
```
The operation will fail because Athena does not support creating columns of type VARCHAR without a maximum length.
```
FAILED: ParseException line 2:0 mismatched input ')' expecting ( near 'VARCHAR' in primitive type specification
```
Given Athena's not always very user-friendly error messages, maybe we could catch the problem before sending the query to Athena, providing users with a more helpful and actionable diagnostic. | closed | 2022-01-16T20:04:45Z | 2022-01-22T14:02:44Z | https://github.com/laughingman7743/PyAthena/issues/266 | [] | cansjt | 1 |
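The pre-flight check proposed in the PyAthena issue above could look like this: a hypothetical helper that a dialect would call while rendering DDL, so the user gets an actionable message instead of Athena's ParseException. The function name and exact wording are illustrative, not PyAthena's actual code:

```python
def render_varchar(length=None):
    """Render an Athena VARCHAR type for DDL, failing early and clearly
    when no maximum length was declared on the column."""
    if length is None:
        raise ValueError(
            "Athena requires a length for VARCHAR columns in DDL; "
            "declare the column with an explicit length, e.g. String(255)."
        )
    if length <= 0:
        raise ValueError("VARCHAR length must be positive")
    return f"VARCHAR({length})"
```

With a check like this, `table.create(bind=connection)` would fail locally with a message pointing at the offending column rather than with Athena's mismatched-input error.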
lanpa/tensorboardX | numpy | 9 | Can't add graph with pytorch v0.2 | I tried to run this snippet:
```
import torch
import torch.nn as nn
from torch.autograd import Variable
from datetime import datetime
from tensorboard import SummaryWriter
x = Variable(torch.rand(10, 10), requires_grad=True)
model = nn.Linear(10, 10)
h = model(x)
writer = SummaryWriter('runs/'+datetime.now().strftime('%B%d %H:%M:%S'))
writer.add_graph(model, h)
writer.close()
```
but I get this error message:
```
Traceback (most recent call last):
File "/Users/Miguel/Documents/Unbabel/pytorch-tools/tests/test_tensorboard.py", line 16, in <module>
writer.add_graph(model, h)
File "/Users/Miguel/anaconda/envs/venv/lib/python2.7/site-packages/tensorboard/writer.py", line 259, in add_graph
self.file_writer.add_graph(graph(model, lastVar))
File "/Users/Miguel/anaconda/envs/venv/lib/python2.7/site-packages/tensorboard/graph.py", line 39, in graph
make_list_of_nodes(lastVar.grad_fn)
File "/Users/Miguel/anaconda/envs/venv/lib/python2.7/site-packages/tensorboard/graph.py", line 23, in make_list_of_nodes
inputs.append(make_name(next_fn))
File "/Users/Miguel/anaconda/envs/venv/lib/python2.7/site-packages/tensorboard/graph.py", line 12, in make_name
return id2name[id(obj.variable)]+'_'+str(id(obj.variable))
KeyError: 4654832816
```
| closed | 2017-08-12T16:29:43Z | 2017-12-29T10:42:50Z | https://github.com/lanpa/tensorboardX/issues/9 | [] | miguelvr | 8 |
pydantic/pydantic | pydantic | 11,240 | New JSON validation mode for callables | ### Initial Checks
- [X] I have searched Google & GitHub for similar requests and couldn't find anything
- [X] I have read and followed [the docs](https://docs.pydantic.dev) and still think this feature is missing
### Description
When using a `TypeAdapter` on a callable (or using the `@validate_call()`) decorator, validation from JSON is limited:
```python
from pydantic import TypeAdapter
from pydantic_core import ArgsKwargs
def f(a: int, /, *, b: int):
...
ta = TypeAdapter(f)
# From Python, you use `ArgsKwargs`:
ta.validate_python(ArgsKwargs((1,), {'b': 1}))
# No way to validate from JSON
ta.validate_json('{"a": 1, "b": 2}') # error
```
The idea would be to introduce a flag on the [`arguments_schema()`](https://docs.pydantic.dev/latest/api/pydantic_core_schema/#pydantic_core.core_schema.arguments_schema)/a new schema, so that we can validate like this:
```python
def f(a: int, /, *, b: int):
...
...
ta.validate_json('{"a": 1, "b": 2}')
def f(*args: int, **kwargs: str):
...
...
ta.validate_json('{"args": [1, 2, 3], "kwargs": {"a": "string"}}')
```
This should work as parameter names should be unique. Note that while something like this could also be done for kwargs:
```python
def f(*args: int, **kwargs: str):
...
...
ta.validate_json('{"args": [1, 2, 3], "a": "string"')
```
This won't play well if you have a parameter named `a`.
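Since parameter names are unique within a signature, splitting a flat JSON object back into positional and keyword arguments is mechanical. A stdlib sketch of the idea (not pydantic's implementation):

```python
import inspect


def json_to_args_kwargs(func, data):
    """Split a flat JSON-style dict into (args, kwargs) matching
    `func`'s signature: positional-only parameters go to args, the
    rest to kwargs. Unique parameter names make this unambiguous."""
    sig = inspect.signature(func)
    args, kwargs = [], {}
    for name, param in sig.parameters.items():
        if name not in data:
            continue
        if param.kind is inspect.Parameter.POSITIONAL_ONLY:
            args.append(data[name])
        else:
            kwargs[name] = data[name]
    return tuple(args), kwargs


def f(a: int, /, *, b: int):
    return a + b
```

A validator could run this mapping first and then validate the resulting `ArgsKwargs` exactly as `validate_python` does today.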
### Affected Components
- [ ] [Compatibility between releases](https://docs.pydantic.dev/changelog/)
- [ ] [Data validation/parsing](https://docs.pydantic.dev/concepts/models/#basic-model-usage)
- [ ] [Data serialization](https://docs.pydantic.dev/concepts/serialization/) - `.model_dump()` and `.model_dump_json()`
- [ ] [JSON Schema](https://docs.pydantic.dev/concepts/json_schema/)
- [ ] [Dataclasses](https://docs.pydantic.dev/concepts/dataclasses/)
- [ ] [Model Config](https://docs.pydantic.dev/concepts/config/)
- [ ] [Field Types](https://docs.pydantic.dev/api/types/) - adding or changing a particular data type
- [X] [Function validation decorator](https://docs.pydantic.dev/concepts/validation_decorator/)
- [ ] [Generic Models](https://docs.pydantic.dev/concepts/models/#generic-models)
- [ ] [Other Model behaviour](https://docs.pydantic.dev/concepts/models/) - `model_construct()`, pickling, private attributes, ORM mode
- [ ] [Plugins](https://docs.pydantic.dev/) and integration with other tools - mypy, FastAPI, python-devtools, Hypothesis, VS Code, PyCharm, etc. | closed | 2025-01-08T21:29:21Z | 2025-03-14T15:27:22Z | https://github.com/pydantic/pydantic/issues/11240 | [
"feature request"
] | Viicos | 1 |
microsoft/Bringing-Old-Photos-Back-to-Life | pytorch | 214 | Cannot cast ufunc 'multiply' output from dtype('float64') to dtype('uint8') with casting rule 'same_kind' | When I run this library on Windows 10, it throws the following error:
```
Running Stage 4: Blending
Traceback (most recent call last):
  File "align_warp_back_multiple_dlib.py", line 428, in <module>
    blended = blur_blending_cv2(warped_back, blended, backward_mask)
  File "align_warp_back_multiple_dlib.py", line 219, in blur_blending_cv2
    mask *= 255.0
numpy.core._exceptions.UFuncTypeError: Cannot cast ufunc 'multiply' output from dtype('float64') to dtype('uint8') with casting rule 'same_kind'
Finish Stage 4 ...
```
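The failing line multiplies a `uint8` mask in place by a float, which NumPy refuses under the 'same_kind' casting rule. A possible local workaround (not an official patch) is to promote the mask explicitly before the in-place multiply:

```python
import numpy as np

# Stand-in for the mask produced in blur_blending_cv2.
mask = np.zeros((4, 4), dtype=np.uint8)
mask[1:3, 1:3] = 1

# In-place `mask *= 255.0` on the uint8 array raises UFuncTypeError,
# because the float64 result cannot be written back into the uint8
# buffer under the 'same_kind' casting rule.
mask = mask.astype(np.float64)  # promote explicitly
mask *= 255.0                   # now float64 *= float64, allowed
```

Equivalently, the out-of-place form `mask = mask * 255.0` promotes automatically, since it allocates a new float64 array instead of writing into the uint8 buffer.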
Environment details:
- System: Windows 10
- Python: 3.8 (via Anaconda)
- PyTorch installed with this command: conda install pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch
- Numpy version:
```
conda list numpy
# packages in environment at D:\conda\envs\py38:
#
# Name Version Build Channel
numpy 1.21.2 py38hfca59bb_0
numpy-base 1.21.2 py38h0829f74_0
```
- OpenCV version:
```
conda list opencv
# packages in environment at D:\conda\envs\py38:
#
# Name Version Build Channel
libopencv 4.0.1 hbb9e17c_0
opencv 4.0.1 py38h2a7c758_0
opencv-python 4.5.5.62 pypi_0 pypi
py-opencv 4.0.1 py38he44ac1e_0
``` | open | 2021-12-31T20:03:21Z | 2023-06-16T20:02:59Z | https://github.com/microsoft/Bringing-Old-Photos-Back-to-Life/issues/214 | [] | tang2087 | 4 |
AutoGPTQ/AutoGPTQ | nlp | 181 | can't install autogptq_cuda when building a docker image | I build an image with `auto-gptq`; the simplified Dockerfile looks like this:
```yaml
FROM pytorch/pytorch:2.0.1-cuda11.7-cudnn8-devel
RUN pip install --no-cache-dir auto-gptq>=0.2.2
```
it won't install `autogptq_cuda`, because `setup.py` only installs `autogptq_cuda` when the following condition holds:
```python
if BUILD_CUDA_EXT and (torch.cuda.is_available() or IN_GITHUB_ACTIONS):
```
but `torch.cuda.is_available()` is always False at the image build stage. If I set `IN_GITHUB_ACTIONS` to True, it conflicts with the following code (since `CUDA_VERSION` is available):
```python
version = "0.2.2" + (f"+cu{CUDA_VERSION}" if CUDA_VERSION and IN_GITHUB_ACTIONS else "")
```
pip's version check then fails, throwing an error like this:
>Discarding https://mirrors.cloud.tencent.com/pypi/packages/94/07/3f3f6905a9bd334c6ee8025df42e4789379612703b935be328caaaa41c23/auto_gptq-0.2.2.tar.gz#sha256=4885af2514c21242ae7d902bfa78e32fa99e542784df10062b768780da228224 (from https://mirrors.cloud.tencent.com/pypi/simple/auto-gptq/) (requires-python:>=3.8.0): Requested auto-gptq>=0.2.2 from https://mirrors.cloud.tencent.com/pypi/packages/94/07/3f3f6905a9bd334c6ee8025df42e4789379612703b935be328caaaa41c23/auto_gptq-0.2.2.tar.gz#sha256=4885af2514c21242ae7d902bfa78e32fa99e542784df10062b768780da228224 (from -r /requirements.txt (line 7)) has inconsistent version: expected '0.2.2', but metadata has '0.2.2+cu1170'
Some solutions might work:
1. Appending the CUDA version suffix to the version string might be unnecessary;
use
```python
version = "0.2.2"
```
rather than
```python
version = "0.2.2" + (f"+cu{CUDA_VERSION}" if CUDA_VERSION and IN_GITHUB_ACTIONS else "")
```
2. The condition that gates building `autogptq_cuda` could be roughly like this:
```python
FORCE_BUILD_CUDA_EXT = int(os.environ.get('FORCE_BUILD_CUDA_EXT', '0')) == 1
if BUILD_CUDA_EXT and (torch.cuda.is_available() or IN_GITHUB_ACTIONS or FORCE_BUILD_CUDA_EXT):
```
| open | 2023-06-29T11:52:48Z | 2023-09-04T06:57:11Z | https://github.com/AutoGPTQ/AutoGPTQ/issues/181 | [] | TangoW | 6 |
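The second proposed fix above boils down to an environment-driven override of the build gate. A minimal sketch of that logic factored into a testable function; the function itself is illustrative, and only the `FORCE_BUILD_CUDA_EXT` variable comes from the proposal:

```python
import os


def should_build_cuda_ext(cuda_available: bool, env=os.environ) -> bool:
    """Mirror the proposed setup.py gate: build the extension when CUDA
    is visible at build time, when running in GitHub Actions CI, or
    when the builder forces it via FORCE_BUILD_CUDA_EXT=1."""
    in_github_actions = env.get("GITHUB_ACTIONS", "false") == "true"
    forced = env.get("FORCE_BUILD_CUDA_EXT", "0") == "1"
    return cuda_available or in_github_actions or forced
```

A Dockerfile could then set `ENV FORCE_BUILD_CUDA_EXT=1` before `pip install`, so the extension builds even though no GPU is visible during `docker build`.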
Johnserf-Seed/TikTokDownload | api | 238 | [bug and help wanted] After fixing the script's crash on launch I hit a script error, and falling back to the exe version brought a new problem! | **Describe the bug**
After downloading the source code and modifying the link in conf.ini as instructed, opening TikTokTool.py makes a cmd window appear and immediately close; restoring conf.ini, it still crashes the same way. I then suspected an environment problem, uninstalled and reinstalled Python (with "Add to PATH" checked), but it still crashes. The system environment also looks fine: there is only one Python on this machine, and the PATH entries point to the normal Python install directory and its Scripts folder.

I then ran TikTokTool.py in IDLE and found the crash is caused by
`ModuleNotFoundError: No module named 'requests'`
It says the requests module cannot be found. The Lib\site-packages\pip folder in the Python install directory does contain it, so I assumed it was bundled with Python rather than being a third-party library. The documentation does not mention it either, so after looking things up I tried
`pip install requests`
and the tool opened. May I ask whether this is a step that is actually required but omitted from the tutorial because it was considered too basic, or is something abnormal going on?
After it opened, I replaced the link in conf.ini with the profile page I wanted to download. It ran for a while and then crashed again before the download finished; retrying did not help. Running it in IDLE again, the error is:
```
Traceback (most recent call last):
  File "D:\常用软件\TikTokDownload\TikTokTool.py", line 29, in <module>
    profile.getProfile(cmd.setting())
  File "D:\常用软件\TikTokDownload\Util\Profile.py", line 103, in getProfile
    self.getData(self.api_post_url)
  File "D:\常用软件\TikTokDownload\Util\Profile.py", line 142, in getData
    self.getVideoInfo(result)
  File "D:\常用软件\TikTokDownload\Util\Profile.py", line 245, in getVideoInfo
    self.getNextData()
  File "D:\常用软件\TikTokDownload\Util\Profile.py", line 184, in getNextData
    self.getVideoInfo(result)
  File "D:\常用软件\TikTokDownload\Util\Profile.py", line 245, in getVideoInfo
    self.getNextData()
  File "D:\常用软件\TikTokDownload\Util\Profile.py", line 184, in getNextData
    self.getVideoInfo(result)
  File "D:\常用软件\TikTokDownload\Util\Profile.py", line 242, in getVideoInfo
    datas = Util.Images().get_all_images(self.image_list)
  File "D:\常用软件\TikTokDownload\Util\Images.py", line 49, in get_all_images
    self.position = js['item_list'][0]['aweme_poi_info']['poi_name']
KeyError: 'aweme_poi_info'
```

Suspecting a path problem, I switched to an all-English path, but it still fails.
After spending several hours investigating without success, I fell back to the packaged TikTokTool.exe (I had seen it earlier; I mainly wanted to understand the cause of the error). With the link I want to download in place, it no longer errors and videos download normally, but image posts leave many empty folders. I tested this several times: it always happens for the same fixed set of folders, not random ones, and the text shown is "提示:发生了点意外" ("Hint: something unexpected happened"). I later found that TikTokPic.exe can download these one by one, so I spent an afternoon downloading the contents of the empty folders that way.
I registered an account specifically to file this issue and hope the cause can be explained. It can be tested; her profile link is https://v.douyin.com/Mf9Endf/

Incidentally, why does this link keep changing? It has changed three times just today.
**Screenshots**
Added above.
**Desktop (please complete the following information):**
- OS: [windows10 64bit]
- Python version: 3.10
- Proxy: [off] (I saw that some reported bugs were caused by proxies, so none was enabled)
- Version: [latest source code]
**Additional context**
Looking forward to a reply, thanks!
| closed | 2022-10-20T12:41:37Z | 2022-10-21T13:41:06Z | https://github.com/Johnserf-Seed/TikTokDownload/issues/238 | [
"故障(bug)",
"额外求助(help wanted)",
"无效(invalid)"
] | ynct200123 | 13 |
ckan/ckan | api | 8,719 | When displaying a resource with a large numeric ID (MD5SUM), a 500 error is produced | ## CKAN version
2.9 (also in 2.10)
## Describe the bug
For historical reasons, we use MD5SUMS as the ID for our resources. These are 32 character hexadecimal values. Resources are created programatically using CKAN API. We have ONE resource where, by chance, the MD5SUM consists of only digits (the value is 78599293762288737036487668342945, but any 32 decimal digit value is expected to cause the issue).
When CKAN attempts to display the resource, the error reported below in "Additional Details" is generated, and a 500 error is returned to the user.
### Steps to reproduce
Until our CKAN instance is patched, this error can be viewed by attempting to view this resource:
https://data.bioplatforms.com/base-genomics-amplicon/bpa-base-genomics-amplicon-its-39289_1-bc267/resource/78599293762288737036487668342945
To reproduce on another CKAN instance you would need to use the CKAN API to create a resource with a 32-decimal digit ID (using the UI to create a resource uses a UUID as the ID).
Then attempt to View the resource.
Editing the resource works as expected, as the edit page does not attempt to format the id.
### Expected behavior
Regardless of whether the resource ID contains only decimal digits or hex digits (or any other chars), display the resource successfully.
### Additional details
We have located the cause of the issue, and are working on a patch.
https://github.com/ckan/ckan/blob/9e5242088f4a148308967c3bab176dea873b10e4/ckan/lib/helpers.py#L2279
This code assumes that if the resource field contains only digits, it is an integer, and that the formatters.localised_number can format it. Unfortunately flask_babel tries to convert the integer to a decimal, which fails.
Our proposed solution is 2-fold:
1. Add id to the list of blacklisted fields which it does not try to reformat.
2. Place a try: except: around the block of code that attempts to use the formatters, catching any exception, and returning the original value if there was issue during the formatting attempt.
When time permits, a Pull Request with these will be submitted.
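The two-fold fix can be sketched as a defensive wrapper around the locale formatter; the names below are illustrative, not the exact helpers.py patch:

```python
def safe_localised(value, formatter, blacklist=("id",), key=None):
    """Proposed defensive behavior: skip blacklisted keys entirely, and
    fall back to the raw value if the locale formatter raises (e.g. on
    a 32-decimal-digit MD5-style id that merely looks like an integer)."""
    if key in blacklist:
        return value
    try:
        if isinstance(value, str) and value.isdigit():
            return formatter(int(value))
        return value
    except Exception:
        # Any formatting failure returns the original value unchanged.
        return value
```

With this in place, resources whose id happens to be all digits render their raw id string, and any future formatter failure degrades gracefully instead of producing a 500.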
Stack trace:
| Converting this value into a localised Number: 78599293762288737036487668342945
ckan_1 | The number as an integer is: 78599293762288737036487668342945
ckan_1 | 2025-03-13 00:45:43,050 ERROR [ckan.config.middleware.flask_app] [<class 'decimal.InvalidOperation'>]
ckan_1 | Traceback (most recent call last):
ckan_1 | File "/env/lib/python3.9/site-packages/flask/app.py", line 1949, in full_dispatch_request
ckan_1 | rv = self.dispatch_request()
ckan_1 | File "/env/lib/python3.9/site-packages/flask/app.py", line 1935, in dispatch_request
ckan_1 | return self.view_functions[rule.endpoint](**req.view_args)
ckan_1 | File "/app/ckan/ckan/config/middleware/../../views/resource.py", line 151, in read
ckan_1 | return base.render(template, extra_vars)
ckan_1 | File "/app/ckan/ckan/lib/base.py", line 151, in render
ckan_1 | return flask_render_template(template_name, **extra_vars)
ckan_1 | File "/env/lib/python3.9/site-packages/flask/templating.py", line 137, in render_template
ckan_1 | return _render(
ckan_1 | File "/env/lib/python3.9/site-packages/flask/templating.py", line 120, in _render
ckan_1 | rv = template.render(context)
ckan_1 | File "/env/lib/python3.9/site-packages/jinja2/asyncsupport.py", line 76, in render
ckan_1 | return original_render(self, *args, **kwargs)
ckan_1 | File "/env/lib/python3.9/site-packages/jinja2/environment.py", line 1008, in render
ckan_1 | return self.environment.handle_exception(exc_info, True)
ckan_1 | File "/env/lib/python3.9/site-packages/jinja2/environment.py", line 780, in handle_exception
ckan_1 | reraise(exc_type, exc_value, tb)
ckan_1 | File "/env/lib/python3.9/site-packages/jinja2/_compat.py", line 37, in reraise
ckan_1 | raise value.with_traceback(tb)
ckan_1 | File "/app/ckanext-scheming/ckanext/scheming/templates/scheming/package/resource_read.html", line 9, in <module>
ckan_1 | {%- set schema = h.scheming_get_dataset_schema(dataset_type) -%}
ckan_1 | File "/app/ckanext-bpatheme/ckanext/bpatheme/templates/package/resource_read.html", line 4, in <module>
ckan_1 | {% set authorized = h.check_access('resource_show', {'id': res.id, 'resource': res }) %}
ckan_1 | File "/app/ckan/ckanext/datastore/templates/package/resource_read.html", line 1, in <module>
ckan_1 | {% ckan_extends %}
ckan_1 | File "/app/ckan/ckan/templates/package/resource_read.html", line 3, in <module>
ckan_1 | {% set res = resource %}
ckan_1 | File "/app/ckanext-ytp-request/ckanext/ytp_request/templates/package/base.html", line 1, in <module>
ckan_1 | {% ckan_extends %}
ckan_1 | File "/app/ckanext-bpatheme/ckanext/bpatheme/templates/package/base.html", line 4, in <module>
ckan_1 | {% set dataset_type = dataset_type or pkg.type or 'dataset' %}
ckan_1 | File "/app/ckanext-bpatheme/ckanext/bpatheme/templates/page.html", line 1, in <module>
ckan_1 | {% ckan_extends %}
ckan_1 | File "/app/ckan/ckan/templates/page.html", line 1, in <module>
ckan_1 | {% extends "base.html" %}
ckan_1 | File "/app/ckanext-ytp-request/ckanext/ytp_request/templates/base.html", line 1, in <module>
ckan_1 | {% ckan_extends %}
ckan_1 | File "/app/ckanext-bpatheme/ckanext/bpatheme/templates/base.html", line 1, in <module>
ckan_1 | {% ckan_extends %}
ckan_1 | File "/env/lib/python3.9/site-packages/ckanext/geoview/plugin/../templates/base.html", line 1, in <module>
ckan_1 | {% ckan_extends %}
ckan_1 | File "/app/ckanext-scheming/ckanext/scheming/templates/base.html", line 1, in <module>
ckan_1 | {% ckan_extends %}
ckan_1 | File "/app/ckanext-bulk/ckanext/bulk/templates/base.html", line 1, in <module>
ckan_1 | {% ckan_extends %}
ckan_1 | File "/env/lib/python3.9/site-packages/ckanext/googleanalytics/plugin/../templates/base.html", line 1, in <module>
ckan_1 | {% ckan_extends %}
ckan_1 | File "/app/ckan/ckan/templates/base.html", line 105, in <module>
ckan_1 | {%- block page %}{% endblock -%}
ckan_1 | File "/app/ckan/ckan/templates/page.html", line 19, in <module>
ckan_1 | {%- block content %}
ckan_1 | File "/app/ckan/ckan/templates/page.html", line 22, in <module>
ckan_1 | {% block main_content %}
ckan_1 | File "/app/ckan/ckan/templates/page.html", line 74, in <module>
ckan_1 | {% block primary %}
ckan_1 | File "/app/ckan/ckan/templates/page.html", line 87, in <module>
ckan_1 | {% block primary_content %}
ckan_1 | File "/app/ckan/ckan/templates/package/resource_read.html", line 170, in <module>
ckan_1 | {% block resource_additional_information %}
ckan_1 | File "/app/ckan/ckan/templates/package/resource_read.html", line 173, in <module>
ckan_1 | {% block resource_additional_information_inner %}
ckan_1 | File "/app/ckanext-scheming/ckanext/scheming/templates/scheming/package/resource_read.html", line 60, in <module>
ckan_1 | {%- block resource_more_items -%}
ckan_1 | File "/app/ckanext-scheming/ckanext/scheming/templates/scheming/package/resource_read.html", line 61, in <module>
ckan_1 | {% for key, value in h.format_resource_items(res.items()) %}
ckan_1 | File "/app/ckan/ckan/lib/helpers.py", line 2424, in format_resource_items
ckan_1 | value = formatters.localised_number(int(value))
ckan_1 | File "/app/ckan/ckan/lib/formatters.py", line 63, in localised_number
ckan_1 | return format_number(number)
ckan_1 | File "/env/lib/python3.9/site-packages/flask_babel/__init__.py", line 475, in format_number
ckan_1 | return numbers.format_number(number, locale=locale)
ckan_1 | File "/env/lib/python3.9/site-packages/babel/numbers.py", line 353, in format_number
ckan_1 | return format_decimal(number, locale=locale)
ckan_1 | File "/env/lib/python3.9/site-packages/babel/numbers.py", line 415, in format_decimal
ckan_1 | return pattern.apply(
ckan_1 | File "/env/lib/python3.9/site-packages/babel/numbers.py", line 996, in apply
ckan_1 | number = self._quantize_value(value, locale, frac_prec)
ckan_1 | File "/env/lib/python3.9/site-packages/babel/numbers.py", line 1065, in _quantize_value
ckan_1 | rounded = value.quantize(quantum)
ckan_1 | decimal.InvalidOperation: [<class 'decimal.InvalidOperation'>]
ckan_1 | 2025-03-13 00:45:43,131 INFO [ckan.config.middleware.flask_app] 500 /base-genomics-amplicon/bpa-base-genomics-amplicon-its-39289_1-bc267/resource/78599293762288737036487668342945 render time 0.734 seconds
| open | 2025-03-13T02:10:47Z | 2025-03-13T13:04:54Z | https://github.com/ckan/ckan/issues/8719 | [] | BrigetteGonch | 0 |
Josh-XT/AGiXT | automation | 505 | Plan to leave beta in v1.2.0 | ### Problem Description
- Desire to leave "Beta" status in the future
- Some functionality in AGiXT is still broken keeping it in beta
- Some features aren't up to my own personal expectations yet, I think I can do better on those and plan to.
### Proposed Solution
## Automated Tests - Done!
- ~Test and confirm all FastAPI endpoints are responding as intended. Fix any issues with any endpoints.~
- ~Write tests for all commands that do not require API keys to confirm command functionality as desired for all commands. See issue #504 .~
## Extensions - Done!
- ~See issue #502 - I consider this to be important and essential for the `v1.2.0` non-beta release as it will make everything consistent and easy to add to in the future.~
## Streamlit Changes - Done!
- ~See issue #503 - Streamlit needs better separation from the back end core and should utilize FastAPI instead of directly accessing AGiXT.~
| closed | 2023-05-28T19:40:49Z | 2023-06-07T18:28:47Z | https://github.com/Josh-XT/AGiXT/issues/505 | [
"planned"
] | Josh-XT | 3 |
mwaskom/seaborn | data-science | 3,197 | Marginal distributions for `heatmap` | `heatmap()` is very useful for confusion matrices and the like, in particular with `annot = True`. Still, it would be nice to be able to see the marginal distributions as well, much like with `jointplot()`. That would allow to assess imbalanced distributions much better. | closed | 2022-12-21T19:44:31Z | 2022-12-21T20:05:37Z | https://github.com/mwaskom/seaborn/issues/3197 | [
"wishlist"
] | hmeine | 0 |
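For the seaborn wish above, the marginal distributions in question are just the row and column sums of the plotted matrix, which makes the feature cheap to prototype around `heatmap` with a gridspec layout. A dependency-free sketch of the underlying computation:

```python
def marginals(matrix):
    """Row and column sums of a 2-D confusion-matrix-like nested list,
    i.e. the two marginal distributions a joint heatmap-plus-bars
    view would display alongside the main axes."""
    row_sums = [sum(row) for row in matrix]
    col_sums = [sum(col) for col in zip(*matrix)]
    return row_sums, col_sums
```

Plotting `row_sums` as horizontal bars beside the heatmap and `col_sums` above it, jointplot-style, immediately exposes class imbalance in a confusion matrix.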
junyanz/pytorch-CycleGAN-and-pix2pix | deep-learning | 1,633 | Is the Model Capable of Processing and Maintaining Consistent Output Sizes Across Varied Image Dimensions? | @junyanz @ssnl @AyushExel
As far as I know, the model can accept any image size and then apply the preprocessing step.
Can the model handle images of different sizes and generate outputs of the same sizes as the inputs? For example, if I feed in one picture that is 256x256 and another that is 500x300, can the model keep each output size consistent with its input?
Curious to know! Thanks a bunch! | closed | 2024-03-11T11:37:18Z | 2024-03-15T17:30:22Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1633 | [] | arrrrr3186 | 6 |
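For the CycleGAN/pix2pix question above, one common way to get size-consistent outputs from a fixed-size model is to remember each input's size and resize the result back afterwards. A toy, dependency-free sketch using nearest-neighbour resizing on nested lists (in practice one would use PIL or torchvision transforms):

```python
def resize_nn(img, out_h, out_w):
    """Nearest-neighbour resize of a nested-list 'image'."""
    in_h, in_w = len(img), len(img[0])
    return [
        [img[r * in_h // out_h][c * in_w // out_w] for c in range(out_w)]
        for r in range(out_h)
    ]


def run_with_size_restore(img, model, model_size=(256, 256)):
    """Remember the input size, run the model at its fixed working
    size, then resize the output back to match the original input."""
    in_h, in_w = len(img), len(img[0])
    out = model(resize_nn(img, *model_size))
    return resize_nn(out, in_h, in_w)
```

This guarantees that a 256x256 input yields a 256x256 output and a 500x300 input yields a 500x300 output, regardless of the resolution the network itself runs at.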
laughingman7743/PyAthena | sqlalchemy | 359 | Change schema names retrieval in SQLAlchemy from information schema to API | https://github.com/laughingman7743/PyAthena/blob/master/pyathena/sqlalchemy_athena.py#L845-L851
https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/athena.html#Athena.Client.list_data_catalogs
https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/athena.html#Athena.Client.list_databases | closed | 2022-08-07T08:04:09Z | 2022-08-14T11:49:28Z | https://github.com/laughingman7743/PyAthena/issues/359 | [] | laughingman7743 | 0 |
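The change described above amounts to replacing the information_schema query with paginated calls to Athena's ListDataCatalogs/ListDatabases APIs. A sketch of the pagination loop against a stub client; with boto3, the stub would be replaced by `session.client("athena")`:

```python
def list_databases(client, catalog_name):
    """Collect all database names from a boto3-style Athena client,
    following NextToken pagination. `client` is any object with a
    matching `list_databases` method (the stub below, or boto3)."""
    names, token = [], None
    while True:
        kwargs = {"CatalogName": catalog_name}
        if token:
            kwargs["NextToken"] = token
        page = client.list_databases(**kwargs)
        names.extend(db["Name"] for db in page["DatabaseList"])
        token = page.get("NextToken")
        if not token:
            return names


class StubAthenaClient:
    """Two-page stub mimicking Athena ListDatabases responses."""

    def list_databases(self, CatalogName, NextToken=None):
        if NextToken is None:
            return {"DatabaseList": [{"Name": "default"}], "NextToken": "p2"}
        return {"DatabaseList": [{"Name": "sales"}]}
```

A dialect's `get_schema_names` could call this instead of querying information_schema, which avoids running a query engine scan just to enumerate schemas.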
ading2210/poe-api | graphql | 141 | Server returned a status code of 500 while downloading https://poe.com/api/gql_POST | Several `get_bot_thread` worker threads (Thread-6 through Thread-11) fail concurrently with the same exception, so their tracebacks print interleaved; deduplicated, each reads:
```
Exception in thread Thread-6 (get_bot_thread):
Traceback (most recent call last):
  File "C:\Users\igovn\AppData\Local\Programs\Python\Python311\Lib\threading.py", line 1038, in _bootstrap_inner
    self.run()
  File "C:\Users\igovn\AppData\Local\Programs\Python\Python311\Lib\threading.py", line 975, in run
    self._target(*self._args, **self._kwargs)
  File "C:\Users\igovn\AppData\Local\Programs\Python\Python311\Lib\site-packages\poe.py", line 233, in get_bot_thread
    chat_data = self.get_bot(bot["node"]["displayName"])
  File "C:\Users\igovn\AppData\Local\Programs\Python\Python311\Lib\site-packages\poe.py", line 214, in get_bot
    chat_data = data["pageProps"]["data"]["chatOfBotDisplayName"]
KeyError: 'chatOfBotDisplayName'
```
^^^^^^^
KeyError^ File "C:\Users\igovn\AppData\Local\Programs\Python\Python311\Lib\site-packages\poe.py", line 233, in get_bot_thread
: ^^^'chatOfBotDisplayName'^
^
chat_data = self.get_bot(bot["node"]["displayName"])
File "C:\Users\igovn\AppData\Local\Programs\Python\Python311\Lib\site-packages\poe.py", line 214, in get_bot
chat_data = data["pageProps"]["data"]["chatOfBotDisplayName"]
^^ ^^^^^^^^ ^Exception in thread ^^^^^^^^^Thread-15 (get_bot_thread) ^:
^^^^^Traceback (most recent call last):
^ File "C:\Users\igovn\AppData\Local\Programs\Python\Python311\Lib\threading.py", line 1038, in _bootstrap_inner
~^^^^~^~^~~~~~~~~~~~~~~~~~^^^ ~^self.run()~~~^
~^^
File "C:\Users\igovn\AppData\Local\Programs\Python\Python311\Lib\threading.py", line 975, in run
^ File "C:\Users\igovn\AppData\Local\Programs\Python\Python311\Lib\site-packages\poe.py", line 214, in get_bot
^^^^^^ ^^ self._target(*self._args, **self._kwargs)
chat_data = data["pageProps"]["data"]["chatOfBotDisplayName"] File "C:\Users\igovn\AppData\Local\Programs\Python\Python311\Lib\site-packages\poe.py", line 233, in get_bot_thread
^^
^ ^^ chat_data = self.get_bot(bot["node"]["displayName"])^^^ ~^^^^^^^
~~~~~~~~~~~
~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^KeyError
: 'chatOfBotDisplayName'KeyError ^^^^^^^^^^:
^'chatOfBotDisplayName'^^
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\igovn\AppData\Local\Programs\Python\Python311\Lib\site-packages\poe.py", line 214, in get_bot
chat_data = data["pageProps"]["data"]["chatOfBotDisplayName"]
~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^
KeyError: 'chatOfBotDisplayName'
Exception in thread Thread-13 (get_bot_thread):
Traceback (most recent call last):
File "C:\Users\igovn\AppData\Local\Programs\Python\Python311\Lib\threading.py", line 1038, in _bootstrap_inner
self.run()
File "C:\Users\igovn\AppData\Local\Programs\Python\Python311\Lib\threading.py", line 975, in run
self._target(*self._args, **self._kwargs)
File "C:\Users\igovn\AppData\Local\Programs\Python\Python311\Lib\site-packages\poe.py", line 233, in get_bot_thread
chat_data = self.get_bot(bot["node"]["displayName"])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\igovn\AppData\Local\Programs\Python\Python311\Lib\site-packages\poe.py", line 214, in get_bot
chat_data = data["pageProps"]["data"]["chatOfBotDisplayName"]
~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^
KeyError: 'chatOfBotDisplayName'
Exception in thread Thread-14 (get_bot_thread):
Traceback (most recent call last):
File "C:\Users\igovn\AppData\Local\Programs\Python\Python311\Lib\threading.py", line 1038, in _bootstrap_inner
self.run()
File "C:\Users\igovn\AppData\Local\Programs\Python\Python311\Lib\threading.py", line 975, in run
self._target(*self._args, **self._kwargs)
File "C:\Users\igovn\AppData\Local\Programs\Python\Python311\Lib\site-packages\poe.py", line 233, in get_bot_thread
chat_data = self.get_bot(bot["node"]["displayName"])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\igovn\AppData\Local\Programs\Python\Python311\Lib\site-packages\poe.py", line 214, in get_bot
chat_data = data["pageProps"]["data"]["chatOfBotDisplayName"]
~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^
KeyError: 'chatOfBotDisplayName'
Exception in thread Thread-12 (get_bot_thread):
Traceback (most recent call last):
File "C:\Users\igovn\AppData\Local\Programs\Python\Python311\Lib\threading.py", line 1038, in _bootstrap_inner
self.run()
File "C:\Users\igovn\AppData\Local\Programs\Python\Python311\Lib\threading.py", line 975, in run
self._target(*self._args, **self._kwargs)
File "C:\Users\igovn\AppData\Local\Programs\Python\Python311\Lib\site-packages\poe.py", line 233, in get_bot_thread
chat_data = self.get_bot(bot["node"]["displayName"])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\igovn\AppData\Local\Programs\Python\Python311\Lib\site-packages\poe.py", line 214, in get_bot
chat_data = data["pageProps"]["data"]["chatOfBotDisplayName"]
~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^
KeyError: 'chatOfBotDisplayName'
WARNING:root:Server returned a status code of 500 while downloading https://poe.com/api/gql_POST. Retrying (1/10)...
WARNING:root:Server returned a status code of 500 while downloading https://poe.com/api/gql_POST. Retrying (2/10)...
WARNING:root:Server returned a status code of 500 while downloading https://poe.com/api/gql_POST. Retrying (3/10)...
WARNING:root:Server returned a status code of 500 while downloading https://poe.com/api/gql_POST. Retrying (4/10)...
WARNING:root:Server returned a status code of 500 while downloading https://poe.com/api/gql_POST. Retrying (5/10)...
WARNING:root:Server returned a status code of 500 while downloading https://poe.com/api/gql_POST. Retrying (6/10)...
WARNING:root:Server returned a status code of 500 while downloading https://poe.com/api/gql_POST. Retrying (7/10)...
WARNING:root:Server returned a status code of 500 while downloading https://poe.com/api/gql_POST. Retrying (8/10)...
WARNING:root:Server returned a status code of 500 while downloading https://poe.com/api/gql_POST. Retrying (9/10)...
WARNING:root:Server returned a status code of 500 while downloading https://poe.com/api/gql_POST. Retrying (10/10)...
Code:

```python
import poe

poe_token = 'token'
client = poe.Client(poe_token)

message = "Summarize the GNU GPL v3"
for chunk in client.send_message("capybara", message):
    pass
print(chunk["text"])
```
My IP is not banned; I can access poe.com. | closed | 2023-07-03T15:59:36Z | 2023-07-04T01:58:17Z | https://github.com/ading2210/poe-api/issues/141 | [
"bug"
] | nextmine | 4 |
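The repeated `KeyError` above comes from unconditionally indexing nested JSON (`data["pageProps"]["data"]["chatOfBotDisplayName"]`) when the server returns an error page instead of the expected payload. A minimal defensive-lookup sketch (a hypothetical helper, not part of poe-api):

```python
def dig(mapping, *keys, default=None):
    """Walk nested dicts, returning `default` if any key is missing."""
    current = mapping
    for key in keys:
        if not isinstance(current, dict) or key not in current:
            return default
        current = current[key]
    return current

data = {"pageProps": {"data": {}}}  # simulated error-page payload
chat = dig(data, "pageProps", "data", "chatOfBotDisplayName")
# chat is None instead of raising KeyError
```

A caller can then retry or log the raw response when the lookup returns `None`.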
mkhorasani/Streamlit-Authenticator | streamlit | 220 | For snowflake we use authenticator='externalbrowser' can this be implemented? | For Snowflake we use `authenticator='externalbrowser'`; can this be implemented?
Is there any way to do SSO with Snowflake?
https://discuss.streamlit.io/t/streamlit-cloud-snowflake-sso-integration/39269 | closed | 2024-10-07T19:28:39Z | 2024-10-08T07:22:35Z | https://github.com/mkhorasani/Streamlit-Authenticator/issues/220 | [
"help wanted"
] | mahanteshimath | 1 |
horovod/horovod | deep-learning | 3,091 | horovod installation: tensorflow not detected when using intel-tensorflow-avx512. | **Environment:**
1. Framework: TensorFlow
2. Framework version: intel-tensorflow-avx512==2.5.0
3. Horovod version: 0.22.1
4. MPI version: openmpi 4.0.3
5. CUDA version: N/A, cpu only
6. NCCL version: N/A, cpu only
7. Python version: 3.8
8. OS and version: Ubuntu focal
9. GCC version: 9.3.0
10. CMake version: 3.16.3
**Bug report:**
I'm trying to install horovod after installing intel-tensorflow-avx512. horovod fails to detect that version of tensorflow.
singularity buildfile is here:
https://github.com/kaufman-lab/build_containers/blob/8145f3c58d237e0c3953d45ff58cf750397bc781/geospatial_plus_ml_horovod4.1.0.def
in particular:
```
HOROVOD_WITH_TENSORFLOW=1 HOROVOD_WITH_MPI=1 HOROVOD_WITHOUT_GLOO=1 HOROVOD_WITHOUT_MXNET=1 HOROVOD_CPU_OPERATIONS=MPI HOROVOD_WITHOUT_PYTORCH=1 pip install --no-cache-dir horovod[tensorflow]==0.22.1 --no-dependencies --force-reinstall
```
build log is here:
https://github.com/kaufman-lab/build_containers/runs/3268356819?check_suite_focus=true
in particular, note the successful installation of tensorflow (specifically the intel-tensorflow-avx512 variant)
```
+ python3 -m pip freeze
absl-py==0.13.0
astunparse==1.6.3
cachetools==4.2.2
certifi==2021.5.30
cffi==1.14.6
charset-normalizer==2.0.4
cloudpickle==1.6.0
flatbuffers==1.12
future==0.18.2
gast==0.4.0
GDAL==3.0.4
google-auth==1.34.0
google-auth-oauthlib==0.4.5
google-pasta==0.2.0
grpcio==1.34.1
h5py==3.1.0
idna==3.2
intel-tensorflow-avx512==2.5.0
keras-nightly==2.5.0.dev2021032900
Keras-Preprocessing==1.1.2
Markdown==3.3.4
numpy==1.19.5
oauthlib==3.1.1
opt-einsum==3.3.0
packaging==21.0
Pillow==8.3.1
protobuf==3.17.3
psutil==5.8.0
pyasn1==0.4.8
pyasn1-modules==0.2.8
pycparser==2.20
pyparsing==2.4.7
PyYAML==5.4.1
requests==2.26.0
requests-oauthlib==1.3.0
rsa==4.7.2
scipy==1.7.1
six==1.15.0
tensorboard==2.5.0
tensorboard-data-server==0.6.1
tensorboard-plugin-wit==1.8.0
tensorflow-estimator==2.5.0
termcolor==1.1.0
typing==3.7.4.3
typing-extensions==3.7.4.3
urllib3==1.26.6
Werkzeug==2.0.1
wrapt==1.12.1
```
and the message saying that tensorflow couldn't be found:
```
CMake Error at /usr/share/cmake-3.16/Modules/FindPackageHandleStandardArgs.cmake:146 (message):
Could NOT find Tensorflow (missing: Tensorflow_LIBRARIES) (Required is at
least version "1.15.0")
``` | open | 2021-08-07T07:31:46Z | 2021-08-09T15:37:07Z | https://github.com/horovod/horovod/issues/3091 | [
"bug"
] | myoung3 | 4 |
mlfoundations/open_clip | computer-vision | 394 | seems a typo brought by the recent commit | https://github.com/mlfoundations/open_clip/blob/191d138296d6749396b2b15e5c3c9459b05e65b3/src/training/main.py#L88
should be `"%Y_%m_%d-%H_%M_%S"` without the ``"`"`` character. | closed | 2023-01-30T09:37:59Z | 2023-01-30T17:17:10Z | https://github.com/mlfoundations/open_clip/issues/394 | [] | wjfwzzc | 0 |
KevinMusgrave/pytorch-metric-learning | computer-vision | 686 | InfoNCE of pre-defined anchor, positive and negative pairs |
To begin with, I appreciate the tremendous amount of work that has been put into this repo! I have a question about using [InfoNCE](https://arxiv.org/pdf/1807.03748.pdf) loss.
I have a pre-defined anchor (Nxd), one positive (Nxd), and multiple negatives (NxMxd), where N is the number of triplets in the batch, d is the embedding size, and M is the number of negatives per anchor. The question now is: how can we calculate the InfoNCE loss over these anchors, positives, and negatives? | closed | 2024-02-29T20:04:17Z | 2024-03-06T17:21:29Z | https://github.com/KevinMusgrave/pytorch-metric-learning/issues/686 | [] | alaaj27 | 1 |
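For one triplet group, InfoNCE reduces to a softmax cross-entropy where the positive plays the role of the correct class among the M negatives: loss = -log(e^(a·p/t) / (e^(a·p/t) + Σ_j e^(a·n_j/t))). A pure-Python sketch of that computation (the temperature value and vectors are illustrative assumptions; a vectorized PyTorch version follows the same formula):

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def info_nce(anchor, positive, negatives, temperature=0.1):
    """InfoNCE for one anchor against its positive and M negatives."""
    logits = [dot(anchor, positive) / temperature]
    logits += [dot(anchor, neg) / temperature for neg in negatives]
    m = max(logits)  # log-sum-exp trick for numerical stability
    log_denom = m + math.log(sum(math.exp(x - m) for x in logits))
    return log_denom - logits[0]  # equals -log softmax of the positive

anchor = [1.0, 0.0]
positive = [0.9, 0.1]
negatives = [[-1.0, 0.0], [0.0, 1.0]]
loss = info_nce(anchor, positive, negatives)
# loss is near zero when the anchor is far more similar to the positive than to the negatives
```

For a batch of N triplet groups, compute this per anchor and average the losses.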
python-visualization/folium | data-visualization | 1,363 | difference in function and argument naming convention from Javascript API | It seems that the naming conventions of both function name and argument name are different from Javascript API.
Using draw-circle function as an example, on function names:
In Javascript API (see https://leafletjs.com/reference-1.6.0.html#circle ), the function is
`L.circle([50.5, 30.5], {radius: 200}).addTo(map);`
In Python API (see https://python-visualization.github.io/folium/modules.html ), the function is
`folium.vector_layers.Circle(location, radius, popup=None, tooltip=None, **kwargs)`
The difference lies in the leading-character capitalization, i.e., `circle()` vs `Circle()`.
On argument names, e.g., fill opacity for polygons:
In Javascript API, the argument is _fillOpacity_ , (see https://leafletjs.com/reference-1.6.0.html#path below)
Option | Type | Default | Description
-- | -- | -- | --
stroke | Boolean | true | Whether to draw stroke along the path. Set it to false to disable borders on polygons or circles.
color | String | '#3388ff' | Stroke color
weight | Number | 3 | Stroke width in pixels
opacity | Number | 1.0 | Stroke opacity
lineCap | String | 'round' | A string that defines shape to be used at the end of the stroke.
lineJoin | String | 'round' | A string that defines shape to be used at the corners of the stroke.
dashArray | String | null | A string that defines the stroke dash pattern. Doesn't work on Canvas-powered layers in some old browsers.
dashOffset | String | null | A string that defines the distance into the dash pattern to start the dash. Doesn't work on Canvas-powered layers in some old browsers.
fill | Boolean | depends | Whether to fill the path with color. Set it to false to disable filling on polygons or circles.
fillColor | String | * | Fill color. Defaults to the value of the color option
**fillOpacity** | Number | 0.2 | Fill opacity.
The Python API has no documentation, it says "refers to Javascript API"
`**kwargs – Other valid (possibly inherited) options. See: https://leafletjs.com/reference-1.6.0.html#circle`
But I have tested it: all those arguments with capital letters do not work. The Python version is in fact an underscore followed by lowercase letters, e.g., fillOpacity => fill_opacity, lineCap => line_cap, etc.
The difference in naming convention will cause quite a lot of inconvenience and incompatibility when creating an interface for both Python and Javascript. May I ask whether this is by design and intentional? Thanks! | closed | 2020-07-02T02:31:14Z | 2022-11-23T13:53:53Z | https://github.com/python-visualization/folium/issues/1363 | [] | xuancong84 | 3 |
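The mapping between the two conventions is mechanical, so a Leaflet option name can be converted programmatically before being passed as a Python keyword argument. A small sketch (the helper name is hypothetical, not part of folium):

```python
import re

def leaflet_to_python(name):
    """Convert a Leaflet camelCase option name to folium's snake_case form."""
    return re.sub(r"(?<!^)(?=[A-Z])", "_", name).lower()

options = {"fillOpacity": 0.2, "lineCap": "round", "dashArray": "5, 5"}
kwargs = {leaflet_to_python(k): v for k, v in options.items()}
# kwargs == {"fill_opacity": 0.2, "line_cap": "round", "dash_array": "5, 5"}
```

The converted kwargs can then be forwarded to e.g. `folium.vector_layers.Circle(location, radius, **kwargs)`.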
graphql-python/graphene-django | django | 1,448 | SyntaxError When Trying to Run Django Project with Graphene | Hello,
I am currently working on a large Django project that utilizes the Graphene framework to implement GraphQL in the application. I've encountered an issue while attempting to run the project, where a SyntaxError is raised, pointing to the `graphene_django/__init__.py` file on line 1.
Error message:
```
SyntaxError: invalid syntax
File "/media/fares/01D884D05EFC32C0/Projects When Learn/GraphQl/graphql/books/urls.py", line 3, in <module>
from graphene_django.views import GraphQLView
File "/media/fares/01D884D05EFC32C0/Projects When Learn/GraphQl/graphql/.venv/lib/python3.11/site-packages/graphene_django/__init__.py", line 1
xfrom .fields import DjangoConnectionField, DjangoListField
```
Upon investigating the issue, I noticed that there is a mistake in the import statement in the `graphene_django/__init__.py` file on the first line. An extra "x" character was mistakenly included at the beginning of the line.

The correct import should be as follows:
```python
from .fields import DjangoConnectionField, DjangoListField
```
Please review your imports in this file and related files to ensure proper syntax.
I hope this information proves helpful, and feel free to reach out if you need further assistance or have additional questions.
**Thank you very much!**
| closed | 2023-08-09T19:30:01Z | 2023-08-09T22:52:12Z | https://github.com/graphql-python/graphene-django/issues/1448 | [] | faresemad | 2 |
keras-team/keras | data-science | 20,830 | AttributeError: module 'keras._tf_keras.keras.layers' has no attribute 'LocallyConnected1D' | AttributeError: module 'keras._tf_keras.keras.layers' has no attribute 'LocallyConnected1D'

| closed | 2025-01-31T06:13:11Z | 2025-03-01T02:07:46Z | https://github.com/keras-team/keras/issues/20830 | [
"stat:awaiting response from contributor",
"stale",
"type:Bug"
] | dongloong | 3 |
noirbizarre/flask-restplus | flask | 132 | hosting behind nginx proxy error | I have the Api docs working perfectly locally but when I host with nginx I get this error:
Can't read from server. It may not have the appropriate access-control-origin settings.
I have enabled CORS on /doc/ in nginx. Any ideas? Is there a way to set the url? For prod environment, @noirbizarre is there a way to specify the url? It seems like behind nginx, the swagger.json url doesn't get set properly which can prevent hosting this in prod.
| closed | 2016-02-03T01:30:52Z | 2020-10-07T18:06:01Z | https://github.com/noirbizarre/flask-restplus/issues/132 | [] | harishkashyap | 11 |
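When Flask runs behind nginx, generated URLs such as the one for `swagger.json` use the scheme and host the app server sees, not the public ones, unless the `X-Forwarded-*` headers set by the proxy are honored. A minimal stand-in for what Werkzeug's `ProxyFix` middleware does (this sketch assumes nginx is configured with `proxy_set_header X-Forwarded-Proto $scheme;` and `proxy_set_header X-Forwarded-Host $host;`):

```python
def proxy_fix(app):
    """Rewrite the WSGI environ from the proxy's X-Forwarded-* headers."""
    def wrapped(environ, start_response):
        proto = environ.get("HTTP_X_FORWARDED_PROTO")
        if proto:
            environ["wsgi.url_scheme"] = proto
        host = environ.get("HTTP_X_FORWARDED_HOST")
        if host:
            environ["HTTP_HOST"] = host
        return app(environ, start_response)
    return wrapped

# app.wsgi_app = proxy_fix(app.wsgi_app)  # how it would wrap a Flask app
```

Only trust these headers when the proxy is the sole way to reach the app; in production, Werkzeug's maintained `ProxyFix` is the safer choice.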
HIT-SCIR/ltp | nlp | 696 | 关于4.2.13版本的依存句法分析 | 请问4.2.13版本的依存句法分析的结果为什么只有head和label,没有relation部分?这样对于分析和结果存储而言很不方便。

在[4.1.4版本文档](https://ltp.readthedocs.io/zh-cn/latest/index.html)中这个功能尚且保有,请问4.2的“破坏性更新”没有更新这个功能吗?
麻烦提供一些可能的解决方案,谢谢! | open | 2024-05-11T08:53:06Z | 2024-05-11T08:53:06Z | https://github.com/HIT-SCIR/ltp/issues/696 | [] | hezonglianheng | 0 |
PaddlePaddle/ERNIE | nlp | 466 | Cuda error(77), an illegal memory access was encountered. | `140it [03:24, 1.25s/it]train loss 0.00011
150it [03:39, 1.63s/it]train loss 0.00007
160it [03:55, 1.49s/it]train loss 0.00009
166it [04:03, 1.33s/it]Traceback (most recent call last):
File "es.py", line 85, in <module>
opt.minimize(loss)
File "</usr/local/lib/python3.6/site-packages/decorator.py:decorator-gen-65>", line 2, in minimize
File "/usr/local/lib/python3.6/site-packages/paddle/fluid/dygraph/base.py", line 273, in __impl__
return func(*args, **kwargs)
File "/usr/local/lib/python3.6/site-packages/paddle/fluid/optimizer.py", line 837, in minimize
loss, startup_program=startup_program, params_grads=params_grads)
File "/usr/local/lib/python3.6/site-packages/ernie/optimization.py", line 158, in apply_optimize
L.assign(p * (1. - self.wd * self.current_step_lr()), p)
File "</usr/local/lib/python3.6/site-packages/decorator.py:decorator-gen-62>", line 2, in current_step_lr
File "/usr/local/lib/python3.6/site-packages/paddle/fluid/wrapped_decorator.py", line 25, in __impl__
return wrapped_func(*args, **kwargs)
File "/usr/local/lib/python3.6/site-packages/paddle/fluid/framework.py", line 216, in __impl__
return func(*args, **kwargs)
File "/usr/local/lib/python3.6/site-packages/paddle/fluid/optimizer.py", line 349, in current_step_lr
if current_lr:
File "/usr/local/lib/python3.6/site-packages/paddle/fluid/dygraph/varbase_patch_methods.py", line 215, in __bool__
return self.__nonzero__()
File "/usr/local/lib/python3.6/site-packages/paddle/fluid/dygraph/varbase_patch_methods.py", line 212, in __nonzero__
return bool(np.all(tensor.__array__() > 0))
paddle.fluid.core_avx.EnforceNotMet:
--------------------------------------------
C++ Call Stacks (More useful to developers):
--------------------------------------------
0 std::string paddle::platform::GetTraceBackString<char const*>(char const*&&, char const*, int)
1 paddle::platform::EnforceNotMet::EnforceNotMet(std::__exception_ptr::exception_ptr, char const*, int)
2 paddle::platform::GpuMemcpySync(void*, void const*, unsigned long, cudaMemcpyKind)
----------------------
Error Message Summary:
----------------------
ExternalError: Cuda error(77), an illegal memory access was encountered.
[Advise: The device encountered a load or store instruction on an invalid memory address. This leaves the process in an inconsistentstate and any further CUDA work will return the same error. To continue using CUDA, the process must be terminated and relaunched. ] at (/paddle/paddle/fluid/platform/gpu_info.cc:281)
` | closed | 2020-05-26T01:11:43Z | 2021-03-23T19:29:19Z | https://github.com/PaddlePaddle/ERNIE/issues/466 | [] | jtyoui | 7 |
errbotio/errbot | automation | 1,372 | documentation website is broken | In order to let us help you better, please fill out the following fields as best you can:
### I am...
* [x] Reporting a bug
* [ ] Suggesting a new feature
* [ ] Requesting help with running my bot
* [ ] Requesting help writing plugins
* [ ] Here about something else
### I am running...
* Errbot version: n/a
* OS version:
* Python version:
* Using a virtual environment: yes/no
### Issue description
The documentation website [errbot.io](http://errbot.io/) is currently down.
### Steps to reproduce
Go to [errbot.io](http://errbot.io/)
Error 1014 Ray ID: 5018a421cc92c16b • 2019-08-05 12:05:45 UTC
CNAME Cross-User Banned
### Additional info
If you have any more information, please specify it here.
| closed | 2019-08-05T12:07:29Z | 2019-08-10T01:16:46Z | https://github.com/errbotio/errbot/issues/1372 | [
"type: bug",
"type: documentation"
] | rparsonsbb | 2 |
Yorko/mlcourse.ai | matplotlib | 756 | Proofread topic 5 | - Fix issues
- Fix typos
- Correct the translation where needed
- Add images where necessary | closed | 2023-10-24T07:41:21Z | 2024-08-25T07:50:33Z | https://github.com/Yorko/mlcourse.ai/issues/756 | [
"enhancement",
"wontfix",
"articles"
] | Yorko | 1 |
vllm-project/vllm | pytorch | 14,399 | [Bug]: stop_sequences is applied to both reasoning_content and content | ### Your current environment
<details>
<summary>The output of `python collect_env.py`</summary>
```text
Your output of `python collect_env.py` here
```
</details>
### 🐛 Describe the bug
I'm using v0.7.3 with QwQ 32B.
It looks like vllm uses stop_sequences to truncate both reasoning_content and content.
If vllm truncates inside reasoning_content, no content is returned at all.
This is not expected; stop_sequences should be applied only to content.
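The expected behavior can be sketched as a post-processing step that truncates only the final answer and leaves the reasoning untouched (a hypothetical helper, not vLLM's actual implementation):

```python
def apply_stop(text, stop_sequences):
    """Cut `text` at the earliest occurrence of any stop sequence."""
    cut = len(text)
    for stop in stop_sequences:
        idx = text.find(stop)
        if idx != -1:
            cut = min(cut, idx)
    return text[:cut]

reasoning_content = "Let me think... the word STOP may appear mid-reasoning."
content = "The answer is 42. STOP ignored trailing text"

final = {
    "reasoning_content": reasoning_content,    # untouched by stop sequences
    "content": apply_stop(content, ["STOP"]),  # truncated at the stop sequence
}
```

Applying the stop check only after the reasoning segment ends would also avoid the empty-content case described above.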
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | open | 2025-03-07T02:31:02Z | 2025-03-21T01:40:13Z | https://github.com/vllm-project/vllm/issues/14399 | [
"bug"
] | tonyaw | 5 |
pallets-eco/flask-sqlalchemy | flask | 992 | Using dataclass with declarative | The SQL-Alchemy docs [show an example](https://docs.sqlalchemy.org/en/14/orm/mapping_styles.html#example-two-dataclasses-with-declarative-table) to map a dataclass to a table using:
```Python
from dataclasses import dataclass
from dataclasses import field
from sqlalchemy import Column
from sqlalchemy import Integer
from sqlalchemy.orm import registry
mapper_registry = registry()
@mapper_registry.mapped
@dataclass
class User:
__tablename__ = "user"
__sa_dataclass_metadata_key__ = "sa"
id: int = field(
init=False, metadata={"sa": Column(Integer, primary_key=True)}
)
```
I tried to achieve the same thing using Flask-SQLAlchemy:
```Python
from dataclasses import dataclass
from dataclasses import field
from sqlalchemy import Column
from sqlalchemy import Integer
from flask_sqlalchemy import SQLAlchemy
db = SQLAlchemy()
@dataclass
class User(db.Model):
__sa_dataclass_metadata_key__ = "sa"
id: int = field(
init=False, metadata={"sa": Column(Integer, primary_key=True)}
)
```
which yields the following error: `sqlalchemy.exc.ArgumentError: Mapper mapped class User->user could not assemble any primary key columns for mapped table 'user'`
I also tried to use the `mapper_registry` with
```Python
@db.Model.registry.mapped
@dataclass
class User:
__tablename__ = 'user'
__sa_dataclass_metadata_key__ = "sa"
id: int = field(
init=False, metadata={"sa": Column(Integer, primary_key=True)}
)
```
but I then lose all the benefits from Flask-SQLAlchemy.
Would it be possible to add support for dataclasses in Flask-SQLAlchemy? | closed | 2021-08-19T15:20:41Z | 2021-12-26T00:09:37Z | https://github.com/pallets-eco/flask-sqlalchemy/issues/992 | [] | jonathanberthias | 1 |
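The `__sa_dataclass_metadata_key__` machinery works because dataclasses expose per-field `metadata`: the mapper scans `dataclasses.fields()` and pulls each `Column` out of the metadata dict. That lookup can be illustrated with the standard library alone (the string below stands in for a real `Column` object):

```python
from dataclasses import dataclass, field, fields

SA_KEY = "sa"  # the value of __sa_dataclass_metadata_key__

@dataclass
class User:
    id: int = field(
        init=False,
        default=0,
        metadata={SA_KEY: "Column(Integer, primary_key=True)"},
    )

# What a declarative mapper does under the hood:
columns = {f.name: f.metadata[SA_KEY] for f in fields(User) if SA_KEY in f.metadata}
# columns == {"id": "Column(Integer, primary_key=True)"}
```

This would explain the error above: if `db.Model`'s metaclass does not perform this scan, the `Column` hidden in the field metadata is never registered, so no primary key is found.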
junyanz/pytorch-CycleGAN-and-pix2pix | pytorch | 1,664 | inference of a single image (help) | I have trained a cycleGan model using ```!python train.py --dataroot /content/drive/MyDrive/project/dataset --name F2F --model cycle_gan --display_id -1 ```
To infer set of images from a folder i used
```
opt = TestOptions()
# option definitions occur here
dataset = create_dataset(opt)
# Initialize the model
model = create_model(opt)
model.setup(opt)
model.eval()
data_iter = iter(dataset.dataloader)
data_dict = next(data_iter)
input_image_tensor = data_dict['A']
data = {'A': input_image_tensor,'A_paths': ''}
model.set_input(data)
model.test()
visuals = model.get_current_visuals()
output_image = visuals['fake']
output_image_np = output_image.squeeze().cpu().numpy().transpose(1, 2, 0)
output_image_np = ((output_image_np - output_image_np.min()) / (output_image_np.max() - output_image_np.min()) * 255).astype(np.uint8)
output_image_np = cv2.cvtColor(output_image_np, cv2.COLOR_BGR2RGB)
cv2_imshow(output_image_np)
```
The above snippet worked as expected and generated good results.
I would like to infer a single image without going through the loader.
I tried to imitate the transforms inside the ```create_dataset(opt)``` function using this ...
```
def preprocess(image):
    if image.ndim == 2 or image.shape[2] == 1:
        image = cv2.cvtColor(image, cv2.COLOR_GRAY2RGB)
    elif image.shape[2] == 4:
        image = cv2.cvtColor(image, cv2.COLOR_BGRA2BGR)
    elif image.shape[2] == 3:
        image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
    image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
    pil_image = transforms.ToPILImage()(image)
    transform_pipeline = transforms.Compose([
        transforms.Resize(286),
        transforms.CenterCrop(256),
        transforms.ToTensor(),
        transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
    ])
    image_tensor = transform_pipeline(pil_image)
    image_tensor = image_tensor.unsqueeze(0)
    return image_tensor
```
```
input_image_tensor = preprocess(input_image)
data = {'A': input_image_tensor,'A_paths': ''}
model.set_input(data)
model.test()
visuals = model.get_current_visuals()
output_image = visuals['fake']
output_image_np = output_image.squeeze().cpu().numpy().transpose(1, 2, 0)
output_image_np = ((output_image_np - output_image_np.min()) / (output_image_np.max() - output_image_np.min()) * 255).astype(np.uint8)
output_image_np = cv2.cvtColor(output_image_np, cv2.COLOR_BGR2RGB)
cv2_imshow(output_image_np)
```
But the results are very blurry.
Any help on how to achieve this would be much appreciated!
| open | 2024-06-25T21:00:42Z | 2024-06-25T21:09:13Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1664 | [] | themoshe12 | 1 |
RomelTorres/alpha_vantage | pandas | 33 | get_digital_currency_daily() market = USD | One small issue using "get_digital_currency_daily()". The result is returned in two currencies, market and USD, OK. But if I set the market to USD I get duplicate Pandas labels.
So indexing a row ( row['. close (USD)'] ) results in a two element series for USD, and a scalar for other markets.
The duplicate data comes from AV, but is not really an issue for them as they present data as a structured list and parsing would normally take the first match. However inserting all this data into a table (Pandas) results in duplicate keys. Indexing the table gets two matches and returns a list of two values rather than the single USD value we get from other markets.
Test case:
[usd.zip](https://github.com/RomelTorres/alpha_vantage/files/1552627/usd.zip)
This prints:
For GBP price returns <class 'numpy.float64'>
For USD price returns <class 'pandas.core.series.Series'>
| closed | 2017-12-12T17:31:22Z | 2018-01-06T23:16:10Z | https://github.com/RomelTorres/alpha_vantage/issues/33 | [
"bug"
] | RobertFlatt | 4 |
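The scalar-vs-Series difference can be reproduced without pandas: when two fields carry the same display label, label-based lookup has to return a collection, whereas a unique label can return the bare value. A stdlib sketch (field names and values are illustrative of the AV response shape, not real data):

```python
rows = [
    ("4a. close", "close (USD)", 16470.1),  # market close, labeled USD when market=USD
    ("4b. close", "close (USD)", 16470.1),  # the always-present USD close
    ("5. volume", "volume", 1234.5),
]

def lookup(label):
    matches = [value for _key, name, value in rows if name == label]
    return matches[0] if len(matches) == 1 else matches

lookup("volume")       # unique label: a bare scalar
lookup("close (USD)")  # duplicate label: a two-element list
```

This mirrors the report above: indexing a row by `'close (USD)'` yields a scalar for other markets but a two-element Series when market=USD.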
OpenVisualCloud/CDN-Transcode-Sample | dash | 3 | Feature request to support AV1 RTMP | Support AV1 RTMP streaming | closed | 2019-04-15T06:26:33Z | 2019-12-25T06:08:14Z | https://github.com/OpenVisualCloud/CDN-Transcode-Sample/issues/3 | [
"enhancement"
] | czhou26 | 0 |
AirtestProject/Airtest | automation | 716 | MacOS airtest 1.2.3: the rgb option in image recognition has no effect | (Please fill in the template below as completely as possible; it helps us locate and solve the problem quickly, thanks for your cooperation. Otherwise the issue will be closed directly.)
**(Important! Issue category)**
* AirtestIDE (test/dev environment) usage problems -> https://github.com/AirtestProject/AirtestIDE/issues
* Widget recognition, UI tree, or poco library errors -> https://github.com/AirtestProject/Poco/issues
* Image recognition or device control problems -> follow the steps below
**Describe the bug**
A widget's content is exactly the same and only its color differs. I went through the tutorial and found that setting rgb to True should distinguish colors. However, after I enabled the rgb option, templates of a different color were still matched successfully.
(Briefly and clearly summarize the problem, or paste the error traceback.)
```shell
[DEBUG][Start debugging...]
/Applications/AirtestIDE.app/Contents/MacOS/airtest/core/android/static/adb/mac/adb -P 5037 -s e66cdc75 wait-for-device
/Applications/AirtestIDE.app/Contents/MacOS/airtest/core/android/static/adb/mac/adb -P 5037 -s e66cdc75 shell getprop ro.build.version.sdk
Try finding:
Template(tpl1586245090478.png)
/Applications/AirtestIDE.app/Contents/MacOS/airtest/core/android/static/adb/mac/adb -P 5037 -s e66cdc75 shell screencap -p
/Applications/AirtestIDE.app/Contents/MacOS/airtest/core/android/static/adb/mac/adb -P 5037 -s e66cdc75 shell dumpsys display
/Applications/AirtestIDE.app/Contents/MacOS/airtest/core/android/static/adb/mac/adb -P 5037 -s e66cdc75 shell dumpsys SurfaceFlinger
/Applications/AirtestIDE.app/Contents/MacOS/airtest/core/android/static/adb/mac/adb -P 5037 -s e66cdc75 shell getevent -p
try match with SURFMatching
[SURF] threshold=0.7, result={'result': (546, 1394), 'rectangle': [(135, 1330), (135, 1458), (958, 1458), (958, 1330)], 'confidence': 0.7778783074021339}
find_best_result() run time is 2.48 s.
match result: {'result': (546, 1394), 'rectangle': [(135, 1330), (135, 1458), (958, 1458), (958, 1330)], 'confidence': 0.7778783074021339}
(546, 1394)
[DEBUG][End debugging....]
(paste the traceback or other error messages here)
```
**Screenshots**
(Attach screenshots of the problem, if any.)
(For image recognition and device issues produced in AirtestIDE, please also paste the related error messages from the AirtestIDE console window.)
**Steps to reproduce**
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
**Expected behavior**
Color matching results should be handled correctly: with the rgb option checked, red and gray should be properly distinguished.
**Python version:** `python3.7.3`
**airtest version:** `1.2.3`
**Device:**
- Model: [e.g. 360 N7lite]
- OS: [e.g. Android 8.1]
- (other info)
**Other environment info**
Limited resources; not tried.
(Other runtime environments, e.g. fails on Linux Ubuntu 16.04 but works normally on Windows.)
| open | 2020-04-07T09:42:09Z | 2020-07-08T09:49:53Z | https://github.com/AirtestProject/Airtest/issues/716 | [] | wxhou | 6 |
proplot-dev/proplot | data-visualization | 82 | Adding cyclic point feature | I will address the same issue as for **Cartopy** (https://github.com/SciTools/cartopy/issues/1402). I would find it extremely useful to have an automatic behavior of adding a **cyclic point** when needed, or at least an option to add one when doing a plot.
Here is an example of what I mean (data: [ens_avg_snow.zip](https://github.com/lukelbd/proplot/files/3922831/ens_avg_snow.zip)):
```python
import xarray as xr
import proplot as plot
ens_avg_snow = xr.open_dataarray('ens_avg_snow.nc')
f, axs = plot.subplots(proj='cyl', width=8)
m = axs[0].contourf(ens_avg_snow, cmap='BuRd')
f.colorbar(m)
axs.format(
geogridlinewidth=0.5, geogridcolor='gray8', geogridalpha=0.5, labels=True,
coast=True, ocean=True, oceancolor='gray3', latlim=(20,90),
)
```

As you can see, there is a white line at 0° of longitude. I usually use the `add_cyclic_point` function of `cartopy.utils` but it is not straightforward for DataArrays because it doesn't keep the coordinates and attributes. I recently found some very useful code here: https://github.com/darothen/plot-all-in-ncfile/blob/master/plot_util.py that allows to do it easily with the function `cyclic_dataarray` and there is also a `check_cyclic` that could be used for automatically checking it.
Here is the function that allows adding a cyclic point (from the link above):
```python
import xarray as xr
from cartopy.util import add_cyclic_point

# https://github.com/darothen/plot-all-in-ncfile/blob/master/plot_util.py
def cyclic_dataarray(da, coord='lon'):
    """Add a cyclic coordinate point to a DataArray along a specified
    named coordinate dimension.
    """
    assert isinstance(da, xr.DataArray)
    lon_idx = da.dims.index(coord)
    cyclic_data, cyclic_coord = add_cyclic_point(da.values,
                                                 coord=da.coords[coord],
                                                 axis=lon_idx)
    # Copy and add the cyclic coordinate and data
    new_coords = dict(da.coords)
    new_coords[coord] = cyclic_coord
    new_da = xr.DataArray(cyclic_data, dims=da.dims, coords=new_coords)
    # Copy the attributes for the re-constructed data and coords
    for att, val in da.attrs.items():
        new_da.attrs[att] = val
    for c in da.coords:
        for att in da.coords[c].attrs:
            new_da.coords[c].attrs[att] = da.coords[c].attrs[att]
    return new_da
```
Thus `ens_avg_snow_cyclic = cyclic_dataarray(ens_avg_snow)` makes it possible to redraw the previous plot without the white line. Incorporating this directly into **ProPlot** would be nice, so that we can just add an option like `add_cyclic=True` (we just need to be careful about which dimension to make cyclic, or have another way to pass it).
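A sketch of what such an automatic check could look like (a heuristic of my own, not the `check_cyclic` from the linked file): a regular global grid needs a cyclic point exactly when the gap from the last longitude back to the first one equals the grid spacing.

```python
import numpy as np

def needs_cyclic_point(lon, tol=1e-3):
    """Heuristic: True for a regular global grid that stops one grid step
    short of wrapping back to its first longitude (i.e. the cyclic point
    is missing)."""
    lon = np.asarray(lon, dtype=float)
    step = np.diff(lon).mean()
    wrap_gap = (lon[0] + 360.0) - lon[-1]
    return bool(abs(wrap_gap - step) < tol)
```

With something like this, `contourf` could call `cyclic_dataarray` automatically whenever `needs_cyclic_point(da.lon)` is true, and skip it for regional grids or grids that already include the duplicated endpoint.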
| closed | 2019-12-04T16:27:00Z | 2021-06-28T23:34:14Z | https://github.com/proplot-dev/proplot/issues/82 | [
"support"
] | mickaellalande | 13 |
plotly/dash | plotly | 2,490 | Fixed headers in table modify conditional column width | **Describe your context**
```
dash 2.7.1
dash-bootstrap-components 1.2.1
dash-core-components 2.0.0
dash-html-components 2.0.0
dash-mantine-components 0.11.0
dash-table 5.0.0
```
**Describe the bug**
Fixing the headers of a table modifies the height of the table and the pre-specified width of columns
**Expected behavior**
When these two properties are specified in a table, the fixed_rows setting shouldn't overwrite/modify the column width.
```
fixed_rows={'headers': True, 'data':0}
style_cell_conditional=[
{"if": {"column_id": "company_name"}, "width": "45%","min-width": "45%","max-width": "45%"},
{"if": {"column_id": "status"}, "width": "12%", "min-width": "12%","max-width": "12%"},
```
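For reference, the Dash documentation's width guidance suggests that percentage widths and `fixed_rows` interact badly because the fixed header and the body are measured separately. A hedged sketch of the commonly suggested mitigation (pixel widths everywhere plus a table-level `minWidth`; the exact pixel values here are illustrative, not from my app):

```python
# These would be passed as keyword arguments to dash_table.DataTable.
# Pixel widths on every column (instead of percentages) plus a table-level
# minWidth keep the separately rendered header and body in sync.
style_table = {"minWidth": "100%"}
style_cell_conditional = [
    {"if": {"column_id": "company_name"},
     "minWidth": "360px", "width": "360px", "maxWidth": "360px"},
    {"if": {"column_id": "status"},
     "minWidth": "100px", "width": "100px", "maxWidth": "100px"},
]
```

Even if this works around the symptom, percentage widths being silently overridden still looks like a bug.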
**Code to replicate the issue**
```
import dash
from dash import dash_table, html, dcc, Input, Output, State, callback
import pandas as pd
import dash_design_kit as ddk
import random
N=50
df = pd.DataFrame({
'company_name': random.choices(['A','B','C','D'], k=N),
'status': random.choices(['Complete','Approved','Pending','Declined'], k=N),
'date_requested': random.choices(['Jan','Feb','Mar','Apr','May'], k=N),
'link': ['link']*N
})
app = dash.Dash(__name__)
# https://github.com/plotly/dash-table/pull/793#pullrequestreview-431923424
app.layout = html.Div([
html.Button(id='switch', children='Fix headers'),
dash_table.DataTable(
id='table',
#fixed_rows={'headers': True, 'data':0},
columns=[{"name":c, "id":c} for c in df.columns],
data=df.to_dict('records'),
style_header={
"backgroundColor": "#002B80",
"fontWeight": "bold",
"color": "#FFFFFF",
"text-align": "left",
"border": "0px",
"padding-left": "5px",
},
style_cell={"textAlign": "left"},
style_data_conditional=[
{"if": {"filter_query": "{status} = Complete","column_id": "status",},"color": "#428959"},
{"if": {"filter_query": "{status} = Approved", "column_id": "status", }, "color": "#428959"},
{"if": {"filter_query": "{status} = Pending","column_id": "status"},"color": "#F1C00E"},
{"if": {"filter_query": "{status} = Declined", "column_id": "status"}, "color": "#C73E1D"},
{"if": {"filter_query": "{status} contains 'Review'", "column_id": "status"}, "color": "#2A4F87"},
],
style_cell_conditional=[
{"if": {"column_id": "company_name"}, "width": "45%","min-width": "45%","max-width": "45%"},
{"if": {"column_id": "status"}, "width": "12%", "min-width": "12%","max-width": "12%"},
],
)],
style={'padding':'20px'})
@callback(
Output('table', 'fixed_rows'),
Output('switch', 'children'),
Input('switch', 'n_clicks'),
State('switch', 'children'),
prevent_initial_call=True
)
def switch_fixed_rows(click, current_value) :
if current_value == 'Fix headers':
return {'headers': True, 'data':0}, "Unfix headers"
elif current_value == 'Unfix headers':
return {'headers': False}, "Fix headers"
if __name__ == '__main__':
app.run_server(debug=True)
```
**Screenshots**
https://user-images.githubusercontent.com/101562106/229153874-31f5f7b9-25b0-4f11-8c6f-b45783473b36.mov
| open | 2023-03-31T14:49:33Z | 2024-08-13T19:29:52Z | https://github.com/plotly/dash/issues/2490 | [
"bug",
"dash-data-table",
"P3"
] | celia-lm | 2 |
pydantic/FastUI | pydantic | 366 | Add support for file links | Hi! We're looking to adopt FastUI for better experience of using and testing our backend web services. It's going down well here so far, so thanks for the great work.
As I see it, there are two kinds of link component FastUI supports:
1. links to routes defined within the application, with addresses relative to a root for the current router (at minimum, your host URL)
2. links that begin with "http" for external stuff, which is the only exception to (1).
Could you extend the second of these to cover paths beginning with "file" as well please?
Thanks in advance for considering this. | closed | 2024-11-13T21:16:16Z | 2024-11-16T14:50:23Z | https://github.com/pydantic/FastUI/issues/366 | [] | JamesRamsden-Naimuri | 2 |
piskvorky/gensim | nlp | 3,166 | LSI gets stuck and connection to Jupyter is lost | <!--
**IMPORTANT**:
- Use the [Gensim mailing list](https://groups.google.com/forum/#!forum/gensim) to ask general or usage questions. Github issues are only for bug reports.
- Check [Recipes&FAQ](https://github.com/RaRe-Technologies/gensim/wiki/Recipes-&-FAQ) first for common answers.
Github bug reports that do not include relevant information and context will be closed without an answer. Thanks!
-->
#### Problem description
I want to compute an LSI model, but it gets stuck partway through a chunk, no matter how small the chunk size is.
#### Steps/code/corpus to reproduce
```python
lsi_model = models.LsiModel(corpus_tfidf, id2word=dictionary, num_topics=300, chunksize=500)
```
Here is the log from loglevel=DEBUG:
```
2021-06-06 19:35:18,168 : INFO : using serial LSI version on this node
2021-06-06 19:35:18,169 : INFO : updating model with new documents
Processing 299/3332249 (0.01%)
2021-06-06 19:35:18,522 : INFO : preparing a new chunk of documents
2021-06-06 19:35:18,523 : DEBUG : converting corpus to csc format
2021-06-06 19:35:18,531 : INFO : using 100 extra samples and 2 power iterations
2021-06-06 19:35:18,532 : INFO : 1st phase: constructing (1542840, 400) action matrix
Processing 500/3332249 (0.02%)
2021-06-06 19:35:18,563 : INFO : orthonormalizing (1542840, 400) action matrix
2021-06-06 19:35:21,646 : DEBUG : computing QR of (1542840, 400) dense matrix
2021-06-06 19:35:44,717 : DEBUG : running 2 power iterations
2021-06-06 19:35:52,571 : DEBUG : computing QR of (1542840, 400) dense matrix
2021-06-06 19:36:21,354 : DEBUG : computing QR of (1542840, 400) dense matrix
2021-06-06 19:36:48,119 : INFO : 2nd phase: running dense svd on (400, 500) matrix
2021-06-06 19:36:49,419 : INFO : computing the final decomposition
2021-06-06 19:36:49,420 : INFO : keeping 300 factors (discarding 9.516% of energy spectrum)
```
Then it gets stuck and after some time Jupyter shows me the following error message:
```
Server Connection Error
A connection to the Jupyter server could not be established. JupyterLab will continue trying to reconnect. Check your network connection or Jupyter server configuration.
```
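For what it's worth, a back-of-the-envelope estimate (my own numbers, not gensim output) suggests this may be an out-of-memory kill rather than a hang: the log shows dense (1542840, 400) action matrices, each of which is about 4.6 GiB in float64, and the power iterations plus QR need several such temporaries, which is enough for the OS to kill the kernel and drop the Jupyter connection.

```python
# Rough size of one dense intermediate reported in the log above.
terms, extra = 1_542_840, 400      # shape of the (1542840, 400) action matrix
bytes_per_float = 8                # float64

one_matrix_gb = terms * extra * bytes_per_float / 1024**3
print(f"one dense action matrix: {one_matrix_gb:.1f} GiB")  # ~4.6 GiB
```

If that is the cause, reducing the vocabulary size (the 1542840 terms) should matter far more than reducing `chunksize`.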
#### Versions
```
Python 3.9.5 (default, May 14 2021, 00:00:00)
[GCC 11.1.1 20210428 (Red Hat 11.1.1-1)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import platform; print(platform.platform())
Linux-5.12.8-300.fc34.x86_64-x86_64-with-glibc2.33
>>> import sys; print("Python", sys.version)
Python 3.9.5 (default, May 14 2021, 00:00:00)
[GCC 11.1.1 20210428 (Red Hat 11.1.1-1)]
>>> import struct; print("Bits", 8 * struct.calcsize("P"))
Bits 64
>>> import numpy; print("NumPy", numpy.__version__)
NumPy 1.19.5
>>> import scipy; print("SciPy", scipy.__version__)
SciPy 1.6.3
>>> import gensim; print("gensim", gensim.__version__)
gensim 4.0.1
>>> from gensim.models import word2vec;print("FAST_VERSION", word2vec.FAST_VERSION)
FAST_VERSION 1
```
### Additional info
```
>>> print(numpy.show_config())
blas_mkl_info:
NOT AVAILABLE
blis_info:
NOT AVAILABLE
openblas_info:
libraries = ['openblas', 'openblas']
library_dirs = ['/usr/local/lib']
language = c
define_macros = [('HAVE_CBLAS', None)]
blas_opt_info:
libraries = ['openblas', 'openblas']
library_dirs = ['/usr/local/lib']
language = c
define_macros = [('HAVE_CBLAS', None)]
lapack_mkl_info:
NOT AVAILABLE
openblas_lapack_info:
libraries = ['openblas', 'openblas']
library_dirs = ['/usr/local/lib']
language = c
define_macros = [('HAVE_CBLAS', None)]
lapack_opt_info:
libraries = ['openblas', 'openblas']
library_dirs = ['/usr/local/lib']
language = c
define_macros = [('HAVE_CBLAS', None)]
None
>>> print(scipy.show_config())
lapack_mkl_info:
NOT AVAILABLE
openblas_lapack_info:
libraries = ['openblas', 'openblas']
library_dirs = ['/usr/local/lib']
language = c
define_macros = [('HAVE_CBLAS', None)]
lapack_opt_info:
libraries = ['openblas', 'openblas']
library_dirs = ['/usr/local/lib']
language = c
define_macros = [('HAVE_CBLAS', None)]
blas_mkl_info:
NOT AVAILABLE
blis_info:
NOT AVAILABLE
openblas_info:
libraries = ['openblas', 'openblas']
library_dirs = ['/usr/local/lib']
language = c
define_macros = [('HAVE_CBLAS', None)]
blas_opt_info:
libraries = ['openblas', 'openblas']
library_dirs = ['/usr/local/lib']
language = c
define_macros = [('HAVE_CBLAS', None)]
None
``` | closed | 2021-06-06T17:42:02Z | 2021-06-06T20:20:59Z | https://github.com/piskvorky/gensim/issues/3166 | [] | raffaem | 3 |
ipython/ipython | data-science | 14,775 | Release and unpin matplotlib-inline | open | 2025-02-23T18:11:36Z | 2025-02-23T18:11:36Z | https://github.com/ipython/ipython/issues/14775 | [] | Carreau | 0 | |
OpenInterpreter/open-interpreter | python | 1,537 | Bonkers | ### Describe the bug
"Open Interpreter overcomes these limitations by running in your local environment. It has full access to the internet, isn't restricted by time or file size, and can utilize any package or library.
This combines the power of GPT-4's Code Interpreter with the flexibility of your local development environment."
Are you insane?
### Reproduce
Open up supply chain poisoning with root access
Put computer under control of random entity
'Press Go! button'
### Expected behavior
I'm smart enough not to do this
### Screenshots
_No response_
### Open Interpreter version
all
### Python version
all
### Operating System name and version
all
### Additional context
_No response_ | closed | 2024-11-25T07:52:52Z | 2024-11-25T14:50:31Z | https://github.com/OpenInterpreter/open-interpreter/issues/1537 | [] | Wokemup | 0 |
oegedijk/explainerdashboard | dash | 243 | Does dtreeviz lib update of API for 2.0 release affect explainerdashboard? | Hi @oegedijk, @tlapusan and I are creating a big change to the API for dtreeviz, although we hope it is backwards compatible. Regardless, you will start to get deprecation warnings if you pull in dtreeviz 2.0 (coming out shortly). The current dev branch is the following if you'd like to take a look:
https://github.com/parrt/dtreeviz/tree/dev
Happy to make tweaks to smooth any transitions! The functionality should be the same. | closed | 2022-12-26T22:30:50Z | 2023-02-12T12:47:35Z | https://github.com/oegedijk/explainerdashboard/issues/243 | [] | parrt | 18 |
koaning/scikit-lego | scikit-learn | 479 | [FEATURE] Alternative Scoring and Threshold Adjustment for sklego.meta.ZeroInflatedRegressor |
I have been doing some experimentation with the Meta Model sklego.meta.ZeroInflatedRegressor
One of the key things I need to do is adjust the probability threshold at which a prediction is considered to be Zero or Non-Zero. This is critical because it allows me to calibrate the number of zeros output.
In addition, I am looking at alternative scoring methods, for example using the classifier's output to modulate the regression output. Rather than the final score being a Boolean choice between zero and the regression model's output, the prediction becomes prob_non_zero * regression_output. This approach looks promising as a way to recalibrate a regression model that is slightly biased toward predictions on the high side.
I have an initial implementation of this, and I have set it up such that the default parameters will maintain current behaviour.
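To make the two scoring modes concrete, here is a minimal sketch (names like `score_mode` and the weighted formula are my proposal, not the current scikit-lego API):

```python
import numpy as np

def zero_inflated_predict(classifier_proba, regressor_pred,
                          threshold=0.5, score_mode="threshold"):
    """classifier_proba is P(y != 0) per sample; regressor_pred is the
    regression model's raw prediction per sample."""
    p_nonzero = np.asarray(classifier_proba, dtype=float)
    regressor_pred = np.asarray(regressor_pred, dtype=float)
    if score_mode == "threshold":
        # Boolean choice: zero below the (now adjustable) threshold.
        return np.where(p_nonzero >= threshold, regressor_pred, 0.0)
    # "weighted": modulate the regression output by the probability.
    return p_nonzero * regressor_pred
```

The defaults (`threshold=0.5`, `score_mode="threshold"`) reproduce the current behaviour, so existing users are unaffected.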
Cheers
John
| closed | 2021-08-27T10:20:07Z | 2022-08-02T06:06:21Z | https://github.com/koaning/scikit-lego/issues/479 | [
"enhancement"
] | john-hawkins | 11 |
fastapi/sqlmodel | sqlalchemy | 360 | Joining b/w two tables | ### First Check
- [X] I added a very descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the SQLModel documentation, with the integrated search.
- [X] I already searched in Google "How to X in SQLModel" and didn't find any information.
- [X] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to SQLModel but to [Pydantic](https://github.com/samuelcolvin/pydantic).
- [X] I already checked if it is not related to SQLModel but to [SQLAlchemy](https://github.com/sqlalchemy/sqlalchemy).
### Commit to Help
- [X] I commit to help with one of those options 👆
### Example Code
```python
class UserInSchema(SQLModel):
name: str
phone: str
class User(UserInSchema, table=True):
__tablename__ = "user_data"
id: Optional[int] = Field(default=None, primary_key=True)
cars: List["Car"] = Relationship(back_populates="user")
class CarInSchema(SQLModel):
name: str
color: str
class Car(CarInSchema, table=True):
__tablename__ = "car_data"
id: Optional[int] = Field(default=None, primary_key=True)
user_id: int = Field(default=None, foreign_key="user_data.id")
user: Optional[User] = Relationship(back_populates="cars")
```
### Description
I have two models, one for users and one for cars, and I want a relationship between them so that one user can own many cars.

```python
@router.get("/")
def get():
    statement = select(Car, User).join(User)
    result = session.exec(statement)
    return result
```

but it gives this error:

```
TypeError: User() missing 1 required positional argument: 'data'
```
### Operating System
Linux
### Operating System Details
_No response_
### SQLModel Version
0.0.6
### Python Version
3.9.13
### Additional Context
_No response_ | closed | 2022-06-09T12:41:36Z | 2022-06-10T04:39:03Z | https://github.com/fastapi/sqlmodel/issues/360 | [
"question"
] | israr96418 | 2 |
open-mmlab/mmdetection | pytorch | 11,570 | grounding dino swin large finetune error | I have prepared my own data set according to the steps of 'Example of Fine-tuning Custom Dataset' in the [https://github.com/open-mmlab/mmdetection/blob/main/configs/mm_grounding_dino/usage_zh-CN.md#%E8%87%AA%E5%AE%9A%E4%B9%89%E6%95%B0%E6%8D%AE%E9%9B%86%E5%BE%AE%E8%B0%83%E8%AE%AD%E7%BB%83%E6%A1%88%E4%BE%8B](url), and the format is the same as that of 'cat' data set provided by the author. When I use grounding_dino_swin-l for finetune, pretrain weight is "grounding_dino_swin-l_pretrain_obj365_goldg-34dcdc53.pth" , I found that when I set frozen_stages=-1 in the model, the code training works fine, but if I set frozen_stages to another value, such as 1 or 2, I get an error:
`Traceback (most recent call last):
2024-03-19 21:23 File "/lpai/volumes/cloudmodel-muses/aaa_lin/mmdetection/./tools/train.py", line 121, in <module>
2024-03-19 21:23 main()
2024-03-19 21:23 File "/lpai/volumes/cloudmodel-muses/aaa_lin/mmdetection/./tools/train.py", line 117, in main
2024-03-19 21:23 runner.train()
2024-03-19 21:23 File "/usr/local/lib/python3.10/dist-packages/mmengine/runner/runner.py", line 1777, in train
2024-03-19 21:23 model = self.train_loop.run() # type: ignore
2024-03-19 21:23 File "/usr/local/lib/python3.10/dist-packages/mmengine/runner/loops.py", line 96, in run
2024-03-19 21:23 self.run_epoch()
2024-03-19 21:23 File "/usr/local/lib/python3.10/dist-packages/mmengine/runner/loops.py", line 112, in run_epoch
2024-03-19 21:23 self.run_iter(idx, data_batch)
2024-03-19 21:23 File "/usr/local/lib/python3.10/dist-packages/mmengine/runner/loops.py", line 128, in run_iter
2024-03-19 21:23 outputs = self.runner.model.train_step(
2024-03-19 21:23 File "/usr/local/lib/python3.10/dist-packages/mmengine/model/wrappers/distributed.py", line 121, in train_step
2024-03-19 21:23 losses = self._run_forward(data, mode='loss')
2024-03-19 21:23 File "/usr/local/lib/python3.10/dist-packages/mmengine/model/wrappers/distributed.py", line 161, in _run_forward
2024-03-19 21:23 results = self(**data, mode=mode)
2024-03-19 21:23 File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1510, in _wrapped_call_impl
2024-03-19 21:23 return self._call_impl(*args, **kwargs)
2024-03-19 21:23 File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1519, in _call_impl
2024-03-19 21:23 return forward_call(*args, **kwargs)
2024-03-19 21:23 File "/usr/local/lib/python3.10/dist-packages/torch/nn/parallel/distributed.py", line 1505, in forward
2024-03-19 21:23 inputs, kwargs = self._pre_forward(*inputs, **kwargs)
2024-03-19 21:23 File "/usr/local/lib/python3.10/dist-packages/torch/nn/parallel/distributed.py", line 1399, in _pre_forward
2024-03-19 21:23 if torch.is_grad_enabled() and self.reducer._rebuild_buckets():
2024-03-19 21:23 RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. You can enable unused parameter detection by passing the keyword argument `find_unused_parameters=True` to `torch.nn.parallel.DistributedDataParallel`, and by
2024-03-19 21:23 making sure all `forward` function outputs participate in calculating loss.
2024-03-19 21:23 If you already have done the above, then the distributed data parallel module wasn't able to locate the output tensors in the return value of your module's `forward` function. Please include the loss function and the structure of the return value of `forward` of your module when reporting this issue (e.g. list, dict, iterable).
2024-03-19 21:23 Parameter indices which did not receive grad for rank 1: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 326 327
2024-03-19 21:23 In addition, you can set the environment variable TORCH_DISTRIBUTED_DEBUG to either INFO or DETAIL to print out information about which particular parameters did not receive gradient on this rank as part of this error`
Here is my configs, I use 8 card and set batch_size=4 for training_dataloader, batch_size=1 for val_dataloader:
`_base_ = 'grounding_dino_swin-t_pretrain_obj365.py'
data_root = 'some_root_path/'
class_name = ("signal triangle", "horizontal tire", "cardboard box", )
num_classes = len(class_name)
metainfo = dict(classes=class_name, palette=[(220, 20, 60)])
num_levels = 5
model = dict(
use_autocast=True,
num_feature_levels=num_levels,
backbone=dict(
_delete_=True,
type='SwinTransformer',
pretrain_img_size=384,
embed_dims=192,
depths=[2, 2, 18, 2],
num_heads=[6, 12, 24, 48],
window_size=12,
mlp_ratio=4,
qkv_bias=True,
qk_scale=None,
drop_rate=0.,
attn_drop_rate=0.,
drop_path_rate=0.2,
patch_norm=True,
out_indices=(0, 1, 2, 3),
# Please only add indices that would be used
# in FPN, otherwise some parameter will not be used
with_cp=True,
convert_weights=True,
frozen_stages=1,
init_cfg=None),
neck=dict(in_channels=[192, 384, 768, 1536], num_outs=num_levels),
encoder=dict(layer_cfg=dict(self_attn_cfg=dict(num_levels=num_levels))),
decoder=dict(layer_cfg=dict(cross_attn_cfg=dict(num_levels=num_levels))))
train_pipeline = [
dict(type='LoadImageFromFile'),
dict(type='LoadAnnotations', with_bbox=True),
dict(type='RandomFlip', prob=0.5),
dict(type='RandomChoice',
transforms=[
[
dict(
type='RandomChoiceResize',
scales=[(480, 1333), (512, 1333), (544, 1333), (576, 1333),
(608, 1333), (640, 1333), (672, 1333), (704, 1333),
(736, 1333), (768, 1333), (800, 1333)],
keep_ratio=True)
]
]),
dict(type='PackDetInputs',
meta_keys=('img_id', 'img_path', 'ori_shape', 'img_shape',
'scale_factor', 'flip', 'flip_direction', 'text',
'custom_entities'))
]
train_dataloader = dict(
dataset=dict(
_delete_=True,
type='CocoDataset',
data_root=data_root,
metainfo=metainfo,
return_classes=True,
pipeline=train_pipeline,
filter_cfg=dict(filter_empty_gt=False, min_size=32),
ann_file='train_od_.json',
data_prefix=dict(img='images')))
val_dataloader = dict(
dataset=dict(
metainfo=metainfo,
data_root=data_root,
ann_file='val_od.json',
data_prefix=dict(img='images')))
test_dataloader = val_dataloader
val_evaluator = dict(ann_file=data_root + 'test_od.json',)
test_evaluator = val_evaluator
max_epoch = 20
default_hooks = dict(
checkpoint=dict(interval=1, max_keep_ckpts=1, save_best='auto'),
logger=dict(type='LoggerHook', interval=5))
train_cfg = dict(max_epochs=max_epoch, val_interval=1)
param_scheduler = [
dict(
type='MultiStepLR',
begin=0,
end=max_epoch,
by_epoch=True,
milestones=[15],
gamma=0.1)
]
optim_wrapper = dict(
optimizer=dict(lr=0.0001),
paramwise_cfg=dict(
custom_keys={
'absolute_pos_embed': dict(decay_mult=0.),
'backbone': dict(lr_mult=0.0),
'language_model': dict(lr_mult=0.0)
}))
load_from = '/model_path/grounding_dino_swin-l_pretrain_obj365_goldg-34dcdc53.pth' # noqa
`
Thank you for your help
| open | 2024-03-20T01:43:28Z | 2024-10-25T10:38:08Z | https://github.com/open-mmlab/mmdetection/issues/11570 | [] | JeremyLin886 | 7 |
taverntesting/tavern | pytest | 488 | why ModuleNotFoundError: No module named ? | test_XX.tavern.yaml
- name: lalal
request:
url: http://10.11.115.116:8080//next
method: GET
response:
status_code: 200
body:
remoteIp: 10.23.133.116
save:
$ext:
function: myalan:test_function
myalan.py
from box import Box
def test_function(response):
return Box({"test_my_simple" : response.json()["ipAddr"]["post"]["appName"]})
myalan.py and test_XX.tavern.yaml In the same directory
Report errors :
tavern.util.exceptions.BadSchemaError: Couldn't load myalan:test_function
ModuleNotFoundError: No module named 'myalan'
| closed | 2019-11-26T10:02:51Z | 2019-11-29T09:35:58Z | https://github.com/taverntesting/tavern/issues/488 | [] | sktt0211 | 1 |
CorentinJ/Real-Time-Voice-Cloning | pytorch | 341 | No module named 'tensorflow.contrib' | I have successfully installed Tensorflow but this library which is not in use anymore "contrib" prevents me from running demo_cli.py
```
I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll
Traceback (most recent call last):
File "demo_cli.py", line 3, in <module>
from synthesizer.inference import Synthesizer
File "C:\Users\Jean-Paul Sartre\Downloads\Real-Time-Voice-Cloning-master\synthesizer\inference.py", line 1, in <module>
from synthesizer.tacotron2 import Tacotron2
File "C:\Users\Jean-Paul Sartre\Downloads\Real-Time-Voice-Cloning-master\synthesizer\tacotron2.py", line 3, in <module>
from synthesizer.models import create_model
File "C:\Users\Jean-Paul Sartre\Downloads\Real-Time-Voice-Cloning-master\synthesizer\models\__init__.py", line 1, in <module>
from .tacotron import Tacotron
File "C:\Users\Jean-Paul Sartre\Downloads\Real-Time-Voice-Cloning-master\synthesizer\models\tacotron.py", line 4, in <module>
from synthesizer.models.helpers import TacoTrainingHelper, TacoTestHelper
File "C:\Users\Jean-Paul Sartre\Downloads\Real-Time-Voice-Cloning-master\synthesizer\models\helpers.py", line 3, in <module>
from tensorflow.contrib.seq2seq import Helper
ModuleNotFoundError: No module named 'tensorflow.contrib'
``` | closed | 2020-05-12T10:12:42Z | 2020-07-04T14:52:01Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/341 | [] | AndrejHatzi | 5 |
proplot-dev/proplot | matplotlib | 170 | Show decimal places and scientific notation on the axis | ### Description
When numbers are large, matplotlib automatically switches to `e` notation, but scientific papers usually use the `x10^{n}` form instead.
### Steps to reproduce
```python
import proplot as plot
import numpy as np
state = np.random.RandomState(51423)
fig, axs = plot.subplots()
axs.format(ylim=(0, 5e10), ylocator=2e10)
```
**Expected behavior**:

**Actual behavior**:

### Equivalent steps in matplotlib
```python
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.ticker as mticker
fig, ax = plt.subplots()
x = np.linspace(0, 5e10, 4)
# y = x**2
ax.plot(x)
f = mticker.ScalarFormatter(useOffset=False, useMathText=True)
g = lambda x,pos : "${}$".format(f._formatSciNotation('%1.10e' % x))
plt.gca().yaxis.set_major_formatter(mticker.FuncFormatter(g))
```
### Proplot version
0.5.0
| closed | 2020-05-16T12:53:50Z | 2020-05-21T05:00:59Z | https://github.com/proplot-dev/proplot/issues/170 | [
"feature"
] | zxdawn | 6 |
ray-project/ray | tensorflow | 51,242 | [Serve] Detailed Analysis of Errors Related to 'Ray does not allocate any GPUs on the driver node' && 'No CUDA GPUs are available' | ### What happened + What you expected to happen
When deploying platforms based on the Ray framework, such as Ray Serve and Ray LLM, together with vLLM's OpenAI server, the errors "No CUDA GPUs are available" or "Ray does not allocate any GPUs on the driver node" have become recurring issues.
In this issue, I will provide a detailed analysis of these problems, along with a brief solution, experimental records. I sincerely invite developers from the Ray and vLLM communities to participate in the discussion, point out any shortcomings, and share your suggestions!
<h2><strong>Quick Troubleshoot</strong></h2>

<p>For older versions of vLLM, I have also provided a hack to temporarily resolve this issue. Please refer to: <a href="https://github.com/ray-project/ray/issues/51154">Ray Issue #51154</a>.</p>
<p>For <strong>Ray LLM</strong> and <strong>Ray Serve</strong> documentation:</p>
<ul>
<li><strong>Ray LLM</strong>: <a href="https://docs.ray.io/en/latest/ray-llm/index.html">Ray LLM Documentation</a></li>
<li><strong>Ray Serve</strong>: <a href="https://docs.ray.io/en/latest/serve/index.html">Ray Serve vLLM Example</a></li>
</ul>
<p>A proper configuration for TP=1 involves modifying the <code inline="">build_app</code> function in the example code from the <strong>Ray Serve</strong> documentation by replacing the following content.</p>
```diff
pg_resources = []
- pg_resources.append({"CPU": 1}) # for the deployment replica
for i in range(tp):
pg_resources.append({"CPU": 1, accelerator: 1}) # for the vLLM actors
# We use the "STRICT_PACK" strategy below to ensure all vLLM actors are placed on
# the same Ray node.
return VLLMDeployment.options(
+ ray_actor_options={"num_gpus": 1,"num_cpus": 1},
placement_group_bundles=pg_resources, placement_group_strategy="STRICT_PACK"
).bind(
```
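The bundle arithmetic behind the two working configurations can be sketched as a plain function (a sketch with illustrative names, not Ray Serve API): for TP = 1 the replica actor itself holds the GPU and the uniproc executor creates no dummy worker, while for TP > 1 the replica stays GPU-less and borrows `CUDA_VISIBLE_DEVICES` from the driver worker.

```python
def placement_bundles(tp: int, accelerator: str = "GPU"):
    """Return (placement_group_bundles, ray_actor_options) for the two
    layouts that work according to the experiments in this issue."""
    if tp == 1:
        # One bundle: the replica actor owns CPU+GPU directly, and vLLM's
        # uniproc executor runs in-process without a dummy worker.
        return [{"CPU": 1, accelerator: 1}], {"num_cpus": 1, "num_gpus": 1}
    # TP > 1: a CPU-only bundle for the GPU-less replica, plus one GPU
    # bundle per vLLM worker; the replica gains GPU visibility via
    # update_environment_variables rather than owning a GPU.
    bundles = [{"CPU": 1}] + [{"CPU": 1, accelerator: 1} for _ in range(tp)]
    return bundles, {"num_cpus": 1}
```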
<hr>
<h2><strong>Introduction</strong></h2>
<p>The issue can be summarized simply: the framework design of <strong>vLLM</strong> does not fully accommodate <code inline="">LLMEngine</code> running within a <strong>placement group</strong>.</p>
<p>The process that creates <code inline="">RayDistributedExecutor</code>, which serves as the entry point, must have access to a <strong>GPU</strong> while <strong>not occupying GPU resources within Ray</strong>. This conflicts with the typical configuration of <strong>Ray Serve</strong>. Additionally, since <strong>vLLM always requests a whole number of GPUs when <code inline="">world_size > 1</code></strong>, it is not possible to work around this limitation by allocating fractional GPUs.</p>
<p>

</p>
<p>Regardless of whether using <code inline="">LLM</code> (offline inference) or <code inline="">OpenAIServingCompletion</code> (online deployment), both are considered <strong>entry points</strong>. The class responsible for managing the specific processes during initialization is called an <code inline="">Executor</code>. The <code inline="">Executor</code> itself creates a <strong>local actor</strong> to use the <strong>GPU</strong> and also spawns a <strong>dummy actor</strong> to reserve resources in the <strong>placement group</strong>.</p>

<p>However, when integrating this framework into <strong>Ray</strong>, several issues arise:</p>
<ul>
<li>In <strong>Ray</strong>, the <code inline="">Executor</code> itself also runs within an <strong>Actor</strong> and uses the <strong>first bundle of the placement group</strong>.
<ul>
<li>If <strong>no GPU resources</strong> are assigned to it, <code inline="">CUDA_VISIBLE_DEVICES</code> will be an <strong>empty string</strong>, leading to the <em>"No CUDA GPUs are available"</em> error when trying to call <code inline="">set_device</code>.</li>
<li>On the other hand, if we <strong>do allocate a GPU</strong> to it, <strong>vLLM</strong> will still use a <code inline="">dummy_driver_worker</code> that occupies a <strong>GPU</strong>, which causes the total number of requested workers to exceed the <strong>placement group capacity</strong>.</li>
<li>Since <strong>vLLM does not allocate resources based on bundles</strong> but instead forces each worker to use exactly <strong>one GPU when <code inline="">world_size > 1</code></strong>, we cannot work around this limitation by assigning fractional GPUs.</li>
</ul>
</li>
</ul>
<h3><strong>A Deadlock!</strong></h3>
<hr>
<h2><strong>Experiments</strong></h2>
<p>Due to the specific feature of the code, there are actually <strong>two executable scenarios</strong>. I will first present the experimental table and then analyze each case one by one.</p>
VLLM Version | Placement Group Configuration | TP | Status | Notes
-- | -- | -- | -- | --
VLLM 0.7.3 | [{'CPU':1} + {'GPU':1} * TP] | >1 | ✅ Works | Replica actor has no GPU but gains access via update_environment_variables
VLLM 0.7.3 | [{'GPU':1} * TP] | >1 | ❌ Fails | Extra worker creation causes deadlock due to loop in ray_distributed_executor.py#L187
VLLM 0.7.3 | [{'CPU':1} + {'GPU':1} * TP] | 1 | ❌ Fails | Replica actor has no GPU, and Executor can no longer "borrow" CUDA_VISIBLE_DEVICES
VLLM 0.7.3 | [{'GPU':1} * TP] | 1 | ✅ Works | Replica actor has no GPU, but uniproc_executor avoids dummy worker creation
<hr>
<h2><strong>Analysis</strong></h2>
<p>In the existing code, there are actually <strong>two scenarios</strong> where execution is possible:</p>
<ol>
<li><strong>TP > 1 without explicitly assigning GPUs</strong> (this is the default setting in <strong>Ray Serve</strong>). This explains why the issue has not become a critical blocker—under the current configuration, execution is still possible.</li>
<li><strong>TP = 1 with GPU assignment</strong> (as mentioned earlier, using an appropriate configuration combined with <strong>Ray Serve</strong> to resolve the issue).</li>
</ol>
<h3><strong>Case 1: Default Configuration (<code inline="">TP > 1</code> & No GPU Assigned)</strong></h3>

<p>Even if <strong>Ray</strong> does not allocate any <strong>GPUs</strong> to the <strong>Replica Actor</strong> (i.e., the <code inline="">RayDistributedExecutor</code> within the <strong>Serve</strong> framework), <code inline="">CUDA_VISIBLE_DEVICES</code> will still <strong>not be empty</strong>.</p>
<p>This happens because of this line of code, which calls <code inline="">self.driver_worker</code> and modifies the <strong>environment variables</strong> of the current process.</p>
<p>As a result, in the <strong>default configuration</strong>, the code functions correctly, allowing a process to access <strong>GPUs</strong> without directly occupying them.</p>
<h3><strong>Case 2: <code inline="">TP = 1</code> Changes the Behavior</strong></h3>
<p>When <strong>TP = 1</strong>, <strong>vLLM</strong> switches to using <code inline="">UniprocExecutor</code>, as seen in this line of code.</p>
<p>In this case, if <code inline="">CUDA_VISIBLE_DEVICES</code> is <strong>empty</strong>, it will cause an <strong>error</strong>, as <code inline="">UniprocExecutor</code> does <strong>not</strong> inherit the same <strong>environment variable handling</strong> as the multi-process setup.</p>

<hr>
<h2><strong>Supplementary Notes on Ray Serve and Ray LLM</strong></h2>
<p>After an initial review of the <strong>source code</strong> and conducting <strong>simple experiments</strong>, I believe that the <strong>new and old APIs</strong> of <strong>Ray Serve</strong> are fundamentally the same, except for the addition of a <strong>router</strong> and deeper <strong>integration with vLLM</strong>.</p>
<p>The core interaction between <strong>Ray</strong> and <strong>vLLM</strong> still revolves around the <strong>placement group (PG) allocation</strong> during deployment.</p>
<p>Therefore, these two approaches are essentially equivalent:</p>
<ol>
<li><strong>Manually integrating</strong> <code inline="">vllm.entrypoints.openai.serving_completion</code> into <strong>Ray Serve</strong>.</li>
<li><strong>Using the <code inline="">ray[llm]</code> library</strong> for deployment.</li>
</ol>
<hr>
<h2><strong>Related Issues</strong></h2>
<p>Based on my preliminary review, the following issues are all related to the analysis presented here:</p>
<ul>
<li><a href="https://github.com/vllm-project/vllm/issues/12983">vLLM Issue #12983</a></li>
<li><a href="https://github.com/vllm-project/vllm/issues/13521">vLLM Issue #13521</a></li>
<li><a href="https://github.com/vllm-project/vllm/issues/14415">vLLM Issue #14415</a></li>
<li><a href="https://github.com/vllm-project/vllm/issues/14456">vLLM Issue #14456</a></li>
<li><a href="https://github.com/ray-project/ray/issues/51154">Ray Issue #51154</a></li>
<li><a href="https://github.com/ray-project/ray/issues/51193">Ray Issue #51193</a></li>
<li><a href="https://github.com/ray-project/ray/issues/50275">Ray Issue #50275</a></li>
</ul></body></html>
### Versions / Dependencies
vllm>=0.7.2
ray[serve,llm,default] -U
### Reproduction script
Demo code can be found in the following:
- Ray LLM: [Ray LLM Documentation](https://docs.ray.io/en/latest/serve/llm/overview.html)
- Ray Serve: [Ray Serve vLLM Example](https://docs.ray.io/en/latest/serve/tutorials/vllm-example.html)
### Issue Severity
High: It blocks me from completing my task. | open | 2025-03-11T11:58:54Z | 2025-03-14T20:20:45Z | https://github.com/ray-project/ray/issues/51242 | [
"bug",
"serve",
"llm"
] | huiyeruzhou | 5 |
CorentinJ/Real-Time-Voice-Cloning | deep-learning | 1,033 | No such file or directory: 'encoder/encoder.pt' |
```python
# Initializing all the encoder libraries
from IPython.display import Audio
from IPython.utils import io
from synthesizer.inference import Synthesizer
from encoder import inference as encoder
from vocoder import inference as vocoder
from pathlib import Path
import numpy as np
import librosa

encoder_weights = Path("encoder/encoder.pt")
vocoder_weights = Path("vocoder/saved_models/pretrained/pretrained.pt")
syn_dir = Path("synthesizer/saved_models/logs-pretrained/taco_pretrained")
encoder.load_model(encoder_weights)
synthesizer = Synthesizer(syn_dir)
vocoder.load_model(vocoder_weights)
```
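A pre-flight check like the sketch below (paths copied from the snippet above; the helper is hypothetical, not part of the repo) would fail earlier, with a clearer message than the `FileNotFoundError` below:

```python
from pathlib import Path

def missing_weights(paths):
    """Return the names of weight files that do not exist on disk."""
    return [name for name, p in paths.items() if not Path(p).exists()]

weights = {
    "encoder": "encoder/encoder.pt",
    "vocoder": "vocoder/saved_models/pretrained/pretrained.pt",
}
missing = missing_weights(weights)
if missing:
    print(f"Missing pretrained weights: {missing}; download them first.")
```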
------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
[<ipython-input-27-38db2b658233>](https://localhost:8080/#) in <module>()
11 vocoder_weights = Path("vocoder/saved_models/pretrained/pretrained.pt")
12 syn_dir = Path("synthesizer/saved_models/logs-pretrained/taco_pretrained")
---> 13 encoder.load_model(encoder_weights)
14 synthesizer = Synthesizer(syn_dir)
15 vocoder.load_model(vocoder_weights)
3 frames
[/usr/local/lib/python3.7/dist-packages/torch/serialization.py](https://localhost:8080/#) in __init__(self, name, mode)
209 class _open_file(_opener):
210 def __init__(self, name, mode):
--> 211 super(_open_file, self).__init__(open(name, mode))
212
213 def __exit__(self, *args):
FileNotFoundError: [Errno 2] No such file or directory: 'encoder/encoder.pt' | closed | 2022-03-04T07:41:07Z | 2022-03-27T15:17:54Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1033 | [] | AntonTRX | 1 |
ets-labs/python-dependency-injector | flask | 383 | Check if required container dependencies are provided, aka containers.check_dependencies(instance) | Need to implement a checker of required container dependencies.
Ok -- very good. Yes -- I was already trying to do what you suggested with overrides. Now with the defaults on the individual Dependency its cleaner. Checking dependencies would be nice, but I guess that by design you are able to provide them "incrementally" so requiring them all up front would break other use cases. Perhaps a `containers.checkDependencies( instance )` would be nice at some point. Thanks.
I guess we are all set as far as I'm concerned on this issue. Closing, and thank you!
_Originally posted by @shaunc in https://github.com/ets-labs/python-dependency-injector/issues/336#issuecomment-770101562_ | closed | 2021-01-30T00:19:51Z | 2021-02-15T19:23:28Z | https://github.com/ets-labs/python-dependency-injector/issues/383 | [
"feature"
] | rmk135 | 2 |
litestar-org/litestar | asyncio | 3,636 | Docs: status_code for handlers is incorrect | ### Summary
Documentation for handler states:
* status_code: An http status code for the response. Defaults to ``200`` for mixed method or ``GET``, ``PUT`` and
``PATCH``, ``201`` for ``POST`` and ``204`` for ``DELETE``.
However, the code is implemented as:
```python
def get_default_status_code(http_methods: set[Method]) -> int:
"""Return the default status code for a given set of HTTP methods.
Args:
http_methods: A set of method strings
Returns:
A status code
"""
if HttpMethod.POST in http_methods:
return HTTP_201_CREATED
if HttpMethod.DELETE in http_methods:
return HTTP_204_NO_CONTENT
return HTTP_200_OK
```
So a route like the following will return 201 instead of 200, contradicting the documentation.
```python
@route(path="/foo", http_method=[HttpMethod.GET, HttpMethod.POST])
```
Should be something like:
```python
def get_default_status_code(http_methods: set[Method]) -> int:
"""Return the default status code for a given set of HTTP methods.
Args:
http_methods: A set of method strings
Returns:
A status code
"""
if len(http_methods) > 1:
return HTTP_200_OK
if HttpMethod.POST in http_methods:
return HTTP_201_CREATED
if HttpMethod.DELETE in http_methods:
return HTTP_204_NO_CONTENT
return HTTP_200_OK
```
Cheers,
Pau. | closed | 2024-07-19T15:34:30Z | 2025-03-20T15:54:49Z | https://github.com/litestar-org/litestar/issues/3636 | [
"Documentation :books:",
"Help Wanted :sos:"
] | ptallada | 4 |
huggingface/datasets | pytorch | 7,065 | Cannot get item after loading from disk and then converting to iterable. | ### Describe the bug
The dataset generated from local files works fine.
```py
root = "/home/data/train"
file_list1 = glob(os.path.join(root, "*part1.flac"))
file_list2 = glob(os.path.join(root, "*part2.flac"))
ds = (
Dataset.from_dict({"part1": file_list1, "part2": file_list2})
.cast_column("part1", Audio(sampling_rate=None, mono=False))
.cast_column("part2", Audio(sampling_rate=None, mono=False))
)
ids = ds.to_iterable_dataset(128)
ids = ids.shuffle(buffer_size=10000, seed=42)
dataloader = DataLoader(ids, num_workers=4, batch_size=8, persistent_workers=True)
for batch in dataloader:
break
```
But after saving it to disk and then loading it from disk, I cannot get data as expected.
```py
root = "/home/data/train"
file_list1 = glob(os.path.join(root, "*part1.flac"))
file_list2 = glob(os.path.join(root, "*part2.flac"))
ds = (
Dataset.from_dict({"part1": file_list1, "part2": file_list2})
.cast_column("part1", Audio(sampling_rate=None, mono=False))
.cast_column("part2", Audio(sampling_rate=None, mono=False))
)
ds.save_to_disk("./train")
ds = datasets.load_from_disk("./train")
ids = ds.to_iterable_dataset(128)
ids = ids.shuffle(buffer_size=10000, seed=42)
dataloader = DataLoader(ids, num_workers=4, batch_size=8, persistent_workers=True)
for batch in dataloader:
break
```
After waiting a long time, an error occurs:
```
Loading dataset from disk: 100%|█████████████████████████████████████████████████████████████████████████| 165/165 [00:00<00:00, 6422.18it/s]
Traceback (most recent call last):
File "/home/hanzerui/.conda/envs/mss/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1133, in _try_get_data
data = self._data_queue.get(timeout=timeout)
File "/home/hanzerui/.conda/envs/mss/lib/python3.10/multiprocessing/queues.py", line 113, in get
if not self._poll(timeout):
File "/home/hanzerui/.conda/envs/mss/lib/python3.10/multiprocessing/connection.py", line 257, in poll
return self._poll(timeout)
File "/home/hanzerui/.conda/envs/mss/lib/python3.10/multiprocessing/connection.py", line 424, in _poll
r = wait([self], timeout)
File "/home/hanzerui/.conda/envs/mss/lib/python3.10/multiprocessing/connection.py", line 931, in wait
ready = selector.select(timeout)
File "/home/hanzerui/.conda/envs/mss/lib/python3.10/selectors.py", line 416, in select
fd_event_list = self._selector.poll(timeout)
File "/home/hanzerui/.conda/envs/mss/lib/python3.10/site-packages/torch/utils/data/_utils/signal_handling.py", line 66, in handler
_error_if_any_worker_fails()
RuntimeError: DataLoader worker (pid 3490529) is killed by signal: Killed.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/hanzerui/.conda/envs/mss/lib/python3.10/runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/home/hanzerui/.conda/envs/mss/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "/home/hanzerui/.vscode-server/extensions/ms-python.debugpy-2024.9.12011011/bundled/libs/debugpy/adapter/../../debugpy/launcher/../../debugpy/__main__.py", line 39, in <module>
cli.main()
File "/home/hanzerui/.vscode-server/extensions/ms-python.debugpy-2024.9.12011011/bundled/libs/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/cli.py", line 430, in main
run()
File "/home/hanzerui/.vscode-server/extensions/ms-python.debugpy-2024.9.12011011/bundled/libs/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/cli.py", line 284, in run_file
runpy.run_path(target, run_name="__main__")
File "/home/hanzerui/.vscode-server/extensions/ms-python.debugpy-2024.9.12011011/bundled/libs/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 321, in run_path
return _run_module_code(code, init_globals, run_name,
File "/home/hanzerui/.vscode-server/extensions/ms-python.debugpy-2024.9.12011011/bundled/libs/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 135, in _run_module_code
_run_code(code, mod_globals, init_globals,
File "/home/hanzerui/.vscode-server/extensions/ms-python.debugpy-2024.9.12011011/bundled/libs/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 124, in _run_code
exec(code, run_globals)
File "/home/hanzerui/workspace/NetEase/test/test_datasets.py", line 60, in <module>
for batch in dataloader:
File "/home/hanzerui/.conda/envs/mss/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 631, in __next__
data = self._next_data()
File "/home/hanzerui/.conda/envs/mss/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1329, in _next_data
idx, data = self._get_data()
File "/home/hanzerui/.conda/envs/mss/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1295, in _get_data
success, data = self._try_get_data()
File "/home/hanzerui/.conda/envs/mss/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1146, in _try_get_data
raise RuntimeError(f'DataLoader worker (pid(s) {pids_str}) exited unexpectedly') from e
RuntimeError: DataLoader worker (pid(s) 3490529) exited unexpectedly
```
It seems that streaming is not supported by `load_from_disk`, so does that mean I cannot convert it to an iterable?
### Steps to reproduce the bug
1. Create a `Dataset` from local files with `from_dict`
2. Save it to disk with `save_to_disk`
3. Load it from disk with `load_from_disk`
4. Convert to iterable with `to_iterable_dataset`
5. Loop the dataset
### Expected behavior
Items are returned, at least as fast as with the original dataset generated from the dict.
### Environment info
- `datasets` version: 2.20.0
- Platform: Linux-6.5.0-41-generic-x86_64-with-glibc2.35
- Python version: 3.10.14
- `huggingface_hub` version: 0.23.2
- PyArrow version: 17.0.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.5.0 | open | 2024-07-23T09:37:56Z | 2024-07-23T09:37:56Z | https://github.com/huggingface/datasets/issues/7065 | [] | happyTonakai | 0 |
junyanz/pytorch-CycleGAN-and-pix2pix | computer-vision | 1,289 | Why can't launch the GPU? | Thanks for your project! I made the config following your steps and trained the CycleGAN by running the train.py file. But I find that the project seems to run on the CPU rather than using my GPU. Do you have any suggestions? | closed | 2021-06-16T10:07:05Z | 2021-07-13T20:46:24Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1289 | [] | BiQiWHU | 2 |
microsoft/nni | pytorch | 5,622 | `from nni.nas import model_context` not working + NasExperimentConfig `use_active_gpu=True` weird behaviour | **Describe the issue**:
I upgraded to the latest NNI version and `from nni.nas import model_context` is not working.
Additionally, regarding NasExperiment, should I use `start` or `run` to launch an experiment?
**Environment**:
- NNI version: 3.0rc1
- Training service (local|remote|pai|aml|etc): local
- Client OS: windows
- Python version: 3.9.13
- PyTorch/TensorFlow version: 1.13
- Is conda/virtualenv/venv used?: Poetry
- Is running in Docker?: no
**How to reproduce it?**:
`from nni.nas import model_context` | open | 2023-06-28T14:11:35Z | 2023-07-04T16:51:48Z | https://github.com/microsoft/nni/issues/5622 | [] | sw33zy | 8 |
MaartenGr/BERTopic | nlp | 1,373 | Getting distance to center of cluster to use as a proxy for confidence | I'd like to output a confidence metric per document corresponding to its embedding's distance from the cluster center. So if a document's embedding was close to the cluster center it would have a high confidence score for example.
For KMeans I'd want to calculate this distances using code similar to [this](https://stackoverflow.com/questions/54240144/distance-between-nodes-and-the-centroid-in-a-kmeans-cluster)
Does BERTopic expose the ability to add callbacks to the clustering flow such as the code above? Otherwise, what is the best way to incorporate this calculation into a BERTopic-based pipeline?
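One post-hoc sketch (plain NumPy; a hypothetical helper, not a BERTopic API): compute each document's distance to its cluster centroid and rescale it into a rough confidence score. Recomputing centroids from labels means the same idea also covers agglomerative clustering, not just KMeans:

```python
import numpy as np

def centroid_confidence(embeddings, labels):
    """Per-document confidence: 1.0 at the cluster centroid, lower farther away.

    Assumes integer labels 0..k-1 so that centers[labels] lines up.
    """
    embeddings = np.asarray(embeddings, dtype=float)
    labels = np.asarray(labels)
    centers = np.vstack([embeddings[labels == k].mean(axis=0)
                         for k in np.unique(labels)])
    dists = np.linalg.norm(embeddings - centers[labels], axis=1)
    return 1.0 - dists / (dists.max() + 1e-12)

emb = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.2, 5.1]])
labels = np.array([0, 0, 1, 1])
conf = centroid_confidence(emb, labels)  # tighter cluster -> higher scores
```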
I am thinking specifically about KMeans, but it would be great if this could also apply to the other clustering methods BERTopic supports (HDBSCAN and agglomerative clustering). | closed | 2023-06-28T16:19:42Z | 2023-09-27T09:10:37Z | https://github.com/MaartenGr/BERTopic/issues/1373 | [] | ronykrell | 3 |
kiwicom/pytest-recording | pytest | 13 | Fuzzy cassettes | For unhappy path testing, there could be an extra parameter that will mutate recorded cassettes.
Use cases:
- Check how the app will work if the format of responses will change
- Validate error handling
Possible fuzzes:
- Mutate response body. JSON - add/remove fields in objects / lists. Change values. Make JSON invalid.
- Completely change response content type
- Change response status code
- Mutate requests during recording??
- Raise exceptions instead of real responses
- Add delays to responses
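The body-mutation idea above could look roughly like this hypothetical helper (a sketch, not pytest-recording's API):

```python
import json
import random

def drop_random_key(body, rng):
    """Mutate a recorded JSON response body by removing one top-level key."""
    data = json.loads(body)
    if isinstance(data, dict) and data:
        data.pop(rng.choice(sorted(data)))
    return json.dumps(data)

rng = random.Random(42)  # seeded so the mutation is reproducible
mutated = drop_random_key('{"a": 1, "b": 2, "c": 3}', rng)
```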
All of these could be combined to generate test cases. | open | 2019-08-09T08:16:36Z | 2020-09-27T10:15:26Z | https://github.com/kiwicom/pytest-recording/issues/13 | [] | Stranger6667 | 1 |
JaidedAI/EasyOCR | deep-learning | 471 | How to use Transformer model for recognition? | Hi ,
I wanted to use EasyOCR for my use case. Can you help me use a Transformer model for recognition? I saw a line of code in the description, i.e. `reader = easyocr.Reader(['en'], detection='DB', recognition='Transformer')`. | closed | 2021-06-24T15:44:07Z | 2021-06-24T20:45:35Z | https://github.com/JaidedAI/EasyOCR/issues/471 | [] | karndeepsingh | 1 |
modoboa/modoboa | django | 2,731 | DKIM show key in 2.0.3 does not show the raw key | DKIM key in old admin interface shows both BIND and RAW versions of the DKIM key
The new admin interface only shows the BIND variety, which is not helpful for hosted DNS
| closed | 2023-01-01T14:33:00Z | 2023-01-02T22:38:45Z | https://github.com/modoboa/modoboa/issues/2731 | [
"feedback-needed"
] | olaf7 | 2 |
python-visualization/folium | data-visualization | 1,124 | Notebook code tests fail on Travis | @ocefpaf the notebook code tests are failing on Travis and I can't figure out why, can you help?
Here is the error message from Travis:
```
0.28s$ if [[ $TRAVIS_JOB_NAME == 'notebooks-code' ]]; then pytest --nbval-lax -p no:python /tmp/examples ; fi
Traceback (most recent call last):
File "/home/travis/miniconda/envs/TEST/lib/python3.7/site-packages/_pytest/config/__init__.py", line 927, in getini
return self._inicache[name]
KeyError: 'python_files'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/travis/miniconda/envs/TEST/lib/python3.7/site-packages/_pytest/config/__init__.py", line 934, in _getini
description, type, default = self._parser._inidict[name]
KeyError: 'python_files'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/travis/miniconda/envs/TEST/bin/pytest", line 11, in <module>
sys.exit(main())
File "/home/travis/miniconda/envs/TEST/lib/python3.7/site-packages/_pytest/config/__init__.py", line 60, in main
config = _prepareconfig(args, plugins)
File "/home/travis/miniconda/envs/TEST/lib/python3.7/site-packages/_pytest/config/__init__.py", line 201, in _prepareconfig
pluginmanager=pluginmanager, args=args
File "/home/travis/miniconda/envs/TEST/lib/python3.7/site-packages/pluggy/hooks.py", line 289, in __call__
return self._hookexec(self, self.get_hookimpls(), kwargs)
File "/home/travis/miniconda/envs/TEST/lib/python3.7/site-packages/pluggy/manager.py", line 68, in _hookexec
return self._inner_hookexec(hook, methods, kwargs)
File "/home/travis/miniconda/envs/TEST/lib/python3.7/site-packages/pluggy/manager.py", line 62, in <lambda>
firstresult=hook.spec.opts.get("firstresult") if hook.spec else False,
File "/home/travis/miniconda/envs/TEST/lib/python3.7/site-packages/pluggy/callers.py", line 203, in _multicall
gen.send(outcome)
File "/home/travis/miniconda/envs/TEST/lib/python3.7/site-packages/_pytest/helpconfig.py", line 93, in pytest_cmdline_parse
config = outcome.get_result()
File "/home/travis/miniconda/envs/TEST/lib/python3.7/site-packages/pluggy/callers.py", line 80, in get_result
raise ex[1].with_traceback(ex[2])
File "/home/travis/miniconda/envs/TEST/lib/python3.7/site-packages/pluggy/callers.py", line 187, in _multicall
res = hook_impl.function(*args)
File "/home/travis/miniconda/envs/TEST/lib/python3.7/site-packages/_pytest/config/__init__.py", line 679, in pytest_cmdline_parse
self.parse(args)
File "/home/travis/miniconda/envs/TEST/lib/python3.7/site-packages/_pytest/config/__init__.py", line 896, in parse
self._preparse(args, addopts=addopts)
File "/home/travis/miniconda/envs/TEST/lib/python3.7/site-packages/_pytest/config/__init__.py", line 836, in _preparse
self._consider_importhook(args)
File "/home/travis/miniconda/envs/TEST/lib/python3.7/site-packages/_pytest/config/__init__.py", line 772, in _consider_importhook
hook = _pytest.assertion.install_importhook(self)
File "/home/travis/miniconda/envs/TEST/lib/python3.7/site-packages/_pytest/assertion/__init__.py", line 81, in install_importhook
config._assertstate.hook = hook = rewrite.AssertionRewritingHook(config)
File "/home/travis/miniconda/envs/TEST/lib/python3.7/site-packages/_pytest/assertion/rewrite.py", line 63, in __init__
self.fnpats = config.getini("python_files")
File "/home/travis/miniconda/envs/TEST/lib/python3.7/site-packages/_pytest/config/__init__.py", line 929, in getini
self._inicache[name] = val = self._getini(name)
File "/home/travis/miniconda/envs/TEST/lib/python3.7/site-packages/_pytest/config/__init__.py", line 936, in _getini
raise ValueError("unknown configuration value: %r" % (name,))
ValueError: unknown configuration value: 'python_files'
The command "if [[ $TRAVIS_JOB_NAME == 'notebooks-code' ]]; then pytest --nbval-lax -p no:python /tmp/examples ; fi" exited with 1.
```
Something in the `nbval` tests is failing. `python_files` is a configuration option of Pytest, so I don't understand why it would be considered unknown.
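A minimal model of what the traceback shows (a hedged reading, not pytest internals): `python_files` is an ini option registered by pytest's `python` plugin, so running with `-p no:python` leaves it unregistered while the assertion-rewriting hook still asks for it.

```python
class Parser:
    """Stand-in for pytest's ini-option registry."""
    def __init__(self):
        self._inidict = {}

    def addini(self, name, default):
        self._inidict[name] = default

def getini(parser, name):
    try:
        return parser._inidict[name]
    except KeyError:
        raise ValueError("unknown configuration value: %r" % (name,))

parser = Parser()
# With `-p no:python`, the plugin that would call addini("python_files", ...)
# never loads, so the lookup below fails exactly like the traceback above.
try:
    getini(parser, "python_files")
    crashed = False
except ValueError:
    crashed = True
print(crashed)  # True
```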
I found this issue on the `nbval` repo: https://github.com/computationalmodelling/nbval/pull/113
It suggests `nbval` doesn't work with Python 3.7 and Ubuntu Trusty on Travis. I tried using Python 3.6 and using Ubuntu Xenial but that didn't resolve the issue. See #1121 and #1123. | closed | 2019-04-07T10:00:00Z | 2019-04-07T16:42:53Z | https://github.com/python-visualization/folium/issues/1124 | [
"bug",
"tests"
] | Conengmo | 3 |
quokkaproject/quokka | flask | 227 | https://github.com/citruspi/Flask-Analytics | https://github.com/citruspi/Flask-Analytics
| closed | 2015-07-08T18:02:42Z | 2018-02-06T13:46:16Z | https://github.com/quokkaproject/quokka/issues/227 | [
"enhancement",
"EASY",
"ready"
] | rochacbruno | 0 |
developmentseed/lonboard | jupyter | 690 | [BUG] `apply_continuous_cmap` mutates the column it operates on | ## Context
Maybe I'm doing something dumb, but it looks like `apply_continuous_cmap` mutates the column it is operating on, when I would expect it to leave it unchanged. See the example notebook here:
https://gist.github.com/ajfriend/40b0d5a2b8d7ea02fa8f35574aab65b7
This only seems to happen on pandas Series, but not numpy arrays. It happens for both palettable and matplotlib colormaps.
## Environment
- OS: macos 14.6
- Browser: Chrome
- Lonboard Version: `0.10.3`
## Steps to reproduce the bug
Shown in notebook above.
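A minimal, library-free sketch of the suspected pattern (hypothetical code, not lonboard's actual implementation): in-place arithmetic on a pandas Series mutates the caller's object, whereas copying first would not.

```python
import numpy as np
import pandas as pd

def normalize_inplace(values):
    values -= values.min()   # in-place ops: the caller's Series changes too
    values /= values.max()
    return values

s = pd.Series([0.0, 5.0, 10.0])
normalize_inplace(s)
print(s.tolist())  # [0.0, 0.5, 1.0] -- the original Series was mutated

# The fix would be to work on a copy:
def normalize_copy(values):
    values = np.asarray(values, dtype=float).copy()
    values -= values.min()
    values /= values.max()
    return values
```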
| closed | 2024-10-21T01:03:35Z | 2024-10-21T15:43:14Z | https://github.com/developmentseed/lonboard/issues/690 | [
"bug"
] | ajfriend | 4 |
vitalik/django-ninja | rest-api | 362 | [BUG] Unexpected default router operation auth if api auth set, but router auth set to None | If a NinjaAPI instance has `auth=[something]`, but a router instance attached to this api has `auth=None`, then I'd expect the default auth for that router's operations to be None. | closed | 2022-02-15T21:42:01Z | 2025-03-22T19:03:10Z | https://github.com/vitalik/django-ninja/issues/362 | [] | SmileyChris | 0 |
widgetti/solara | jupyter | 725 | Check hook rules for list, dict and set comprehension | Extend #706 with disallowing `[solara.use_state() for i in range(N)]` | open | 2024-08-01T10:14:09Z | 2024-08-01T10:14:09Z | https://github.com/widgetti/solara/issues/725 | [
"good first issue",
"footgun"
] | maartenbreddels | 0 |