| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
aio-libs/aiomysql | sqlalchemy | 63 | Echo option for default cursor | This line:
https://github.com/aio-libs/aiomysql/blob/master/aiomysql/connection.py#L365
``` python
cur = cursor(self, self._echo) if cursor else self.cursorclass(self)
```
We should pass the `echo` param to `cursorclass` too, shouldn't we?
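For clarity, a minimal self-contained sketch of the proposed change; the classes below are stand-ins rather than aiomysql's real API, and only the `cur = ...` line mirrors connection.py:

```python
# Stand-in classes to illustrate the proposed fix; the real aiomysql
# Cursor/Connection have different signatures.
class Cursor:
    def __init__(self, connection, echo=False):
        self.connection = connection
        self.echo = echo


class Connection:
    cursorclass = Cursor

    def __init__(self, echo=False):
        self._echo = echo

    def cursor(self, cursor=None):
        # Proposed: forward echo to cursorclass as well, not only to
        # an explicitly supplied cursor class.
        cur = cursor(self, self._echo) if cursor else self.cursorclass(self, self._echo)
        return cur


print(Connection(echo=True).cursor().echo)  # → True
```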
| closed | 2016-02-16T14:07:02Z | 2017-04-17T10:28:51Z | https://github.com/aio-libs/aiomysql/issues/63 | [] | tvoinarovskyi | 2 |
mwaskom/seaborn | pandas | 3,263 | Color not changing | Hi! I'm trying to change colors in seaborn, but it's having no effect. Any idea why?
<img width="706" alt="Captura de pantalla 2023-02-15 a la(s) 21 53 02" src="https://user-images.githubusercontent.com/119420090/219230518-91d40820-51b1-4d11-b724-408e0b5525e1.png">
| closed | 2023-02-16T00:58:04Z | 2023-02-16T02:23:52Z | https://github.com/mwaskom/seaborn/issues/3263 | [] | pablotucu | 1 |
coqui-ai/TTS | deep-learning | 2,775 | [Bug] training a new model stops with "Decoder stopped with `max_decoder_steps` 10000" | ### Describe the bug
Hello!
I am running TTS 0.15.6 and am trying to train a new voice based on 22,000 wav files. The training process seems to work but stops after a while, sometimes after 30 minutes and sometimes after 2 hours. Please see my log here:
```
[...]
--> STEP: 14
| > decoder_loss: 16.613059997558594 (17.25351878574916)
| > postnet_loss: 17.12180519104004 (21.706623349870956)
| > stopnet_loss: 0.6667962074279785 (0.6582207466874804)
| > decoder_coarse_loss: 15.918059349060059 (16.576676845550537)
| > decoder_ddc_loss: 0.0008170512155629694 (0.002927447099604511)
| > ga_loss: 0.0023095402866601944 (0.006352203565516642)
| > decoder_diff_spec_loss: 1.2021437883377075 (1.1472604700497218)
| > postnet_diff_spec_loss: 0.7060739398002625 (0.7125482303755624)
| > decoder_ssim_loss: 0.8515534400939941 (0.8743813676493508)
| > postnet_ssim_loss: 0.7912083864212036 (0.8292492159775325)
| > loss: 13.979524612426758 (15.465778078351702)
| > align_error: 0.9930884041823447 (0.9810643588259284)
warning: audio amplitude out of range, auto clipped.
--> EVAL PERFORMANCE
| > avg_loader_time: 0.00717801707131522 (+0.003781165395464216)
| > avg_decoder_loss: 17.25351878574916 (-1.5459022521972656)
| > avg_postnet_loss: 21.706623349870956 (-0.7187363760811927)
| > avg_stopnet_loss: 0.6582207466874804 (-0.1437307809080396)
| > avg_decoder_coarse_loss: 16.576676845550537 (-1.278356620243617)
| > avg_decoder_ddc_loss: 0.002927447099604511 (+0.00029547616473532155)
| > avg_ga_loss: 0.006352203565516642 (-2.3055106534489687e-05)
| > avg_decoder_diff_spec_loss: 1.1472604700497218 (+0.06803790586335312)
| > avg_postnet_diff_spec_loss: 0.7125482303755624 (+0.019743638379233208)
| > avg_decoder_ssim_loss: 0.8743813676493508 (-0.000993796757289389)
| > avg_postnet_ssim_loss: 0.8292492159775325 (-0.00942203402519226)
| > avg_loss: 15.465778078351702 (-1.0101798602512932)
| > avg_align_error: 0.9810643588259284 (-0.0001967931166291237)
| > Synthesizing test sentences.
> Decoder stopped with `max_decoder_steps` 10000
> Decoder stopped with `max_decoder_steps` 10000
> Decoder stopped with `max_decoder_steps` 10000
> Decoder stopped with `max_decoder_steps` 10000
> Decoder stopped with `max_decoder_steps` 10000
```
This issue was already reported in the past, but the thread was closed without the issue being fixed. Are there any updates?
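For what it's worth, one workaround sometimes suggested for this symptom is raising the decoder step limit in the model config. This is only a sketch of that idea; that `Tacotron2Config` accepts this field here, and that a larger limit helps rather than hides an alignment problem, are assumptions on my part:

```python
config = Tacotron2Config(
    # ... all other settings exactly as in the reproducer below ...
    max_decoder_steps=20000,  # assumption: raises the 10000 limit from the log
)
```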
### To Reproduce
```
import os
import sklearn
from TTS.config.shared_configs import BaseAudioConfig
from trainer import Trainer, TrainerArgs
from TTS.tts.configs.shared_configs import BaseDatasetConfig, CharactersConfig
from TTS.tts.configs.tacotron2_config import Tacotron2Config
from TTS.tts.datasets import load_tts_samples
from TTS.tts.models.tacotron2 import Tacotron2
from TTS.utils.audio import AudioProcessor
from TTS.tts.utils.text.tokenizer import TTSTokenizer

output_path = os.path.dirname(os.path.abspath(__file__))

dataset_config = BaseDatasetConfig(formatter="thorsten", meta_file_train="metadata.csv", path="/home/marc/Desktop/AI/Voice_Cloning3/")

character_config = CharactersConfig(
    characters="ABCDEFGHIJKLMNOPQRSTUVWXYZ!',-.:;?abcdefghijklmnopqrstuvwxyzßäéöü̈‒–—‘’“„ ",
    punctuations="!'(),-.:;? \u2012\u2013\u2014\u2018\u2019",
    pad="_",
    eos="~",
    bos="^",
    phonemes=" a b d e f h i j k l m n o p r s t u v w y z ç ð ø œ ɑ ɒ ɔ ɛ ɡ ɪ ɹ ʃ ʊ ʌ ʏ!!!!!!!?,....:;??!abdefhijklmnoprstuvwxyzçøŋœɐɑɒɔəɛɜɡɪɹɾʃʊʌʏʒː̩̃"
)

audio_config = BaseAudioConfig(
    stats_path="/home/marc/Desktop/AI/Voice_Cloning3/stats-thorsten-dec2021-22k.npy",
    sample_rate=22050,
    do_trim_silence=True,
    trim_db=60.0,
    signal_norm=False,
    mel_fmin=50,
    spec_gain=1.0,
    log_func="np.log",
    ref_level_db=20,
    preemphasis=0.0,
)

config = Tacotron2Config(  # This is the config that is saved for the future use
    audio=audio_config,
    batch_size=40,  # BS of 40 and max length of 10s will use about 20GB of GPU memory
    eval_batch_size=16,
    num_loader_workers=4,
    num_eval_loader_workers=4,
    run_eval=True,
    test_delay_epochs=-1,
    r=6,
    gradual_training=[[0, 6, 64], [10000, 4, 32], [50000, 3, 32], [100000, 2, 32]],
    double_decoder_consistency=True,
    epochs=1000,
    text_cleaner="phoneme_cleaners",
    use_phonemes=True,
    phoneme_language="de",
    phoneme_cache_path=os.path.join(output_path, "phoneme_cache"),
    precompute_num_workers=8,
    print_step=25,
    print_eval=True,
    mixed_precision=False,
    test_sentences=[
        "Es hat mich viel Zeit gekostet ein Stimme zu entwickeln, jetzt wo ich sie habe werde ich nicht mehr schweigen.",
        "Sei eine Stimme, kein Echo.",
        "Es tut mir Leid David. Das kann ich leider nicht machen.",
        "Dieser Kuchen ist großartig. Er ist so lecker und feucht.",
        "Vor dem 22. November 1963.",
    ],
    # max audio length of 10 seconds, feel free to increase if you got more than 20GB GPU memory
    max_audio_len=22050 * 10,
    output_path=output_path,
    datasets=[dataset_config],
)

ap = AudioProcessor(**config.audio.to_dict())
ap = AudioProcessor.init_from_config(config)

tokenizer, config = TTSTokenizer.init_from_config(config)

train_samples, eval_samples = load_tts_samples(
    dataset_config,
    eval_split=True,
    eval_split_max_size=config.eval_split_max_size,
    eval_split_size=config.eval_split_size,
)

model = Tacotron2(config, ap, tokenizer, speaker_manager=None)

trainer = Trainer(
    TrainerArgs(), config, output_path, model=model, train_samples=train_samples, eval_samples=eval_samples
)
trainer.fit()
```
### Expected behavior
_No response_
### Logs
_No response_
### Environment
```shell
TTS 0.15.6
Python 3.9.17
Ubuntu 23.04
cuda/cudnn originally 11.8 but TTS installed into the conda environment
nvidia-cublas-cu11 11.10.3.66
nvidia-cuda-cupti-cu11 11.7.101
nvidia-cuda-nvrtc-cu11 11.7.99
nvidia-cuda-runtime-cu11 11.7.99
nvidia-cudnn-cu11 8.5.0.96
nvidia-cufft-cu11 10.9.0.58
nvidia-curand-cu11 10.2.10.91
nvidia-cusolver-cu11 11.4.0.1
nvidia-cusparse-cu11 11.7.4.91
nvidia-nccl-cu11 2.14.3
nvidia-nvtx-cu11 11.7.91
Hardware AMD 5900X, RTX 4090
```
### Additional context
_No response_ | closed | 2023-07-17T02:47:52Z | 2023-07-20T12:58:24Z | https://github.com/coqui-ai/TTS/issues/2775 | [
"bug"
] | Marcophono2 | 5 |
bloomberg/pytest-memray | pytest | 53 | incompatible with flaky (as used in urllib3) | ## Bug Report
**Current Behavior** if flaky re-runs a test, it fails with `FAILED test/test_demo.py::test_demo - RuntimeError: No more than one Tracker instance can be active at the same time`
**Input Code**
```python
import itertools

import pytest

count = itertools.count().__next__


@pytest.mark.flaky
def test_demo():
    if count() <= 0:
        assert False
```
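For anyone digging into this, here is a stdlib-only sketch of the "single active tracker" constraint behind the error (memray's real `Tracker` is different; this just mimics the failure mode when a rerun starts tracking while the previous tracker is still active):

```python
# Mimics the constraint behind the RuntimeError above; not memray's code.
class Tracker:
    _active = None

    def __enter__(self):
        if Tracker._active is not None:
            raise RuntimeError(
                "No more than one Tracker instance can be active at the same time"
            )
        Tracker._active = self
        return self

    def __exit__(self, *exc):
        Tracker._active = None


with Tracker():
    pass  # normal run: the tracker is released afterwards

try:
    with Tracker():          # first run still "active"...
        with Tracker():      # ...when the flaky rerun starts a second one
            pass
except RuntimeError as e:
    print(e)  # → No more than one Tracker instance can be active at the same time
```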
**Expected behavior/code**
**Environment**
- Python(s): all versions (tested on 3.11)
```
pytest-memray==1.3.0
flaky==3.7.0
```
**Possible Solution**
<!--- Only if you have suggestions on a fix for the bug -->
**Additional context/Screenshots** Add any other context about the problem here. If
applicable, add screenshots to help explain.
| closed | 2022-11-23T13:30:25Z | 2022-11-23T13:48:27Z | https://github.com/bloomberg/pytest-memray/issues/53 | [] | graingert | 3 |
pandas-dev/pandas | python | 60,517 | DOC: Convert v to conv_val in function for pytables.py | ### Pandas version checks
- [X] I have checked that the issue still exists on the latest versions of the docs on `main` [here](https://pandas.pydata.org/docs/dev/)
### Location of the documentation
pandas\pandas\core\computation\pytables.py
### Documentation problem
There are many instances of a variable named just `v` in this function. I wanted to clarify the naming throughout.
### Suggested fix for documentation
Change `v` to `conv_val`. | closed | 2024-12-07T07:58:29Z | 2024-12-09T18:32:32Z | https://github.com/pandas-dev/pandas/issues/60517 | [
"Clean",
"Dependencies"
] | migelogali | 1 |
pbugnion/gmaps | jupyter | 22 | Build a viable pipeline for writing documentation. | This package desperately needs documentation. Unfortunately, the obvious solution of writing IPython notebooks and exporting them to HTML doesn't work. Javascript widgets are not included in the exported HTML, so you can't actually see the maps (see [this](http://nbviewer.ipython.org/github/pbugnion/gmaps/blob/master/examples/ipy3/heatmap_demo.ipynb), for instance).
I think that the solution will be to write a custom 'nbexport' script, but that sounds like a real pain.
| closed | 2014-12-02T17:15:36Z | 2016-06-25T08:37:08Z | https://github.com/pbugnion/gmaps/issues/22 | [] | pbugnion | 2 |
jpadilla/django-rest-framework-jwt | django | 472 | [feature] permit to use custom header instead of `Authorization` | Permit using a header other than `Authorization` to retrieve the token.
**Current and suggested behavior**
Current: this module only permits using the standard `Authorization` header
Suggested: permit the user to define a custom header name
**Why would the enhancement be useful to most users**
For example:
I have a Django API which uses this module to manage authentication with JWT. I run this API in two environments, stage and production, both behind an nginx proxy.
* On production: it works perfectly: my client uses the `Authorization` header to send the JWT to the API
* On stage: I want to protect my stage environment with a password, without any change to the API code.
  * I configure `basic_auth` in nginx, which uses the `Authorization` header.
  * Like in production, my API uses the `Authorization` header.
  * I have a header conflict...
In this example, I want to use a custom header (e.g. `X-Authorization`) in order to avoid the conflict and provide both headers for nginx & API authentication.
Example from stackoverflow: https://stackoverflow.com/questions/22229996/basic-http-and-bearer-token-authentication
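To make the request concrete, here is a framework-free sketch of configurable header selection (the function name, the `JWT` prefix and the header names are illustrative, not this package's real API):

```python
# Illustrative only: a token getter whose header name is a setting
# instead of the hard-coded "Authorization".
def get_token(headers, header_name="X-Authorization", prefix="JWT"):
    value = headers.get(header_name, "")
    parts = value.split()
    if len(parts) == 2 and parts[0] == prefix:
        return parts[1]
    return None


headers = {
    "Authorization": "Basic bmdpbng6c2VjcmV0",  # consumed by nginx basic_auth
    "X-Authorization": "JWT abc.def.ghi",       # consumed by the API
}
print(get_token(headers))  # → abc.def.ghi
```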
| open | 2019-03-26T14:01:07Z | 2019-03-26T14:12:08Z | https://github.com/jpadilla/django-rest-framework-jwt/issues/472 | [] | thomasboni | 0 |
liangliangyy/DjangoBlog | django | 686 | How do I customize the backend pages? I looked through the 1.0.0.7 files and couldn't find any HTML files related to the backend pages | <!--
If you don't check the items below carefully, I may close your Issue directly.
Before asking, it is recommended to read https://github.com/ruby-china/How-To-Ask-Questions-The-Smart-Way
-->
**I confirm that I have checked** (mark `[ ]` as `[x]`)
- [X] [DjangoBlog's README](https://github.com/liangliangyy/DjangoBlog/blob/master/README.md)
- [X] [Configuration notes](https://github.com/liangliangyy/DjangoBlog/blob/master/bin/config.md)
- [X] [Other Issues](https://github.com/liangliangyy/DjangoBlog/issues)
----
**I am requesting** (mark `[ ]` as `[x]`)
- [ ] Bug report
- [ ] A new feature or functionality
- [X] Technical support
| closed | 2023-10-24T10:00:56Z | 2023-11-06T06:15:20Z | https://github.com/liangliangyy/DjangoBlog/issues/686 | [] | qychui | 1 |
python-restx/flask-restx | api | 451 | Custom Array for Json argument parser | I'm attempting to create a custom array type for argument parsing in the json body. So I created a function and created a schema for it:
```
type_services.__schema__ = {'type':'array','items':{'type':'string'}}
```
and create it with
```
testparser.add_argument(name='services',type=type_services,location='json',required=True)
```
The type seems to work correctly; when I parse the arguments I get a list of strings. However, it doesn't show as a field in the json body within the Swagger documentation. With that being the only parameter, the json body is shown as '{}'.
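For reference, a self-contained sketch of the custom type function; the validation body is an assumption on my part, since only the `__schema__` assignment is shown above:

```python
# Sketch of the custom type; only the __schema__ line mirrors the issue,
# the validation logic is assumed.
def type_services(value):
    if not isinstance(value, list) or not all(isinstance(v, str) for v in value):
        raise ValueError("services must be a list of strings")
    return value


# The schema flask-restx is expected to emit for this type:
type_services.__schema__ = {'type': 'array', 'items': {'type': 'string'}}

print(type_services(["sms", "email"]))  # → ['sms', 'email']
```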
When checking the swagger.json, the items type seems to be missing. I added a "firstName" string parameter for comparison, and this is what I get in the swagger.json:
```
"/customer/test": {
"parameters": [
{
"name": "payload",
"required": true,
"in": "body",
"schema": {
"type": "object",
"properties": {
"firstName": {
"type": "string"
},
"services": {
"type": "array"
}
}
}
}
],
"post": {
"responses": {
"200": {
"description": "Success"
}
},
"operationId": "post_create",
"tags": [
"customer"
]
}
}
},
```
The type: string portion is missing in the schema. Any suggestions? | open | 2022-06-30T16:25:14Z | 2022-06-30T16:25:14Z | https://github.com/python-restx/flask-restx/issues/451 | [] | Bxmnx | 0 |
jupyterlab/jupyter-ai | jupyter | 668 | Add support for Claude V3 models on AWS Bedrock | ### Problem
Claude V3 was recently announced and AWS Bedrock already provides Claude V3 Sonnet (model id `anthropic.claude-3-sonnet-20240229-v1:0`).
| closed | 2024-03-04T21:06:34Z | 2024-03-07T19:38:37Z | https://github.com/jupyterlab/jupyter-ai/issues/668 | [
"enhancement"
] | DzmitrySudnik | 1 |
labmlai/annotated_deep_learning_paper_implementations | machine-learning | 190 | want to use the CelebA dataset, but there is an issue | PLZ | closed | 2023-06-07T13:57:10Z | 2024-06-19T10:47:29Z | https://github.com/labmlai/annotated_deep_learning_paper_implementations/issues/190 | [] | Z0Victor | 2 |
noirbizarre/flask-restplus | flask | 367 | Marshalling a long list of nested objects seems to scale badly | To support my testing, I hacked the following lines into `model.py`:
```python
def __deepcopy__(self, memo):
    DEEP_COPY_CALL_COUNT[0] += 1
    obj = self.__class__(self.name,
                         [(key, copy.deepcopy(value, memo)) for key, value in self.items()],
                         mask=self.__mask__)
    obj.__parents__ = self.__parents__
    return obj

DEEP_COPY_CALL_COUNT = [0]
```
I then wrote the following tests:
```python
from flask_restplus import Namespace, fields, marshal
from flask_restplus.model import DEEP_COPY_CALL_COUNT
from time import time
from collections import OrderedDict

ns = Namespace('')
thing = ns.model('Thing', {'a': fields.String,
                           'b': fields.String,
                           'c': fields.String,
                           'd': fields.String,
                           'e': fields.String})
element = ns.model('Element', {'value': fields.Nested(thing)})
single_nested_model = ns.model('Single', {'data': fields.List(fields.Nested(thing))})
double_nested_model = ns.model('Double', {'data': fields.List(fields.Nested(element))})


class Thing(object):
    def __init__(self):
        self.a = 1
        self.b = 1
        self.c = 1
        self.d = 1
        self.e = 1


class Element(object):
    def __init__(self):
        self.value = Thing()


single_things = OrderedDict({'data': [Thing() for _ in xrange(100000)]})
double_things = OrderedDict({'data': [Element() for _ in xrange(100000)]})

print DEEP_COPY_CALL_COUNT
start = time()
marshal(single_things, single_nested_model)
print time() - start
print DEEP_COPY_CALL_COUNT

DEEP_COPY_CALL_COUNT[0] = 0
print DEEP_COPY_CALL_COUNT
start = time()
marshal(double_things, double_nested_model)
print time() - start
print DEEP_COPY_CALL_COUNT
```
On my machine, the single-nested structure takes around 18 seconds and the doubly-nested one takes about 47. This seems to be a fairly long time. For the singly-nested structure, this resulted in 200,002 calls to `__deepcopy__`, followed by 600,003 for the doubly-nested structure.
Given that I've only increased the nesting level from 2 to 3, I'm not sure why the number of clones would treble here. I'm actually not sure why there's any need to copy the models so frequently at all. Indeed, replacing the `__deepcopy__` method with a simple `return self` seems only to break `ModelTest.test_model_deepcopy`, which just asserts that deepcopy works.
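To make the "return self" idea above concrete, here is a self-contained illustration; treating models as immutable during marshalling is an assumption this relies on:

```python
import copy


# Short-circuiting __deepcopy__ avoids the repeated model clones entirely,
# provided the model is never mutated while being marshalled (assumed here).
class Model(dict):
    def __deepcopy__(self, memo):
        return self


m = Model(name="Thing")
clone = copy.deepcopy(m)
print(clone is m)  # → True
```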
Why does deepcopy get called this many times? Can the marshalling code be reworked into something that's a bit more efficient? | closed | 2017-12-19T16:09:11Z | 2018-01-06T01:24:43Z | https://github.com/noirbizarre/flask-restplus/issues/367 | [] | Ymbirtt | 2 |
horovod/horovod | pytorch | 3,315 | recipe for target 'horovod/common/ops/cuda/CMakeFiles/compatible_horovod_cuda_kernels.dir/all' failed | **Environment:**
1. Framework: (TensorFlow, Keras, PyTorch, MXNet):PyTorch
2. Framework version: 1.5.1
3. Horovod version:0.23.0
4. MPI version:
5. CUDA version:10.2
6. NCCL version:2
7. Python version:3.7
8. Spark / PySpark version:
9. Ray version:
10. OS and version:
11. GCC version: 7.5.0
12. CMake version: 3.10.2
**Checklist:**
1. Did you search issues to find if somebody asked this question before?
2. If your question is about hang, did you read [this doc](https://github.com/horovod/horovod/blob/master/docs/running.rst)?
3. If your question is about docker, did you read [this doc](https://github.com/horovod/horovod/blob/master/docs/docker.rst)?
4. Did you check if you question is answered in the [troubleshooting guide](https://github.com/horovod/horovod/blob/master/docs/troubleshooting.rst)?
**Bug report:**
Please describe erroneous behavior you're observing and steps to reproduce it.
running build_ext
-- Could not find CCache. Consider installing CCache to speed up compilation.
-- The CXX compiler identification is GNU 7.3.0
-- Check for working CXX compiler: /home/xcc/anaconda3/envs/lanegcn/bin/x86_64-conda_cos6-linux-gnu-c++
-- Check for working CXX compiler: /home/xcc/anaconda3/envs/lanegcn/bin/x86_64-conda_cos6-linux-gnu-c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Build architecture flags: -mf16c -mavx -mfma
-- Using command /home/xcc/anaconda3/envs/lanegcn/bin/python
-- Found CUDA: /usr/local/cuda-10.2 (found version "10.2")
-- Linking against static NCCL library
-- Found NCCL: /usr/include
-- Determining NCCL version from the header file: /usr/include/nccl.h
-- NCCL_MAJOR_VERSION: 2
-- Found NCCL (include: /usr/include, library: /usr/lib/x86_64-linux-gnu/libnccl_static.a)
-- Found NVTX: /usr/local/cuda-10.2/include
-- Found NVTX (include: /usr/local/cuda-10.2/include, library: dl)
-- Found Pytorch: 1.5.1 (found suitable version "1.5.1", minimum required is "1.2.0")
-- HVD_NVCC_COMPILE_FLAGS = --std=c++11 -O3 -Xcompiler -fPIC -gencode arch=compute_30,code=sm_30 -gencode arch=compute_32,code=sm_32 -gencode arch=compute_35,code=sm_35 -gencode arch=compute_37,code=sm_37 -gencode arch=compute_50,code=sm_50 -gencode arch=compute_52,code=sm_52 -gencode arch=compute_53,code=sm_53 -gencode arch=compute_60,code=sm_60 -gencode arch=compute_61,code=sm_61 -gencode arch=compute_62,code=sm_62 -gencode arch=compute_70,code=sm_70 -gencode arch=compute_72,code=sm_72 -gencode arch=compute_75,code=sm_75 -gencode arch=compute_75,code=compute_75
-- Configuring done
CMake Warning at horovod/torch/CMakeLists.txt:81 (add_library):
Cannot generate a safe runtime search path for target pytorch because files
in some directories may conflict with libraries in implicit directories:
runtime library [libcudart.so.10.2] in /home/xcc/anaconda3/envs/lanegcn/lib may be hidden by files in:
/usr/local/cuda-10.2/lib64
Some of these libraries may not be found correctly.
In file included from /usr/local/cuda-10.2/include/driver_types.h:77:0,
from /usr/local/cuda-10.2/include/builtin_types.h:59,
from /usr/local/cuda-10.2/include/cuda_runtime.h:91,
from <command-line>:0:
/usr/include/limits.h:26:10: fatal error: bits/libc-header-start.h: No such file or directory
#include <bits/libc-header-start.h>
^~~~~~~~~~~~~~~~~~~~~~~~~~
compilation terminated.
CMake Error at compatible_horovod_cuda_kernels_generated_cuda_kernels.cu.o.RelWithDebInfo.cmake:219 (message):
Error generating
/tmp/pip-install-skjf0ukf/horovod_598e328bc68b407fb152042944d44da9/build/temp.linux-x86_64-3.7/RelWithDebInfo/horovod/common/ops/cuda/CMakeFiles/compatible_horovod_cuda_kernels.dir//./compatible_horovod_cuda_kernels_generated_cuda_kernels.cu.o
horovod/common/ops/cuda/CMakeFiles/compatible_horovod_cuda_kernels.dir/build.make:70: recipe for target 'horovod/common/ops/cuda/CMakeFiles/compatible_horovod_cuda_kernels.dir/compatible_horovod_cuda_kernels_generated_cuda_kernels.cu.o' failed
make[2]: *** [horovod/common/ops/cuda/CMakeFiles/compatible_horovod_cuda_kernels.dir/compatible_horovod_cuda_kernels_generated_cuda_kernels.cu.o] Error 1
make[2]: Leaving directory '/tmp/pip-install-skjf0ukf/horovod_598e328bc68b407fb152042944d44da9/build/temp.linux-x86_64-3.7/RelWithDebInfo'
CMakeFiles/Makefile2:215: recipe for target 'horovod/common/ops/cuda/CMakeFiles/compatible_horovod_cuda_kernels.dir/all' failed
make[1]: *** [horovod/common/ops/cuda/CMakeFiles/compatible_horovod_cuda_kernels.dir/all] Error 2
make[1]: Leaving directory '/tmp/pip-install-skjf0ukf/horovod_598e328bc68b407fb152042944d44da9/build/temp.linux-x86_64-3.7/RelWithDebInfo'
Makefile:83: recipe for target 'all' failed
make: *** [all] Error 2
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/tmp/pip-install-skjf0ukf/horovod_598e328bc68b407fb152042944d44da9/setup.py", line 211, in <module>
    'horovodrun = horovod.runner.launch:run_commandline'
  File "/home/xcc/anaconda3/envs/lanegcn/lib/python3.7/site-packages/setuptools/__init__.py", line 153, in setup
    return distutils.core.setup(**attrs)
  File "/home/xcc/anaconda3/envs/lanegcn/lib/python3.7/distutils/core.py", line 148, in setup
    dist.run_commands()
  File "/home/xcc/anaconda3/envs/lanegcn/lib/python3.7/distutils/dist.py", line 966, in run_commands
    self.run_command(cmd)
  File "/home/xcc/anaconda3/envs/lanegcn/lib/python3.7/distutils/dist.py", line 985, in run_command
    cmd_obj.run()
  File "/home/xcc/anaconda3/envs/lanegcn/lib/python3.7/site-packages/setuptools/command/install.py", line 61, in run
    return orig.install.run(self)
  File "/home/xcc/anaconda3/envs/lanegcn/lib/python3.7/distutils/command/install.py", line 545, in run
    self.run_command('build')
  File "/home/xcc/anaconda3/envs/lanegcn/lib/python3.7/distutils/cmd.py", line 313, in run_command
    self.distribution.run_command(command)
  File "/home/xcc/anaconda3/envs/lanegcn/lib/python3.7/distutils/dist.py", line 985, in run_command
    cmd_obj.run()
  File "/home/xcc/anaconda3/envs/lanegcn/lib/python3.7/distutils/command/build.py", line 135, in run
    self.run_command(cmd_name)
  File "/home/xcc/anaconda3/envs/lanegcn/lib/python3.7/distutils/cmd.py", line 313, in run_command
    self.distribution.run_command(command)
  File "/home/xcc/anaconda3/envs/lanegcn/lib/python3.7/distutils/dist.py", line 985, in run_command
    cmd_obj.run()
  File "/home/xcc/anaconda3/envs/lanegcn/lib/python3.7/site-packages/setuptools/command/build_ext.py", line 79, in run
    _build_ext.run(self)
  File "/home/xcc/anaconda3/envs/lanegcn/lib/python3.7/distutils/command/build_ext.py", line 340, in run
    self.build_extensions()
  File "/tmp/pip-install-skjf0ukf/horovod_598e328bc68b407fb152042944d44da9/setup.py", line 101, in build_extensions
    cwd=cmake_build_dir)
  File "/home/xcc/anaconda3/envs/lanegcn/lib/python3.7/subprocess.py", line 363, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['cmake', '--build', '.', '--config', 'RelWithDebInfo', '--', 'VERBOSE=1']' returned non-zero exit status 2.
----------------------------------------
ERROR: Command errored out with exit status 1: /home/xcc/anaconda3/envs/lanegcn/bin/python -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-skjf0ukf/horovod_598e328bc68b407fb152042944d44da9/setup.py'"'"'; __file__='"'"'/tmp/pip-install-skjf0ukf/horovod_598e328bc68b407fb152042944d44da9/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-h8bnj0_b/install-record.txt --single-version-externally-managed --compile --install-headers /home/xcc/anaconda3/envs/lanegcn/include/python3.7m/horovod Check the logs for full command output.
steps to reproduce it:
HOROVOD_NCCL_INCLUDE=/usr/include HOROVOD_NCCL_LIB=/usr/lib/x86_64-linux-gnu HOROVOD_CUDA_HOME=/usr/local/cuda-10.2 HOROVOD_CUDA_INCLUDE=/usr/local/cuda-10.2/include HOROVOD_GPU_OPERATIONS=NCCL HOROVOD_WITHOUT_GLOO=1 HOROVOD_WITHOUT_MPI=1 HOROVOD_WITHOUT_TENSORFLOW=1 HOROVOD_WITHOUT_MXNET=1 pip install horovod
BTW, the CMake version is 3.10.2, and I ran `pip install mpi4py` (in the conda env).
| open | 2021-12-13T13:25:36Z | 2021-12-13T14:06:00Z | https://github.com/horovod/horovod/issues/3315 | [
"bug"
] | coco-99-coco | 0 |
ultralytics/ultralytics | python | 19,270 | expected str, bytes or os.PathLike object, not NoneType | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar bug report.
### Ultralytics YOLO Component
_No response_
### Bug
I imported the dataset directly from Roboflow, so it should not have a problem.
The following is the code I ran:
```python
from ultralytics import YOLO, checks, hub

checks()
hub.login('hidden')
model = YOLO('https://hub.ultralytics.com/models/DDnZzdKNetoATXL0SY0Q')
results = model.train()
```
```
Traceback (most recent call last):
  File "C:\Program Files\Python310\lib\site-packages\ultralytics\engine\trainer.py", line 558, in get_dataset
    elif self.args.data.split(".")[-1] in {"yaml", "yml"} or self.args.task in {
AttributeError: 'NoneType' object has no attribute 'split'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\a\Desktop\Weight1\Train.py", line 7, in <module>
    results = model.train()
  File "C:\Program Files\Python310\lib\site-packages\ultralytics\engine\model.py", line 803, in train
    self.trainer = (trainer or self._smart_load("trainer"))(overrides=args, _callbacks=self.callbacks)
  File "C:\Program Files\Python310\lib\site-packages\ultralytics\engine\trainer.py", line 134, in __init__
    self.trainset, self.testset = self.get_dataset()
  File "C:\Program Files\Python310\lib\site-packages\ultralytics\engine\trainer.py", line 568, in get_dataset
    raise RuntimeError(emojis(f"Dataset '{clean_url(self.args.data)}' error ❌ {e}")) from e
  File "C:\Program Files\Python310\lib\site-packages\ultralytics\utils\__init__.py", line 1301, in clean_url
    url = Path(url).as_posix().replace(":/", "://") # Pathlib turns :// -> :/, as_posix() for Windows
  File "C:\Program Files\Python310\lib\pathlib.py", line 960, in __new__
    self = cls._from_parts(args)
  File "C:\Program Files\Python310\lib\pathlib.py", line 594, in _from_parts
    drv, root, parts = self._parse_args(args)
  File "C:\Program Files\Python310\lib\pathlib.py", line 578, in _parse_args
    a = os.fspath(a)
TypeError: expected str, bytes or os.PathLike object, not NoneType
```
### Environment
Ultralytics 8.3.75 🚀 Python-3.10.11 torch-2.5.1+cu118 CUDA:0 (NVIDIA GeForce RTX 2080 Ti, 11264MiB)
Setup complete ✅ (16 CPUs, 15.9 GB RAM, 306.5/446.5 GB disk)
OS Windows-10-10.0.19045-SP0
Environment Windows
Python 3.10.11
Install pip
RAM 15.93 GB
Disk 306.5/446.5 GB
CPU AMD Ryzen 7 5700X 8-Core Processor
CPU count 16
GPU NVIDIA GeForce RTX 2080 Ti, 11264MiB
GPU count 1
CUDA 11.8
numpy ✅ 1.26.4<=2.1.1,>=1.23.0
matplotlib ✅ 3.10.0>=3.3.0
opencv-python ✅ 4.10.0.84>=4.6.0
pillow ✅ 10.4.0>=7.1.2
pyyaml ✅ 6.0.2>=5.3.1
requests ✅ 2.32.3>=2.23.0
scipy ✅ 1.15.1>=1.4.1
torch ✅ 2.5.1+cu118>=1.8.0
torch ✅ 2.5.1+cu118!=2.4.0,>=1.8.0; sys_platform == "win32"
torchvision ✅ 0.20.1+cu118>=0.9.0
tqdm ✅ 4.67.1>=4.64.0
psutil ✅ 6.1.1
py-cpuinfo ✅ 9.0.0
pandas ✅ 2.2.3>=1.1.4
seaborn ✅ 0.13.2>=0.11.0
ultralytics-thop ✅ 2.0.14>=2.0.0
### Minimal Reproducible Example
```python
from ultralytics import YOLO, checks, hub

checks()
hub.login('hidden')
model = YOLO('https://hub.ultralytics.com/models/DDnZzdKNetoATXL0SY0Q')
results = model.train()
```
### Additional
_No response_
### Are you willing to submit a PR?
- [ ] Yes I'd like to help by submitting a PR! | open | 2025-02-16T22:55:49Z | 2025-02-16T23:15:55Z | https://github.com/ultralytics/ultralytics/issues/19270 | [
"question"
] | felixho789 | 2 |
koaning/scikit-lego | scikit-learn | 366 | [DOCS] Add reference to pyod | It's a related project with *many* outlier detection models. It would help folks if we add a reference in our outlier docs.
https://github.com/yzhao062/pyod
| closed | 2020-06-02T08:22:47Z | 2020-07-08T20:50:16Z | https://github.com/koaning/scikit-lego/issues/366 | [
"good first issue",
"documentation"
] | koaning | 2 |
robinhood/faust | asyncio | 152 | problem with python3.6 | Hello there,
I am trying to use faust for one of my learning projects, where I am working with GDAL (which is only compatible up to Python 3.6), and I am having difficulty getting faust 1.0.30 to work in a Python 3.6 virtual environment. Here is the error I am getting:
```
  File "/home/santosh/project/pinp/env3.6/lib/python3.6/site-packages/faust/streams.py", line 58, in <module>
    from contextvars import ContextVar
ModuleNotFoundError: No module named 'contextvars'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "hello_world.py", line 2, in <module>
    app = faust.App(
  File "/home/santosh/project/pinp/env3.6/lib/python3.6/site-packages/faust/__init__.py", line 229, in __getattr__
    object_origins[name], None, None, [name])
  File "/home/santosh/project/pinp/env3.6/lib/python3.6/site-packages/faust/app/__init__.py", line 1, in <module>
    from .base import App
  File "/home/santosh/project/pinp/env3.6/lib/python3.6/site-packages/faust/app/base.py", line 51, in <module>
    from faust.channels import Channel, ChannelT
  File "/home/santosh/project/pinp/env3.6/lib/python3.6/site-packages/faust/channels.py", line 27, in <module>
    from .streams import current_event
  File "/home/santosh/project/pinp/env3.6/lib/python3.6/site-packages/faust/streams.py", line 63, in <module>
    from aiocontextvars import ContextVar, Context
ModuleNotFoundError: No module named 'aiocontextvars'
```
Could you please let me know how to fix this issue? Thanks in advance!
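From the traceback, faust's `streams.py` first tries the stdlib `contextvars` (Python 3.7+) and then falls back to the `aiocontextvars` backport, so on Python 3.6 the backport needs to be installed (`pip install aiocontextvars`). A sketch of that fallback pattern:

```python
# Fallback pattern matching the traceback above: stdlib contextvars on
# 3.7+, the aiocontextvars backport on 3.6 (must be pip-installed there).
try:
    from contextvars import ContextVar  # Python 3.7+
except ImportError:
    from aiocontextvars import ContextVar  # PyPI backport for Python 3.6

request_id = ContextVar("request_id", default=None)
request_id.set("abc123")
print(request_id.get())  # → abc123
```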
| open | 2018-08-28T10:09:34Z | 2018-11-09T16:32:05Z | https://github.com/robinhood/faust/issues/152 | [
"Category: Deployment",
"Category: Packaging and Release Management",
"Status: Need Verification"
] | skumarsah | 10 |
microsoft/MMdnn | tensorflow | 16 | Need to update pip installation | When I tried to update my MMdnn with `pip install -U https://github.com/Microsoft/MMdnn/releases/download/0.1.1/mmdnn-0.1.1-py2.py3-none-any.whl`, it didn't update to the version currently in master.
As for `setup.py`, I tried running it, but it looks like I have to supply a command after it:
```
usage: setup.py [global_opts] cmd1 [cmd1_opts] [cmd2 [cmd2_opts] ...]
or: setup.py --help [cmd1 cmd2 ...]
or: setup.py --help-commands
or: setup.py cmd --help
error: no commands supplied
```
And if I use `pip install git+https://github.com/Microsoft/MMdnn.git@master`, there will be an encoding error:
```
Collecting git+https://github.com/Microsoft/MMdnn.git@master
  Cloning https://github.com/Microsoft/MMdnn.git (to master) to /tmp/pip-UM6BrV-build
    Complete output from command python setup.py egg_info:
    Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "/tmp/pip-UM6BrV-build/setup.py", line 5, in <module>
        with open('README.md', encoding='utf-8') as f:
    TypeError: 'encoding' is an invalid keyword argument for this function
```
Can you please explain the usage or update it?
Thanks.
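If it helps diagnose the second error: Python 2's builtin `open()` has no `encoding` keyword (it was added in Python 3), which matches the TypeError above when pip runs `setup.py` under Python 2. A common portable pattern (not necessarily what the maintainers will choose) is `io.open`:

```python
import io
import os
import tempfile

# io.open accepts the encoding keyword on both Python 2 and 3, unlike
# the Python 2 builtin open() that triggers the TypeError above.
path = os.path.join(tempfile.gettempdir(), "readme_demo.md")
with io.open(path, "w", encoding="utf-8") as f:
    f.write(u"MMdnn demo\n")
with io.open(path, encoding="utf-8") as f:
    print(f.read())  # → MMdnn demo
```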
| closed | 2017-12-01T16:46:15Z | 2017-12-05T14:57:30Z | https://github.com/microsoft/MMdnn/issues/16 | [] | seanchung2 | 1 |
roboflow/supervision | machine-learning | 1,182 | HaloAnnotator does not work | ### Search before asking
- [X] I have searched the Supervision [issues](https://github.com/roboflow/supervision/issues) and found no similar feature requests.
### Question
Can you give me complete code? When I use `HaloAnnotator`, nothing changes in the image; `detections.mask` is `None`.
```python
import cv2
import supervision as sv
from ultralytics import YOLO
filename = '../src/static/dog.png'
image = cv2.imread(filename)
model = YOLO('../models/yolov8x.pt')
results = model(image)[0]
detections = sv.Detections.from_ultralytics(results)
annotated_image = sv.HaloAnnotator().annotate(
scene=image.copy(), detections=detections)
sv.plot_image(annotated_image)
```
### Additional
_No response_ | closed | 2024-05-09T07:33:38Z | 2024-05-09T08:11:32Z | https://github.com/roboflow/supervision/issues/1182 | [
"question"
] | wilsonlv | 1 |
saulpw/visidata | pandas | 2,404 | Scientific notation shown for column with large number even when type is string | **Small description**

**Expected result**
Seeing the original string that is in the CSV
**Actual result with screenshot**
If you get an unexpected error, please include the full stack trace that you get with `Ctrl-E`.
**Steps to reproduce with sample data and a .vd**
First try reproducing without any user configuration by using the flag `-N`.
e.g. `echo "abc" | vd -f txt -N`
Please attach the commandlog (saved with `Ctrl-D`) to show the steps that led to the issue.
See [here](http://visidata.org/docs/save-restore/) for more details.
**Additional context**
Please include the version of VisiData and Python.
| closed | 2024-05-12T18:58:40Z | 2024-05-12T20:10:53Z | https://github.com/saulpw/visidata/issues/2404 | [
"bug",
"By Design"
] | jay-babu | 2 |
chezou/tabula-py | pandas | 93 | CalledProcessError | # Summary of your issue
Hello. Recently I have been trying to use tabula-py to extract multiple tables from a PDF file https://drive.google.com/open?id=10Z5203McD66puNAfy2Or85NpeVUZqbL3 . However, I am facing a CalledProcessError. I read several issues on your GitHub; it seems to be a Java problem. However, after reinstalling Java, the problem appears again. I have already added the environment paths, and my OS is Win 10. Thank you!
# Environment
Write and check your environment. Please paste outputs of specific commands if required.
- [ ] Paste the output of `import tabula; tabula.environment_info()` on Python REPL:
```py
tabula.environment_info()
Python version:
3.5.2 |Anaconda 4.2.0 (64-bit)| (default, Jul 5 2016, 11:41:13) [MSC v.1900 64 bit (AMD64)]
Java version:
java version "1.8.0_171"
Java(TM) SE Runtime Environment (build 1.8.0_171-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.171-b11, mixed mode)
tabula-py version: 1.1.1
platform: Windows-10-10.0.16299-SP0
uname:
uname_result(system='Windows', node='DESKTOP-8ATR4KH', release='10', version='10.0.16299', machine='AMD64', processor='Intel64 Family 6 Model 142 Stepping 9, GenuineIntel')
linux_distribution: ('', '', '')
mac_ver: ('', ('', '', ''), '')
```
If not possible to execute `tabula.environment_info()`, please answer following questions manually.
- [ ] Paste the output of `python --version` command on your terminal: ?
- [ ] Paste the output of `java -version` command on your terminal: ?
- [ ] Does `java -h` command work well?; Ensure your java command is included in `PATH`
- [ ] Write your OS and it's version: ?
Providing PDF would be really helpful to resolve the issue.
- [ ] (Optional, but really helpful) Your PDF URL: ?
# What did you do when you faced the problem?
Provide your information to reproduce the issue.
## Code:
```
import tabula
df = tabula.read_pdf("D:/Research Assistant/Task/2011.pdf", pages=2, multiple_tables=True)
```
## Expected behavior:
The df contains the tables.
## Actual behavior:
```
CalledProcessError: Command '['java', '-jar', 'C:\\Users\\yipin\\Anaconda3\\lib\\site-packages\\tabula\\tabula-1.0.1-jar-with-dependencies.jar', '--pages', '2', '--guess', 'D:/Research Assistant/Task/2011.pdf']' returned non-zero exit status 1
```
## Related Issues:
| closed | 2018-05-22T00:25:25Z | 2021-07-15T06:32:27Z | https://github.com/chezou/tabula-py/issues/93 | [] | yipinlyu | 6 |
PokemonGoF/PokemonGo-Bot | automation | 5,655 | Conditions to enable Sniper | ### Short Description
Add a set of conditions similar to MoveToMapPokemon in Sniper
like:
"max_sniping_distance": 10000,
"max_walking_distance": 500,
"min_time": 60,
"min_ball": 50,
"prioritize_vips": true,
Also, add a minimum great ball and minimum ultra ball condition to both sniping tasks.
### How it would help others
Sniping with only a few pokeballs and no greatballs to catch a dragonite would end up with the player having no balls if the pokemon doesn't vanish in time.
| closed | 2016-09-24T12:07:23Z | 2016-09-24T23:44:04Z | https://github.com/PokemonGoF/PokemonGo-Bot/issues/5655 | [] | abhinavagrawal1995 | 5 |
horovod/horovod | tensorflow | 3,143 | [Elastic Horovod] It will loss some indices of processed samples in hvd.elastic.state when some nodes dropped | **Environment:**
1. Framework: PyTorch
2. Framework version: 1.7.0+cu101
3. Horovod version: 0.22.1
**Checklist:**
1. Did you search issues to find if somebody asked this question before?
yes
2. If your question is about hang, did you read [this doc](https://github.com/horovod/horovod/blob/master/docs/running.rst)?
yes
3. If your question is about docker, did you read [this doc](https://github.com/horovod/horovod/blob/master/docs/docker.rst)?
yes
4. Did you check if you question is answered in the [troubleshooting guide](https://github.com/horovod/horovod/blob/master/docs/troubleshooting.rst)?
yes
**Bug report:**
horovod/torch/elastic/state.py
```python
class SamplerStateHandler(StateHandler):
def __init__(self, sampler):
super().__init__(sampler)
self._saved_sampler_state = copy.deepcopy(self.value.state_dict())
def save(self):
self._saved_sampler_state = copy.deepcopy(self.value.state_dict())
def restore(self):
self.value.load_state_dict(self._saved_sampler_state)
def sync(self):
# Get the set of processed indices from all workers
world_processed_indices = _union(allgather_object(self.value.processed_indices))
# Replace local processed indices with global indices
state_dict = self.value.state_dict()
state_dict['processed_indices'] = world_processed_indices
# Broadcast and load the state to make sure we're all in sync
self.value.load_state_dict(broadcast_object(state_dict))
```
When `state.commit()` is called, the `save()` function above only saves the state of `ElasticSampler` locally. If a node is dropped for some reason, the indices of the samples processed on that node are lost. So after the restart and sync, those samples will be processed again, which is not what we want.
**Steps to reproduce**
1. Add some log to `SamplerStateHandler.sync()`
```python
class SamplerStateHandler(StateHandler):
......
def sync(self):
# Get the set of processed indices from all workers
world_processed_indices = _union(allgather_object(self.value.processed_indices))
print(f"world_processed_indices: {world_processed_indices }")
......
```
2. Use the code below to reproduce. Note not do shuffle for the convenience of observation.
```python
#! /usr/bin/env python3
# -*- coding: utf-8 -*-
import time
import torch
import horovod.torch as hvd
BATCH_SIZE_PER_GPU = 2
class MyDataset(torch.utils.data.Dataset):
def __init__(self, n):
self.n = n
def __getitem__(self, index):
index = index % self.n
return index
def __len__(self):
return self.n
@hvd.elastic.run
def train(state, data_loader, a):
rank = hvd.rank()
print(f"train rank={rank}")
total_epoch = 100
for epoch in range(state.epoch, total_epoch):
print(f"epoch={epoch}")
print("Epoch {} / {}, Start training".format(epoch, total_epoch))
print(f"train... rank={rank}")
print(f"start enumerate train_loader... rank={rank}")
batch_offset = state.batch
for i, d in enumerate(data_loader):
state.batch = batch_idx = batch_offset + i
if state.batch % 5 == 0:
t1 = time.time()
state.commit()
print(f"time: {time.time() - t1}")
state.check_host_updates()
b = hvd.allreduce(a)
print(f"b: {b}")
state.train_sampler.record_batch(i, BATCH_SIZE_PER_GPU)
# if rank == 0:
msg = 'Epoch: [{0}][{1}/{2}]\t'.format(
state.epoch, state.batch, len(data_loader))
print(msg)
time.sleep(0.5)
state.epoch += 1
state.batch = 0
data_loader.sampler.set_epoch(epoch)
state.commit()
def main():
hvd.init()
torch.manual_seed(219)
torch.cuda.set_device(hvd.local_rank())
dataset = MyDataset(2000)
sampler = hvd.elastic.ElasticSampler(dataset, shuffle=False)
data_loader = torch.utils.data.DataLoader(
dataset,
batch_size=BATCH_SIZE_PER_GPU,
shuffle=False,
num_workers=2,
sampler=sampler,
worker_init_fn=None,
drop_last=True,
)
a = torch.Tensor([1,2,3,4])
state = hvd.elastic.TorchState(epoch=0,
train_sampler=sampler,
batch=0)
train(state, data_loader, a)
if __name__ == "__main__":
main()
```
3. Use elastic horovod to run the above code on some nodes, for example, on 3 nodes.
4. Kill the processes on one node after a while.
5. Observe the log we added to `SamplerStateHandler.sync()`.
**Solutions**
1. Solution 1
Save the global state. To get the global state of `ElasticSampler` on every node, the `save` function should call `sync` first.
```python
class SamplerStateHandler(StateHandler):
......
def save(self):
self.sync()
self._saved_sampler_state = copy.deepcopy(self.value.state_dict())
......
```
But this will cause `state.commit()` to take a long time.
2. Solution 2
Maybe we can save the number of processed samples instead of saving all the processed indices. The number of processed samples can be calculated locally from `batch_size` and `num_replicas`.
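A rough, self-contained sketch of that idea — all names below are hypothetical, not Horovod API — showing how, with shuffling disabled, each worker's committed batch count alone is enough to reconstruct the globally processed index set (a strided partition of indices across workers is assumed here for illustration):

```python
def processed_indices(total, num_replicas, batch_size, batches_done):
    """Reconstruct the set of globally processed sample indices from a
    per-worker batch count (hypothetical sketch; assumes each worker
    owns a strided, unshuffled slice of the indices)."""
    done = set()
    for rank in range(num_replicas):
        # Worker `rank` owns indices rank, rank + num_replicas, ...
        owned = list(range(rank, total, num_replicas))
        done.update(owned[: batches_done * batch_size])
    return done

# Example: 2 workers, batch size 2, 3 committed batches each
# -> each worker has consumed 6 of its owned samples.
print(sorted(processed_indices(total=20, num_replicas=2,
                               batch_size=2, batches_done=3)))
# → [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]
```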
| closed | 2021-09-01T12:33:21Z | 2021-10-21T20:44:14Z | https://github.com/horovod/horovod/issues/3143 | [
"bug"
] | hgx1991 | 0 |
DistrictDataLabs/yellowbrick | scikit-learn | 425 | Improving Quick Start Documentation | **Quick Start Documentation**
In the walkthrough part of the documentation, the correlation between actual temperature and feels-like temperature is analysed. The author mentions that it is intuitive to prioritize the feature 'feels like' over 'temp'. But for the reader, it is a bit hard to understand the reason behind the author's feature-selection decision from the plot.
Expectation:
The documentation could add some more details on why 'feels like' is better than 'temp' when we train machine learning models.
### Background
https://www.datascience.com/blog/introduction-to-correlation-learn-data-science-tutorials
https://newonlinecourses.science.psu.edu/stat501/node/346/
| closed | 2018-05-15T17:52:14Z | 2018-06-08T11:09:06Z | https://github.com/DistrictDataLabs/yellowbrick/issues/425 | [
"type: question",
"level: intermediate",
"type: documentation"
] | muthu-tech | 4 |
pyg-team/pytorch_geometric | deep-learning | 9,862 | Bug in the implementation of sagpool | ### 🐛 Describe the bug
Hi this is an old bug described in pull request #8562 .
As mentioned, the implementation of select function in [sagpool](https://github.com/pyg-team/pytorch_geometric/blob/master/torch_geometric/nn/pool/sag_pool.py) reuses the select class of topkpool, which introduces an extra learnable weight. As a consequence, this select function learns to take the opposite of the scores if the weight is negative, which is not consistent with the paper, neither the [codes](https://github.com/inyeoplee77/SAGPool/blob/master/layers.py) of the authors. In my observation, this change influences the effectiveness of sagpool significantly in some situations.
Considering the future work may use the sagpool layer in Pyg for evaluation, would you mind fixing this bug so that it doesn't influence their results and conclusion?
### Versions
I cant get the collect_env tool. The connection refused.
| open | 2024-12-14T18:15:00Z | 2024-12-14T18:15:00Z | https://github.com/pyg-team/pytorch_geometric/issues/9862 | [
"bug"
] | ChenYizhu97 | 0 |
plotly/dash-table | plotly | 830 | Cell with dropdown does not allow for backspace | When editing the value of a cell with a dropdown after double clicking, the value can only be appended with more characters. If a typo was made when filtering the dropdown, pressing the backspace key doesn't do anything and you must click outside the cell to clear the input. However, if you double click on a cell without a dropdown to enable cell editing and then double click on a cell with a dropdown, the backspace key works as expected. | open | 2020-09-23T18:45:01Z | 2020-09-23T18:45:01Z | https://github.com/plotly/dash-table/issues/830 | [] | blozano824 | 0 |
PrefectHQ/prefect | automation | 17,280 | Loading a GCPSecret block generates a warning | ### Bug summary
Loading a GcpSecret block seems to generate a warning that appears to come from the GCP library. The block still loads and everything executes, but the warning doesn't appear until the flow run completes, causing some confusion.
```python
from prefect import flow
from prefect_gcp.secret_manager import GcpSecret
@flow(log_prints=True)
def gcp_secret_flow():
gcpsecret_block = GcpSecret.load("mm2-test-secret")
gcpsecret_block.read_secret()
```
### Version info
```Text
Version: 3.2.2
API version: 0.8.4
Python version: 3.12.4
Git commit: d982c69a
Built: Thu, Feb 13, 2025 10:53 AM
OS/Arch: darwin/arm64
Profile: masonsandbox
Server type: cloud
Pydantic version: 2.10.6
Integrations:
prefect-dask: 0.3.2
prefect-snowflake: 0.28.0
prefect-slack: 0.3.0
prefect-gcp: 0.6.2
prefect-aws: 0.5.0
prefect-gitlab: 0.3.1
prefect-dbt: 0.6.4
prefect-docker: 0.6.1
prefect-sqlalchemy: 0.5.1
prefect-shell: 0.3.1
```
### Additional context
Log output
```
13:09:08.327 | INFO | Flow run 'spry-donkey' - Beginning flow run 'spry-donkey' for flow 'env-vars-flow'
13:09:08.331 | INFO | Flow run 'spry-donkey' - View at
13:09:09.698 | INFO | Flow run 'spry-donkey' - The secret 'NOTREAL' data was successfully read.
13:09:09.918 | INFO | Flow run 'spry-donkey' - Finished in state Completed()
13:09:10.086 | WARNING | EventsWorker - Still processing items: 1 items remaining...
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
E0000 00:00:1740514154.823139 19166321 init.cc:232] grpc_wait_for_shutdown_with_timeout() timed out.
``` | closed | 2025-02-25T20:42:35Z | 2025-02-25T20:51:50Z | https://github.com/PrefectHQ/prefect/issues/17280 | [
"upstream dependency"
] | masonmenges | 1 |
keras-team/keras | data-science | 20,350 | argmax returns incorrect result for input containing -0.0 (Keras using TensorFlow backend) | Description:
When using keras.backend.argmax with an input array containing -0.0, the result is incorrect. Specifically, the function returns 1 (the index of -0.0) as the position of the maximum value, while the actual maximum value is 1.401298464324817e-45 at index 2.
This issue is reproducible in TensorFlow and JAX as well, as they share similar backend logic for the argmax function. However, PyTorch correctly returns the expected index 2 for the maximum value.
Expected Behavior:
keras.backend.argmax should return 2, as the value at index 2 (1.401298464324817e-45) is greater than both -1.0 and -0.0.
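As a plain-Python sanity check of that ordering (independent of any framework):

```python
vals = [-1.0, -0.0, 1.401298464324817e-45]

# IEEE-754: -0.0 compares equal to 0.0, but the positive subnormal at
# index 2 is still strictly greater than both other entries.
print(-0.0 == 0.0)                                   # → True
print(vals[2] > vals[1])                             # → True
print(max(range(len(vals)), key=vals.__getitem__))   # → 2
```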
```
import numpy as np
import torch
import tensorflow as tf
import jax.numpy as jnp
from tensorflow import keras
def test_argmax():
# Input data
input_data = np.array([-1.0, -0.0, 1.401298464324817e-45], dtype=np.float32)
# PyTorch argmax
pytorch_result = torch.argmax(torch.tensor(input_data, dtype=torch.float32)).item()
print(f"PyTorch argmax result: {pytorch_result}")
# TensorFlow argmax
tensorflow_result = tf.math.argmax(input_data).numpy()
print(f"TensorFlow argmax result: {tensorflow_result}")
# Keras argmax (Keras internally uses TensorFlow, so should be the same)
keras_result = keras.backend.argmax(input_data).numpy()
print(f"Keras argmax result: {keras_result}")
# JAX argmax
jax_result = jnp.argmax(input_data)
print(f"JAX argmax result: {jax_result}")
if __name__ == "__main__":
test_argmax()
```
```
PyTorch argmax result: 2
TensorFlow argmax result: 1
Keras argmax result: 1
JAX argmax result: 1
``` | closed | 2024-10-14T10:15:25Z | 2025-01-25T06:13:46Z | https://github.com/keras-team/keras/issues/20350 | [
"stat:awaiting keras-eng",
"type:Bug"
] | LilyDong0127 | 1 |
viewflow/viewflow | django | 290 | Adding setUp method | Hello, is it possible to add a `setUp()` method to the Flow class that gets called when a workflow instance gets started? Problem I cannot put it inside the `__init__` is that this method gets called when the code gets loaded, eg.
```python
class MyFlow(Flow):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
# this code gets executed when code being loaded
# by the FlowMetaClass
team = Team.objects.get(id=1)
```
Proposing
```python
class MyFlow(Flow):
def setUp(self, *args, **kwargs):
team = Team.objects.get(id=1)
```
Normally this works fine, but if I have to add a field to the Team model, this prevents me from making migrations, because it will instantiate the `MyFlow` class in the metaclass and ultimately complain that the new field is missing.
Whilst I could create a custom Start node to achieve this, I think it's more convenient to do it inside a `setUp` method so a few similar flows can simply extend the same base flow class.
An alternative way right now I am using is to create a descriptor class for the team to lazy load:
```python
class LazyTeamLoader():
"""
Mainly to get around this problem:
https://github.com/viewflow/viewflow/issues/290
"""
def __init__(self, team_slug: str):
self.team_slug = team_slug
self.team_instance = None
def __get__(self, *args, **kwargs):
if self.team_instance:
return self.team_instance
return Team.objects.get(slug=self.team_slug)
class MyFlow(Flow):
iaas_team = LazyTeamLoader('iaas')
```
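As a side note on that workaround, here is a generic, stdlib-only version of the same descriptor pattern with the caching actually wired up (the string factory stands in for `Team.objects.get(...)`; purely illustrative, not viewflow code):

```python
class Lazy:
    """Descriptor that defers creating a value until first access."""

    def __init__(self, factory):
        self.factory = factory
        self.value = None

    def __get__(self, obj, objtype=None):
        if self.value is None:          # cache so the lookup runs once
            self.value = self.factory()
        return self.value


class MyFlow:
    iaas_team = Lazy(lambda: "Team<iaas>")  # stand-in for a DB lookup


print(MyFlow().iaas_team)  # → Team<iaas>
```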
Let me know what you think? | closed | 2020-09-16T23:01:31Z | 2020-12-11T10:16:04Z | https://github.com/viewflow/viewflow/issues/290 | [
"request/question"
] | variable | 1 |
akfamily/akshare | data-science | 5,405 | AKShare 接口问题报告 | 1. 请先详细阅读文档对应接口的使用方式:https://akshare.akfamily.xyz
2. 操作系统版本,目前只支持 64 位操作系统
3. Python 版本,目前只支持 3.8 以上的版本 【 3.11.8】
4. AKShare 版本,请升级到最新版【1.14.57】
5. 接口的名称和相应的调用代码
``` python
# Query individual-stock dividend information
news_trade_notify_dividend_baidu_df = ak.news_trade_notify_dividend_baidu(date="20241107")
print(news_trade_notify_dividend_baidu_df)
```
8. Screenshot or description of the interface error
   1. Tried `date` values from 20141202 to 20141205 as well as 20241107 from the example; all return an empty dataframe, although there should be data
   2. The returned dataframe is empty and has no header
10. Expectation: the data should be returned correctly, and when there is no data the result should still include the header
| closed | 2024-12-05T12:00:29Z | 2024-12-06T06:19:08Z | https://github.com/akfamily/akshare/issues/5405 | [
"bug"
] | a932455223 | 1 |
voxel51/fiftyone | computer-vision | 4,880 | [INSTALL] Cannot launch QuickStart | ### System information
- **OS Platform and Distribution** (e.g., Linux Ubuntu 16.04): macOS 15.0 (ARM)
- **Python version** (`python --version`): 3.9-3.11, installed via miniconda
- **FiftyOne version** (`fiftyone --version`): 1.0.0
- **FiftyOne installed from** (pip or source): pip
### Commands to reproduce
As thoroughly as possible, please provide the Python and/or shell commands used to encounter the issue.
```
fiftyone quickstart
```
### Describe the problem
Stack trace:
```
Traceback (most recent call last):
File "/Users/USER/miniforge3/envs/fo/bin/fiftyone", line 8, in <module>
sys.exit(main())
File "/Users/USER/miniforge3/envs/fo/lib/python3.9/site-packages/fiftyone/core/cli.py", line 4636, in main
args.execute(args)
File "/Users/USER/miniforge3/envs/fo/lib/python3.9/site-packages/fiftyone/core/cli.py", line 4619, in <lambda>
parser.set_defaults(execute=lambda args: command.execute(parser, args))
File "/Users/USER/miniforge3/envs/fo/lib/python3.9/site-packages/fiftyone/core/cli.py", line 174, in execute
_, session = fouq.quickstart(
File "/Users/USER/miniforge3/envs/fo/lib/python3.9/site-packages/fiftyone/utils/quickstart.py", line 44, in quickstart
return _quickstart(port, address, remote, desktop)
File "/Users/USER/miniforge3/envs/fo/lib/python3.9/site-packages/fiftyone/utils/quickstart.py", line 50, in _quickstart
return _launch_app(dataset, port, address, remote, desktop)
File "/Users/USER/miniforge3/envs/fo/lib/python3.9/site-packages/fiftyone/utils/quickstart.py", line 60, in _launch_app
session = fos.launch_app(
TypeError: launch_app() got an unexpected keyword argument 'desktop'
```
### Other info/logs
I originally tried to load my own dataset (a small YOLO one) via Python code, but I always just get a window in my browser with an unspecified "TypeError: Load failed". So I tried the demo (`fiftyone quickstart`), and it looks like my installation is wrong somehow? I went into the code a bit; it looks like the session's `launch_app` doesn't accept a `desktop` argument at all. If I just remove that argument in quickstart's `launch_app()` call, the error is gone, but then I am back to my original issue with my own dataset and get a "TypeError: Load failed".
I've tried a few different Python versions, but both errors persist. I'd appreciate any help, or even guidance on enabling more verbose output / pointing me to where logs are stored. Thanks!
PS: Considering that no one else has encountered this issue so far I've put this as an installation issue, hope that works.
| closed | 2024-10-03T08:46:33Z | 2024-10-04T14:34:19Z | https://github.com/voxel51/fiftyone/issues/4880 | [
"bug",
"installation"
] | tfaehse | 1 |
FactoryBoy/factory_boy | sqlalchemy | 831 | Post-generated attribute of RelatedFactoryList isn't generated | #### Description
In order to automatically chain Factories whose underlying models aren't explicitly related all the way down the chain, I use a `post_generation` hook on a child `SubFactory` that's implicitly related through my code's business logic. When generating a child alone, the `post_generation` hook succeeds. However, when trying to generate a child via an explicitly related parent, the `post_generated` attributes aren't generated.
#### To Reproduce
Example: while the Shoe object can exist independently of a Child, in practice a child will always have a left and right shoe. However, without explicit relations such as FK, the relation can't be mocked using RelatedFactory or SubFactory.
```python
from factory import Factory, RelatedFactoryList, Faker, post_generation
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import Column, Integer, Unicode, create_engine, ForeignKey
from factory.alchemy import SQLAlchemyModelFactory
from sqlalchemy.orm import scoped_session, sessionmaker, relationship
Base = declarative_base()
engine = create_engine('sqlite://')
session = scoped_session(sessionmaker(bind=engine))
class Parent(Base):
__tablename__ = 'parent'
id = Column(Integer, primary_key=True)
children = relationship("Child", back_populates="parent")
class Child(Base):
__tablename__ = 'child'
id = Column(Integer, primary_key=True)
parent_id = Column(Integer, ForeignKey('parent.id'))
name = Column(Unicode())
parent = relationship("Parent", back_populates="children")
class Shoe(Base):
__tablename__ = 'toy'
id = Column(Integer, primary_key=True)
foot = Column(Unicode())
class ShoeFactory(SQLAlchemyModelFactory):
class Meta:
model = Shoe
sqlalchemy_session = session
foot = Faker('random_element', elements=["right", "left"])
class ChildFactory(SQLAlchemyModelFactory):
# Child has to have at least 1 right shoe and one left shoe.
class Meta:
model = Child
sqlalchemy_session = session
name = Faker('word')
@post_generation
def post_generated_attribute(self, create, extracted, **kwargs):
self.right_shoe = ShoeFactory(
foot="right"
)
self.left_shoe = ShoeFactory(
foot="left"
)
class ParentFactory(SQLAlchemyModelFactory):
class Meta:
model = Parent
sqlalchemy_session = session
children = RelatedFactoryList(ChildFactory, 'parent', size=1,)
```
##### The issue
When generating a ChildFactory alone, the `post_generation` hook succeeds. However, when generating a Parent, somehow the SubFactory `child` attribute doesn't generate the `shoes` post-attribute.
```python
# Generating child with no parents.
orphan = ChildFactory()
try:
orphan_right_shoe = orphan.right_shoe
except:
print("Orphan has no right shoe")
else:
print("Orphan's right shoe is on")
try:
orphan_left_shoe = orphan.left_shoe # shoes are there
except:
print("Orphan has no left shoe")
else:
print("Orphan's left shoe is on")
# Generating child through parents
parent = ParentFactory()
child = parent.children
try:
child_right_shoe = parent.children.right_shoe
except:
print("Child has no right shoe")
else:
print("Child's right shoe is on")
try:
    child_left_shoe = parent.children.left_shoe
except:
print("Child has no left shoe")
else:
print("Child's left shoe is on")
>>>
Orphan's right shoe is on
Orphan's left shoe is on
Child has no right shoe
Child has no left shoe
```
The error on the Child reads:
```shell
Traceback (most recent call last):
File "scratch.py", line 58, in <module>
child_toy = parent.children.toy
AttributeError: 'InstrumentedList' object has no attribute 'toy'
```
#### Notes
This also seems to happen on `RelatedFactory` (which would make sense since the `List` is an extension).
Any ideas why this would be happening? Otherwise, any recommendations on how to achieve the desired result above (i.e. generate a parent, with children, with shoes)? Thanks!
| closed | 2021-01-08T23:23:39Z | 2021-04-16T00:25:33Z | https://github.com/FactoryBoy/factory_boy/issues/831 | [
"Q&A",
"SQLAlchemy",
"Fixed"
] | kabdallah-galileo | 2 |
MagicStack/asyncpg | asyncio | 316 | asyncpg.exceptions.DataError: invalid input for query argument $1 (value out of int32 range) | <!--
Thank you for reporting an issue/feature request.
If this is a feature request, please disregard this template. If this is
a bug report, please answer to the questions below.
It will be much easier for us to fix the issue if a test case that reproduces
the problem is provided, with clear instructions on how to run it.
Thank you!
-->
* **asyncpg version**: 0.16.0
* **PostgreSQL version**: PostgreSQL 9.5.10 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.2 20140120 (Red Hat 4.8.2-16), 64-bit (AWS RDS version)
* **Do you use a PostgreSQL SaaS? If so, which? Can you reproduce
the issue with a local PostgreSQL install?**: Yes, AWS RDS
* **Python version**: 3.6.4
* **Platform**: Red Hat 4.8.5-16
* **Do you use pgbouncer?**: no
* **Did you install asyncpg with pip?**: yes
* **If you built asyncpg locally, which version of Cython did you use?**: n/a
* **Can the issue be reproduced under both asyncio and
[uvloop](https://github.com/magicstack/uvloop)?**: n/a
```
  File "/usr/lib64/python3.6/site-packages/asyncpg/connection.py", line 372, in prepare
    return await self._prepare(query, timeout=timeout, use_cache=False)
  File "/usr/lib64/python3.6/site-packages/asyncpg/connection.py", line 377, in _prepare
    use_cache=use_cache)
  File "/usr/lib64/python3.6/site-packages/asyncpg/connection.py", line 308, in _get_statement
    types_with_missing_codecs, timeout)
  File "/usr/lib64/python3.6/site-packages/asyncpg/connection.py", line 348, in _introspect_types
    self._intro_query, (list(typeoids),), 0, timeout)
  File "/usr/lib64/python3.6/site-packages/asyncpg/connection.py", line 1363, in __execute
    return await self._do_execute(query, executor, timeout)
  File "/usr/lib64/python3.6/site-packages/asyncpg/connection.py", line 1385, in _do_execute
    result = await executor(stmt, None)
  File "asyncpg/protocol/protocol.pyx", line 190, in bind_execute
  File "asyncpg/protocol/prepared_stmt.pyx", line 160, in asyncpg.protocol.protocol.PreparedStatementState._encode_bind_msg
asyncpg.exceptions.DataError: invalid input for query argument $1: [4144356976] (value out of int32 range)
```
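The reported value makes the range problem concrete — quick arithmetic on the OID from the error message:

```python
import struct

oid = 4144356976                     # the OID from the error above
print(oid > 2**31 - 1)               # → True: too large for signed int32
# The same 32 bits reinterpreted as a signed int32:
signed = struct.unpack("<i", struct.pack("<I", oid))[0]
print(signed)                        # → -150610320
```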
I'm encountering another issue similar to https://github.com/MagicStack/asyncpg/issues/279. When I use a custom data type defined in Postgres and attempt to reference it in a query, if the OID of the datatype is greater than 2^31 I get the above error. I'm not sure what the fix is, but I imagine it again involves treating OIDs as uint32 rather than int32 values. | closed | 2018-06-13T19:27:37Z | 2018-06-14T04:32:10Z | https://github.com/MagicStack/asyncpg/issues/316 | [] | eheien | 1 |
sqlalchemy/sqlalchemy | sqlalchemy | 11,917 | the ORM does not expire server_onupdate columns during a bulk UPDATE (new style) | This was discussed in #11911
The `server_onupdate` columns do not get expired by the orm following an update, requiring populate-existing to fetch the new values. This impact mainly computed columns.
Reproducer:
```py
from sqlalchemy import orm
import sqlalchemy as sa
class Base(orm.DeclarativeBase):
pass
class T(Base):
__tablename__ = "t"
id: orm.Mapped[int] = orm.mapped_column(primary_key=True)
value: orm.Mapped[int]
cc: orm.Mapped[int] = orm.mapped_column(sa.Computed("value + 42"))
e = sa.create_engine("sqlite:///", echo=True)
Base.metadata.create_all(e)
with orm.Session(e) as s:
s.add(T(value=10))
s.flush()
s.commit()
with orm.Session(e) as s:
assert (v := s.get_one(T, 1)).cc == 52
s.execute(sa.update(T).values(id=1, value=2))
assert (v := s.get_one(T, 1)).cc != 44 # should be 44
with orm.Session(e) as s:
assert (v := s.get_one(T, 1)).cc == 52
r = s.execute(sa.update(T).values(value=2).returning(T))
assert (v := r.scalar_one()).cc != 44 # should be 44
assert (v := s.get_one(T, 1)).cc != 44 # should be 44
with orm.Session(e) as s:
r = s.execute(sa.insert(T).values(value=9).returning(T))
assert (v := r.scalar_one()).cc == 51
r = s.execute(sa.update(T).values(value=2).filter_by(id=2).returning(T))
assert (v := r.scalar_one()).cc != 44 # should be 44
r = s.execute(sa.select(T).filter_by(id=2))
assert (v := r.scalar_one()).cc != 44 # should be 44
``` | closed | 2024-09-23T18:21:09Z | 2024-11-24T21:29:36Z | https://github.com/sqlalchemy/sqlalchemy/issues/11917 | [
"bug",
"orm",
"great mcve"
] | CaselIT | 5 |
xlwings/xlwings | automation | 2,156 | xlwings Server via Office Scripts not responsive | #### Windows 10
#### 0.28.6, Office 365, Python 3.10
Hello there, and first of all, what a phenomenal project! Using it on a daily basis!
I am currently trying to make the xlwings server work (trying it out on my local machine) as the add-in option gets blocked by the network's firewall (and no - the network admin does not want to whitelist it... sigh). So I made the local server run with the demo folder and the included main.py and went the Office Scripts route. However, once I click **Run** in the Office Script after pasting the output of _xlwings copy os_... nothing happens, except the small pop-up "The script ran with errors."
Is there anything I have to adjust in the xlwings copy os output, maybe in order to direct the script to my local server?
I have been stuck on this for a while now and would really appreciate any guidance/help here!
Many many thanks and best wishes

| closed | 2023-02-03T16:51:49Z | 2023-02-03T18:37:48Z | https://github.com/xlwings/xlwings/issues/2156 | [] | alexanderburkhard | 3 |
tensorly/tensorly | numpy | 414 | robust_pca does not work on GPU with PyTorch as a backend | #### Describe the bug
robust_pca does not work on GPU with PyTorch as a backend with tensorly-0.7.0 and PyTorch 1.11.0+cu113
However, I also tried tensorly 0.7.0 with PyTorch 1.7+cu102, which has the same issue.
#### Steps or Code to Reproduce
```
# pip install tensorly
import torch
import tensorly as tl

print(torch.__version__)
tl.set_backend('pytorch')
cuda = torch.device('cuda')
fake_data = torch.randn(2500, 9000, device=cuda)
low_rank_part, sparse_part = tl.decomposition.robust_pca(fake_data, reg_E=0.04, learning_rate=1.2, n_iter_max=20)
```
#### Expected behavior
Run robust_pca on GPU without any issues
#### Actual result
```
/usr/local/lib/python3.7/dist-packages/tensorly/backend/core.py:1106: UserWarning: In partial_svd: converting to NumPy. Check SVD_FUNS for available alternatives if you want to avoid this.
warnings.warn('In partial_svd: converting to NumPy.
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
[<ipython-input-7-e31773fb0320>](https://localhost:8080/#) in <module>()
1 cuda = torch.device('cuda')
2 fake_data = torch.randn(2500, 9000, device=cuda)
----> 3 low_rank_part, sparse_part = tl.decomposition.robust_pca(fake_data, reg_E=0.04, learning_rate=1.2, n_iter_max=20)
3 frames
<__array_function__ internals> in amax(*args, **kwargs)
[/usr/local/lib/python3.7/dist-packages/torch/_tensor.py](https://localhost:8080/#) in __array__(self, dtype)
730 return handle_torch_function(Tensor.__array__, (self,), self, dtype=dtype)
731 if dtype is None:
--> 732 return self.numpy()
733 else:
734 return self.numpy().astype(dtype, copy=False)
TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
```
#### Versions
Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic
Python 3.7.13 (default, Apr 24 2022, 01:04:09)
[GCC 7.5.0]
NumPy 1.21.6
SciPy 1.4.1
TensorLy 0.7.0
Pytorch 1.11.0+cu113
| closed | 2022-06-23T03:29:30Z | 2022-06-24T16:38:06Z | https://github.com/tensorly/tensorly/issues/414 | [] | Mahmood-Hussain | 2 |
pyg-team/pytorch_geometric | pytorch | 10,119 | pytorch_geometric is using a compromised tj-actions/changed-files GitHub action | pytorch_geometric uses a compromised version of tj-actions/changed-files. The compromised action appears to leak secrets the runner has in memory.
The action is included in:
- https://github.com/pyg-team/pytorch_geometric/blob/d2bb939a1bfba3b7a6f7d7b102a2771471657319/.github/workflows/documentation.yml
Output of an affected runs:
- https://github.com/pyg-team/pytorch_geometric/actions/runs/13864242315/job/38799668099#step:3:63
Please review.
Learn about the compromise on [StepSecurity](https://www.stepsecurity.io/blog/harden-runner-detection-tj-actions-changed-files-action-is-compromised) of [Semgrep](https://semgrep.dev/blog/2025/popular-github-action-tj-actionschanged-files-is-compromised/). | open | 2025-03-15T21:31:58Z | 2025-03-15T21:32:06Z | https://github.com/pyg-team/pytorch_geometric/issues/10119 | [
"bug"
] | eslerm | 0 |
dbfixtures/pytest-postgresql | pytest | 225 | TypeError: '>=' not supported between instances of 'float' and 'Version' | Tried upgrading to 2.0.0 and same problem
```
@request.addfinalizer
def drop_database():
> drop_postgresql_database(pg_user, pg_host, pg_port, pg_db, 11.2)
qc/test/conftest.py:46:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
user = None, host = None, port = None, db_name = 'qc_test', version = 11.2
def drop_postgresql_database(user, host, port, db_name, version):
"""
Drop databse in postgresql.
:param str user: postgresql username
:param str host: postgresql host
:param str port: postgresql port
:param str db_name: database name
:param packaging.version.Version version: postgresql version number
"""
conn = psycopg2.connect(user=user, host=host, port=port)
conn.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)
cur = conn.cursor()
# We cannot drop the database while there are connections to it, so we
# terminate all connections first while not allowing new connections.
> if version >= parse_version('9.2'):
E TypeError: '>=' not supported between instances of 'float' and 'Version'
../../../../.virtualenvs/qc-backend-qjdNaO4n/lib/python3.7/site-packages/pytest_postgresql/factories.py:84: TypeError
``` | closed | 2019-08-08T13:47:22Z | 2019-08-09T08:03:06Z | https://github.com/dbfixtures/pytest-postgresql/issues/225 | [
"question"
] | revmischa | 6 |
python-gino/gino | asyncio | 40 | GINO query methods should accept raw SQL | So that user could get model objects from raw SQL. For example:
```python
users = await db.text('SELECT * FROM users WHERE id > :num').gino.model(User).return_model(True).all(num=28, bind=db.bind)
``` | closed | 2017-08-30T02:28:27Z | 2017-08-30T03:16:49Z | https://github.com/python-gino/gino/issues/40 | [
"help wanted",
"task"
] | fantix | 1 |
ray-project/ray | tensorflow | 51,423 | Ray on kubernetes with custom image_uri is broken | ### What happened + What you expected to happen
Hi, I am trying to use a custom image on a kubernetes cluster. I am using this cluster: `https://github.com/ray-project/kuberay/blob/master/ray-operator/config/samples/ray-cluster.autoscaler.yaml`.
Unfortunately, it seems that ray uses podman to launch custom images (`https://github.com/ray-project/ray/blame/master/python/ray/_private/runtime_env/image_uri.py#L16C10-L18C15`) (by @zcin); however, podman does not seem to be installed in the official ray image, so I get errors saying that podman is not installed.
I have tried installing podman manually, but then I get the error `WARN[0000] "/" is not a shared mount, this could cause issues or missing mounts with rootless containers`.
In my opinion, the best solution for this would be to completely remove this podman dependency, as it seems to be causing many issues.
Is there a workaround for this right now? I'm completely blocked as things stand.
### Versions / Dependencies
latest
### Reproduction script
```python
from ray.job_submission import JobSubmissionClient
client = JobSubmissionClient(args.address)
job_id = client.submit_job(
entrypoint=""" cat /etc/hostname; echo "import ray; print(ray.__version__); print('hello'); import time; time.sleep(100); print('done');" > main.py; python main.py """,
runtime_env={
"image_uri": "<choose an image here>",
},
)
print(job_id)
```
### Issue Severity
High: It blocks me from completing my task. | open | 2025-03-17T15:07:30Z | 2025-03-21T23:06:12Z | https://github.com/ray-project/ray/issues/51423 | [
"bug",
"triage",
"core"
] | CowKeyMan | 5 |
davidteather/TikTok-Api | api | 879 | RuntimeError: This event loop is already running[BUG] - Your Error Here | Fill Out the template :)
**I have installed all required libraries and am still getting a runtime error**
A clear and concise description of what the bug is.
from TikTokApi import TikTokApi
api = TikTokApi()
n_videos = 100
username = 'nastyblaq'
user_videos = api.byUsername(username, count=n_videos)
Please add any relevant code that is giving you unexpected results.
Preferably the smallest amount of code to reproduce the issue.
**SET LOGGING LEVEL TO INFO BEFORE POSTING CODE OUTPUT**
```py
import logging
TikTokApi(logging_level=logging.INFO) # SETS LOGGING_LEVEL TO INFO
# Hopefully the info level will help you debug or at least someone else on the issue
```
```py
# Code Goes Here
```
**Expected behavior**
A clear and concise description of what you expected to happen.
**Error Trace (if any)**
Put the error trace below if there's any error thrown.
```
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
[<ipython-input-3-55c3fab06e06>](https://localhost:8080/#) in <module>()
1 from TikTokApi import TikTokApi
----> 2 api = TikTokApi()
3 n_videos = 100
4 username = 'nastyblaq'
5 user_videos = api.byUsername(username, count=n_videos)
```
**Desktop (please complete the following information):**
- OS: [e.g. Windows 10]
- TikTokApi Version [e.g. 5.0.0] - if out of date upgrade before posting an issue
**Additional context**
Add any other context about the problem here.
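A note on the likely cause (an assumption on my part — the traceback paths look like Google Colab): notebooks already run an event loop, and `RuntimeError: This event loop is already running` is exactly what you get when synchronous wrapper code calls `run_until_complete` inside that running loop. A minimal stdlib reproduction:

```python
import asyncio


def sync_wrapper():
    # The pattern many synchronous wrappers use internally.
    loop = asyncio.get_event_loop()
    return loop.run_until_complete(asyncio.sleep(0))


async def main():
    try:
        sync_wrapper()  # called while main()'s loop is already running
    except RuntimeError as exc:
        print(exc)  # This event loop is already running


asyncio.run(main())
```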
| closed | 2022-05-04T13:07:30Z | 2023-08-08T22:14:34Z | https://github.com/davidteather/TikTok-Api/issues/879 | [
"bug"
] | kareemrasheed89 | 1 |
inventree/InvenTree | django | 8,618 | Cannot delete Attachment | ### Please verify that this bug has NOT been raised before.
- [x] I checked and didn't find a similar issue
### Describe the bug*
When I try to delete an Attachment which I accidentally uploaded, I get the error message "Action Prohibited - Delete operation not allowed".

### Steps to Reproduce
Upload an attachment to a part, then try to delete it.
However, the problem did not occur in the Demo instance.
### Expected behaviour
To delete the attachment.
### Deployment Method
- [ ] Docker
- [ ] Package
- [ ] Bare metal
- [ ] Other - added info in Steps to Reproduce
### Version Information
# Version Information:
InvenTree-Version: 0.16.4
Django Version: 4.2.15
Database: sqlite3
Debug-Mode: False
Deployed using Docker: False
Platform: Linux-5.15.0-125-generic-x86_64-with-glibc2.31
Installer: None
Active plugins: [{'name': 'InvenTreeBarcode', 'slug': 'inventreebarcode', 'version': '2.1.0'}, {'name': 'InvenTreeCoreNotificationsPlugin', 'slug': 'inventreecorenotificationsplugin', 'version': '1.0.0'}, {'name': 'InvenTreeCurrencyExchange', 'slug': 'inventreecurrencyexchange', 'version': '1.0.0'}, {'name': 'InvenTreeLabel', 'slug': 'inventreelabel', 'version': '1.1.0'}, {'name': 'InvenTreeLabelMachine', 'slug': 'inventreelabelmachine', 'version': '1.0.0'}, {'name': 'InvenTreeLabelSheet', 'slug': 'inventreelabelsheet', 'version': '1.0.0'}, {'name': 'DigiKeyPlugin', 'slug': 'digikeyplugin', 'version': '1.0.0'}, {'name': 'LCSCPlugin', 'slug': 'lcscplugin', 'version': '1.0.0'}, {'name': 'MouserPlugin', 'slug': 'mouserplugin', 'version': '1.0.0'}, {'name': 'TMEPlugin', 'slug': 'tmeplugin', 'version': '1.0.0'}, {'name': 'Brother Labels', 'slug': 'brother', 'version': '1.0.0'}]
### Please verify if you can reproduce this bug on the demo site.
- [ ] I can reproduce this bug on the demo site.
### Relevant log output
```shell
``` | closed | 2024-12-02T16:20:57Z | 2024-12-29T06:25:17Z | https://github.com/inventree/InvenTree/issues/8618 | [
"bug"
] | spitdavid | 6 |
pallets/quart | asyncio | 177 | quart v0.18.0 requires click >= v8.0.0 but still has `*` in `pyproject.toml` | As of quart v0.18.0, click v8 is now required. This is due to the import of `click.core.ParameterSource` introduced in https://github.com/pallets/quart/commit/951bed228f7f66c04e76f5786aaddc9d9e8d6831. `click.core.ParameterSource` was introduced in click v8.0.0 https://click.palletsprojects.com/en/8.0.x/changes/#version-8-0-0.
This can be reproduced by creating a virtual environment, installing quart v0.18.0 and click v7.1.2, then importing quart. I would expect that pip would complain at install time about a dependency conflict.
I already have a PR incoming. (https://github.com/pallets/quart/pull/178)
```console
$ python3.10 -m venv venv
```
```console
$ venv/bin/python -m pip install --upgrade pip setuptools wheel
Requirement already satisfied: pip in ./venv/lib/python3.10/site-packages (21.2.4)
Collecting pip
Using cached pip-22.2.2-py3-none-any.whl (2.0 MB)
Requirement already satisfied: setuptools in ./venv/lib/python3.10/site-packages (58.1.0)
Collecting setuptools
Using cached setuptools-65.3.0-py3-none-any.whl (1.2 MB)
Collecting wheel
Using cached wheel-0.37.1-py2.py3-none-any.whl (35 kB)
Installing collected packages: wheel, setuptools, pip
Attempting uninstall: setuptools
Found existing installation: setuptools 58.1.0
Uninstalling setuptools-58.1.0:
Successfully uninstalled setuptools-58.1.0
Attempting uninstall: pip
Found existing installation: pip 21.2.4
Uninstalling pip-21.2.4:
Successfully uninstalled pip-21.2.4
Successfully installed pip-22.2.2 setuptools-65.3.0 wheel-0.37.1
```
```console
$ venv/bin/python -m pip install quart==0.18.0 click==7.1.2
Collecting quart==0.18.0
Using cached Quart-0.18.0-py3-none-any.whl (98 kB)
Collecting click==7.1.2
Using cached click-7.1.2-py2.py3-none-any.whl (82 kB)
Collecting itsdangerous
Using cached itsdangerous-2.1.2-py3-none-any.whl (15 kB)
Collecting markupsafe
Using cached MarkupSafe-2.1.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (25 kB)
Collecting blinker
Using cached blinker-1.5-py2.py3-none-any.whl (12 kB)
Collecting hypercorn>=0.11.2
Using cached Hypercorn-0.14.3-py3-none-any.whl (57 kB)
Collecting jinja2
Using cached Jinja2-3.1.2-py3-none-any.whl (133 kB)
Collecting aiofiles
Using cached aiofiles-22.1.0-py3-none-any.whl (14 kB)
Collecting werkzeug>=2.2.0
Using cached Werkzeug-2.2.2-py3-none-any.whl (232 kB)
Collecting priority
Using cached priority-2.0.0-py3-none-any.whl (8.9 kB)
Collecting toml
Using cached toml-0.10.2-py2.py3-none-any.whl (16 kB)
Collecting h2>=3.1.0
Using cached h2-4.1.0-py3-none-any.whl (57 kB)
Collecting h11
Using cached h11-0.13.0-py3-none-any.whl (58 kB)
Collecting wsproto>=0.14.0
Using cached wsproto-1.2.0-py3-none-any.whl (24 kB)
Collecting hpack<5,>=4.0
Using cached hpack-4.0.0-py3-none-any.whl (32 kB)
Collecting hyperframe<7,>=6.0
Using cached hyperframe-6.0.1-py3-none-any.whl (12 kB)
Installing collected packages: toml, priority, markupsafe, itsdangerous, hyperframe, hpack, h11, click, blinker, aiofiles, wsproto, werkzeug, jinja2, h2, hypercorn, quart
Successfully installed aiofiles-22.1.0 blinker-1.5 click-7.1.2 h11-0.13.0 h2-4.1.0 hpack-4.0.0 hypercorn-0.14.3 hyperframe-6.0.1 itsdangerous-2.1.2 jinja2-3.1.2 markupsafe-2.1.1 priority-2.0.0 quart-0.18.0 toml-0.10.2 werkzeug-2.2.2 wsproto-1.2.0
```
```console
$ venv/bin/python -c 'import quart'
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/altendky/venv/lib/python3.10/site-packages/quart/__init__.py", line 5, in <module>
from .app import Quart as Quart
File "/home/altendky/venv/lib/python3.10/site-packages/quart/app.py", line 50, in <module>
from .blueprints import Blueprint
File "/home/altendky/venv/lib/python3.10/site-packages/quart/blueprints.py", line 18, in <module>
from .scaffold import _endpoint_from_view_func, Scaffold, setupmethod
File "/home/altendky/venv/lib/python3.10/site-packages/quart/scaffold.py", line 28, in <module>
from .cli import AppGroup
File "/home/altendky/venv/lib/python3.10/site-packages/quart/cli.py", line 19, in <module>
from click.core import ParameterSource
ImportError: cannot import name 'ParameterSource' from 'click.core' (/home/altendky/venv/lib/python3.10/site-packages/click/core.py)
```
Environment:
- Python version: 3.10
- Quart version: 0.18.0
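As a sanity check on the version mismatch (this is just a sketch — the real fix is a `click>=8.0` specifier in Quart's packaging metadata, not runtime code), a naive numeric comparison shows why the installed click 7.1.2 fails the floor that importing `click.core.ParameterSource` implies:

```python
def version_tuple(v: str):
    """Naive numeric parse; good enough for click-style versions."""
    return tuple(int(p) for p in v.split(".")[:3])


installed = "7.1.2"  # what pip resolved in the session above
required = "8.0.0"   # first click release with click.core.ParameterSource
print(version_tuple(installed) >= version_tuple(required))  # False
```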
| closed | 2022-09-05T23:53:06Z | 2022-09-21T00:27:28Z | https://github.com/pallets/quart/issues/177 | [] | altendky | 0 |
sebp/scikit-survival | scikit-learn | 70 | Installation failed on both windows and linux | I tried to install on CentOS 7.2 with the following commands:
`python ci/list-requirements.py requirements/dev.txt > /tmp/requirements.txt`
`conda create -n sksurv -c sebp python=3 --file /tmp/requirements.txt`
but failed after "downloading and extracting packages":
```
# conda create -n sksurv -c sebp python=3 --file /tmp/requirements.txt
WARNING conda.base.context:use_only_tar_bz2(632): Conda is constrained to only using the old .tar.bz2 file format because you have conda-build installed, and it is <3.18.3. Update or remove conda-build to get smaller downloads and faster extractions.
Collecting package metadata (repodata.json): done
Solving environment: done
## Package Plan ##
environment location: /root/anaconda3/envs/sksurv
added / updated specs:
- cvxopt
- cvxpy[version='>=1.0']
- cython[version='>=0.29']
- numexpr
- numpy
- pandas[version='<0.24,>=0.20']
- pytest
- pytest-cov
- python=3
- scikit-learn[version='<0.21,>=0.19.0']
- scipy[version='<1.3.0,>=1.0']
- setuptools_scm
The following packages will be downloaded:
package | build
---------------------------|-----------------
coverage-4.5.3 | py37h7b6447c_0 217 KB defaults
cvxopt-1.2.0 | py37h54607b7_1 521 KB sebp
cvxpy-1.0.15 | py37_0 6 KB sebp
cvxpy-base-1.0.15 | py37hf484d3e_0 683 KB sebp
cython-0.29.11 | py37he6710b0_0 2.2 MB defaults
ecos-2.0.7.post1 | py37h5f7f5f7_0 72 KB sebp
importlib_metadata-0.17 | py37_1 39 KB defaults
more-itertools-7.0.0 | py37_0 93 KB defaults
multiprocess-0.70.6.1 | py37h14c3975_1 182 KB sebp
osqp-0.4.1 | py37h5f7f5f7_0 162 KB sebp
pluggy-0.12.0 | py_0 20 KB defaults
pytest-5.0.0 | py37_0 356 KB defaults
pytest-cov-2.7.1 | py_0 21 KB defaults
scs-2.0.2 | py37hb3ffb1f_1 94 KB sebp
setuptools_scm-3.3.3 | py_0 26 KB defaults
zipp-0.5.1 | py_0 8 KB defaults
------------------------------------------------------------
Total: 4.6 MB
The following NEW packages will be INSTALLED:
_libgcc_mutex pkgs/main/linux-64::_libgcc_mutex-0.1-main
atomicwrites pkgs/main/linux-64::atomicwrites-1.3.0-py37_1
attrs pkgs/main/linux-64::attrs-19.1.0-py37_1
blas pkgs/main/linux-64::blas-1.0-mkl
ca-certificates pkgs/main/linux-64::ca-certificates-2019.5.15-0
certifi pkgs/main/linux-64::certifi-2019.6.16-py37_0
coverage pkgs/main/linux-64::coverage-4.5.3-py37h7b6447c_0
cvxopt sebp/linux-64::cvxopt-1.2.0-py37h54607b7_1
cvxpy sebp/linux-64::cvxpy-1.0.15-py37_0
cvxpy-base sebp/linux-64::cvxpy-base-1.0.15-py37hf484d3e_0
cython pkgs/main/linux-64::cython-0.29.11-py37he6710b0_0
dill pkgs/main/linux-64::dill-0.2.9-py37_0
ecos sebp/linux-64::ecos-2.0.7.post1-py37h5f7f5f7_0
fastcache pkgs/main/linux-64::fastcache-1.1.0-py37h7b6447c_0
fftw pkgs/main/linux-64::fftw-3.3.8-h7b6447c_3
future pkgs/main/linux-64::future-0.17.1-py37_0
glpk pkgs/main/linux-64::glpk-4.65-h3ceedfd_2
gmp pkgs/main/linux-64::gmp-6.1.2-h6c8ec71_1
gsl pkgs/main/linux-64::gsl-2.4-h14c3975_4
importlib_metadata pkgs/main/linux-64::importlib_metadata-0.17-py37_1
intel-openmp pkgs/main/linux-64::intel-openmp-2019.4-243
libedit pkgs/main/linux-64::libedit-3.1.20181209-hc058e9b_0
libffi pkgs/main/linux-64::libffi-3.2.1-hd88cf55_4
libgcc-ng pkgs/main/linux-64::libgcc-ng-9.1.0-hdf63c60_0
libgfortran-ng pkgs/main/linux-64::libgfortran-ng-7.3.0-hdf63c60_0
libstdcxx-ng pkgs/main/linux-64::libstdcxx-ng-9.1.0-hdf63c60_0
metis pkgs/main/linux-64::metis-5.1.0-hf484d3e_4
mkl pkgs/main/linux-64::mkl-2019.4-243
mkl_fft pkgs/main/linux-64::mkl_fft-1.0.12-py37ha843d7b_0
mkl_random pkgs/main/linux-64::mkl_random-1.0.2-py37hd81dba3_0
more-itertools pkgs/main/linux-64::more-itertools-7.0.0-py37_0
multiprocess sebp/linux-64::multiprocess-0.70.6.1-py37h14c3975_1
ncurses pkgs/main/linux-64::ncurses-6.1-he6710b0_1
numexpr pkgs/main/linux-64::numexpr-2.6.9-py37h9e4a6bb_0
numpy pkgs/main/linux-64::numpy-1.16.4-py37h7e9f1db_0
numpy-base pkgs/main/linux-64::numpy-base-1.16.4-py37hde5b4d6_0
openssl pkgs/main/linux-64::openssl-1.1.1c-h7b6447c_1
osqp sebp/linux-64::osqp-0.4.1-py37h5f7f5f7_0
packaging pkgs/main/linux-64::packaging-19.0-py37_0
pandas pkgs/main/linux-64::pandas-0.23.4-py37h04863e7_0
pip pkgs/main/linux-64::pip-19.1.1-py37_0
pluggy pkgs/main/noarch::pluggy-0.12.0-py_0
py pkgs/main/linux-64::py-1.8.0-py37_0
pyparsing pkgs/main/noarch::pyparsing-2.4.0-py_0
pytest pkgs/main/linux-64::pytest-5.0.0-py37_0
pytest-cov pkgs/main/noarch::pytest-cov-2.7.1-py_0
python pkgs/main/linux-64::python-3.7.3-h0371630_0
python-dateutil pkgs/main/linux-64::python-dateutil-2.8.0-py37_0
pytz pkgs/main/noarch::pytz-2019.1-py_0
readline pkgs/main/linux-64::readline-7.0-h7b6447c_5
scikit-learn pkgs/main/linux-64::scikit-learn-0.20.3-py37hd81dba3_0
scipy pkgs/main/linux-64::scipy-1.2.1-py37h7c811a0_0
scs sebp/linux-64::scs-2.0.2-py37hb3ffb1f_1
setuptools pkgs/main/linux-64::setuptools-41.0.1-py37_0
setuptools_scm pkgs/main/noarch::setuptools_scm-3.3.3-py_0
six pkgs/main/linux-64::six-1.12.0-py37_0
sqlite pkgs/main/linux-64::sqlite-3.28.0-h7b6447c_0
suitesparse pkgs/main/linux-64::suitesparse-5.2.0-h9e4a6bb_0
tbb pkgs/main/linux-64::tbb-2019.4-hfd86e86_0
tk pkgs/main/linux-64::tk-8.6.8-hbc83047_0
wcwidth pkgs/main/linux-64::wcwidth-0.1.7-py37_0
wheel pkgs/main/linux-64::wheel-0.33.4-py37_0
xz pkgs/main/linux-64::xz-5.2.4-h14c3975_4
zipp pkgs/main/noarch::zipp-0.5.1-py_0
zlib pkgs/main/linux-64::zlib-1.2.11-h7b6447c_3
Proceed ([y]/n)? y
Downloading and Extracting Packages
setuptools_scm-3.3.3 | 26 KB | #################################################################################################################################################################### | 100%
cython-0.29.11 | 2.2 MB | #################################################################################################################################################################### | 100%
osqp-0.4.1 | 162 KB | | 0%
scs-2.0.2 | 94 KB | | 0%
cvxpy-base-1.0.15 | 683 KB | | 0%
pytest-cov-2.7.1 | 21 KB | #################################################################################################################################################################### | 100%
ecos-2.0.7.post1 | 72 KB | | 0%
coverage-4.5.3 | 217 KB | #################################################################################################################################################################### | 100%
pytest-5.0.0 | 356 KB | #################################################################################################################################################################### | 100%
multiprocess-0.70.6. | 182 KB | | 0%
pluggy-0.12.0 | 20 KB | #################################################################################################################################################################### | 100%
importlib_metadata-0 | 39 KB | #################################################################################################################################################################### | 100%
cvxpy-1.0.15 | 6 KB | | 0%
more-itertools-7.0.0 | 93 KB | #################################################################################################################################################################### | 100%
zipp-0.5.1 | 8 KB | #################################################################################################################################################################### | 100%
cvxopt-1.2.0 | 521 KB | | 0%
TypeError("__init__() missing 1 required positional argument: 'message'")
TypeError("__init__() missing 1 required positional argument: 'message'")
TypeError("__init__() missing 1 required positional argument: 'message'")
TypeError("__init__() missing 1 required positional argument: 'message'")
TypeError("__init__() missing 1 required positional argument: 'message'")
TypeError("__init__() missing 1 required positional argument: 'message'")
TypeError("__init__() missing 1 required positional argument: 'message'")
```
`conda install -c sebp scikit-survival` in an existing environment on Windows 10 also failed with the same error:
```
TypeError("__init__() missing 1 required positional argument: 'message'")
TypeError("__init__() missing 1 required positional argument: 'message'")
TypeError("__init__() missing 1 required positional argument: 'message'")
TypeError("__init__() missing 1 required positional argument: 'message'")
TypeError("__init__() missing 1 required positional argument: 'message'")
TypeError("__init__() missing 1 required positional argument: 'message'")
TypeError("__init__() missing 1 required positional argument: 'message'")
```
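For what it's worth, the repeated `TypeError("__init__() missing 1 required positional argument: 'message'")` is the generic Python error you get when an exception class that requires a message is constructed without one — my guess (unconfirmed) is that conda's own error-reporting path does this while the package downloads fail, masking the real error:

```python
class FakeCondaError(Exception):
    """Stand-in for a conda exception class whose constructor requires a message."""

    def __init__(self, message):
        super().__init__(message)


try:
    raise FakeCondaError()  # buggy call site forgets the required argument
except TypeError as exc:
    print(exc)
```

So the underlying cause is likely a download/extraction failure that conda fails to report cleanly; updating conda itself may surface the real error.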
| closed | 2019-07-08T12:08:49Z | 2019-07-10T08:30:35Z | https://github.com/sebp/scikit-survival/issues/70 | [] | chengchengbai | 4 |
d2l-ai/d2l-en | computer-vision | 2,580 | Chinese version of the code is out of date | I found that there are some compatibility issues between the code in the Chinese version and the latest d2l package. By checking the English version, I found that its code has been updated for PyTorch 2.0 and the d2l package has been updated along with it, but these updates have not yet been synchronized to the Chinese version, which causes many problems for students who use the Chinese version for learning. So I would like to ask the team whether there is any plan to update the Chinese version, and whether there is anything I can do to contribute to this? | open | 2023-12-28T08:53:14Z | 2025-02-01T06:44:20Z | https://github.com/d2l-ai/d2l-en/issues/2580 | [] | LiquidTaeJa | 2 |
3b1b/manim | python | 1,432 | There is no scene inside that module | ### Describe the error
<!-- A clear and concise description of what you want to make. -->
I get the following error when I run the example. I have installed everything as far as I can tell.

### Environment
**OS System**: Windows 10
**manim version**: master <!-- make sure you are using the latest version of master branch -->
**python version**:3.9.2
| closed | 2021-03-05T04:26:07Z | 2024-11-18T16:53:20Z | https://github.com/3b1b/manim/issues/1432 | [] | MurliNair | 20 |
jeffknupp/sandman2 | rest-api | 67 | Add repository topic "automatic-api" | Could you add the topic `automatic-api` to your repository? Software that automatically exposes APIs to databases isn't well-cataloged. (There didn't even seem to be a list on GitHub, so I [started one](https://github.com/dbohdan/automatic-api).) It will be easier to discover if there is a standard [GitHub topic](https://help.github.com/articles/about-topics/), and `automatic-api` seems as good a candidate as any. Three projects [already use it](https://github.com/topics/automatic-api). | closed | 2017-12-22T23:14:50Z | 2018-05-07T18:04:14Z | https://github.com/jeffknupp/sandman2/issues/67 | [] | dbohdan | 1 |
mljar/mercury | jupyter | 153 | Set widgets values dynamically | It would be nice to have a way to create input fields using data from a Python variable, API response data, or a generated YAML file (produced by a Python script, for example) | closed | 2022-08-01T14:15:50Z | 2023-02-14T11:29:14Z | https://github.com/mljar/mercury/issues/153 | [
"enhancement",
"help wanted"
] | shinuehara | 3 |
koaning/scikit-lego | scikit-learn | 403 | [DOCS] KlusterFoldValidation | `KlusterFoldValidation` does not have proper documentation. There's an [API spec](https://scikit-lego.readthedocs.io/en/latest/api/model_selection.html) but an example next to [this](https://scikit-lego.readthedocs.io/en/latest/timegapsplit.html) would be nice. | closed | 2020-07-30T12:00:31Z | 2022-04-20T20:04:33Z | https://github.com/koaning/scikit-lego/issues/403 | [
"documentation"
] | koaning | 0 |
profusion/sgqlc | graphql | 107 | Is it possible to set graphql_name to snake_case instead of camelCase? | Is it possible to set graphql_name to snake_case instead of camelCase?
This is how my schema looks currently,
```python
class PipelineAccessNode(sgqlc.types.Type, Node):
    __schema__ = sgqlc_schema
    __field_names__ = ('pipeline_id', 'user_type', 'user_info')
    pipeline_id = sgqlc.types.Field(sgqlc.types.non_null(String), graphql_name='pipelineId')
    user_type = sgqlc.types.Field(sgqlc.types.non_null(String), graphql_name='userType')
    user_info = sgqlc.types.Field(sgqlc.types.non_null(String), graphql_name='userInfo')
```
I expect it to be,
```python
class PipelineAccessNode(sgqlc.types.Type, Node):
    __schema__ = sgqlc_schema
    __field_names__ = ('pipeline_id', 'user_type', 'user_info')
    pipeline_id = sgqlc.types.Field(sgqlc.types.non_null(String), graphql_name='pipeline_id')
    user_type = sgqlc.types.Field(sgqlc.types.non_null(String), graphql_name='user_type')
    user_info = sgqlc.types.Field(sgqlc.types.non_null(String), graphql_name='user_info')
``` | closed | 2020-07-07T19:24:37Z | 2020-07-08T02:18:50Z | https://github.com/profusion/sgqlc/issues/107 | [] | gu3sss | 7 |
lmcgartland/graphene-file-upload | graphql | 44 | Add support for using django's TestClient | This should extend [GraphQLTestCase](https://docs.graphene-python.org/projects/django/en/latest/testing/) with support for file mutation tests.
> NOTE This is a working code sample
```python
import json

from django.test import Client, TestCase
DEFAULT_GRAPHQL_URL = "/graphql/"
def file_graphql_query(query, op_name=None, input_data=None, variables=None, headers=None, files=None, client=None, graphql_url=None):
if not files:
raise ValueError('Missing required argument "files": Use `self.query` instead.')
client = client or Client()
headers = headers or {}
variables = variables or {}
graphql_url = graphql_url or DEFAULT_GRAPHQL_URL
map_ = {}
for k in files.keys():
map_[k] = [f'variables.{k}']
if k not in variables:
variables[k] = None
body = {'query': query}
if op_name:
body['operationName'] = op_name
if variables:
body['variables'] = variables
if input_data:
if 'variables' in body:
body['variables']['input'] = input_data
else:
body['variables'] = {'input': input_data}
data = {
'operations': json.dumps(body),
'map': json.dumps(map_),
**files,
}
if headers:
resp = client.post(graphql_url, data, **headers)
else:
resp = client.post(graphql_url, data)
return resp
```
Usage
```
response = file_graphql_query(
'''
mutation uploadImageMutation($id: ID!, $image: Upload!) {
uploadImage(id: $id, image: $image) {
user {
id
}
}
}
''',
op_name='uploadImageMutation',
variables={'id': test_instance.pk},
files={'image': image_file}
)
```
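As a side note, the `map_`/`variables` bookkeeping in `file_graphql_query` follows the GraphQL multipart request spec; the core of it can be illustrated standalone (the `image` key and values here are made up for illustration):

```python
import json

# Hypothetical inputs mirroring what file_graphql_query receives.
files = {"image": "<a Django SimpleUploadedFile would go here>"}
variables = {"id": 42}

# Each file key maps to the JSON path it replaces inside "variables".
map_ = {k: [f"variables.{k}"] for k in files}
for k in files:
    variables.setdefault(k, None)  # file slots stay null placeholders in JSON

print(json.dumps(map_, sort_keys=True))
print(json.dumps(variables, sort_keys=True))
```

The server then splices each uploaded part back into `variables` at the path named in `map`.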
TODO:
- [x] Integrate into graphene_file_upload/django package | closed | 2021-01-19T20:49:52Z | 2021-02-16T04:20:23Z | https://github.com/lmcgartland/graphene-file-upload/issues/44 | [] | jackton1 | 4 |
Morizeyao/GPT2-Chinese | nlp | 256 | General-purpose Chinese model | The .bin file I downloaded from Baidu Netdisk cannot be loaded.
It raises the error RuntimeError: PytorchStreamReader failed reading zip archive: invalid header or archive is corrupted
Has anyone else run into this, and how can it be solved? | open | 2022-11-09T08:56:02Z | 2023-05-23T07:25:28Z | https://github.com/Morizeyao/GPT2-Chinese/issues/256 | [] | Nagaloop | 3 |
falconry/falcon | api | 1,494 | No on_delete option | I am trying to implement the DELETE request method. Seems like on_delete is not supported and the documentation says nothing about this.
It is especially confusing and frustrating to spend so much time trying to understand if such a basic option is supported at all.
```
import json

class ThingsResource(object):
    def on_delete(self, req, resp):
        doc = {}
        if req.content_length:
            doc = json.load(req.stream)  # stdlib json; the original `falcon.json.load` is not a falcon API
``` | closed | 2019-04-18T01:54:24Z | 2019-04-18T02:45:25Z | https://github.com/falconry/falcon/issues/1494 | [] | popovvasile | 2 |
jupyter-book/jupyter-book | jupyter | 1,504 | Support Python 3.10 | ### Description / Summary
We are not currently testing or officially supporting Python 3.10 in Jupyter Book.
We should at least:
- Start testing Python 3.10 in our test suite
- Make sure our tests pass
- Update our metadata to support the latest 3 Python versions (3.10/9/8)
We should do this after #1448 is done to reduce any potential Sphinx 2.x bugs we might hit
(originally reported in https://github.com/executablebooks/jupyter-book/issues/1490)
### Value / benefit
More and more people will use 3.10 over time, so we might start getting unexpected errors once this happens.
### Implementation details
Metadata about python version is here: https://github.com/executablebooks/jupyter-book/blob/master/setup.cfg
Our test matrix is here:
https://github.com/executablebooks/jupyter-book/blob/a0d719fd9d27734ba64368696a6c309877895bc2/.github/workflows/tests.yml#L81-L90
(not sure why we only test 3.8 with Sphinx 2/3, but we might as well try doing a full matrix unless we know there are incompatible version matches)
This PR attempted to do this, but upgrading to Python 3.10 was not a super quick fix: https://github.com/executablebooks/jupyter-book/pull/1509
Issues we ran into:
- Python 3.10 isn't supported in the version of pytest we use. Upgrading to the latest pytest results in some errors in our builds, so we'll need to debug that.
### Tasks to complete
- [x] #1448
- [x] #1494
- [x] https://github.com/executablebooks/jupyter-book/pull/1510
- [ ] Implement the changes described above and see if tests pass | closed | 2021-10-13T22:57:45Z | 2023-04-14T16:21:35Z | https://github.com/jupyter-book/jupyter-book/issues/1504 | [
"enhancement"
] | choldgraf | 2 |
thomaxxl/safrs | rest-api | 22 | Missing examples/template folder | Hi Thomas - Thanks for the project. Would it be possible to add the templates folder to GitHub? I was running the demo_full.py, but was getting an error with missing my_master.html file when trying to access the admin UI
Getting a copy of https://github.com/flask-admin/flask-admin/blob/master/examples/auth/templates/my_master.html seem to work.
Thanks, | closed | 2019-01-07T20:36:14Z | 2020-03-27T14:30:53Z | https://github.com/thomaxxl/safrs/issues/22 | [] | jordiyeh | 3 |
deezer/spleeter | deep-learning | 630 | [Feature] without python spleetergui | ## Description
<!-- Describe your feature request here. -->
## Additional information
<!-- Add any additional description -->
| closed | 2021-06-17T08:20:53Z | 2021-07-16T09:26:58Z | https://github.com/deezer/spleeter/issues/630 | [
"enhancement",
"wontfix",
"feature"
] | DHRUVDAVE21 | 0 |
fastapi-users/fastapi-users | asyncio | 33 | Add error codes to error responses | For a better user experience, it would be nice to have more detailed error messages.
To not force us to use a localization engine, let's define some codes and let the front-end interpret them. The codes should be however clear enough for an API user to understand what is going on.
Possible list:
* `REGISTER_USER_ALREADY_EXISTS`
* `LOGIN_BAD_CREDENTIALS`
* `RESET_PASSWORD_BAD_TOKEN` | closed | 2019-10-30T07:39:13Z | 2019-10-31T09:10:53Z | https://github.com/fastapi-users/fastapi-users/issues/33 | [
"enhancement"
] | frankie567 | 0 |
deepset-ai/haystack | machine-learning | 8,738 | Deprecation and removal of `dataframe` field from `Document` | For motivation, check out #8688
```[tasklist]
### Tasks
- [x] **Haystack 2.10.0**
- [x] Deprecate `dataframe` field and `ExtractedTableAnswer`
- [ ] https://github.com/deepset-ai/haystack/issues/8863
- [x] **Between Haystack 2.10.0 and 2.11.0**
- [ ] https://github.com/deepset-ai/haystack-core-integrations/issues/1371
- [ ] **Haystack 2.11.0**
- [x] Remove `dataframe` support from `InMemoryDocumentStore`
- [ ] https://github.com/deepset-ai/haystack/issues/8852
- [x] Remove `dataframe` field and `ExtractedTableAnswer`
- [x] Make `pandas` dependency optional (investigate if we can remove the dependency)
- [ ] https://github.com/deepset-ai/haystack/issues/8956
- [x] **After Haystack 2.11.0**
- [ ] https://github.com/deepset-ai/haystack-core-integrations/issues/1462
``` | closed | 2025-01-16T14:56:10Z | 2025-03-11T16:34:21Z | https://github.com/deepset-ai/haystack/issues/8738 | [
"P2"
] | anakin87 | 0 |
python-restx/flask-restx | api | 99 | Migrate from flask-restplus | Hello. Is there any document describing the procedure for migrating an application from `flask-restplus` to `flask-restx`?
Are there any issues with converting, or known hacks to be applied? I am preparing to switch a project with about 50k+ lines and just searching for some courage :)
| open | 2020-03-24T09:04:49Z | 2024-03-07T13:37:44Z | https://github.com/python-restx/flask-restx/issues/99 | [
"documentation",
"question"
] | nick4u | 4 |
giotto-ai/giotto-tda | scikit-learn | 310 | test_projection_values_equal_slice failed due to unreliable test timings | test_projection_values_equal_slice failed on master (fcffaf8) due to unreliable test timings in hypothesis:
```
=================================== FAILURES ===================================
______________________ test_projection_values_equal_slice ______________________
> ???
self = <hypothesis.core.StateForActualGivenExecution object at 0x7fd0e5a29430>
message = 'Hypothesis test_projection_values_equal_slice(X=array([[0.]])) produces unreliable results: Falsified on the first call but did not on a subsequent one'
def __flaky(self, message):
if len(self.falsifying_examples) <= 1:
> raise Flaky(message)
E hypothesis.errors.Flaky: Hypothesis test_projection_values_equal_slice(X=array([[0.]])) produces unreliable results: Falsified on the first call but did not on a subsequent one
/opt/hostedtoolcache/Python/3.8.1/x64/lib/python3.8/site-packages/hypothesis/core.py:835: Flaky
---------------------------------- Hypothesis ----------------------------------
Falsifying example: test_projection_values_equal_slice(
X=array([[0.]]),
)
Unreliable test timings! On an initial run, this test took 632.78ms, which exceeded the deadline of 200.00ms, but on a subsequent run it took 0.34 ms, which did not. If you expect this sort of variability in your test timings, consider turning deadlines off for this test by setting deadline=None.
=============================== warnings summary ===============================
```
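Hypothesis's own message points at the fix; applied here it would look something like the sketch below (the strategy is an illustrative stand-in, not the project's real array strategy):

```python
from hypothesis import given, settings, strategies as st

# deadline=None disables the per-example time limit that triggered the Flaky error;
# max_examples is reduced only to keep this sketch fast.
@settings(deadline=None, max_examples=10)
@given(X=st.lists(st.floats(allow_nan=False, allow_infinity=False), min_size=1))
def test_projection_values_equal_slice(X):
    ...  # body elided; only the decorator change matters here
```

Alternatively, the deadline could be raised globally through a hypothesis settings profile instead of per test.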
Likely not critical; still, the difference in run time is surprising. | closed | 2020-02-20T10:13:06Z | 2022-08-26T22:13:40Z | https://github.com/giotto-ai/giotto-tda/issues/310 | [
"enhancement"
] | rth | 1 |
ScrapeGraphAI/Scrapegraph-ai | machine-learning | 812 | ValueError: Proxy dictionary is missing required fields. | from scrapegraphai.utils.proxy_rotation import search_proxy_servers
from scrapegraphai.graphs import SearchGraph

search_proxy_servers()

graph_config = {
    "llm": {
        "model": "gpt-4o",
    },
    "verbose": True,
    "max_results": 1,
    # "search_engine": "bing",
    "headless": False,
    "loader_kwargs": {
        "proxy": {
            "server": "here i put the server",
        }
    }
}

search_graph = SearchGraph(
    prompt=PROMPTS.get("propmt_vvs")["prompt"].format(**data),
    config=graph_config,
    schema=model,
)
ValueError: Proxy dictionary is missing required fields.
version 1.31.
The thing is that I cannot manually specify the proxy. It says I am missing fields, but I have all the ones from the documentation.
| closed | 2024-11-19T17:15:53Z | 2025-01-06T19:02:59Z | https://github.com/ScrapeGraphAI/Scrapegraph-ai/issues/812 | [] | manu2022 | 2 |
Teemu/pytest-sugar | pytest | 238 | Upload wheel to PyPI for v0.9.4 | I notice that pytest-sugar v0.9.2 has a wheel but v0.9.4 does not. Could you upload a wheel for v0.9.4? | closed | 2022-06-28T17:32:16Z | 2023-04-10T11:20:17Z | https://github.com/Teemu/pytest-sugar/issues/238 | [
"dont-know"
] | hoodmane | 2 |
dgtlmoon/changedetection.io | web-scraping | 3,045 | API - API Should not deny access when password is enabled | Homepage ChangeDetection Widget breaks with the latest docker image.

Are there new changes on the API side that broke this?
Thanks | closed | 2025-03-22T22:13:45Z | 2025-03-23T10:11:45Z | https://github.com/dgtlmoon/changedetection.io/issues/3045 | [
"bug",
"triage",
"API"
] | FurkanVG | 17 |
thunlp/OpenPrompt | nlp | 14 | AttributeError is occured when initializing a PtuningTemplate | the code is:
`prompt_template = PtuningTemplate(text=['<text_a>', '<new>', '<new>', '<mask>', '.'], model=bertModel, tokenizer=bertTokenizer)`
I get the following error:
"OpenPrompt/openprompt/prompts/**ptuning_prompts.py**", line **63**, in **on_text_set** self.num_new_token = sum([token == self.new_token for token in self.text])
AttributeError: '**PtuningTemplate**' object has no attribute '**new_token**' | closed | 2021-10-14T08:07:32Z | 2021-10-14T14:43:47Z | https://github.com/thunlp/OpenPrompt/issues/14 | [] | LeqsNaN | 0 |
zappa/Zappa | flask | 1,197 | Add new AWS regions into Zappa global depoyments. | <!--- Provide a general summary of the issue in the Title above -->
## Context
AWS has added many more regions within the last 2-3 years and those regions are not in the global deployment. We should add the new regions. Below is the list of regions that are not in zappa:
1. us-east-2
2. us-west-2
3. af-south-1 (Requires opt-in)
4. ap-northeast-3 (Requires opt-in)
5. ap-northeast-2
6. ap-southeast-2
7. eu-west-2
8. eu-west-3
9. eu-south-1 (Requires opt-in)
10. me-south-1 (Requires opt-in)
11. me-central-1 (Requires opt-in)
<!--- Provide a more detailed introduction to the issue itself, and why you consider it to be a bug -->
<!--- Also, please make sure that you are running Zappa _from a virtual environment_ and are using Python 3.7/3.8/3.9 -->
## Expected Behavior
<!--- Tell us what should happen -->
On global deployment, zappa should add all the available AWS regions, but its list of available regions is old and AWS has added more than 10 new regions since then.
## Actual Behavior
<!--- Tell us what happens instead -->
Currently, zappa doesn't deploy to all generally available regions.
## Possible Fix
<!--- Not obligatory, but suggest a fix or reason for the bug -->
Add the new regions into the zappa regions list.
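A minimal sketch of what that change amounts to (the list contents and variable names here are illustrative stand-ins for Zappa's real hardcoded list):

```python
# Hypothetical sketch: extend the hardcoded region list used for "global" deployments.
# EXISTING_REGIONS stands in for whatever list Zappa currently ships.
EXISTING_REGIONS = ["us-east-1", "us-west-1", "eu-west-1", "ap-southeast-1"]  # illustrative

MISSING_REGIONS = [
    "us-east-2", "us-west-2", "ap-northeast-2", "ap-southeast-2",
    "eu-west-2", "eu-west-3",
    # Opt-in regions: accounts must enable these before deploys can succeed.
    "af-south-1", "ap-northeast-3", "eu-south-1", "me-south-1", "me-central-1",
]

ALL_REGIONS = EXISTING_REGIONS + [r for r in MISSING_REGIONS if r not in EXISTING_REGIONS]
print(len(ALL_REGIONS))
```

The opt-in flagging matters for deployment code, since API calls to a non-enabled opt-in region fail outright.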
## Steps to Reproduce
<!--- Provide a link to a live example or an unambiguous set of steps to -->
<!--- reproduce this bug include code to reproduce, if relevant -->
1. `zappa init` on a new project
2. select global deployment
3. Now, take a look at the `zappa_settings.json` you will find many new regions missing.
| closed | 2022-11-24T13:27:36Z | 2024-04-13T20:36:15Z | https://github.com/zappa/Zappa/issues/1197 | [
"no-activity",
"auto-closed"
] | souravjamwal77 | 2 |
TencentARC/GFPGAN | deep-learning | 483 | Does the model support FP16? | After converting the .pth model to ONNX and then converting the ONNX model directly to TensorRT (FP32), face restoration works normally; but when converting ONNX to TensorRT with FP16, the inference output becomes a solid-color image. | open | 2024-01-19T08:48:15Z | 2024-07-25T04:54:12Z | https://github.com/TencentARC/GFPGAN/issues/483 | [] | zhangsong1234 | 1 |
PeterL1n/RobustVideoMatting | computer-vision | 159 | Inference speed is 2~3 frames per second | On a V100 GPU, using the pretrained mobilenetv3.pth model with 1920*1080 video. Is this too slow? | closed | 2022-04-11T03:50:03Z | 2022-05-16T05:43:04Z | https://github.com/PeterL1n/RobustVideoMatting/issues/159 | [] | napohou | 2 |
jupyter/nbgrader | jupyter | 1,660 | nbgrader+jupyterhub for multiple graders: no links to assignments in 'Manage Assignments' (formgrader) | <!--
Thanks for helping to improve nbgrader!
If you are submitting a bug report or looking for support, please use the below
template so we can efficiently solve the problem.
If you are requesting a new feature, feel free to remove irrelevant pieces of
the issue template.
-->
### Operating system
Ubuntu 20.04.05 LTS
### `nbgrader --version`
Python version 3.10.6 | packaged by conda-forge | (main, Aug 22 2022, 20:35:26) [GCC 10.4.0]
nbgrader version 0.8.0
### `jupyterhub --version` (if used with JupyterHub)
2.3.1
### `jupyter notebook --version`
6.4.12
### Expected behavior
On the formgrader's Manage Assignments page there is a list of the available assignments. Clicking on a title should open a new tab with the notebook tree of that assignment.
### Actual behavior
Although the title appears to be a link there is no `href` inside the tag and nothing happens when clicking on the title.
### Steps to reproduce the behavior
Follow instructions from the nbgrader demo: <https://nbgrader.readthedocs.io/en/stable/configuration/jupyterhub_config.html#example-use-case-one-class-multiple-graders>
### Possible cause
When turning on "Prevent cross-site tracking" in Safari (v15.6.1) the links are working correctly. This might indicate that there is a problem with cross-site linking. Maybe this is caused by the setup of the 'multiple graders' where a new service is configured in `jupyterhub_config.py`. I do not have enough experience with configuring JupyterHub to solve this issue. | closed | 2022-09-06T11:31:28Z | 2023-07-26T14:14:05Z | https://github.com/jupyter/nbgrader/issues/1660 | [] | ronligt | 2 |
pallets/flask | python | 5052 | App.json_provider_class has no effect if defined after app creation | Hi
### What happens
When setting `json_provider_class` after app creation, it has no effect, because you only set a variable; it doesn't update `app.json`, which is what is actually used afterwards:
```python
app = Flask(__name__)
app.json_provider_class = CustomProvider
```
It does work if you do the following:
```python
Flask.json_provider_class = CustomProvider
app = Flask(__name__)
```
And of course it also works if you do the following, but I find it a bit verbose:
```python
app = Flask(__name__)
app.json = CustomProvider(app)
```
### Expectation
I expect either
- `app.json_provider_class = CustomProvider` to work
- `Flask.json_provider_class = CustomProvider` to be explicitly documented
### How to solve
I let you decide what you prefer.
I can provide a PR for the first one: I'll just change the variable `json_provider_class: t.Type[JSONProvider] = DefaultJSONProvider` to a getter+setter that updates `self.json`, as is done in `__init__` (`self.json: JSONProvider = self.json_provider_class(self)`).
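For illustration, a rough self-contained sketch of that getter+setter idea (the classes below are minimal stand-ins, not Flask's actual implementation):

```python
class JSONProvider:
    def __init__(self, app):
        self.app = app

class DefaultJSONProvider(JSONProvider):
    pass

class CustomProvider(JSONProvider):
    pass

class App:  # stand-in for flask.Flask
    def __init__(self):
        self._json_provider_class = DefaultJSONProvider
        self.json = self._json_provider_class(self)

    @property
    def json_provider_class(self):
        return self._json_provider_class

    @json_provider_class.setter
    def json_provider_class(self, cls):
        self._json_provider_class = cls
        self.json = cls(self)  # keep app.json in sync with the new provider class

app = App()
app.json_provider_class = CustomProvider
assert isinstance(app.json, CustomProvider)
```

With a property like this, `app.json_provider_class = CustomProvider` after construction would behave the same as the verbose `app.json = CustomProvider(app)` form.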
Environment:
- Python version: 3.10
- Flask version: 2.2.3
| closed | 2023-04-12T17:12:02Z | 2023-04-27T00:05:30Z | https://github.com/pallets/flask/issues/5052 | [] | azro352 | 3 |
roboflow/supervision | tensorflow | 1,337 | What models or data sets are used for object detection | ### Search before asking
- [X] I have searched the Supervision [issues](https://github.com/roboflow/supervision/issues) and found no similar feature requests.
### Question
The following picture is an example, but I could not find documentation for it.

### Additional
_No response_ | closed | 2024-07-10T03:50:18Z | 2024-07-10T07:03:01Z | https://github.com/roboflow/supervision/issues/1337 | [
"question"
] | dearMOMO | 1 |
moshi4/pyCirclize | data-visualization | 62 | Highlight different gene functions with different tick/label colors and jitter labels | First off, I'd like to thank you for providing such a great tool!
To get to the point of this issue: I can't figure out if this is something easily built in, or if it requires more Python coding than I know.
I'm using resfinder (more specifically, starAMR) which provides a resfinder.tsv file with Gene, Start, End, and other columns.
I would like to annotate a plasmid with prokka and build the annotation from the gff file, which is clearly explained in your examples.
But I'd like to specifically use resfinder.tsv to annotate AMR genes, and highlight the presence of these genes (perhaps by writing gene names in red).
The problem is that some genes would be present in both prokka.gff and resfinder.tsv, and their predicted names might be different. In my current case, prokka.gff annotates a gene as "ampC" while resfinder.tsv identifies it as "CMY-2" (this is more specific, and this is what I'd want). I checked: the Start, End, and Length are all the same.
My current understanding of Python and matplotlib (coming from a functional understanding of R and ggplot2) would lead me to try to make a combined dataframe (I hope that term is not specific to the R language), substituting and appending the resfinder hits into the much larger prokka table based on a Start and End match, and perhaps adding a custom column that indicates AMR. Then, when plotting, add an if statement that checks the AMR column and use something like `xtick.set_color(color)`.
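That merge step can be prototyped without any plotting at all; a hypothetical sketch (coordinates and records invented for illustration) of overriding prokka names with resfinder hits matched on (Start, End) and flagging them for coloring:

```python
# Override prokka gene names with resfinder hits matched on (start, end),
# and flag them as AMR so their tick labels can be drawn in a different color.
prokka = [
    {"gene": "ampC", "start": 1200, "end": 2340},
    {"gene": "repA", "start": 3000, "end": 3900},
]
resfinder = [{"gene": "CMY-2", "start": 1200, "end": 2340}]

amr_by_coords = {(r["start"], r["end"]): r["gene"] for r in resfinder}

merged = []
for rec in prokka:
    key = (rec["start"], rec["end"])
    merged.append({**rec,
                   "gene": amr_by_coords.get(key, rec["gene"]),
                   "amr": key in amr_by_coords})

for rec in merged:
    color = "red" if rec["amr"] else "black"
    # pass `color` to the label-drawing call (matplotlib text / xtick) here
    print(rec["gene"], color)
```

The same logic translates directly to a pandas merge on the Start/End columns if the tables are large.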
A second highlight I would like to add is on the genome from the prokka.gbk file. Some AMR hits are also present at the genome level. In this high-level overview there would be far fewer things to annotate (you don't want to plot every gene), but it would require some kind of automatic jittered annotation that allows each label to be clearly seen, with a line leading to the gene location. I'm attaching an example of an annotation made with Proksee.

| closed | 2024-03-17T02:25:47Z | 2024-05-03T02:30:51Z | https://github.com/moshi4/pyCirclize/issues/62 | [
"question"
] | muntean-alex | 1 |
encode/databases | asyncio | 180 | Custom or third party backend support. | It's an important feature, whether for allowing third-party libraries to extend the main lib or for writing a custom backend for various reasons (support for private or proprietary technology, a different approach for an existing backend, etc.).
praw-dev/praw | api | 1,129 | Emoji uploading options | I looked into exactly what this means, as I don't use the redesign regularly. This would benefit mods with large amounts of emojis, especially in subreddits where their emoji limit is increased.
When adding an emoji to a subreddit as a mod (redesign), you are prompted by a menu, with the options being:
"Appears in:"
[ ] Post Flair
[ ] User Flair
Mod only: [ ]
Currently, as far as I can tell, when using PRAW to do this (such as batch uploading emojis), the only options are the name and url/path. By default they upload with Post and User enabled. For example, I batch uploaded 600 emojis (to match our flair selection), and when choosing a flair for a post, all 600 emojis appear in the emoji menu.
There should be a way to pass these options into the upload. Now, I'm having a hard time understanding endpoints (though I'm nearly there), but if I understand them correctly, then this shouldn't require any additional endpoints.
I've only been at this for a few months, so please correct me if I've misunderstood anything, or overlooked it.
EDIT: I can't believe I forgot this. But obviously there should be a way to also update these settings. Same way you would with a flair template. | closed | 2019-11-01T06:36:16Z | 2020-02-10T00:40:41Z | https://github.com/praw-dev/praw/issues/1129 | [
"Feature"
] | H4CKY54CK | 4 |
MaartenGr/BERTopic | nlp | 1,555 | Documentation fix | @MaartenGr Just a small mistake I found that might need to be fixed. It's in the [best practice doc](https://maartengr.github.io/BERTopic/getting_started/best_practices/best_practices.html#controlling-number-of-topics).
I could be wrong, but I believe all the `min_topic_size` occurrences are supposed to be `min_cluster_size`? And it should be `min_cluster_size=40` in the code snippet as well?
 | closed | 2023-10-03T08:24:35Z | 2023-10-05T10:20:00Z | https://github.com/MaartenGr/BERTopic/issues/1555 | [] | poomkusa | 1 |
ydataai/ydata-profiling | jupyter | 750 | Adding custom sections to the report | Is there any way to add custom sections to the report? For example, I would like to add a section which has a single bar chart(with custom data) inside it. I know that there's the tedious way of going through the whole report graph and adding sections manually, but it is error-prone as there are many classes which are internal to pandas_profiling and which need to be understood first to have any experimentation done on them. Any help with the above mentioned use-case is appreciated. | open | 2021-04-06T06:54:41Z | 2021-06-19T11:37:00Z | https://github.com/ydataai/ydata-profiling/issues/750 | [
"feature request 💬",
"help wanted 🙋"
] | LikhithTati | 2 |
ludwig-ai/ludwig | data-science | 3657 | Does this project support Windows? | I tried to install:
`pip install ludwig`, but it errors on Windows 10 x64. It seems that getdaft is not supported on Windows.
"bug"
] | futureflsl | 2 |
plotly/dash-table | plotly | 146 | Unable to run examples in dash-docs where examples are rendered dynamically | Dynamically rendering the table via a callback in the docs isn't working - an exception is thrown.
See examples here: https://github.com/plotly/dash-docs/pull/232 | closed | 2018-10-22T16:39:51Z | 2018-11-02T02:15:17Z | https://github.com/plotly/dash-table/issues/146 | [] | chriddyp | 0 |
psf/requests | python | 6,478 | Recent SSL Error - urllib3.exceptions.MaxRetryError | Hi.
I have 2 APIs on GCP; they're dockerized, and they communicate with each other. Lately one of them has been throwing an SSL error.
I haven't updated anything, nor changed any code.
First error:
```
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/gunicorn/arbiter.py", line 589, in spawn_worker
worker.init_process()
File "/usr/local/lib/python3.9/site-packages/uvicorn/workers.py", line 66, in init_process
super(UvicornWorker, self).init_process()
File "/usr/local/lib/python3.9/site-packages/gunicorn/workers/base.py", line 134, in init_process
self.load_wsgi()
File "/usr/local/lib/python3.9/site-packages/gunicorn/workers/base.py", line 146, in load_wsgi
self.wsgi = self.app.wsgi()
File "/usr/local/lib/python3.9/site-packages/gunicorn/app/base.py", line 67, in wsgi
self.callable = self.load()
File "/usr/local/lib/python3.9/site-packages/gunicorn/app/wsgiapp.py", line 58, in load
return self.load_wsgiapp()
File "/usr/local/lib/python3.9/site-packages/gunicorn/app/wsgiapp.py", line 48, in load_wsgiapp
return util.import_app(self.app_uri)
File "/usr/local/lib/python3.9/site-packages/gunicorn/util.py", line 359, in import_app
mod = importlib.import_module(module)
File "/usr/local/lib/python3.9/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 850, in exec_module
File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
File "/Main.py", line 2, in <module>
from Routes.ShippingRoute import shippingRouter
File "/Routes/ShippingRoute.py", line 10, in <module>
from Storage.GBucket import uploadToBucket
File "/Storage/GBucket.py", line 13, in <module>
bucket = gcsStorageClient.get_bucket(BUCKET_NAME)
File "/usr/local/lib/python3.9/site-packages/google/cloud/storage/client.py", line 767, in get_bucket
bucket.reload(
File "/usr/local/lib/python3.9/site-packages/google/cloud/storage/bucket.py", line 1086, in reload
super(Bucket, self).reload(
File "/usr/local/lib/python3.9/site-packages/google/cloud/storage/_helpers.py", line 246, in reload
api_response = client._get_resource(
File "/usr/local/lib/python3.9/site-packages/google/cloud/storage/client.py", line 372, in _get_resource
return self._connection.api_request(
File "/usr/local/lib/python3.9/site-packages/google/cloud/storage/_http.py", line 72, in api_request
return call()
File "/usr/local/lib/python3.9/site-packages/google/api_core/retry.py", line 349, in retry_wrapped_func
return retry_target(
File "/usr/local/lib/python3.9/site-packages/google/api_core/retry.py", line 207, in retry_target
raise exceptions.RetryError(
google.api_core.exceptions.RetryError: Deadline of 120.0s exceeded while calling target function, last exception: HTTPSConnectionPool(host='oauth2.googleapis.com', port=443): Max retries exceeded with url: /token (Caused by SSLError(SSLZeroReturnError(6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1129)')))"
```
Second Error:
```
"Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/requests/adapters.py", line 486, in send
resp = conn.urlopen(
File "/usr/local/lib/python3.9/site-packages/urllib3/connectionpool.py", line 844, in urlopen
retries = retries.increment(
File "/usr/local/lib/python3.9/site-packages/urllib3/util/retry.py", line 515, in increment
raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type]
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='myApiUrl', port=443): Max retries exceeded with url: /updateOrders (Caused by SSLError(SSLZeroReturnError(6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1129)')))"
```
Requirements File:
```
-i https://pypi.org/simple
anyio==3.6.2
cachetools==5.2.1
certifi==2022.12.7
cffi==1.15.1
charset-normalizer==3.0.1
click==8.1.3
colorama==0.4.6
cryptography==40.0.1
fastapi==0.89.1
fuzzywuzzy==0.18.0
google-api-core==2.11.0
google-auth==2.16.0
google-cloud-core==2.3.2
google-cloud-storage==2.7.0
google-crc32c==1.5.0
google-resumable-media==2.4.0
googleapis-common-protos==1.58.0
greenlet==2.0.1
gunicorn==20.1.0
h11==0.14.0
idna==3.4
levenshtein==0.20.9
ndg-httpsclient==0.5.1
numpy==1.24.2
pandas==1.5.3
protobuf==4.21.12
pyasn1==0.4.8
pyasn1-modules==0.2.8
pycparser==2.21
pydantic==1.10.4
pymysql==1.0.2
pyopenssl==23.1.1
pypdf==3.2.1
python-dateutil==2.8.2
python-dotenv==0.21.0
python-levenshtein==0.20.9
python-multipart==0.0.5
pytz==2022.7.1
rapidfuzz==2.13.7
requests==2.31.0
rsa==4.9
setuptools==68.0.0 ; python_version >= '3.7'
six==1.16.0
sniffio==1.3.0
sqlalchemy==1.4.46
starlette==0.22.0
suds==1.1.2
typing-extensions==4.4.0
unidecode==1.3.6
urllib3==2.0.3
uvicorn==0.20.0
```
It started happening about 2 weeks ago. I've already downgraded the Python version, requests version, and pyOpenSSL version; nothing seems to work.
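While debugging, one mitigation worth trying is to retry transient TLS/connection failures at the session level; a hedged sketch (the endpoint is taken from the error message above, and the retry numbers are arbitrary):

```python
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

# Retry transient connection/TLS failures and common transient HTTP statuses
# with exponential backoff, instead of failing on the first broken handshake.
session = requests.Session()
retries = Retry(total=5, backoff_factor=1, status_forcelist=[429, 500, 502, 503, 504])
session.mount("https://", HTTPAdapter(max_retries=retries))

# session.post("https://myApiUrl/updateOrders", json=payload, timeout=30)
```

This only works around the flakiness; the EOF itself still points at something between the two services (load balancer, TLS termination, or a base-image change).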
| closed | 2023-07-04T14:45:18Z | 2024-07-24T00:03:30Z | https://github.com/psf/requests/issues/6478 | [] | DanielVelezV | 4 |
unit8co/darts | data-science | 2,106 | ValueError: Expected both prediction and target to be 1D or 2D tensors, but received tensors with dimension torch.Size([32, 100, 1]) | **Describe the bug**
A clear and concise description of what the bug is.
I have a 1D time series that I am trying to fit to the NBeats model, but I'm not sure why it's throwing the following error:
Expected both prediction and target to be 1D or 2D tensors, but received tensors with dimension torch.Size([32, 100, 1])
ValueError: Expected both prediction and target to be 1D or 2D tensors, but received tensors with dimension torch.Size([32, 100, 1])
**To Reproduce**
Steps to reproduce the behavior, preferably code snippet.
```
model_name = "nbeats_run"
cols = ['target']
config = {
    "input_chunk_length": lookback,
    "output_chunk_length": 100,
    "generic_architecture": True,
}

for fold, (train_idx, val_idx) in enumerate(cv.split(train_df, train_df[f'target'], groups)):
    if fold >= 4:
        x_train, x_valid = train_df[cols].iloc[train_idx], train_df[cols].iloc[val_idx]
        min_train, max_train = min(train_df['open_time'].iloc[train_idx]).to_pydatetime(), max(
            train_df['open_time'].iloc[train_idx]).to_pydatetime()
        min_valid, max_valid = min(train_df['open_time'].iloc[val_idx]).to_pydatetime(), max(
            train_df['open_time'].iloc[val_idx]).to_pydatetime()

        scaler = MinMaxScaler()
        x_train['target'] = scaler.fit_transform(x_train[['target']])
        x_valid['target'] = scaler.transform(x_valid[['target']])
        x_train, x_valid = x_train.reset_index(drop=True), x_valid.reset_index(drop=True)

        x_tr_scaled = TimeSeries.from_values(x_train.values.reshape(-1))  # , "open_time", ["target"]
        x_val_scaled = TimeSeries.from_values(x_valid.values.reshape(-1))  # , "open_time", ["target"]

        model = NBEATSModel(**config)
        model.fit(x_tr_scaled, val_series=x_val_scaled)
```
**Expected behavior**
A clear and concise description of what you expected to happen.
I'm not sure why it's throwing this error from `model.fit()`. It should be fitting the model, as I have a univariate time series.
**System (please complete the following information):**
- Python version: [e.g. 3.8]
- darts version [e.g. 0.24.0]
**Additional context**
Add any other context about the problem here. For example, I'm not sure if I'm passing in my data incorrectly? I also tried the following:
```
x_tr_scaled = TimeSeries.from_dataframe(x_train)
x_val_scaled = TimeSeries.from_dataframe(x_val)
```
I am pretty much following the steps in the intro to the NBEATS model but not sure what's going on here (https://unit8co.github.io/darts/examples/07-NBEATS-examples.html)
Also, this is what my `x_tr_scaled` looks like

| closed | 2023-12-06T00:21:44Z | 2023-12-06T01:29:08Z | https://github.com/unit8co/darts/issues/2106 | [
"bug",
"triage"
] | zhoujs93 | 0 |
healthchecks/healthchecks | django | 473 | Cron expression with commas causes warning | `45 9,19 * * *` is showing me a warning, even though the cronjob is running on time. Adjusting the expression to `45 9 * * *` out of curiosity then shows the check as healthy. So it looks like cron expressions with commas aren't being considered correctly. | closed | 2021-02-01T01:12:57Z | 2023-01-16T22:01:21Z | https://github.com/healthchecks/healthchecks/issues/473 | [] | bradbeattie | 7 |
Yorko/mlcourse.ai | plotly | 410 | topic 5 part 1 summation sign | [comment in ODS](https://opendatascience.slack.com/archives/C39147V60/p1541584422610100) | closed | 2018-11-07T10:57:25Z | 2018-11-10T16:18:10Z | https://github.com/Yorko/mlcourse.ai/issues/410 | [
"minor_fix"
] | Yorko | 1 |
gee-community/geemap | streamlit | 330 | How about using lgtm.com? | I was running https://lgtm.com over this repo and it found a few interesting things, see https://lgtm.com/projects/g/giswqs/geemap/ Maybe worth adding to the tooling?
| closed | 2021-03-01T15:17:00Z | 2021-03-03T02:35:16Z | https://github.com/gee-community/geemap/issues/330 | [
"Feature Request"
] | deeplook | 9 |
OWASP/Nettacker | automation | 762 | When the program is doing a scan, there is a malfunction in the APIs | Please describe the issue or question and share your OS and Python version.
_________________
**OS**: `x`
Ubuntu
**OS Version**: `x`
22.04.2 LTS
**Python Version**: `x`
Python 3.10.12
Hello, I made a UI in React to call the APIs.
When Nettacker is not scanning, all the API calls work perfectly, but when a scan is in progress the API almost always responds with "database error!"; only every once in a while does it work. | open | 2023-11-09T16:59:22Z | 2023-11-14T16:22:58Z | https://github.com/OWASP/Nettacker/issues/762 | [
"bug"
] | ErtyDess | 1 |
sunscrapers/djoser | rest-api | 120 | Activation token invalidated by user login | Hello, thanks for the great work on this library!
`ActivationView` uses Django's `PasswordResetTokenGenerator` to generate the activation token. To create this token, this class hashes internal user state, including the user's last login time. This means if the user logs in before clicking the activation link (in situations where the activation link is sent, but the user is not required to click it before logging in), the token is invalidated and will not work. The API returns a 400 with `INVALID_TOKEN_ERROR`.
This token hashing behavior makes sense for password reset tokens; however, when creating activation tokens we should not hash the user's last login. This will allow djoser to support optional activation/confirmation links.
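To illustrate the point library-free, here is a hypothetical sketch (plain HMAC standing in for Django's token machinery) of a token that hashes only stable activation-relevant state:

```python
import hashlib
import hmac

SECRET = b"server-side-secret"  # assumption: a real deployment keeps this out of source

def make_activation_token(user_pk, email, is_active, timestamp):
    # Deliberately excludes last_login, so logging in does not invalidate the token,
    # while flipping is_active (i.e. actually activating) still does.
    msg = f"{user_pk}:{email}:{is_active}:{timestamp}".encode()
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()

before_login = make_activation_token(1, "a@example.com", False, 1000)
after_login = make_activation_token(1, "a@example.com", False, 1000)  # last_login changed, hash inputs did not
assert before_login == after_login
```

In Django terms this roughly corresponds to overriding `_make_hash_value` to drop `last_login` for activation tokens.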
What are your thoughts on this? Would you be willing to accept a pull request?
| closed | 2016-02-20T22:08:44Z | 2023-10-10T21:04:09Z | https://github.com/sunscrapers/djoser/issues/120 | [] | joshgeller | 13 |
microsoft/nlp-recipes | nlp | 541 | [ASK] Remove old BERT tests and utils | ### Description
utils_nlp.models.bert is outdated and not supported anymore.
- remove/update related tests
- remove utils
- update examples
### Other Comments
| open | 2020-01-18T18:11:12Z | 2020-01-18T18:11:12Z | https://github.com/microsoft/nlp-recipes/issues/541 | [] | saidbleik | 0 |
wiseodd/generative-models | tensorflow | 62 | Weight sharing | Hi,
Can you explain why the gradient is multiplied by 0.5 here?
```
# Average the gradients
for p in D_shared.parameters():
p.grad.data = 0.5 * p.grad.data
``` | open | 2018-08-23T16:51:05Z | 2018-08-23T16:51:05Z | https://github.com/wiseodd/generative-models/issues/62 | [] | omg777 | 0 |
axnsan12/drf-yasg | django | 681 | swagger_settings.DEFAULT_API_URL is not working | https://github.com/axnsan12/drf-yasg/blob/b99306f71c6a5779b62189df7d9c1f5ea1c794ef/src/drf_yasg/generators.py#L189-L197
As your code shows here, the url from `swagger_settings.DEFAULT_API_URL` is read by the generator but never used.
I appended two lines after `if parsed_url.path:` like this:
```python
if parsed_url.path:
...
else:
self._gen.url = url
```
Then it works.
I'm making a pr if that's fine with you. | open | 2020-12-25T10:11:34Z | 2025-03-07T12:13:24Z | https://github.com/axnsan12/drf-yasg/issues/681 | [
"triage"
] | CoreJa | 1 |
serengil/deepface | deep-learning | 675 | it uses build_model for every represent function | Hey, serengil.
I found that you call build_model inside every represent call.
But it's really inefficient when one tries to use this library to get features for many images (it was millions for me).
So, I called build_model once before using represent and passed the model to the function (it was quite simple).
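For readers hitting the same wall, the build-once pattern in a library-agnostic stdlib sketch (function names are illustrative, not deepface's actual API):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def build_model(name):
    # Stands in for the expensive weight-loading step; runs once per model name.
    return {"name": name, "built": True}

def represent(image, model_name="VGG-Face"):
    model = build_model(model_name)  # cached after the first call
    return (model["name"], image)

represent("img1.jpg")
represent("img2.jpg")
print(build_model.cache_info())
```

Either memoizing the builder like this or passing a prebuilt model explicitly avoids paying the load cost once per image.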
I hope my experience can help many others and update this library. | closed | 2023-02-15T06:18:59Z | 2023-02-15T06:36:39Z | https://github.com/serengil/deepface/issues/675 | [
"question"
] | liveseongho | 1 |
ultralytics/ultralytics | computer-vision | 19112 | I want to implement a multi-task network for segmentation and keypoints. What do I need to do? | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
I want to implement a multi-task network for segmentation and keypoints. What do I need to do?
### Additional
_No response_ | open | 2025-02-07T01:27:25Z | 2025-02-14T12:36:10Z | https://github.com/ultralytics/ultralytics/issues/19112 | [
"question",
"segment",
"pose"
] | duyanfang123 | 6 |
strawberry-graphql/strawberry | django | 3,041 | Switching from strawberry.union to Annotated Union results in unexpected type when resolving fields | ## Describe the Bug
I had a `strawberry.union`-style union type declaration and have been trying to migrate to the updated Annotated union style of declaration, but that change results in a TypeError.
The original code looks something like this:
```python
SomeStuff = strawberry.union(
"SomeStuff",
types=(
Resource,
Process,
),
)
```
and according to the [docs](https://strawberry.rocks/docs/types/union#defining-unions) it should be:
```python
SomeStuff = Annotated[Union[Resource, Process], strawberry.union("SomeStuff")]
```
The latter then results in the following error when the code starts to resolve the schema; say a `Thing` has a field of type `SomeStuff`:
```
../../../.pyenv/versions/3.10.11/envs/llm-ct/lib/python3.10/site-packages/graphql/type/definition.py:946: in fields
raise cls(f"{self.name} fields cannot be resolved. {error}") from error
E TypeError: Thing fields cannot be resolved. Unexpected type 'typing.Annotated[typing.Union[app.models.graphql.resource.Resource, app.models.graphql.process.Process], <strawberry.union.StrawberryUnion object at 0x10b00b0a0>]'
```
Any idea what I might be doing wrong or missing? Does anything else have to change as well, besides the definition of `SomeStuff`?
## System Information
- Operating system: MacOS
- Strawberry version (if applicable): 0.204.0
## Additional Context
The definition of `Thing` is something like this; I'm not sure if the lazy loading has any effect on the outcome:
```python
@strawberry.interface
class Thing:
name: str
@strawberry.field
async def observed_object(
self, info: Info
) -> Annotated[
"SomeStuff",
strawberry.lazy("app.models.graphql.stuffs"),
]:
# [...snip...]
```
(Lazy loading is needed as the real case has some circularity in dependencies).
I wonder if the problem is really here: going by the "resolving a union" [section](https://strawberry.rocks/docs/types/union#resolving-a-union) in the docs, the union-type definition of `SomeStuff` would now be incorrect. But that would be quite a lot of boilerplate (in the real setup I have more than 2 types in the union). | open | 2023-08-16T08:15:17Z | 2025-03-20T15:56:20Z | https://github.com/strawberry-graphql/strawberry/issues/3041 | [
"bug"
] | imrehg | 0 |
recommenders-team/recommenders | deep-learning | 2,191 | [BUG] Tests breaking due to a error in protobuf dependency | ### Description
<!--- Describe your issue/bug/request in detail -->
Protobuf made a new release https://pypi.org/project/protobuf/#history
on Nov 27, 2024. It's breaking the tests.
```
File "/home/runner/work/recommenders/recommenders/tests/ci/azureml_tests/post_pytest.py", line 75, in <module>
runs = mlflow.search_runs(
File "/opt/hostedtoolcache/Python/3.10.15/x64/lib/python3.10/site-packages/mlflow/tracking/fluent.py", line 2069, in search_runs
experiment_by_name = get_experiment_by_name(n)
File "/opt/hostedtoolcache/Python/3.10.15/x64/lib/python3.10/site-packages/mlflow/tracking/fluent.py", line 1671, in get_experiment_by_name
return MlflowClient().get_experiment_by_name(name)
File "/opt/hostedtoolcache/Python/3.10.15/x64/lib/python3.10/site-packages/mlflow/tracking/client.py", line 134, in __init__
self._tracking_client = TrackingServiceClient(final_tracking_uri)
File "/opt/hostedtoolcache/Python/3.10.15/x64/lib/python3.10/site-packages/mlflow/tracking/_tracking_service/client.py", line 83, in __init__
self.store
File "/opt/hostedtoolcache/Python/3.10.15/x64/lib/python3.10/site-packages/mlflow/tracking/_tracking_service/client.py", line 87, in store
return utils._get_store(self.tracking_uri)
File "/opt/hostedtoolcache/Python/3.10.15/x64/lib/python3.10/site-packages/mlflow/tracking/_tracking_service/utils.py", line 208, in _get_store
return _tracking_store_registry.get_store(store_uri, artifact_uri)
File "/opt/hostedtoolcache/Python/3.10.15/x64/lib/python3.10/site-packages/mlflow/tracking/_tracking_service/registry.py", line 45, in get_store
return self._get_store_with_resolved_uri(resolved_store_uri, artifact_uri)
File "/opt/hostedtoolcache/Python/3.10.15/x64/lib/python3.10/site-packages/mlflow/tracking/_tracking_service/registry.py", line 56, in _get_store_with_resolved_uri
return builder(store_uri=resolved_store_uri, artifact_uri=artifact_uri)
File "/opt/hostedtoolcache/Python/3.10.15/x64/lib/python3.10/site-packages/azureml/mlflow/entry_point_loaders.py", line 33, in azureml_store_builder
from azureml.mlflow._store.tracking.store import AzureMLRestStore
File "/opt/hostedtoolcache/Python/3.10.15/x64/lib/python3.10/site-packages/azureml/mlflow/_store/tracking/store.py", line 17, in <module>
from azureml.mlflow._protos.aml_service_pb2 import (
File "/opt/hostedtoolcache/Python/3.10.15/x64/lib/python3.10/site-packages/azureml/mlflow/_protos/aml_service_pb2.py", line 10, in <module>
from google.protobuf import service as _service
File "/opt/hostedtoolcache/Python/3.10.15/x64/lib/python3.10/site-packages/google/protobuf/service.py", line 78
raise NotImplementedError
^
IndentationError: unindent does not match any outer indentation level
Error: Process completed with exit code 1.
```
### In which platform does it happen?
<!--- Describe the platform where the issue is happening (use a list if needed) -->
<!--- For example: -->
<!--- * Azure Data Science Virtual Machine. -->
<!--- * Azure Databricks. -->
<!--- * Other platforms. -->
### How do we replicate the issue?
<!--- Please be specific as possible (use a list if needed). -->
<!--- For example: -->
<!--- * Create a conda environment for pyspark -->
<!--- * Run unit test `test_sar_pyspark.py` with `pytest -m 'spark'` -->
<!--- * ... -->
https://github.com/recommenders-team/recommenders/actions/runs/12100479486/job/33739131547
### Expected behavior (i.e. solution)
<!--- For example: -->
<!--- * The tests for SAR PySpark should pass successfully. -->
I expect Google to test their libraries before making a release.
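Until a fix lands, one interim workaround is to pin protobuf below the release that shipped the broken `google/protobuf/service.py`; the exact version bound below is an assumption based on the Nov 27, 2024 release date mentioned above:

```
# requirements/constraints pin sketch: avoid the protobuf release that breaks
# importing google.protobuf.service (version bound is an assumption)
protobuf<5.29
```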
### Willingness to contribute
<!--- Go over all the following points, and put an `x` in the box that apply. -->
- [ ] Yes, I can contribute for this issue independently.
- [ ] Yes, I can contribute for this issue with guidance from Recommenders community.
- [ ] No, I cannot contribute at this time.
### Other Comments
| closed | 2024-12-01T08:05:46Z | 2024-12-05T06:52:37Z | https://github.com/recommenders-team/recommenders/issues/2191 | [
"bug"
] | miguelgfierro | 4 |
apachecn/ailearning | nlp | 420 | 测试 (Test) | | closed | 2018-08-24T06:42:44Z | 2018-08-24T07:13:30Z | https://github.com/apachecn/ailearning/issues/420 | [] | jiangzhonglian | 0 |
kochlisGit/ProphitBet-Soccer-Bets-Predictor | seaborn | 72 | Can't create a league | I can't create a league and get this issue in the terminal:
 | open | 2024-03-01T10:07:26Z | 2024-03-01T13:49:26Z | https://github.com/kochlisGit/ProphitBet-Soccer-Bets-Predictor/issues/72 | [] | wilfozz | 12 |
gunthercox/ChatterBot | machine-learning | 1,708 | ModuleNotFoundError: No module named 'chatterbot_corpus' error | I was running the basic example code from \examples:
```python
from chatterbot import ChatBot
from chatterbot.trainers import ListTrainer

# Create a new chat bot named Charlie
chatbot = ChatBot('Charlie')
trainer = ListTrainer(chatbot)

trainer.train([
    "Hi, can I help you?",
    "Sure, I'd like to book a flight to Iceland.",
    "Your flight has been booked."
])

# Get a response to the input text 'I would like to book a flight.'
response = chatbot.get_response('I would like to book a flight.')
print(response)
```
________
Your work is great so pls respond fastly! Thanks! | closed | 2019-04-22T03:34:21Z | 2019-06-19T23:57:39Z | https://github.com/gunthercox/ChatterBot/issues/1708 | [] | SperCoder | 15 |
desec-io/desec-stack | rest-api | 55 | api: Hide (don't serialize) created/updated fields | Rationale, especially for DNS records in our database: When we sync the database from pdns, all the `created` timestamps would be updated, which is implausible from the user's perspective | closed | 2017-06-26T10:43:39Z | 2017-08-16T12:48:12Z | https://github.com/desec-io/desec-stack/issues/55 | [
"api"
] | peterthomassen | 1 |
scanapi/scanapi | rest-api | 203 | Change config file default name | Our default config path is `.scanapi.yaml`, which is not as explicit as it could be.
https://github.com/scanapi/scanapi/blob/master/scanapi/settings.py#L5
Some other possibilities:
- scanapi.conf (like [redis](https://redis.io/topics/config), or [postgresql](https://www.postgresql.org/docs/9.3/config-setting.html))
- scanapi.ini (like [pytest](https://docs.pytest.org/en/latest/customize.html))
- .scanapi (like [rspec](https://github.com/rspec/rspec/wiki#rspec))
My favorite one is the first, because in my opinion it is the most explicit.
Besides we also need to:
- Update documentation at [scanapi/website](https://github.com/scanapi/website/tree/master/_docs_v1)
- Update examples at [scanapi/examples](https://github.com/scanapi/examples) | closed | 2020-06-29T18:19:34Z | 2020-08-24T20:34:13Z | https://github.com/scanapi/scanapi/issues/203 | [
"Good First Issue",
"Breaking Change"
] | camilamaia | 2 |
tensorly/tensorly | numpy | 82 | weights in parafac decomposition | In parafac decomposition, the docs mention a "weights" parameter being returned as given below:
> weights : ndarray, optional
> Array of length rank of weights for each factor matrix. See the with_weights keyword attribute.
But they're not returned. Nor does the source have any mention of it. | closed | 2018-10-26T03:02:29Z | 2019-05-01T12:01:29Z | https://github.com/tensorly/tensorly/issues/82 | [
"enhancement"
] | abhyudayasrinet | 1 |
ccxt/ccxt | api | 25,294 | Gate pro: Cannot retry watchTradesForSymbols() when exchange delists a symbol | ### Operating System
MacOS
### Programming Languages
JavaScript
### CCXT Version
4.4.60
### Description
When the gate.io exchange delisted a symbol (in my case it was "LUNARLENS/USDT"), I already had an existing exchange instance with markets loaded and websocket subscriptions open, in which the symbol was still present in the markets.
My code calls `watchTradesForSymbols(someSymbolsArrayIncludingThatSymbol)` periodically every minute to retry the websocket connection in case it closes for some reason, but after the symbol was delisted, the call ended with the exchange exception message "unknown currency pair: LUNARLENS_USDT". I'm not sure whether the websocket connection was dropped right away or some time later, but after that the retry no longer worked, for the above-mentioned reason, and the exchange stopped receiving trades completely.
So now I have implemented an error-handling mechanism where I parse the mentioned symbol from the exchange error message and try again without the delisted symbol. However, the problem is that upon attempting to open a subscription, all the other symbols in that array get marked as having an existing subscription in `client.subscriptions` inside `Exchange.watchMultiple()`, even if the call fails. The **client.subscriptions remain filled**, so even when I call the `watchTradesForSymbols` method again it does nothing, since it thinks that the subscriptions for those symbols already exist, even though the earlier call failed because of the delisted symbol.
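The parse-and-retry idea can be sketched generically; the error-message format and helper below are assumptions for illustration, not part of the ccxt API:

```python
import re

def parse_delisted_symbol(error_message: str):
    """Extract the pair from a gate-style 'unknown currency pair' error.

    The message format is an assumption based on the error quoted above.
    """
    m = re.search(r"unknown currency pair: (\S+)", error_message)
    if m is None:
        return None
    return m.group(1).replace("_", "/")  # "LUNARLENS_USDT" -> "LUNARLENS/USDT"

symbols = ["BTC/USDT", "ETH/USDT", "LUNARLENS/USDT"]
bad = parse_delisted_symbol("unknown currency pair: LUNARLENS_USDT")
retry_symbols = [s for s in symbols if s != bad]  # retry without the delisted pair
```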
I think the problem is the client.subscriptions dictionary: subscriptions that were never opened should be cleared from it, but they are not.
To simulate the situation I had to manually add the "LUNARLENS/USDT" market to exchange.markets, since it is delisted now (I just copied the one for "ETH/USDT" and changed ETH to LUNARLENS in all fields):
```typescript
const ethUsdt: MarketInterface = exchange.markets["ETH/USDT"];
const lunar = { ...ethUsdt };
lunar.id = "LUNARLENS_USDT";
lunar.symbol = "LUNARLENS/USDT";
lunar.base = "LUNARLENS";
lunar.baseId = "LUNARLENS";
exchange.markets["LUNARLENS/USDT"] = lunar;
```
| closed | 2025-02-17T13:54:41Z | 2025-02-19T18:30:56Z | https://github.com/ccxt/ccxt/issues/25294 | [
"bug"
] | ggecy | 2 |
joeyespo/grip | flask | 94 | Links to header anchor does not work | Default markdown syntax not working with grip:
```
[LinkName](#Header1)
...
## Header1
```
A real example can be found in the [prelude](https://github.com/bbatsov/prelude) repo (table of contents section).
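For context, GitHub derives header anchors from the heading text (lowercased, with spaces turned into hyphens), so a target like `#Header1` may not match the generated id even when anchors are emitted. A rough sketch of the slug rule (a simplification of GitHub's actual algorithm):

```python
import re

def github_slug(header: str) -> str:
    """Approximate GitHub's header-to-anchor rule: lowercase, drop punctuation, spaces -> hyphens."""
    slug = header.strip().lower()
    slug = re.sub(r"[^\w\- ]", "", slug)  # drop punctuation
    return slug.replace(" ", "-")

assert github_slug("Header1") == "header1"
assert github_slug("Table of Contents") == "table-of-contents"
```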
| closed | 2015-01-13T19:12:22Z | 2015-02-09T07:49:45Z | https://github.com/joeyespo/grip/issues/94 | [
"bug"
] | veelenga | 4 |
benbusby/whoogle-search | flask | 1,026 | [BUG] How to go back to search session after clicking a link | This happens with both the self-hosted Docker version and https://[w.ftw.lol](https://w.ftw.lol/search)/search: whenever you search something, open a result link, and go back in the browser, both Chrome and Edge complain about "Confirm Form Resubmission" and "ERR_CACHE_MISS".
**Deployment Method**
- [ ] Heroku (one-click deploy)
- [x] Docker
- [ ] `run` executable
- [ ] pip/pipx
- [ ] Other: [describe setup]
**Version of Whoogle Search**
- [x] Latest build from [source] (i.e. GitHub, Docker Hub, pip, etc)
- [ ] Version [version number]
- [ ] Not sure
**Desktop (please complete the following information):**
- OS: windows 10
- Browser: chrome and edge
- Version [e.g. 22]
| closed | 2023-06-25T21:38:57Z | 2023-06-26T22:01:14Z | https://github.com/benbusby/whoogle-search/issues/1026 | [
"bug"
] | mysteriousHerb | 1 |
kynan/nbstripout | jupyter | 14 | Add Changelog | Would be useful to have!
| closed | 2016-02-15T23:59:31Z | 2017-01-25T21:57:53Z | https://github.com/kynan/nbstripout/issues/14 | [
"type:meta",
"resolution:fixed"
] | kynan | 0 |