| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
liangliangyy/DjangoBlog | django | 529 | Missing features | <!--
If you do not carefully check the items below, I may close your issue directly.
Before asking a question, it is recommended to first read https://github.com/ruby-china/How-To-Ask-Questions-The-Smart-Way
-->
**I confirm that I have checked** (mark `[ ]` as `[x]`)
- [x] [the DjangoBlog readme](https://github.com/liangliangyy/DjangoBlog/blob/master/README.md)
- [x] [the configuration guide](https://github.com/liangliangyy/DjangoBlog/blob/master/bin/config.md)
- [x] [other issues](https://github.com/liangliangyy/DjangoBlog/issues)
----
**I am requesting** (mark `[ ]` as `[x]`)
- [ ] a bug report
- [x] a new feature or capability
- [ ] technical support
1. After checking "staff status" for a user, that user still cannot access the admin page.
| closed | 2021-12-15T08:18:49Z | 2021-12-19T03:48:54Z | https://github.com/liangliangyy/DjangoBlog/issues/529 | [] | lmafeifeil | 9 |
wkentaro/labelme | computer-vision | 1,546 | Unable to start labelMe | ### Provide environment information
Python 3.8.10
```
> pip list
Package Version
------------------- -----------
annotated-types 0.7.0
beautifulsoup4 4.13.3
certifi 2025.1.31
charset-normalizer 3.4.1
click 8.1.8
colorama 0.4.6
coloredlogs 15.0.1
contourpy 1.1.1
cycler 0.12.1
filelock 3.16.1
flatbuffers 25.2.10
fonttools 4.56.0
gdown 5.2.0
humanfriendly 10.0
idna 3.10
imageio 2.35.1
imgviz 1.7.5
importlib_resources 6.4.5
kiwisolver 1.4.7
labelme 5.6.1
lazy_loader 0.4
loguru 0.7.3
matplotlib 3.7.5
mpmath 1.3.0
natsort 8.4.0
networkx 3.1
numpy 1.24.4
onnxruntime 1.19.2
osam 0.2.2
packaging 24.2
pillow 10.4.0
pip 25.0.1
protobuf 5.29.3
pydantic 2.10.6
pydantic_core 2.27.2
pyparsing 3.1.4
PyQt5 5.15.11
PyQt5-Qt5 5.15.2
PyQt5_sip 12.15.0
pyreadline3 3.5.4
PySocks 1.7.1
python-dateutil 2.9.0.post0
PyWavelets 1.4.1
PyYAML 6.0.2
QtPy 2.4.3
requests 2.32.3
scikit-image 0.21.0
scipy 1.10.1
setuptools 56.0.0
six 1.17.0
soupsieve 2.6
sympy 1.13.3
termcolor 2.4.0
tifffile 2023.7.10
tqdm 4.67.1
typing_extensions 4.12.2
urllib3 2.2.3
win32_setctime 1.2.0
zipp 3.20.2
```
### What OS are you using?
Windows 10, Build 19045
### Describe the Bug
When starting labelMe, the following error occurs:
```
Traceback (most recent call last):
File "C:\Users\user\AppData\Local\Programs\Python\Python38\lib\runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "C:\Users\user\AppData\Local\Programs\Python\Python38\lib\runpy.py", line 87, in _run_code
exec(code, run_globals)
File "D:\test\venv\Scripts\labelme.exe\__main__.py", line 4, in <module>
File "d:\test\venv\lib\site-packages\labelme\__main__.py", line 14, in <module>
from labelme.app import MainWindow
File "d:\test\venv\lib\site-packages\labelme\app.py", line 22, in <module>
from labelme import ai
File "d:\test\venv\lib\site-packages\labelme\ai\__init__.py", line 5, in <module>
from .text_to_annotation import get_rectangles_from_texts # NOQA: F401
File "d:\test\venv\lib\site-packages\labelme\ai\text_to_annotation.py", line 10, in <module>
model: str, image: np.ndarray, texts: list[str]
TypeError: 'type' object is not subscriptable
```
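For context, the annotation `texts: list[str]` in the traceback uses PEP 585 built-in generics, which Python only supports at runtime from 3.9 onward; on Python 3.8, subscripting the `list` type raises exactly this `TypeError` at import time. A hedged sketch of the incompatibility and the backward-compatible spelling (the function name is borrowed from the traceback purely for illustration; the body is not labelme's real code):

```python
import sys
from typing import List

# On Python <= 3.8, evaluating list[str] raises
# "TypeError: 'type' object is not subscriptable" (PEP 585 landed in 3.9),
# so annotations written with typing.List keep the module importable on 3.8.
def get_rectangles_from_texts(model: str, texts: List[str]) -> List[tuple]:
    # Illustrative body only; the real labelme function differs.
    return [(model, text) for text in texts]

print(sys.version_info >= (3, 9), get_rectangles_from_texts("sam", ["cat"]))
```

Upgrading the environment to Python >= 3.9 avoids the error; whether a given labelme release still supports 3.8 is version-dependent.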
### Expected Behavior
The application should start.
### To Reproduce
The issue occurs with labelMe 5.6.0 or newer. Neither command line nor .exe works.
Steps:
```
> python -m venv venv
> .\venv\Scripts\Activate.ps1
> pip install labelme
> labelme
``` | open | 2025-02-26T11:06:21Z | 2025-03-18T09:58:27Z | https://github.com/wkentaro/labelme/issues/1546 | [] | onnxruntime-user | 2 |
arogozhnikov/einops | numpy | 22 | Ellipsis not mentioned in docs | Great work!
I discovered in https://github.com/arogozhnikov/einops/blob/master/einops/einops.py#L199 that you also support ellipsis. It's an important feature, so you may want to add it to the documentation. | open | 2018-12-01T10:52:24Z | 2025-03-02T20:41:12Z | https://github.com/arogozhnikov/einops/issues/22 | [] | LukasDrude | 14 |
strawberry-graphql/strawberry | asyncio | 3,656 | `ApolloTracingExtension` regression populating `resolvers` field. |
## Describe the Bug
```python
import asyncio
import strawberry
from strawberry.extensions import tracing
@strawberry.type
class Query:
@strawberry.field
def node(self) -> str:
return ''
schema = strawberry.Schema(Query, extensions=[tracing.ApolloTracingExtension])
for i in range(2):
result = asyncio.run(schema.execute('{ node }'))
assert result.extensions['tracing']['execution']['resolvers'], i # second time fails
```
outputs
```console
assert result.extensions['tracing']['execution']['resolvers'], i # second time fails
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^
AssertionError: 1
```
Only the first request populates `resolvers` as expected. This worked in previous versions.
## System Information
- Strawberry version (if applicable): >=0.240
## Additional Context
| open | 2024-10-05T17:44:40Z | 2025-03-20T15:56:54Z | https://github.com/strawberry-graphql/strawberry/issues/3656 | [
"bug"
] | coady | 2 |
horovod/horovod | tensorflow | 3,511 | Horovod installation for TF CPU nightly fails with error: no member "tensorflow_gpu_device_info"! | **Environment:**
1. TensorFlow
2. Framework version: 2.10 nightly
3. Horovod version: 0.24.2 all the way up to tip of master
4. MPI version: 4.0.3
5. CUDA version: N/A this is CPU install
6. NCCL version: N/A
7. Python version: 3.8.10
8. Spark / PySpark version: N/A
9. Ray version: N/A
10. OS and version: Ubuntu 20.04.4 LTS
11. GCC version: 9.4.0
12. CMake version: 3.16.3
While installing any version of Horovod from `0.24.2` up to the tip of the `master` branch with the following settings, I get:
```
# Install Horovod
export HOROVOD_WITHOUT_PYTORCH=1
export HOROVOD_WITHOUT_MXNET=1
export HOROVOD_WITH_TENSORFLOW=1
export HOROVOD_VERSION=v0.24.2
```
and then:
```
python3 -m pip install git+https://github.com/horovod/horovod.git@${HOROVOD_VERSION}
```
and I'm getting this error during installation:
```
/tmp/pip-req-build-xs138tj2/horovod/common/ops/gloo_operations.h:51:8: required from here
/tmp/pip-req-build-xs138tj2/third_party/gloo/gloo/math.h:20:22: warning: comparison of integer expressions of different signedness: ‘int’ and ‘size_t’ {aka ‘long unsigned int’} [-Wsign-compare]
[ 99%] Building CXX object horovod/tensorflow/CMakeFiles/tensorflow.dir/mpi_ops.cc.o
cd /tmp/pip-req-build-xs138tj2/build/temp.linux-x86_64-cpython-38/RelWithDebInfo/horovod/tensorflow && /usr/bin/c++ -DEIGEN_MPL2_ONLY=1 -DHAVE_GLOO=1 -DHAVE_MPI=1 -DTENSORFLOW_VERSION=2010000000 -Dtensorflow_EXPORTS -I/tmp/pip-req-build-xs138tj2/third_party/HTTPRequest/include -I/tmp/pip-req-build-xs138tj2/third_party/boost/assert/include -I/tmp/pip-req-build-xs138tj2/third_party/boost/config/include -I/tmp/pip-req-build-xs138tj2/third_party/boost/core/include -I/tmp/pip-req-build-xs138tj2/third_party/boost/detail/include -I/tmp/pip-req-build-xs138tj2/third_party/boost/iterator/include -I/tmp/pip-req-build-xs138tj2/third_party/boost/lockfree/include -I/tmp/pip-req-build-xs138tj2/third_party/boost/mpl/include -I/tmp/pip-req-build-xs138tj2/third_party/boost/parameter/include -I/tmp/pip-req-build-xs138tj2/third_party/boost/predef/include -I/tmp/pip-req-build-xs138tj2/third_party/boost/preprocessor/include -I/tmp/pip-req-build-xs138tj2/third_party/boost/static_assert/include -I/tmp/pip-req-build-xs138tj2/third_party/boost/type_traits/include -I/tmp/pip-req-build-xs138tj2/third_party/boost/utility/include -I/tmp/pip-req-build-xs138tj2/third_party/lbfgs/include -I/tmp/pip-req-build-xs138tj2/third_party/gloo -I/tmp/pip-req-build-xs138tj2/third_party/flatbuffers/include -isystem /usr/lib/x86_64-linux-gnu/openmpi/include/openmpi -isystem /usr/lib/x86_64-linux-gnu/openmpi/include -isystem /usr/local/lib/python3.8/dist-packages/tensorflow/include -I/usr/local/lib/python3.8/dist-packages/tensorflow/include -D_GLIBCXX_USE_CXX11_ABI=0 -DEIGEN_MAX_ALIGN_BYTES=64 -pthread -fPIC -Wall -ftree-vectorize -mf16c -mavx -mfma -O3 -g -DNDEBUG -fPIC -std=c++14 -o CMakeFiles/tensorflow.dir/mpi_ops.cc.o -c /tmp/pip-req-build-xs138tj2/horovod/tensorflow/mpi_ops.cc
/tmp/pip-req-build-xs138tj2/horovod/tensorflow/mpi_ops.cc: In function ‘int horovod::tensorflow::{anonymous}::GetDeviceID(tensorflow::OpKernelContext*)’:
/tmp/pip-req-build-xs138tj2/horovod/tensorflow/mpi_ops.cc:389:26: error: ‘class tensorflow::DeviceBase’ has no member named ‘tensorflow_gpu_device_info’; did you mean ‘tensorflow_accelerator_device_info’?
context->device()->tensorflow_gpu_device_info() != nullptr) {
^~~~~~~~~~~~~~~~~~~~~~~~~~
tensorflow_accelerator_device_info
/tmp/pip-req-build-xs138tj2/horovod/tensorflow/mpi_ops.cc:390:33: error: ‘class tensorflow::DeviceBase’ has no member named ‘tensorflow_gpu_device_info’; did you mean ‘tensorflow_accelerator_device_info’?
device = context->device()->tensorflow_gpu_device_info()->gpu_id;
^~~~~~~~~~~~~~~~~~~~~~~~~~
tensorflow_accelerator_device_info
/tmp/pip-req-build-xs138tj2/horovod/tensorflow/mpi_ops.cc: At global scope:
/tmp/pip-req-build-xs138tj2/horovod/tensorflow/mpi_ops.cc:384:18: warning: ‘tensorflow::OpKernelContext* horovod::tensorflow::{anonymous}::TFOpContext::GetKernelContext() const’ defined but not used [-Wunused-function]
OpKernelContext* TFOpContext::GetKernelContext() const { return context_; }
^~~~~~~~~~~
/tmp/pip-req-build-xs138tj2/horovod/tensorflow/mpi_ops.cc:293:30: warning: ‘const tensorflow::Tensor* horovod::tensorflow::{anonymous}::TFTensor::tensor() const’ defined but not used [-Wunused-function]
const ::tensorflow::Tensor* TFTensor::tensor() const { return &tensor_; }
^~~~~~~~
make[2]: *** [horovod/tensorflow/CMakeFiles/tensorflow.dir/build.make:453: horovod/tensorflow/CMakeFiles/tensorflow.dir/mpi_ops.cc.o] Error 1
make[2]: Leaving directory '/tmp/pip-req-build-xs138tj2/build/temp.linux-x86_64-cpython-38/RelWithDebInfo'
make[1]: *** [CMakeFiles/Makefile2:443: horovod/tensorflow/CMakeFiles/tensorflow.dir/all] Error 2
make[1]: Leaving directory '/tmp/pip-req-build-xs138tj2/build/temp.linux-x86_64-cpython-38/RelWithDebInfo'
make: *** [Makefile:130: all] Error 2
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/tmp/pip-req-build-xs138tj2/setup.py", line 166, in <module>
setup(name='horovod',
File "/usr/local/lib/python3.8/dist-packages/setuptools/__init__.py", line 87, in setup
return distutils.core.setup(**attrs)
File "/usr/local/lib/python3.8/dist-packages/setuptools/_distutils/core.py", line 148, in setup
return run_commands(dist)
File "/usr/local/lib/python3.8/dist-packages/setuptools/_distutils/core.py", line 163, in run_commands
dist.run_commands()
File "/usr/local/lib/python3.8/dist-packages/setuptools/_distutils/dist.py", line 967, in run_commands
self.run_command(cmd)
File "/usr/local/lib/python3.8/dist-packages/setuptools/dist.py", line 1214, in run_command
super().run_command(command)
File "/usr/local/lib/python3.8/dist-packages/setuptools/_distutils/dist.py", line 986, in run_command
cmd_obj.run()
File "/usr/lib/python3/dist-packages/wheel/bdist_wheel.py", line 223, in run
self.run_command('build')
File "/usr/local/lib/python3.8/dist-packages/setuptools/_distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/usr/local/lib/python3.8/dist-packages/setuptools/dist.py", line 1214, in run_command
super().run_command(command)
File "/usr/local/lib/python3.8/dist-packages/setuptools/_distutils/dist.py", line 986, in run_command
cmd_obj.run()
File "/usr/local/lib/python3.8/dist-packages/setuptools/_distutils/command/build.py", line 136, in run
self.run_command(cmd_name)
File "/usr/local/lib/python3.8/dist-packages/setuptools/_distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/usr/local/lib/python3.8/dist-packages/setuptools/dist.py", line 1214, in run_command
super().run_command(command)
File "/usr/local/lib/python3.8/dist-packages/setuptools/_distutils/dist.py", line 986, in run_command
cmd_obj.run()
File "/usr/local/lib/python3.8/dist-packages/setuptools/command/build_ext.py", line 79, in run
_build_ext.run(self)
File "/usr/local/lib/python3.8/dist-packages/setuptools/_distutils/command/build_ext.py", line 339, in run
self.build_extensions()
File "/tmp/pip-req-build-xs138tj2/setup.py", line 100, in build_extensions
subprocess.check_call([cmake_bin, '--build', '.'] + cmake_build_args,
File "/usr/lib/python3.8/subprocess.py", line 364, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['cmake', '--build', '.', '--config', 'RelWithDebInfo', '--', 'VERBOSE=1']' returned non-zero exit status 2.
----------------------------------------
ERROR: Failed building wheel for horovod
Running setup.py clean for horovod
```
| closed | 2022-04-19T17:24:41Z | 2022-04-21T01:08:25Z | https://github.com/horovod/horovod/issues/3511 | [
"bug"
] | ashahba | 4 |
TracecatHQ/tracecat | automation | 429 | Improve Search in UI (Registry and Workflows) | **Describe the bug**
Search in the UI (Registry and Workflows) could be improved. Currently it seems that search only matches the action title, but the Namespace field (the integration itself) could also be used.
**To Reproduce**
1. Go to 'Workflows'
2. Add new 'Node'
3. Search for 'LDAP'
4. Only one action is returned
**Expected behavior**
Search results should return all actions related to the LDAP integration
**Screenshots**


| closed | 2024-10-15T06:32:28Z | 2024-11-06T20:33:23Z | https://github.com/TracecatHQ/tracecat/issues/429 | [] | skrilab | 1 |
hankcs/HanLP | nlp | 994 | pyhanlp reports an error after installing the JDK | Error message:
```
Traceback (most recent call last):
File "C:/zengxianfeng/lac/utils.py", line 4, in <module>
from pyhanlp import *
File "C:\zengxianfeng\Anaconda\lib\site-packages\pyhanlp\__init__.py", line 116, in <module>
_start_jvm_for_hanlp()
File "C:\zengxianfeng\Anaconda\lib\site-packages\pyhanlp\__init__.py", line 108, in _start_jvm_for_hanlp
getDefaultJVMPath(),
File "C:\zengxianfeng\Anaconda\lib\site-packages\jpype\_core.py", line 121, in get_default_jvm_path
return finder.get_jvm_path()
File "C:\zengxianfeng\Anaconda\lib\site-packages\jpype\_jvmfinder.py", line 153, in get_jvm_path
.format(self._libfile))
jpype._jvmfinder.JVMNotFoundException: No JVM shared library file (jvm.dll) found. Try setting up the JAVA_HOME environment variable properly.
```
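For context, `jpype` locates the JVM via `JAVA_HOME`, which must point at a real JDK/JRE root containing `jvm.dll` — a `jdk` folder inside Anaconda's `site-packages` typically does not contain one. A hedged sketch for checking this (the probed sub-paths are assumptions based on common Windows JDK layouts, not jpype's exact search list):

```python
import os

def jvm_dll_candidates(java_home: str):
    # Common Windows JDK/JRE layouts that contain jvm.dll
    # (illustrative; jpype's real finder probes more locations).
    return [
        os.path.join(java_home, "bin", "server", "jvm.dll"),
        os.path.join(java_home, "jre", "bin", "server", "jvm.dll"),
    ]

java_home = os.environ.get("JAVA_HOME", "")
found = [p for p in jvm_dll_candidates(java_home) if os.path.exists(p)]
print(found or "no jvm.dll under JAVA_HOME - point JAVA_HOME at a real JDK install")
```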
I have already added JAVA_HOME=C:\zengxianfeng\Anaconda\Lib\site-packages\jdk to the environment variables. | closed | 2018-10-12T02:42:31Z | 2018-11-30T03:16:39Z | https://github.com/hankcs/HanLP/issues/994 | [] | SefaZeng | 1 |
d2l-ai/d2l-en | tensorflow | 2,638 | d2l broken (numpy) in colab with default pytorch 2.6.0 | Colab just updated pytorch to 2.6.0 and it breaks d2l:
```bash
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
[<ipython-input-2-a93af434f2cf>](https://localhost:8080/#) in <cell line: 0>()
1 get_ipython().run_line_magic('matplotlib', 'inline')
----> 2 from d2l import torch as d2l
3 import torchvision
14 frames
[/usr/local/lib/python3.11/dist-packages/d2l/torch.py](https://localhost:8080/#) in <module>
4 import numpy as np
5 import torch
----> 6 import torchvision
7 from PIL import Image
8 from torch import nn
[/usr/local/lib/python3.11/dist-packages/torchvision/__init__.py](https://localhost:8080/#) in <module>
8 # .extensions) before entering _meta_registrations.
9 from .extension import _HAS_OPS # usort:skip
---> 10 from torchvision import _meta_registrations, datasets, io, models, ops, transforms, utils # usort:skip
11
12 try:
[/usr/local/lib/python3.11/dist-packages/torchvision/models/__init__.py](https://localhost:8080/#) in <module>
1 from .alexnet import *
----> 2 from .convnext import *
3 from .densenet import *
4 from .efficientnet import *
5 from .googlenet import *
[/usr/local/lib/python3.11/dist-packages/torchvision/models/convnext.py](https://localhost:8080/#) in <module>
6 from torch.nn import functional as F
7
----> 8 from ..ops.misc import Conv2dNormActivation, Permute
9 from ..ops.stochastic_depth import StochasticDepth
10 from ..transforms._presets import ImageClassification
[/usr/local/lib/python3.11/dist-packages/torchvision/ops/__init__.py](https://localhost:8080/#) in <module>
21 from .giou_loss import generalized_box_iou_loss
22 from .misc import Conv2dNormActivation, Conv3dNormActivation, FrozenBatchNorm2d, MLP, Permute, SqueezeExcitation
---> 23 from .poolers import MultiScaleRoIAlign
24 from .ps_roi_align import ps_roi_align, PSRoIAlign
25 from .ps_roi_pool import ps_roi_pool, PSRoIPool
[/usr/local/lib/python3.11/dist-packages/torchvision/ops/poolers.py](https://localhost:8080/#) in <module>
8
9 from ..utils import _log_api_usage_once
---> 10 from .roi_align import roi_align
11
12
[/usr/local/lib/python3.11/dist-packages/torchvision/ops/roi_align.py](https://localhost:8080/#) in <module>
5 import torch.fx
6 from torch import nn, Tensor
----> 7 from torch._dynamo.utils import is_compile_supported
8 from torch.jit.annotations import BroadcastingList2
9 from torch.nn.modules.utils import _pair
[/usr/local/lib/python3.11/dist-packages/torch/_dynamo/__init__.py](https://localhost:8080/#) in <module>
1 import torch
2
----> 3 from . import convert_frame, eval_frame, resume_execution
4 from .backends.registry import list_backends, lookup_backend, register_backend
5 from .callback import callback_handler, on_compile_end, on_compile_start
[/usr/local/lib/python3.11/dist-packages/torch/_dynamo/convert_frame.py](https://localhost:8080/#) in <module>
31 from torch._C._dynamo.guards import GlobalStateGuard
32 from torch._dynamo.distributed import get_compile_pg
---> 33 from torch._dynamo.symbolic_convert import TensorifyState
34 from torch._guards import compile_context, CompileContext, CompileId, tracing
35 from torch._logging import structured
[/usr/local/lib/python3.11/dist-packages/torch/_dynamo/symbolic_convert.py](https://localhost:8080/#) in <module>
25 import torch
26 import torch._logging
---> 27 from torch._dynamo.exc import TensorifyScalarRestartAnalysis
28 from torch._guards import tracing, TracingContext
29
[/usr/local/lib/python3.11/dist-packages/torch/_dynamo/exc.py](https://localhost:8080/#) in <module>
9
10 from . import config
---> 11 from .utils import counters
12
13
[/usr/local/lib/python3.11/dist-packages/torch/_dynamo/utils.py](https://localhost:8080/#) in <module>
109 np.fft,
110 np.linalg,
--> 111 np.random,
112 )
113
[/usr/local/lib/python3.11/dist-packages/numpy/__init__.py](https://localhost:8080/#) in __getattr__(attr)
335 if not abs(x.dot(x) - 2.0) < 1e-5:
336 raise AssertionError()
--> 337 except AssertionError:
338 msg = ("The current Numpy installation ({!r}) fails to "
339 "pass simple sanity checks. This can be caused for example "
[/usr/local/lib/python3.11/dist-packages/numpy/random/__init__.py](https://localhost:8080/#) in <module>
178
179 # add these for module-freeze analysis (like PyInstaller)
--> 180 from . import _pickle
181 from . import _common
182 from . import _bounded_integers
[/usr/local/lib/python3.11/dist-packages/numpy/random/_pickle.py](https://localhost:8080/#) in <module>
----> 1 from .mtrand import RandomState
2 from ._philox import Philox
3 from ._pcg64 import PCG64, PCG64DXSM
4 from ._sfc64 import SFC64
5
mtrand.pyx in init numpy.random.mtrand()
ValueError: numpy.dtype size changed, may indicate binary incompatibility. Expected 96 from C header, got 88 from PyObject
``` | open | 2025-03-18T11:51:11Z | 2025-03-18T13:02:53Z | https://github.com/d2l-ai/d2l-en/issues/2638 | [] | drapado | 0 |
huggingface/diffusers | deep-learning | 10,277 | Using euler scheduler in fluxfill | ### Describe the bug
I am using the custom FluxFill pipeline and want to use the Euler scheduler (EulerAncestralDiscreteScheduler) in my code. However, I am encountering the following error:
### Reproduction
```
from diffusers.schedulers import (
DPMSolverMultistepScheduler,
EulerAncestralDiscreteScheduler,
)
SAMPLER_MAP = {
"DPM++ SDE Karras": lambda config: DPMSolverMultistepScheduler.from_config(
config, use_karras_sigmas=True, algorithm_type="sde-dpmsolver++"
),
"DPM++ 2M Karras": lambda config: DPMSolverMultistepScheduler.from_config(
config, use_karras_sigmas=True
),
"DPM++ 2M SDE": lambda config: DPMSolverMultistepScheduler.from_config(
config, algorithm_type="sde-dpmsolver++"
),
"DPM++ 2M SDE Karras": lambda config: DPMSolverMultistepScheduler.from_config(
config, use_karras_sigmas=True, algorithm_type="sde-dpmsolver++"
),
"Euler a": lambda config: EulerAncestralDiscreteScheduler.from_config(
config,
),
}
def model_fn() -> FluxFillPipeline:
pipe = FluxFillPipeline.from_pretrained("Planningo/flux-inpaint", torch_dtype=torch.bfloat16).to("cuda")
# Apply the Euler scheduler
pipe.scheduler = SAMPLER_MAP["Euler a"](pipe.scheduler.config)
return pipe
```
### Logs
```shell
ValueError: The current scheduler class <class 'diffusers.schedulers.scheduling_euler_ancestral_discrete.EulerAncestralDiscreteScheduler'>'s `set_timesteps` does not support custom sigmas schedules. Please check whether you are using the correct scheduler.
```
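For context, the `ValueError` arises because Flux-family pipelines drive their flow-matching scheduler by passing a custom `sigmas` schedule to `set_timesteps`, which `EulerAncestralDiscreteScheduler.set_timesteps` does not accept. A hedged, self-contained sketch of that compatibility check (the dummy classes only mimic the signatures; this is not diffusers' actual code):

```python
import inspect

class FlowMatchLikeScheduler:
    def set_timesteps(self, num_inference_steps=None, device=None, sigmas=None):
        ...

class EulerAncestralLikeScheduler:
    def set_timesteps(self, num_inference_steps, device=None):
        ...

def accepts_custom_sigmas(scheduler) -> bool:
    # Flux pipelines call scheduler.set_timesteps(sigmas=...); a swapped-in
    # scheduler must therefore accept a `sigmas` keyword (hedged sketch of
    # the check behind the ValueError).
    return "sigmas" in inspect.signature(scheduler.set_timesteps).parameters

print(accepts_custom_sigmas(FlowMatchLikeScheduler()))       # True
print(accepts_custom_sigmas(EulerAncestralLikeScheduler()))  # False
```

So swapping in a scheduler whose `set_timesteps` lacks `sigmas` support is rejected by the pipeline up front.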
### System Info
latest(github version diffusers), python3.10, ubuntu with nvidia gpu
### Who can help?
@yiyixuxu | open | 2024-12-18T04:15:02Z | 2025-02-02T15:02:54Z | https://github.com/huggingface/diffusers/issues/10277 | [
"bug",
"stale"
] | luna313 | 4 |
ansible/awx | django | 14,922 | Inventory and Host modules are not idempotent in --check mode | ### Please confirm the following
- [X] I agree to follow this project's [code of conduct](https://docs.ansible.com/ansible/latest/community/code_of_conduct.html).
- [X] I have checked the [current issues](https://github.com/ansible/awx/issues) for duplicates.
- [X] I understand that AWX is open source software provided for free and that I might not receive a timely response.
- [X] I am **NOT** reporting a (potential) security vulnerability. (These should be emailed to `security@ansible.com` instead.)
### Bug Summary
Inventory and Host modules will report "ok" in real mode but "changed" in --check mode
### AWX version
23.2.0
### Select the relevant components
- [ ] UI
- [ ] UI (tech preview)
- [ ] API
- [ ] Docs
- [X] Collection
- [ ] CLI
- [ ] Other
### Installation method
docker development environment
### Modifications
no
### Ansible version
2.14.2
### Operating system
Red Hat Enterprise Linux release 9.1 (Plow)
### Web browser
Chrome
### Steps to reproduce
```
test_bugs.yml
- ansible.controller.inventory:
name: 'Dynamic A'
organization: 'A'
- ansible.controller.host:
name: 'Host A'
inventory: 'Dynamic A'
ansible-playbook test_bugs.yml
ansible-playbook test_bugs.yml --check
```
### Expected results
```
ok: [localhost] ...
ok: [localhost] ...
ok: [localhost] ...
ok: [localhost] ...
```
### Actual results
```
ok: [localhost] ...
ok: [localhost] ...
changed: [localhost]
changed: [localhost]
```
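The contrast between the two runs suggests that in check mode the modules report `changed` unconditionally instead of diffing the desired state against the existing state. A minimal hedged sketch of check-mode-safe change detection (illustrative only, not the collection's actual implementation):

```python
def would_change(existing: dict, desired: dict) -> bool:
    # An idempotent module reports changed only when applying `desired`
    # would actually modify `existing` - in both real and --check mode.
    return any(existing.get(key) != value for key, value in desired.items())

existing = {"name": "Dynamic A", "organization": "A"}
print(would_change(existing, {"name": "Dynamic A", "organization": "A"}))  # False -> "ok"
print(would_change(existing, {"organization": "B"}))                       # True  -> "changed"
```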
### Additional information
_No response_ | open | 2024-02-26T01:53:40Z | 2024-02-28T19:42:02Z | https://github.com/ansible/awx/issues/14922 | [
"type:bug",
"component:awx_collection",
"needs_triage",
"community"
] | kk-at-redhat | 4 |
explosion/spaCy | data-science | 12,566 | CLI benchmark accuracy doesn't save rendered displacy htmls | The accuracy benchmark of my model does not save the rendered displaCy HTMLs as requested. The benchmark itself works, which is why I'm confused. The model contains only **transformer** and **spancat** components. Is **spancat** not supported yet? 😞
The DocBin does not contain any empty docs.
CLI output:
```powershell
$ python -m spacy benchmark accuracy data/models/pl_spancat_acc/model-best/ data/test.spacy --output results/spacy/metrics.json --gpu-id 0 --displacy-path results/spacy/benchmark_acc_test_displacy
ℹ Using GPU: 0
================================== Results ==================================
TOK 100.00
SPAN P 79.31
SPAN R 54.19
SPAN F 64.38
SPEED 3752
============================== SPANS (per type) ==============================
P R F
nam_loc_gpe_city 77.29 74.42 75.83
nam_pro_software 82.35 36.84 50.91
nam_org_institution 63.11 50.78 56.28
nam_liv_person 87.34 82.64 84.93
nam_loc_gpe_country 95.24 85.37 90.03
. . .
nam_pro 0.00 0.00 0.00
✔ Generated 25 parses as HTML
results/spacy/benchmark_acc_test_displacy
✔ Saved results to
results/spacy/benchmark_acc_test_metrics.json
```
Random doc.to_json() from test DocBin:
```python
{'ents': [{'end': 54, 'label': 'nam_adj_country', 'start': 44},
{'end': 83, 'label': 'nam_liv_person', 'start': 69},
{'end': 100, 'label': 'nam_pro_title_book', 'start': 86}],
'spans': {'sc': [{'end': 54,
'kb_id': '',
'label': 'nam_adj_country',
'start': 44},
{'end': 83,
'kb_id': '',
'label': 'nam_liv_person',
'start': 69},
{'end': 100,
'kb_id': '',
'label': 'nam_pro_title_book',
'start': 86}]},
'text': 'Niedawno czytał em nową książkę znakomitego szkockiego medioznawcy , '
'Briana McNaira - Cultural Chaos .',
'tokens': [{'end': 8, 'id': 0, 'start': 0},
{'end': 15, 'id': 1, 'start': 9},
{'end': 18, 'id': 2, 'start': 16},
{'end': 23, 'id': 3, 'start': 19},
{'end': 31, 'id': 4, 'start': 24},
{'end': 43, 'id': 5, 'start': 32},
{'end': 54, 'id': 6, 'start': 44},
{'end': 66, 'id': 7, 'start': 55},
{'end': 68, 'id': 8, 'start': 67},
{'end': 75, 'id': 9, 'start': 69},
{'end': 83, 'id': 10, 'start': 76},
{'end': 85, 'id': 11, 'start': 84},
{'end': 94, 'id': 12, 'start': 86},
{'end': 100, 'id': 13, 'start': 95},
{'end': 102, 'id': 14, 'start': 101}]}
```
<details><summary><b>Model config</b></summary>
<p>
```ini
[paths]
train = null
dev = null
vectors = null
init_tok2vec = null
[system]
gpu_allocator = "pytorch"
seed = 0
[nlp]
lang = "pl"
pipeline = ["transformer","spancat"]
batch_size = 128
disabled = []
before_creation = null
after_creation = null
after_pipeline_creation = null
tokenizer = {"@tokenizers":"spacy.Tokenizer.v1"}
[components]
[components.spancat]
factory = "spancat"
max_positive = null
scorer = {"@scorers":"spacy.spancat_scorer.v1"}
spans_key = "sc"
threshold = 0.5
[components.spancat.model]
@architectures = "spacy.SpanCategorizer.v1"
[components.spancat.model.reducer]
@layers = "spacy.mean_max_reducer.v1"
hidden_size = 128
[components.spancat.model.scorer]
@layers = "spacy.LinearLogistic.v1"
nO = null
nI = null
[components.spancat.model.tok2vec]
@architectures = "spacy-transformers.TransformerListener.v1"
grad_factor = 1.0
pooling = {"@layers":"reduce_mean.v1"}
upstream = "*"
[components.spancat.suggester]
@misc = "spacy.ngram_suggester.v1"
sizes = [1,2,3]
[components.transformer]
factory = "transformer"
max_batch_items = 4096
set_extra_annotations = {"@annotation_setters":"spacy-transformers.null_annotation_setter.v1"}
[components.transformer.model]
@architectures = "spacy-transformers.TransformerModel.v3"
name = "dkleczek/bert-base-polish-cased-v1"
mixed_precision = false
[components.transformer.model.get_spans]
@span_getters = "spacy-transformers.strided_spans.v1"
window = 128
stride = 96
[components.transformer.model.grad_scaler_config]
[components.transformer.model.tokenizer_config]
use_fast = true
[components.transformer.model.transformer_config]
[corpora]
[corpora.dev]
@readers = "spacy.Corpus.v1"
path = ${paths.dev}
max_length = 0
gold_preproc = false
limit = 0
augmenter = null
[corpora.train]
@readers = "spacy.Corpus.v1"
path = ${paths.train}
max_length = 0
gold_preproc = false
limit = 0
augmenter = null
[training]
accumulate_gradient = 3
dev_corpus = "corpora.dev"
train_corpus = "corpora.train"
seed = ${system.seed}
gpu_allocator = ${system.gpu_allocator}
dropout = 0.1
patience = 1600
max_epochs = 0
max_steps = 20000
eval_frequency = 200
frozen_components = []
annotating_components = []
before_to_disk = null
before_update = null
[training.batcher]
@batchers = "spacy.batch_by_padded.v1"
discard_oversize = true
size = 2000
buffer = 256
get_length = null
[training.logger]
@loggers = "spacy.ConsoleLogger.v1"
progress_bar = false
[training.optimizer]
@optimizers = "Adam.v1"
beta1 = 0.9
beta2 = 0.999
L2_is_weight_decay = true
L2 = 0.01
grad_clip = 1.0
use_averages = false
eps = 0.00000001
[training.optimizer.learn_rate]
@schedules = "warmup_linear.v1"
warmup_steps = 250
total_steps = 20000
initial_rate = 0.00005
[training.score_weights]
spans_sc_f = 1.0
spans_sc_p = 0.0
spans_sc_r = 0.0
[pretraining]
[initialize]
vectors = ${paths.vectors}
init_tok2vec = ${paths.init_tok2vec}
vocab_data = null
lookups = null
before_init = null
after_init = null
[initialize.components]
[initialize.tokenizer]
```
</p>
</details>
| closed | 2023-04-23T21:40:47Z | 2023-05-29T00:02:17Z | https://github.com/explosion/spaCy/issues/12566 | [
"bug",
"feat / cli",
"feat / spancat"
] | jamnicki | 3 |
ageitgey/face_recognition | python | 1,467 | Very slow. help me please | * face_recognition version: face-recognition 1.3.0
* Python version: 3.9.7
* Operating System: windows 11
### Description
I record the video in the browser from the webcam and send it to the Python Flask server. The video is 1 millisecond long. I also have a pickle file with face encodings for 6 people. I try to compare them against my 1 ms video, and it works, but for 6 people it takes 5 seconds, and I will eventually have 1000 people. How can I speed up the process?
| open | 2023-01-23T18:00:33Z | 2023-01-23T18:00:33Z | https://github.com/ageitgey/face_recognition/issues/1467 | [] | Shoichii | 0 |
aleju/imgaug | deep-learning | 716 | How to work with landmarks and bounding boxes of objects | Dear,
As the title says, I would like to augment data for objects that require both a bounding box and landmarks. There can be more than one object in an image!
Example:

Any sample code to deal with this?
Thanks in advance! | open | 2020-09-09T07:22:25Z | 2020-10-17T05:20:43Z | https://github.com/aleju/imgaug/issues/716 | [] | cannguyen275 | 6 |
TencentARC/GFPGAN | pytorch | 150 | How to run it on Windows? | Hi
Can anyone share how to run this on Windows?
Thank You | closed | 2022-01-22T14:34:43Z | 2023-05-17T19:02:05Z | https://github.com/TencentARC/GFPGAN/issues/150 | [] | rop12770 | 2 |
pallets-eco/flask-wtf | flask | 582 | flask-wtf i18n does not work without flask-babel | wtforms has no hard dependency on babel/flask-babel, and translations work out of the box when properly configured, e.g. `Form(flask.request.form, meta={"locales": ["fr"]})`
However, translations for a simple flask-wtf form with the same configuration won't work unless flask-babel is installed and initialized.
This behavior feels unexpected, and adds an unnecessary dependency on flask-babel for non-English applications.
This can be fixed by manually setting `WTF_I18N_ENABLED` to `False`. Maybe `WTF_I18N_ENABLED` should default to `True` only if flask-babel is installed and `False` otherwise?
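A hedged sketch of that proposed default — enable the flask-babel-backed translations only when flask-babel is actually importable (the function name is illustrative; this is not flask-wtf's current code):

```python
import importlib.util

def default_i18n_enabled() -> bool:
    # Proposed default for WTF_I18N_ENABLED: True only when flask-babel is
    # installed, so plain wtforms locale handling keeps working otherwise.
    return importlib.util.find_spec("flask_babel") is not None

print(default_i18n_enabled())
```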
### vanilla wtforms
assertions pass
```python
import flask
import wtforms
class Form(wtforms.Form):
email = wtforms.EmailField(validators=[wtforms.validators.Email()])
app = flask.Flask(__name__)
@app.route("/", methods=["GET", "POST"])
def index():
form = Form(flask.request.form, meta={"locales": ["fr"]})
form.validate()
return form.errors
res = app.test_client().post("/", data={"email": "invalid-email"})
assert "Invalid email address." not in res.json["email"]
assert "Adresse électronique non valide." in res.json["email"]
```
### flask-wtf without flask-babel
assertions fail
```python
import flask
import wtforms
import flask_wtf
class Form(flask_wtf.FlaskForm):
email = wtforms.EmailField(validators=[wtforms.validators.Email()])
app = flask.Flask(__name__)
app.config["WTF_CSRF_ENABLED"] = False
@app.route("/", methods=["GET", "POST"])
def index():
form = Form(flask.request.form, meta={"locales": ["fr"]})
form.validate()
return form.errors
res = app.test_client().post("/", data={"email": "invalid-email"})
assert "Invalid email address." not in res.json["email"]
assert "Adresse électronique non valide." in res.json["email"]
```
### flask-wtf with flask-babel
assertions pass
```python
import flask
import wtforms
import flask_wtf
import flask_babel
class Form(flask_wtf.FlaskForm):
email = wtforms.EmailField(validators=[wtforms.validators.Email()])
app = flask.Flask(__name__)
app.config["WTF_CSRF_ENABLED"] = False
babel = flask_babel.Babel()
babel.init_app(app, locale_selector=lambda: "fr")
@app.route("/", methods=["GET", "POST"])
def index():
form = Form(flask.request.form, meta={"locales": ["fr"]})
form.validate()
return form.errors
res = app.test_client().post("/", data={"email": "invalid-email"})
assert "Invalid email address." not in res.json["email"]
assert "Adresse électronique non valide." in res.json["email"]
``` | open | 2023-10-03T17:51:35Z | 2024-01-05T09:18:12Z | https://github.com/pallets-eco/flask-wtf/issues/582 | [
"bug",
"i18n"
] | azmeuk | 0 |
huggingface/datasets | numpy | 7,220 | Custom features not compatible with special encoding/decoding logic | ### Describe the bug
It is possible to register custom features using datasets.features.features.register_feature (https://github.com/huggingface/datasets/pull/6727)
However, such features are not compatible with `Features.encode_example`/`decode_example` if they require special encoding/decoding logic, because `encode_nested_example`/`decode_nested_example` checks whether the feature is in a fixed list of encodable types:
https://github.com/huggingface/datasets/blob/16a121d7821a7691815a966270f577e2c503473f/src/datasets/features/features.py#L1349
This prevents extending features to complex cases.
### Steps to reproduce the bug
```python
from datasets import Features

class ListOfStrs:
    def encode_example(self, value):
        if isinstance(value, str):
            return [value]
        else:
            return value

feats = Features(strlist=ListOfStrs())
assert feats.encode_example({"strlist": "a"})["strlist"] == feats["strlist"].encode_example("a")
```
### Expected behavior
Registered feature types should be encoded based on some property of the feature (e.g. requires_encoding)?
### Environment info
3.0.2 | open | 2024-10-11T19:20:11Z | 2024-11-08T15:10:58Z | https://github.com/huggingface/datasets/issues/7220 | [] | alex-hh | 2 |
cookiecutter/cookiecutter-django | django | 4,755 | Add a citation file | Hi all -- I am using this in academic software. I would like the cite the repository, which I can of course do, but I would also like to credit authors and maintainers more explicitly.
I suggest that it would be a good idea to create a citation file, and possibly add a zenodo config, to make it easier to appropriately cite/credit this excellent work in grants and papers. | closed | 2023-12-19T16:58:04Z | 2023-12-20T17:52:54Z | https://github.com/cookiecutter/cookiecutter-django/issues/4755 | [
"enhancement"
] | cmatKhan | 6 |
ivy-llc/ivy | numpy | 27,968 | Fix Ivy Failing Test: tensorflow - elementwise.maximum | closed | 2024-01-20T16:18:41Z | 2024-01-25T09:54:03Z | https://github.com/ivy-llc/ivy/issues/27968 | [
"Sub Task"
] | samthakur587 | 0 | |
vvbbnn00/WARP-Clash-API | flask | 201 | [Feature request] Suggest adding one-click deployment | One-click deployment to platforms such as Vercel or Railway. | open | 2024-05-13T07:01:15Z | 2024-05-13T18:05:05Z | https://github.com/vvbbnn00/WARP-Clash-API/issues/201 | [
"enhancement"
] | sunzhuo | 0 |
ipython/ipython | jupyter | 14,355 | Word-based backward deletion? | Hi guys! I've been using your beautiful shell for like 10 years now. 🤗
I'm wondering whether you already support this and I just could not find it in the docs, or, if you don't, whether you could give me some directions on where to look in the sources and how you would implement it:

I really want to be able to configure IPython with smart word-based backward deletion enabled, the same way `zsh` does it. Compared to shells like bash and sh (and IPython today), pressing Ctrl+W in zsh deletes characters back to the previous word boundary (at least any of `.`, `/`, or `-`), not back to the previous whitespace.

This makes it really easy to delete just the previous word, rather than several of them until a whitespace is reached.
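The zsh-style behavior being asked for can be sketched as a plain function over the text before the cursor (the break-character set below is taken from the description above; in zsh itself it is configurable via `WORDCHARS`):

```python
WORD_BREAKS = set(" /.-")

def backward_kill_word(line: str) -> str:
    """Delete backward to the previous word boundary, zsh-style."""
    i = len(line)
    # first skip any break characters sitting right behind the cursor
    while i > 0 and line[i - 1] in WORD_BREAKS:
        i -= 1
    # then delete word characters up to the next break character
    while i > 0 and line[i - 1] not in WORD_BREAKS:
        i -= 1
    return line[:i]

print(backward_kill_word("pip install my-package"))  # pip install my-
print(backward_kill_word("/usr/local/bin"))          # /usr/local/
```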
Thank you! | closed | 2024-02-25T08:10:10Z | 2024-02-25T09:18:10Z | https://github.com/ipython/ipython/issues/14355 | [] | scythargon | 1 |
vitalik/django-ninja | rest-api | 377 | list value in form data is not parsed properly (it always returns list with length 1)[BUG] | Hi! Thanks for the amazing framework!
I think I found a possible bug and the fix please have a look :)
## Context
From the tutorial... I added a simple form data
```py
from django.http import HttpRequest
from ninja import NinjaAPI, Schema, Form
class MyModel(Schema):
my_list: list[str]
api = NinjaAPI()
@api.post("/hello")
async def hello(request: HttpRequest, my_model: MyModel = Form(...)):
return 200, f"{my_model.my_list}"
```
swagger UI renders like below

I added 'a' and 'b' to the list and sent a request.
## What I expected
in the `hello` view, `my_model.my_list` should be `['a', 'b']`
## What actually happened
But `my_model.my_list` is `['a,b']`; the list is not parsed properly.
## Possible Fix
How about changing `ninja.parser.Parser.parse_querydict` line 24 to...
`result[key] = data.getlist(key)` -> `result[key] = data[key].split(',')`
Like this? This fixes the issue.
```py
class Parser:
"Default json parser"
def parse_body(self, request: HttpRequest) -> DictStrAny:
return cast(DictStrAny, json.loads(request.body))
def parse_querydict(
self, data: MultiValueDict, list_fields: List[str], request: HttpRequest
) -> DictStrAny:
result: DictStrAny = {}
for key in data.keys():
if key in list_fields:
result[key] = data.getlist(key)
else:
result[key] = data[key]
return result
```
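As a side note, the underlying parsing difference can be illustrated with the standard library alone (assuming, as observed above, that Swagger UI submits the list as one comma-joined value rather than repeating the key):

```python
from urllib.parse import parse_qs

# one comma-joined value: getlist-style access sees a single element
assert parse_qs("my_list=a,b") == {"my_list": ["a,b"]}

# repeated key: a genuine two-element list
assert parse_qs("my_list=a&my_list=b") == {"my_list": ["a", "b"]}
```

One caveat worth flagging about the proposed fix: splitting on `','` would also split values that legitimately contain commas, whereas `getlist` only separates repeated keys.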
**Versions (please complete the following information):**
- Python version: 3.9
- Django version: 4.0.2
- Django-Ninja version: 0.17.0
| closed | 2022-02-28T14:00:12Z | 2024-10-20T11:49:29Z | https://github.com/vitalik/django-ninja/issues/377 | [] | triumph1 | 5 |
Anjok07/ultimatevocalremovergui | pytorch | 1,013 | Can anybody help me? | My machine info:
Intel macOS 12.7.1
Launching UVR fails; the error message is as follows:
Traceback (most recent call last):
File "UVR.py", line 8, in <module>
File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
File "<frozen importlib._bootstrap>", line 1147, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
File "PyInstaller/loader/pyimod02_importers.py", line 391, in exec_module
File "librosa/__init__.py", line 209, in <module>
File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
File "<frozen importlib._bootstrap>", line 1147, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
File "PyInstaller/loader/pyimod02_importers.py", line 391, in exec_module
File "librosa/core/__init__.py", line 5, in <module>
File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
File "<frozen importlib._bootstrap>", line 1147, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
File "PyInstaller/loader/pyimod02_importers.py", line 391, in exec_module
File "librosa/core/convert.py", line 7, in <module>
File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
File "<frozen importlib._bootstrap>", line 1147, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
File "PyInstaller/loader/pyimod02_importers.py", line 391, in exec_module
File "librosa/core/notation.py", line 8, in <module>
File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
File "<frozen importlib._bootstrap>", line 1147, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
File "PyInstaller/loader/pyimod02_importers.py", line 391, in exec_module
File "librosa/util/__init__.py", line 77, in <module>
File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
File "<frozen importlib._bootstrap>", line 1147, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
File "PyInstaller/loader/pyimod02_importers.py", line 391, in exec_module
File "librosa/util/utils.py", line 1878, in <module>
File "numba/core/decorators.py", line 234, in wrapper
File "numba/core/dispatcher.py", line 863, in enable_caching
File "numba/core/caching.py", line 601, in __init__
File "numba/core/caching.py", line 337, in __init__
RuntimeError: cannot cache function '__shear_dense': no locator available for file 'librosa/util/utils.py'
[906] Failed to execute script 'UVR' due to unhandled exception: cannot cache function '__shear_dense': no locator available for file 'librosa/util/utils.py'
[906] Traceback:
Traceback (most recent call last):
File "UVR.py", line 8, in <module>
File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
File "<frozen importlib._bootstrap>", line 1147, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
File "PyInstaller/loader/pyimod02_importers.py", line 391, in exec_module
File "librosa/__init__.py", line 209, in <module>
File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
File "<frozen importlib._bootstrap>", line 1147, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
File "PyInstaller/loader/pyimod02_importers.py", line 391, in exec_module
File "librosa/core/__init__.py", line 5, in <module>
File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
File "<frozen importlib._bootstrap>", line 1147, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
File "PyInstaller/loader/pyimod02_importers.py", line 391, in exec_module
File "librosa/core/convert.py", line 7, in <module>
File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
File "<frozen importlib._bootstrap>", line 1147, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
File "PyInstaller/loader/pyimod02_importers.py", line 391, in exec_module
File "librosa/core/notation.py", line 8, in <module>
File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
File "<frozen importlib._bootstrap>", line 1147, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
File "PyInstaller/loader/pyimod02_importers.py", line 391, in exec_module
File "librosa/util/__init__.py", line 77, in <module>
File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
File "<frozen importlib._bootstrap>", line 1147, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
File "PyInstaller/loader/pyimod02_importers.py", line 391, in exec_module
File "librosa/util/utils.py", line 1878, in <module>
File "numba/core/decorators.py", line 234, in wrapper
File "numba/core/dispatcher.py", line 863, in enable_caching
File "numba/core/caching.py", line 601, in __init__
File "numba/core/caching.py", line 337, in __init__
RuntimeError: cannot cache function '__shear_dense': no locator available for file 'librosa/util/utils.py'
What can I do? | open | 2023-12-07T15:09:19Z | 2023-12-08T03:39:00Z | https://github.com/Anjok07/ultimatevocalremovergui/issues/1013 | [] | Darling-Lee | 2 |
AUTOMATIC1111/stable-diffusion-webui | deep-learning | 16,860 | [Bug]: When setting the LORA directory to a network storage device, the tree view disappears | ### Checklist
- [ ] The issue exists after disabling all extensions
- [ ] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [x] The issue exists in the current version of the webui
- [x] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
I used the --lora-dir option to point the LORA directory to my network storage. The files and images can be read normally, but the tree view on the left has disappeared.
To troubleshoot this, I conducted multiple tests, switched browsers, and even tried using the mklink command, but all attempts failed.
*When the files are stored locally: Both Dirs and Tree work normally.
*When the files are stored on the network storage: Dirs work normally, but Tree is abnormal.
No error messages were found in the CMD backend logs.
My WEBUI version:
*Version: v1.10.1
*Python: 3.11.9
*Torch: 2.5.1+cu124
*Xformers: 0.0.28.post3
*Gradio: 3.41.2
*Checkpoint: e6bedccf80
The above was translated by ChatGPT. If anything is unclear, I will provide further clarification.

### Steps to reproduce the problem
1.Store the files on a network drive.
2.Use --lora-dir to change the directory or create a symbolic link with mklink.
3.Set the preview mode to TREE.
### What should have happened?
It should display normally, just like when the files are stored locally.
### What browsers do you use to access the UI ?
Google Chrome, Microsoft Edge
### Sysinfo
[sysinfo-2025-02-22-15-20.json](https://github.com/user-attachments/files/18923600/sysinfo-2025-02-22-15-20.json)
### Console logs
```Shell
IIB Database file has been successfully backed up to the backup folder.
Startup time: 44.1s (list SD models: 7.5s, load scripts: 10.1s, create ui: 14.2s, gradio launch: 6.3s, add APIs: 2.0s, app_started_callback: 3.9s).
Python 3.11.9 (tags/v3.11.9:de54cf5, Apr 2 2024, 10:12:12) [MSC v.1938 64 bit (AMD64)]
Version: v1.10.1
Commit hash: 82a973c04367123ae98bd9abdf80d9eda9b910e2
Launching Web UI with arguments: --medvram-sdxl --theme dark --port 58888 --xformers --xformers-flash-attention --api --autolaunch --listen --gradio-auth outside:a0975655695 --skip-python-version-check --lora-dir Z:Lora_nas
CHv1.8.13: Get Custom Model Folder
Tag Autocomplete: Could not locate model-keyword extension, Lora trigger word completion will be limited to those added through the extra networks menu.
[-] ADetailer initialized. version: 24.11.1, num models: 17
ControlNet preprocessor location: D:\sd-webui-aki\sd-webui-aki-v4.10\extensions\sd-webui-controlnet\annotator\downloads
2025-02-22 23:29:25,939 - ControlNet - INFO - ControlNet v1.1.455
Secret key loaded successfully.
[sd-webui-pnginfo-injection] Extension init
sd-webui-prompt-all-in-one background API service started successfully.
Forge: False, reForge: False
Loading weights [e6bedccf80] from D:\sd-webui-aki\sd-webui-aki-v4.10\models\Stable-diffusion\Illustrious\asyncsMIXILLUSTRIOUS_ilV30.safetensors
CHv1.8.13: Set Proxy:
Creating model from config: D:\sd-webui-aki\sd-webui-aki-v4.10\repositories\generative-models\configs\inference\sd_xl_base.yaml
Applying attention optimization: xformers... done.
Model loaded in 29.2s (load weights from disk: 0.7s, create model: 2.5s, apply weights to model: 20.5s, apply half(): 0.1s, hijack: 0.6s, load textual inversion embeddings: 2.9s, calculate empty prompt: 1.8s).
2025-02-22 23:30:11,470 - ControlNet - INFO - ControlNet UI callback registered.
Running on local URL: http://0.0.0.0:58888
To create a public link, set `share=True` in `launch()`.
IIB Database file has been successfully backed up to the backup folder.
Startup time: 143.3s (prepare environment: 38.7s, import torch: 13.3s, import gradio: 3.7s, setup paths: 2.9s, initialize shared: 0.5s, other imports: 1.4s, list SD models: 0.1s, load scripts: 26.1s, create ui: 43.6s, gradio launch: 7.3s, add APIs: 1.5s, app_started_callback: 4.0s).
```
### Additional information
_No response_ | closed | 2025-02-22T15:35:46Z | 2025-02-23T15:35:03Z | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/16860 | [
"bug-report"
] | carey6409 | 5 |
horovod/horovod | pytorch | 4,069 | Unable to install horovod on aarch64 platform either in host or container | **Environment:**
1. Framework: (TensorFlow, Keras, PyTorch, MXNet) : All
2. Framework version: Tensorrt-llm
3. Horovod version: 0.28.1
5. CUDA version: 12.4
6. NCCL version: 2.21.5-1+cuda12.4
7. Python version: 3.10
10. OS and version: Ubuntu 22.04
11. GCC version: 4:11.2.0
12. CMake version: 3.29.2
**Bug report:**
Unable to install horovod on the aarch64 platform, either on the host or in a container (tensorrt_llm).
I also tried building from source; the issue is the same.
Error Log:
local/lib/python3.10/dist-packages/cmake/data/bin/cmake -E cmake_link_script CMakeFiles/tensorflow.dir/link.txt --verbose=1
/usr/bin/c++ -fPIC -I/usr/local/lib/python3.10/dist-packages/tensorflow/include -D_GLIBCXX_USE_CXX11_ABI=1 --std=c++17 -DEIGEN_MAX_ALIGN_BYTES=64
-pthread -fPIC -Wall -ftree-vectorize -O3 -g -DNDEBUG -Wl,--version-script=/tmp/pip-install-7munzi1e/horovod_23bdb5121532447da7226726ee7b472a/horovod.l
ds -Wl,-Bsymbolic-functions -Wl,-z,relro,-z,now -shared -Wl,-soname,mpi_lib.cpython-310-aarch64-linux-gnu.so -o /tmp/pip-install-7munzi1e/horovod_23bdb51
21532447da7226726ee7b472a/build/lib.linux-aarch64-cpython-310/horovod/tensorflow/mpi_lib.cpython-310-aarch64-linux-gnu.so CMakeFiles/tensorflow.dir/__/co
mmon/common.cc.o CMakeFiles/tensorflow.dir/__/common/controller.cc.o CMakeFiles/tensorflow.dir/__/common/fusion_buffer_manager.cc.o CMakeFiles/tensorflow
.dir/__/common/group_table.cc.o CMakeFiles/tensorflow.dir/__/common/half.cc.o CMakeFiles/tensorflow.dir/__/common/logging.cc.o CMakeFiles/tensorflow.dir/
__/common/message.cc.o CMakeFiles/tensorflow.dir/__/common/operations.cc.o CMakeFiles/tensorflow.dir/__/common/parameter_manager.cc.o CMakeFiles/tensorfl
ow.dir/__/common/process_set.cc.o CMakeFiles/tensorflow.dir/__/common/response_cache.cc.o CMakeFiles/tensorflow.dir/__/common/stall_inspector.cc.o CMakeF
iles/tensorflow.dir/__/common/thread_pool.cc.o CMakeFiles/tensorflow.dir/__/common/timeline.cc.o CMakeFiles/tensorflow.dir/__/common/tensor_queue.cc.o CM
akeFiles/tensorflow.dir/__/common/ops/collective_operations.cc.o CMakeFiles/tensorflow.dir/__/common/ops/operation_manager.cc.o CMakeFiles/tensorflow.dir
/__/common/optim/bayesian_optimization.cc.o CMakeFiles/tensorflow.dir/__/common/optim/gaussian_process.cc.o CMakeFiles/tensorflow.dir/__/common/utils/env
_parser.cc.o CMakeFiles/tensorflow.dir/__/common/mpi/mpi_context.cc.o CMakeFiles/tensorflow.dir/__/common/mpi/mpi_controller.cc.o CMakeFiles/tensorflow.d
ir/__/common/ops/mpi_operations.cc.o CMakeFiles/tensorflow.dir/__/common/ops/adasum/adasum_mpi.cc.o CMakeFiles/tensorflow.dir/__/common/ops/adasum_mpi_op
erations.cc.o CMakeFiles/tensorflow.dir/__/common/gloo/gloo_context.cc.o CMakeFiles/tensorflow.dir/__/common/gloo/gloo_controller.cc.o CMakeFiles/tensorf
low.dir/__/common/gloo/http_store.cc.o CMakeFiles/tensorflow.dir/__/common/gloo/memory_store.cc.o CMakeFiles/tensorflow.dir/__/common/ops/gloo_operations
.cc.o CMakeFiles/tensorflow.dir/mpi_ops.cc.o CMakeFiles/tensorflow.dir/xla_mpi_ops.cc.o -Wl,-rpath,/opt/hpcx/ompi/lib /opt/hpcx/ompi/lib/libmpi.so -L/us
r/local/lib/python3.10/dist-packages/tensorflow -l:libtensorflow_framework.so.2 /usr/local/lib/python3.10/dist-packages/tensorflow/python/_pywrap_tensorf
low_internal.so /usr/local/lib/python3.10/dist-packages/tensorflow/libtensorflow_cc.so.2 ../../third_party/compatible17_gloo/gloo/libcompatible17_gloo.a
/opt/hpcx/ompi/lib/libmpi.so -lpthread
gmake[2]: Leaving directory '/tmp/pip-install-7munzi1e/horovod_23bdb5121532447da7226726ee7b472a/build/temp.linux-aarch64-cpython-310/RelWithDebInfo
'
[ 97%] Built target tensorflow
gmake[1]: Leaving directory '/tmp/pip-install-7munzi1e/horovod_23bdb5121532447da7226726ee7b472a/build/temp.linux-aarch64-cpython-310/RelWithDebInfo
'
gmake: *** [Makefile:136: all] Error 2
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "<pip-setuptools-caller>", line 34, in <module>
File "/tmp/pip-install-7munzi1e/horovod_23bdb5121532447da7226726ee7b472a/setup.py", line 213, in <module>
setup(name='horovod',
File "/usr/local/lib/python3.10/dist-packages/setuptools/__init__.py", line 111, in setup
return distutils.core.setup(**attrs)
File "/usr/local/lib/python3.10/dist-packages/setuptools/_distutils/core.py", line 184, in setup
return run_commands(dist)
File "/usr/local/lib/python3.10/dist-packages/setuptools/_distutils/core.py", line 200, in run_commands
dist.run_commands()
File "/usr/local/lib/python3.10/dist-packages/setuptools/_distutils/dist.py", line 964, in run_commands
File "/usr/local/lib/python3.10/dist-packages/setuptools/_distutils/dist.py", line 964, in run_commands
self.run_command(cmd)
File "/usr/local/lib/python3.10/dist-packages/setuptools/dist.py", line 948, in run_command
super().run_command(command)
File "/usr/local/lib/python3.10/dist-packages/setuptools/_distutils/dist.py", line 983, in run_command
cmd_obj.run()
File "/usr/local/lib/python3.10/dist-packages/setuptools/command/bdist_wheel.py", line 384, in run
self.run_command("build")
File "/usr/local/lib/python3.10/dist-packages/setuptools/_distutils/cmd.py", line 316, in run_command
self.distribution.run_command(command)
File "/usr/local/lib/python3.10/dist-packages/setuptools/dist.py", line 948, in run_command
super().run_command(command)
File "/usr/local/lib/python3.10/dist-packages/setuptools/_distutils/dist.py", line 983, in run_command
cmd_obj.run()
File "/usr/local/lib/python3.10/dist-packages/setuptools/_distutils/command/build.py", line 135, in run
self.run_command(cmd_name)
File "/usr/local/lib/python3.10/dist-packages/setuptools/_distutils/cmd.py", line 316, in run_command
self.distribution.run_command(command)
File "/usr/local/lib/python3.10/dist-packages/setuptools/dist.py", line 948, in run_command
super().run_command(command)
File "/usr/local/lib/python3.10/dist-packages/setuptools/_distutils/dist.py", line 983, in run_command
cmd_obj.run()
File "/usr/local/lib/python3.10/dist-packages/setuptools/command/build_ext.py", line 96, in run
_build_ext.run(self)
File "/usr/local/lib/python3.10/dist-packages/setuptools/_distutils/command/build_ext.py", line 359, in run
self.build_extensions()
File "/tmp/pip-install-7munzi1e/horovod_23bdb5121532447da7226726ee7b472a/setup.py", line 145, in build_extensions
subprocess.check_call(command, cwd=cmake_build_dir)
File "/usr/lib/python3.10/subprocess.py", line 369, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['cmake', '--build', '.', '--config', 'RelWithDebInfo', '--', '-j8', 'VERBOSE=1']' returned non-zero exit status 2.
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for horovod
Running setup.py clean for horovod
Failed to build horovod
ERROR: ERROR: Failed to build installable wheels for some pyproject.toml based projects (horovod)
Please let me know if you need any additional information | open | 2024-08-21T01:59:59Z | 2024-10-18T09:29:04Z | https://github.com/horovod/horovod/issues/4069 | [
"bug"
] | rajeshitshoulders | 1 |
tfranzel/drf-spectacular | rest-api | 613 | How to extend action with several methods using OpenApiViewExtension? | I currently have an action on a view that supports GET, POST and DELETE. How can I extend its schema using the OpenApiViewExtension class?
```
class TestAPISchema(OpenApiViewExtension):
target_class = "test.views.TestAPI"
def view_replacement(self):
@extend_schema_view(
test_action=extend_schema(
....
some params for GET Method
....
),
test_action=extend_schema(
....
        some params for POST method
....
),
)
```
I can't add the same action keyword twice because the keyword arguments form a dictionary. I'm not sure how to resolve this.
| closed | 2021-12-15T14:32:29Z | 2022-02-07T11:14:50Z | https://github.com/tfranzel/drf-spectacular/issues/613 | [] | MatejMijoski | 2 |
ultrafunkamsterdam/undetected-chromedriver | automation | 1,417 | How to use socks5 with auth proxy with undetected_chromedriver? |
I have been trying for several days to find a way to make it work, but nothing has worked for me.
deeppavlov/DeepPavlov | tensorflow | 717 | Getting Invalid syntax Error when installing en_ranker_tfidf_wiki | When running the following command on Linux (Python 3.5):
`python -m deeppavlov install en_ranker_tfidf_wiki`
I get the following error:
```
Traceback (most recent call last):
File "/usr/lib/python3.5/runpy.py", line 183, in _run_module_as_main
mod_name, mod_spec, code = _get_module_details(mod_name, _Error)
File "/usr/lib/python3.5/runpy.py", line 142, in _get_module_details
return _get_module_details(pkg_main_name, error)
File "/usr/lib/python3.5/runpy.py", line 109, in _get_module_details
__import__(pkg_name)
File "/home/pythonfreak12/venv/lib/python3.5/site-packages/deeppavlov/__init__.py", line 19, in <module>
from .configs import configs
File "/home/pythonfreak12/venv/lib/python3.5/site-packages/deeppavlov/configs/__init__.py", line 54
return f'Struct({repr(self._asdict())})'
^
SyntaxError: invalid syntax
```
I already tried changing the quotation marks in case this was an f-string-related error, but unfortunately that didn't help.
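For reference, the offending line uses an f-string, which is a Python 3.6+ feature, so on Python 3.5 it is a genuine `SyntaxError` regardless of the quote characters. A 3.5-compatible equivalent of that line would be the sketch below (the `_asdict` data is a stand-in; the supported fix is running DeepPavlov on Python 3.6+):

```python
class Struct:
    def _asdict(self):
        return {"a": 1}  # stand-in for the real namedtuple data

    def __repr__(self):
        # Python 3.5-compatible rewrite of:
        #   return f'Struct({repr(self._asdict())})'
        return 'Struct({})'.format(repr(self._asdict()))

print(repr(Struct()))  # Struct({'a': 1})
```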
| closed | 2019-02-19T12:27:53Z | 2019-04-15T09:07:51Z | https://github.com/deeppavlov/DeepPavlov/issues/717 | [] | Benjamin-3281 | 2 |
deepfakes/faceswap | machine-learning | 728 | ImportError: Module use of python37.dll conflicts with this version of Python. | **Describe the bug**
Some python dll import error something something
**To Reproduce**
Steps to reproduce the behavior:
1. Run setup.exe
2. Error
**Expected behavior**
Install w/o error
**Installer Details**
```
(check) Git installed: git version 2.20.1.windows.1
(check) MiniConda installed: conda 4.5.12
(check) CPU Supports AVX Instructions
(check) CPU Supports SSE4 Instructions
(check) Completed check for installed applications
All Prerequisites installed.
Downloading Faceswap...
Cloning into 'C:\Users\Cole\faceswap'...
Creating Conda Virtual Environment...
Remove all packages in environment C:\Users\Cole\MiniConda3\envs\faceswap:
## Package Plan ##
environment location: C:\Users\Cole\MiniConda3\envs\faceswap
The following packages will be REMOVED:
certifi: 2019.3.9-py36_0
pip: 19.1.1-py36_0
python: 3.6.8-h9f7ef89_7
setuptools: 41.0.1-py36_0
sqlite: 3.28.0-he774522_0
vc: 14.1-h0510ff6_4
vs2015_runtime: 14.15.26706-h3a45250_4
wheel: 0.33.2-py36_0
wincertstore: 0.2-py36h7fe50ca_0
Solving environment: ...working... done
## Package Plan ##
environment location: C:\Users\Cole\MiniConda3\envs\faceswap
added / updated specs:
- python=3.6
The following NEW packages will be INSTALLED:
certifi: 2019.3.9-py36_0
pip: 19.1.1-py36_0
python: 3.6.8-h9f7ef89_7
setuptools: 41.0.1-py36_0
sqlite: 3.28.0-he774522_0
vc: 14.1-h0510ff6_4
vs2015_runtime: 14.15.26706-h3a45250_4
wheel: 0.33.2-py36_0
wincertstore: 0.2-py36h7fe50ca_0
Preparing transaction: ...working... done
Verifying transaction: ...working... done
Executing transaction: ...working... done
==> WARNING: A newer version of conda exists. <==
current version: 4.5.12
latest version: 4.6.14
Please update conda by running
$ conda update -n base -c defaults conda
#
# To activate this environment, use
#
# $ conda activate faceswap
#
# To deactivate an active environment, use
#
# $ conda deactivate
Setting up FaceSwap Environment... This may take a while
Traceback (most recent call last):
File "C:\Users\Cole\faceswap\setup.py", line 5, in <module>
import ctypes
File "C:\Users\Cole\MiniConda3\envs\faceswap\lib\ctypes\__init__.py", line 7, in <module>
from _ctypes import Union, Structure, Array
ImportError: Module use of python37.dll conflicts with this version of Python.
Error Setting up Faceswap
Install Aborted
```
**Desktop**
- OS: W10Pro B1803
- All latest cudnn cuda nvidia anything installed
-gtx 1080
**Additional context**
uhhhhhhhhhhhhhhhhhhhhhhhhhhhhh
| closed | 2019-05-14T05:00:38Z | 2019-05-14T10:52:47Z | https://github.com/deepfakes/faceswap/issues/728 | [] | WSADKeysGaming | 1 |
CorentinJ/Real-Time-Voice-Cloning | deep-learning | 653 | Cannot train vocoder for single-speaker fine tuning; vocoder_preprocess is not generating any mel gtas | I know virtually nothing about programming and am trying to use the tool. I got the voice cloning to work, but I would like to go further and fine-tune a model based on #437. I'm only using a small number of steps just to familiarise myself with the process before committing to using a larger set of utterances and training steps. I made a dataset of 50 utterances, trained the synthesizer to 50 steps as a test run, and tried it out on the toolbox. The positive difference compared to the base pretrained model is staggering and very noticeable!
I want to train the vocoder, but I am stuck on this step:
"Stop training once satisfied with the resulting model. At this point you can fine-tune your vocoder model. First generate the training data for the vocoder" So, I use the following commands:
python vocoder_preprocess.py synthesizer/saved_models/logs-singlespeaker/datasets_root/SV2TTS/synthesizer (This command cannot seem to find train.txt)
python vocoder_preprocess.py synthesizer/saved_models/logs-singlespeaker/datasets_root/ (This one works, but it does not seem to generate any mel gta files)
The command runs, but seems to generate no audios or mel gtas in the resulting vocoder folder. PowerShell says this:
Starting Synthesis
0it [00:00, ?it/s]
Synthesized mel spectrograms at synthesizer/saved_models/logs-singlespeaker/datasets_root/SV2TTS\vocoder\mels_gta
Unsurprisingly, I can't train the vocoder with vocoder_train.py because there are no audio or mel gta files, or entries in synthesizer.txt.
What am I doing wrong? What should do to make sure that the mel gtas are generated?
Thanks for this awesome project.
| closed | 2021-02-10T04:23:15Z | 2021-02-17T20:44:55Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/653 | [] | StElysse | 9 |
pallets/flask | flask | 5,653 | Flash message with Markup fails | Upgrading my platform from Python 3.8 + Flask 3.0.3 + Flask-Session 0.8.0 with redis backend, to Python 3.11 + Flask 3.1.0 + Flask-Session 0.8.0 with redis backend. Same user code.
Issue: a fancy flash message breaks on the new platform (it works fine on the old platform).
Flash message:
`flash(Markup('press the play button <i class="bi-play-btn-fill black"></i> below'), 'info')`
Error:
```
[2024-12-10 19:01:28,998] ERROR in base: Failed to serialize session data: Encoding objects of type Markup is unsupported
[2024-12-10 19:01:28,998] ERROR in app: Exception on / [POST]
Traceback (most recent call last):
File "/var/www/lib/python3.11/site-packages/flask/app.py", line 1511, in wsgi_app
response = self.full_dispatch_request()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/var/www/lib/python3.11/site-packages/flask/app.py", line 920, in full_dispatch_request
return self.finalize_request(rv)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/var/www/lib/python3.11/site-packages/flask/app.py", line 941, in finalize_request
response = self.process_response(response)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/var/www/lib/python3.11/site-packages/flask/app.py", line 1322, in process_response
self.session_interface.save_session(self, ctx.session, response)
File "/var/www/lib/python3.11/site-packages/flask_session/base.py", line 305, in save_session
self._upsert_session(app.permanent_session_lifetime, session, store_id)
File "/var/www/lib/python3.11/site-packages/flask_session/redis/redis.py", line 78, in _upsert_session
serialized_session_data = self.serializer.encode(session)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/var/www/lib/python3.11/site-packages/flask_session/base.py", line 132, in encode
return self.encoder.encode(dict(session))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: Encoding objects of type Markup is unsupported
[2024-12-10 19:01:29,002] ERROR in base: Failed to serialize session data: Encoding objects of type Markup is unsupported
[2024-12-10 19:01:29,002] ERROR in app: Request finalizing failed with an error while handling an error
Traceback (most recent call last):
File "/var/www/lib/python3.11/site-packages/flask/app.py", line 1511, in wsgi_app
response = self.full_dispatch_request()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/var/www/lib/python3.11/site-packages/flask/app.py", line 920, in full_dispatch_request
return self.finalize_request(rv)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/var/www/lib/python3.11/site-packages/flask/app.py", line 941, in finalize_request
response = self.process_response(response)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/var/www/lib/python3.11/site-packages/flask/app.py", line 1322, in process_response
self.session_interface.save_session(self, ctx.session, response)
File "/var/www/lib/python3.11/site-packages/flask_session/base.py", line 305, in save_session
self._upsert_session(app.permanent_session_lifetime, session, store_id)
File "/var/www/lib/python3.11/site-packages/flask_session/redis/redis.py", line 78, in _upsert_session
serialized_session_data = self.serializer.encode(session)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/var/www/lib/python3.11/site-packages/flask_session/base.py", line 132, in encode
return self.encoder.encode(dict(session))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: Encoding objects of type Markup is unsupported
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/var/www/lib/python3.11/site-packages/flask/app.py", line 941, in finalize_request
response = self.process_response(response)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/var/www/lib/python3.11/site-packages/flask/app.py", line 1322, in process_response
self.session_interface.save_session(self, ctx.session, response)
File "/var/www/lib/python3.11/site-packages/flask_session/base.py", line 305, in save_session
self._upsert_session(app.permanent_session_lifetime, session, store_id)
File "/var/www/lib/python3.11/site-packages/flask_session/redis/redis.py", line 78, in _upsert_session
serialized_session_data = self.serializer.encode(session)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/var/www/lib/python3.11/site-packages/flask_session/base.py", line 132, in encode
return self.encoder.encode(dict(session))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: Encoding objects of type Markup is unsupported
``` | closed | 2024-12-10T18:08:02Z | 2024-12-26T00:10:42Z | https://github.com/pallets/flask/issues/5653 | [] | jeroen-80 | 1 |
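The traceback bottoms out in flask-session's serializer, not in Flask itself: flask-session 0.8 appears to encode the session with msgspec, which refuses `str` subclasses such as `markupsafe.Markup`. A possible workaround (an assumption, not something stated in this issue) is to flash a plain `str` and re-mark it as safe at render time. The sketch below uses a stand-in `str` subclass so it runs without Flask or markupsafe installed:

```python
# Stand-in for markupsafe.Markup (which is a str subclass); hypothetical,
# used only so this sketch has no third-party dependencies.
class FakeMarkup(str):
    pass

msg = FakeMarkup('press the play button <i class="bi-play-btn-fill"></i> below')

# str() on a str subclass returns an exact str, which strict serializers accept.
plain = str(msg)
print(type(plain) is str)   # True
print(plain == msg)         # True -- the text itself is unchanged
```

On the template side the stored text can be re-marked as trusted with Jinja's built-in `safe` filter (`{{ message | safe }}`), or wrapped back into `Markup` in the view before rendering.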
autokey/autokey | automation | 632 | AutoKey process not killed by panel icon context menu Quit option | ## Classification:
Bug
## Reproducibility:
Always
## Autokey version:
autokey-gtk 0.96.0-beta.9
Used GUI:
Gtk
Installed via:
pip3
Linux distribution:
- Kubuntu 20.04 LTS running inside of VirtualBox
- KDE Plasma Version 5.18.5
- KDE Frameworks Version: 5.68.0
- Qt Version: 5.12.8
- Kernel Version: 5.11.0-40-generic
- OS Type: 64-bit
- Memory: 3.2 GiB of RAM
- Processor: 1 x Intel Core i5-8600K CPU @ 3.60 GHz
## Summary:
The AutoKey process is not killed by the panel icon's context menu "Quit" option.
## Steps to reproduce:
1. Launch autokey-gtk.
2. Check your system for any autokey processes: ```pgrep -c "autokey"```
- Expected result: Result should be 1.
- Actual result: Result is 1.
3. Right-click the AutoKey icon in the panel and choose "Quit".
4. Check your system for any autokey processes: ```pgrep -c "autokey"```
- Expected result: Result should be 0.
- Actual result: Result is 1.
## Expected result:
Multiple expected results. See above.
## Actual result:
Multiple actual results. See above.
## Notes:
This issue doesn't affect closing AutoKey from the "Quit" option inside the "File" menu in the AutoKey main window. That one properly kills the AutoKey process.
| open | 2021-11-19T21:16:38Z | 2022-05-10T06:09:28Z | https://github.com/autokey/autokey/issues/632 | [
"bug",
"autokey-gtk",
"user interface"
] | Elliria | 26 |
InstaPy/InstaPy | automation | 6,582 | Error when trying to like/comment. | Hey, InstaPy worked flawlessly for the last few weeks. Now it has updated automatically, and I get this message before it stops:
```
--- Logging error ---
Traceback (most recent call last):
File "C:\Users\xDDra\AppData\Local\Programs\Python\Python39\lib\logging\__init__.py", line 1082, in emit
stream.write(msg + self.terminator)
File "C:\Users\xDDra\AppData\Local\Programs\Python\Python39\lib\encodings\cp1252.py", line 19, in encode
return codecs.charmap_encode(input,self.errors,encoding_table)[0]
UnicodeEncodeError: 'charmap' codec can't encode character '\u2728' in position 419: character maps to <undefined>
Call stack:
File "C:\Users\xDDra\Downloads\instapy-quickstart-master\quickstart.py", line 124, in <module>
session.like_by_tags(my_hashtags, amount=90, media=None)
File "C:\Users\xDDra\AppData\Local\Programs\Python\Python39\lib\site-packages\instapy\instapy.py", line 1977, in like_by_tags
inappropriate, user_name, is_video, reason, scope = check_link(
File "C:\Users\xDDra\AppData\Local\Programs\Python\Python39\lib\site-packages\instapy\like_util.py", line 617, in check_link
logger.info("post_page: {}".format(post_page))
Message: "post_page: {'items': [{'taken_at': 1649696640, 'pk': 2814181843960894852, 'id': '2814181843960894852_52135199895', 'device_timestamp': 216800212164278, 'media_type': 1, 'code': 'CcN-3oWpJWE', 'client_cache_key': 'MjgxNDE4MTg0Mzk2MDg5NDg1Mg==.2', 'filter_type': 0, 'is_unified_video': False, 'user': {'pk': 52135199895, 'username': 'skies_by_jie', 'full_name': 'Skies✨', 'is_private': False, 'profile_pic_url': 'https://instagram.ftxl2-1.fna.fbcdn.net/v/t51.2885-19/275159678_635233907705714_3301720467448510435_n.jpg?stp=dst-jpg_s150x150&_nc_ht=instagram.ftxl2-1.fna.fbcdn.net&_nc_cat=102&_nc_ohc=TLWqsd_aLjEAX9ZS59D&edm=AABBvjUBAAAA&ccb=7-4&oh=00_AT95n96A9UqSUxqN3fF-sIjhk99P2OG4p3VixXOPeu01hQ&oe=625AEFBE&_nc_sid=83d603', 'profile_pic_id': '2786284984702872632_52135199895', 'friendship_status': {'following': False, 'outgoing_request': False, 'is_bestie': False, 'is_restricted': False, 'is_feed_favorite': False}, 'is_verified': False, 'follow_friction_type': 0, 'growth_friction_info': {'has_active_interventions': False, 'interventions': {}}, 'account_badges': [], 'has_anonymous_profile_picture': False, 'is_unpublished': False, 'is_favorite': False, 'latest_reel_media': 0, 'has_highlight_reels': False, 'has_primary_country_in_feed': False, 'has_primary_country_in_profile': False}, 'can_viewer_reshare': True, 'caption_is_edited': False, 'like_and_view_counts_disabled': False, 'featured_products_cta': None, 'commerciality_status': 'not_commercial', 'is_paid_partnership': False, 'is_visual_reply_commenter_notice_enabled': True, 'original_media_has_visual_reply_media': False, 'comment_likes_enabled': True, 'comment_threading_enabled': True, 'has_more_comments': True, 'max_num_visible_preview_comments': 2, 'preview_comments': [], 'comments': [], 'can_view_more_preview_comments': False, 'comment_count': 2, 'hide_view_all_comment_entrypoint': False, 'inline_composer_display_condition': 'impression_trigger', 'image_versions2': {'candidates': [{'width': 1080, 'height': 
1350, 'url': 'https://instagram.ftxl2-1.fna.fbcdn.net/v/t51.2885-15/278165220_380496157267572_6465440574021271147_n.webp?se=7&stp=dst-jpg_e35&_nc_ht=instagram.ftxl2-1.fna.fbcdn.net&_nc_cat=100&_nc_ohc=6pDaMzYBgwIAX_unUPj&edm=AABBvjUBAAAA&ccb=7-4&ig_cache_key=MjgxNDE4MTg0Mzk2MDg5NDg1Mg%3D%3D.2-ccb7-4&oh=00_AT9YHOYMpRy7iS3u59OWvVp3Nw9FZ85YtSrW2D1wijTWEg&oe=625BE0A0&_nc_sid=83d603', 'scans_profile': 'e35'}, {'width': 828, 'height': 1035, 'url': 'https://instagram.ftxl2-1.fna.fbcdn.net/v/t51.2885-15/278165220_380496157267572_6465440574021271147_n.webp?stp=dst-jpg_e35_p828x828&_nc_ht=instagram.ftxl2-1.fna.fbcdn.net&_nc_cat=100&_nc_ohc=6pDaMzYBgwIAX_unUPj&edm=AABBvjUBAAAA&ccb=7-4&ig_cache_key=MjgxNDE4MTg0Mzk2MDg5NDg1Mg%3D%3D.2-ccb7-4&oh=00_AT-PB8Dn0u5FrZQnuNLgPIkbx4BwJ2hGPdBYurKK-Ph8CA&oe=625BE0A0&_nc_sid=83d603', 'scans_profile': 'e35'}, {'width': 750, 'height': 938, 'url': 'https://instagram.ftxl2-1.fna.fbcdn.net/v/t51.2885-15/278165220_380496157267572_6465440574021271147_n.webp?stp=dst-jpg_e35_p750x750_sh0.08&_nc_ht=instagram.ftxl2-1.fna.fbcdn.net&_nc_cat=100&_nc_ohc=6pDaMzYBgwIAX_unUPj&edm=AABBvjUBAAAA&ccb=7-4&ig_cache_key=MjgxNDE4MTg0Mzk2MDg5NDg1Mg%3D%3D.2-ccb7-4&oh=00_AT8DADx9YCBehSADJEDfiac_7ZSDbIKu8mK4j1O3lp089g&oe=625BE0A0&_nc_sid=83d603', 'scans_profile': 'e35'}, {'width': 640, 'height': 800, 'url': 'https://instagram.ftxl2-1.fna.fbcdn.net/v/t51.2885-15/278165220_380496157267572_6465440574021271147_n.webp?stp=dst-jpg_e35_p640x640_sh0.08&_nc_ht=instagram.ftxl2-1.fna.fbcdn.net&_nc_cat=100&_nc_ohc=6pDaMzYBgwIAX_unUPj&edm=AABBvjUBAAAA&ccb=7-4&ig_cache_key=MjgxNDE4MTg0Mzk2MDg5NDg1Mg%3D%3D.2-ccb7-4&oh=00_AT84V7TcbmFCXDSZ7qgCLV21pznsmVfbLJQY1mIlm3hQ2A&oe=625BE0A0&_nc_sid=83d603', 'scans_profile': 'e35'}, {'width': 480, 'height': 600, 'url': 
'https://instagram.ftxl2-1.fna.fbcdn.net/v/t51.2885-15/278165220_380496157267572_6465440574021271147_n.webp?stp=dst-jpg_e35_p480x480&_nc_ht=instagram.ftxl2-1.fna.fbcdn.net&_nc_cat=100&_nc_ohc=6pDaMzYBgwIAX_unUPj&edm=AABBvjUBAAAA&ccb=7-4&ig_cache_key=MjgxNDE4MTg0Mzk2MDg5NDg1Mg%3D%3D.2-ccb7-4&oh=00_AT_vCyXKR-BrV39UGsCei3zcVyKgwtWbXNAJtoIFrCQfpA&oe=625BE0A0&_nc_sid=83d603', 'scans_profile': 'e35'}, {'width': 320, 'height': 400, 'url': 'https://instagram.ftxl2-1.fna.fbcdn.net/v/t51.2885-15/278165220_380496157267572_6465440574021271147_n.webp?stp=dst-jpg_e35_p320x320&_nc_ht=instagram.ftxl2-1.fna.fbcdn.net&_nc_cat=100&_nc_ohc=6pDaMzYBgwIAX_unUPj&edm=AABBvjUBAAAA&ccb=7-4&ig_cache_key=MjgxNDE4MTg0Mzk2MDg5NDg1Mg%3D%3D.2-ccb7-4&oh=00_AT_e0LD-l7sLXP3e6ljhCHYOfs_Y9fhmL1dsQiwzk3Zprw&oe=625BE0A0&_nc_sid=83d603', 'scans_profile': 'e35'}, {'width': 240, 'height': 300, 'url': 'https://instagram.ftxl2-1.fna.fbcdn.net/v/t51.2885-15/278165220_380496157267572_6465440574021271147_n.webp?stp=dst-jpg_e35_p240x240&_nc_ht=instagram.ftxl2-1.fna.fbcdn.net&_nc_cat=100&_nc_ohc=6pDaMzYBgwIAX_unUPj&edm=AABBvjUBAAAA&ccb=7-4&ig_cache_key=MjgxNDE4MTg0Mzk2MDg5NDg1Mg%3D%3D.2-ccb7-4&oh=00_AT-DCuE8FkvArFRaVPaUULvWNi8zhmixBzLdSyBHLfN_Og&oe=625BE0A0&_nc_sid=83d603', 'scans_profile': 'e35'}, {'width': 1080, 'height': 1080, 'url': 'https://instagram.ftxl2-1.fna.fbcdn.net/v/t51.2885-15/278165220_380496157267572_6465440574021271147_n.webp?se=7&stp=c0.135.1080.1080a_dst-jpg_e35&_nc_ht=instagram.ftxl2-1.fna.fbcdn.net&_nc_cat=100&_nc_ohc=6pDaMzYBgwIAX_unUPj&edm=AABBvjUBAAAA&ccb=7-4&ig_cache_key=MjgxNDE4MTg0Mzk2MDg5NDg1Mg%3D%3D.2-ccb7-4&oh=00_AT9prjEpsYUD27seWjB-In5joY6vtuDPSe7tIV5ASU_Dbg&oe=625BE0A0&_nc_sid=83d603', 'scans_profile': 'e35'}, {'width': 750, 'height': 750, 'url': 
'https://instagram.ftxl2-1.fna.fbcdn.net/v/t51.2885-15/278165220_380496157267572_6465440574021271147_n.webp?stp=c0.135.1080.1080a_dst-jpg_e35_s750x750_sh0.08&_nc_ht=instagram.ftxl2-1.fna.fbcdn.net&_nc_cat=100&_nc_ohc=6pDaMzYBgwIAX_unUPj&edm=AABBvjUBAAAA&ccb=7-4&ig_cache_key=MjgxNDE4MTg0Mzk2MDg5NDg1Mg%3D%3D.2-ccb7-4&oh=00_AT-HBrE__-m8lI0d5t5PHe9MS1JkiyAlG_injTYqcHBXJw&oe=625BE0A0&_nc_sid=83d603', 'scans_profile': 'e35'}, {'width': 640, 'height': 640, 'url': 'https://instagram.ftxl2-1.fna.fbcdn.net/v/t51.2885-15/278165220_380496157267572_6465440574021271147_n.webp?stp=c0.135.1080.1080a_dst-jpg_e35_s640x640_sh0.08&_nc_ht=instagram.ftxl2-1.fna.fbcdn.net&_nc_cat=100&_nc_ohc=6pDaMzYBgwIAX_unUPj&edm=AABBvjUBAAAA&ccb=7-4&ig_cache_key=MjgxNDE4MTg0Mzk2MDg5NDg1Mg%3D%3D.2-ccb7-4&oh=00_AT9pfN-zELsoOcPL2Vz_VwmYF3hvP9NdvlQknoaUFJm1Ng&oe=625BE0A0&_nc_sid=83d603', 'scans_profile': 'e35'}, {'width': 480, 'height': 480, 'url': 'https://instagram.ftxl2-1.fna.fbcdn.net/v/t51.2885-15/278165220_380496157267572_6465440574021271147_n.webp?stp=c0.135.1080.1080a_dst-jpg_e35_s480x480&_nc_ht=instagram.ftxl2-1.fna.fbcdn.net&_nc_cat=100&_nc_ohc=6pDaMzYBgwIAX_unUPj&edm=AABBvjUBAAAA&ccb=7-4&ig_cache_key=MjgxNDE4MTg0Mzk2MDg5NDg1Mg%3D%3D.2-ccb7-4&oh=00_AT8npxMz99PoM9fu-G2ysaFq58HCV59ya8epBZaGTWA2Lw&oe=625BE0A0&_nc_sid=83d603', 'scans_profile': 'e35'}, {'width': 320, 'height': 320, 'url': 'https://instagram.ftxl2-1.fna.fbcdn.net/v/t51.2885-15/278165220_380496157267572_6465440574021271147_n.webp?stp=c0.135.1080.1080a_dst-jpg_e35_s320x320&_nc_ht=instagram.ftxl2-1.fna.fbcdn.net&_nc_cat=100&_nc_ohc=6pDaMzYBgwIAX_unUPj&edm=AABBvjUBAAAA&ccb=7-4&ig_cache_key=MjgxNDE4MTg0Mzk2MDg5NDg1Mg%3D%3D.2-ccb7-4&oh=00_AT8mbX0-pk_2RpQTcXBpSbc1wmTv60Ix_tI08vM6ORGr9g&oe=625BE0A0&_nc_sid=83d603', 'scans_profile': 'e35'}, {'width': 240, 'height': 240, 'url': 
'https://instagram.ftxl2-1.fna.fbcdn.net/v/t51.2885-15/278165220_380496157267572_6465440574021271147_n.webp?stp=c0.135.1080.1080a_dst-jpg_e35_s240x240&_nc_ht=instagram.ftxl2-1.fna.fbcdn.net&_nc_cat=100&_nc_ohc=6pDaMzYBgwIAX_unUPj&edm=AABBvjUBAAAA&ccb=7-4&ig_cache_key=MjgxNDE4MTg0Mzk2MDg5NDg1Mg%3D%3D.2-ccb7-4&oh=00_AT_ZiSiVnli3dZldFdn_ACyphgeF249l5-2uOeI_PCFxvw&oe=625BE0A0&_nc_sid=83d603', 'scans_profile': 'e35'}, {'width': 150, 'height': 150, 'url': 'https://instagram.ftxl2-1.fna.fbcdn.net/v/t51.2885-15/278165220_380496157267572_6465440574021271147_n.webp?stp=c0.135.1080.1080a_dst-jpg_e35_s150x150&_nc_ht=instagram.ftxl2-1.fna.fbcdn.net&_nc_cat=100&_nc_ohc=6pDaMzYBgwIAX_unUPj&edm=AABBvjUBAAAA&ccb=7-4&ig_cache_key=MjgxNDE4MTg0Mzk2MDg5NDg1Mg%3D%3D.2-ccb7-4&oh=00_AT_Kc3SEihspeL1nI1n7jUoDxhz89JQcrTwKWh2D0Mnu4w&oe=625BE0A0&_nc_sid=83d603', 'scans_profile': 'e35'}]}, 'original_width': 1080, 'original_height': 1350, 'like_count': 2, 'has_liked': False, 'top_likers': [], 'facepile_top_likers': [], 'likers': [{'pk': 40843950216, 'username': 'jangid_mahendra_0077', 'full_name': '', 'is_private': False, 'profile_pic_url': 'https://instagram.ftxl2-1.fna.fbcdn.net/v/t51.2885-19/272044951_3151532145080890_8039253420465785801_n.jpg?stp=dst-jpg_s150x150&_nc_ht=instagram.ftxl2-1.fna.fbcdn.net&_nc_cat=110&_nc_ohc=e411HvH4F-4AX_2u7i0&edm=AABBvjUBAAAA&ccb=7-4&oh=00_AT8aYxaW8npTTA1HJ8EajgzYjMu-MFtfihLGCMuPPDl6bQ&oe=625A6D31&_nc_sid=83d603', 'profile_pic_id': '2753611017063197775_40843950216', 'is_verified': False, 'follow_friction_type': -1, 'growth_friction_info': {'has_active_interventions': False, 'interventions': {}}, 'account_badges': []}, {'pk': 44931900829, 'username': 'frnd_jaan', 'full_name': 'jaanu🔵', 'is_private': False, 'profile_pic_url': 
'https://instagram.ftxl2-1.fna.fbcdn.net/v/t51.2885-19/277881792_407914971146773_4878599047040218559_n.jpg?stp=dst-jpg_s150x150&_nc_ht=instagram.ftxl2-1.fna.fbcdn.net&_nc_cat=101&_nc_ohc=3YZLxOWxrpAAX_0aSEv&edm=AABBvjUBAAAA&ccb=7-4&oh=00_AT88EqL1tpzupxLKyf0N5HamMTptt3PHtG8rmEn4Jx03vg&oe=625B856B&_nc_sid=83d603', 'profile_pic_id': '2810240202946905849_44931900829', 'is_verified': False, 'follow_friction_type': -1, 'growth_friction_info': {'has_active_interventions': False, 'interventions': {}}, 'account_badges': []}], 'photo_of_you': False, 'is_organic_product_tagging_eligible': False, 'can_see_insights_as_brand': False, 'caption': {'pk': 17902493708485887, 'user_id': 52135199895, 'text': '#explorepage #sunsetphotography #sunset #samsunga20', 'type': 1, 'created_at': 1649696640, 'created_at_utc': 1649696640, 'content_type': 'comment', 'status': 'Active', 'bit_flags': 0, 'did_report_as_spam': False, 'share_enabled': False, 'user': {'pk': 52135199895, 'username': 'skies_by_jie', 'full_name': 'Skies✨', 'is_private': False, 'profile_pic_url': 'https://instagram.ftxl2-1.fna.fbcdn.net/v/t51.2885-19/275159678_635233907705714_3301720467448510435_n.jpg?stp=dst-jpg_s150x150&_nc_ht=instagram.ftxl2-1.fna.fbcdn.net&_nc_cat=102&_nc_ohc=TLWqsd_aLjEAX9ZS59D&edm=AABBvjUBAAAA&ccb=7-4&oh=00_AT95n96A9UqSUxqN3fF-sIjhk99P2OG4p3VixXOPeu01hQ&oe=625AEFBE&_nc_sid=83d603', 'profile_pic_id': '2786284984702872632_52135199895', 'is_verified': False, 'follow_friction_type': -1, 'growth_friction_info': {'has_active_interventions': False, 'interventions': {}}, 'account_badges': []}, 'is_covered': False, 'media_id': 2814181843960894852, 'private_reply_status': 0}, 'can_viewer_save': True, 'organic_tracking_token': 
'eyJ2ZXJzaW9uIjo1LCJwYXlsb2FkIjp7ImlzX2FuYWx5dGljc190cmFja2VkIjp0cnVlLCJ1dWlkIjoiNjE4Mzg5MzA1MjA1NGY5Y2FmYmQ1MDI1M2IzOWJiNDcyODE0MTgxODQzOTYwODk0ODUyIiwic2VydmVyX3Rva2VuIjoiMTY0OTY5Njc0MDUxOHwyODE0MTgxODQzOTYwODk0ODUyfDE3OTU1MzUwMzF8NmRjNzlmMzFjMjczYmQ1Y2E1ZDZmN2M1MmRkMTdmNDQxMDBjYjBjMDNhZGY1MzkwYzk5ODM2ZjkzYWQzNGY4MSJ9LCJzaWduYXR1cmUiOiIifQ==', 'has_shared_to_fb': 0, 'sharing_friction_info': {'should_have_sharing_friction': False, 'bloks_app_url': None, 'sharing_friction_payload': None}, 'comment_inform_treatment': {'should_have_inform_treatment': False, 'text': '', 'url': None, 'action_type': None}, 'product_type': 'feed', 'is_in_profile_grid': False, 'profile_grid_control_enabled': False, 'deleted_reason': 0, 'integrity_review_decision': 'pending', 'music_metadata': {'music_canonical_id': '0', 'audio_type': None, 'music_info': None, 'original_sound_info': None}}], 'num_results': 1, 'more_available': False, 'auto_load_more_enabled': False}"
Arguments: ()
```
It does fetch the posts from the specified hashtags, but as soon as it tries to like or comment (I don't know at exactly which point it stops), it stops working and just says:
```
INFO [2022-04-11 19:05:43] [coldt_astrophotography] Sessional Live Report:
|> No any statistics to show
On session start was FOLLOWING 209 users & had 600 FOLLOWERS
[Session lasted 1.67 minutes]
OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO
INFO [2022-04-11 19:05:43] [coldt_astrophotography] Session ended!
ooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
Traceback (most recent call last):
File "C:\Users\xDDra\Downloads\instapy-quickstart-master\quickstart.py", line 124, in <module>
session.like_by_tags(my_hashtags, amount=90, media=None)
File "C:\Users\xDDra\AppData\Local\Programs\Python\Python39\lib\site-packages\instapy\instapy.py", line 1977, in like_by_tags
inappropriate, user_name, is_video, reason, scope = check_link(
File "C:\Users\xDDra\AppData\Local\Programs\Python\Python39\lib\site-packages\instapy\like_util.py", line 618, in check_link
media = post_page[0]["shortcode_media"]
KeyError: 0
Press any key to continue . . .
--- Logging error ---
Traceback (most recent call last):
File "C:\Users\xDDra\AppData\Local\Programs\Python\Python39\lib\logging\__init__.py", line 1082, in emit
stream.write(msg + self.terminator)
File "C:\Users\xDDra\AppData\Local\Programs\Python\Python39\lib\encodings\cp1252.py", line 19, in encode
return codecs.charmap_encode(input,self.errors,encoding_table)[0]
```
## Expected Behavior
Fetch posts from hashtags, like / comment and go on to other posts
## Current Behavior
Fetches posts from the hashtags, goes to the first post, and then the program ends.
## Possible Solution (optional)
## InstaPy configuration
```
# general settings
session.set_dont_like(['sad', 'rain', 'depression', 'food', 'selfie'])
session.set_do_follow(enabled=True, percentage=70, times=1)
session.set_do_comment(enabled=True, percentage=80)
session.set_comments([
u'What an amazing shot! :heart_eyes: What do '
u'you think of my recent shot?',
u'What an amazing shot! :heart_eyes: I think '
u'you might also like mine. :wink:',
u'Wonderful!! :heart_eyes: Would be awesome if '
u'you would checkout my photos as well!',
u'Wonderful!! :heart_eyes: I would be honored '
u'if you would checkout my images and tell me '
u'what you think. :wink:',
u'This is awesome!! :heart_eyes: Any feedback '
u'for my photos? :wink:',
u'This is awesome!! :heart_eyes: maybe you '
u'like my photos, too? :wink:',
u'I really like the way you captured this. I '
u'bet you like my photos, too :wink:',
u'I really like the way you captured this. If '
u'you have time, check out my photos, too. I '
u'bet you will like them. :wink:',
u'Great capture!! :smiley: Any feedback for my '
u'recent shot? :wink:',
u'Great capture!! :smiley: :thumbsup: What do '
u'you think of my recent photo?',
u'Nice shot! @{}',
u'I love your profile! @{}',
u'Your feed is an inspiration :thumbsup:',
u'Just incredible :open_mouth:',
u'What camera did you use @{}?',
u'Love your posts @{}',
u'Looks awesome @{}',
u'Getting inspired by you @{}',
u':raised_hands: Yes!',
u'I can feel your passion @{} :muscle:',
u'I wish I could pull that off',
u'So CLEAN. Damn...',
u'I love your feed! Followed! :D',
u'Leave some awesomeness for the rest of us. Dang..'],
media='Photo')
session.set_do_like(True, percentage=70)
session.set_delimit_liking(enabled=True, max_likes=100, min_likes=0)
session.set_delimit_commenting(enabled=True, max_comments=20, min_comments=0)
session.set_relationship_bounds(enabled=True,
potency_ratio=None,
delimit_by_numbers=True,
max_followers=3000,
max_following=2000,
min_followers=50,
min_following=50)
session.set_quota_supervisor(enabled=True,
sleep_after=["likes", "follows"],
sleepyhead=True, stochastic_flow=True,
notify_me=True,
peak_likes_hourly=150,
peak_likes_daily=400,
peak_comments_hourly=80,
peak_comments_daily=182,
peak_follows_hourly=48,
peak_follows_daily=None,
peak_unfollows_hourly=35,
peak_unfollows_daily=402,
peak_server_calls_hourly=None,
peak_server_calls_daily=4700)
session.set_user_interact(amount=10, randomize=True, percentage=80)
# activity
session.like_by_tags(my_hashtags, amount=90, media=None)
""" Joining Engagement Pods...
"""
session.join_pods(topic='photography', engagement_mode='no_comments')
```
| open | 2022-04-11T20:51:20Z | 2022-05-02T13:44:36Z | https://github.com/InstaPy/InstaPy/issues/6582 | [] | Anima1337 | 13 |
dask/dask | scikit-learn | 11,416 | Significant slowdown in Numba compiled functions from Dask 2024.8.1 | <!-- Please include a self-contained copy-pastable example that generates the issue if possible.
Please be concise with code posted. See guidelines below on how to provide a good bug report:
- Craft Minimal Bug Reports http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports
- Minimal Complete Verifiable Examples https://stackoverflow.com/help/mcve
Bug reports that follow these guidelines are easier to diagnose, and so are often handled much more quickly.
-->
**Describe the issue**:
In [sgkit](https://github.com/sgkit-dev/sgkit), we use a lot of Numba compiled functions in Dask Array `map_blocks` calls, and we noticed a significant (approx 4x) slowdown in performance when running the test suite (see https://github.com/sgkit-dev/sgkit/issues/1267).
**Minimal Complete Verifiable Example**:
`dask-slowdown-min.py`:
```python
from numba import guvectorize
import numpy as np
import dask.array as da
@guvectorize(
[
"void(int8[:], int8[:])",
"void(int16[:], int16[:])",
"void(int32[:], int32[:])",
"void(int64[:], int64[:])",
],
"(n)->(n)", nopython=True
)
def inc(x, res):
for i in range(x.shape[0]):
res[i] = x[i] + 1
if __name__ == "__main__":
for i in range(3):
a = da.ones((10000, 1000, 10), chunks=(1000, 1000, 10), dtype=np.int8)
res = da.map_blocks(inc, a, dtype=np.int8).compute()
print(i)
```
With the latest version of Dask:
```shell
pip install 'dask[array]' numba
time python dask-slowdown-min.py
0
1
2
python dask-slowdown-min.py 2.61s user 0.21s system 40% cpu 6.929 total
```
With Dask 2024.8.0:
```shell
pip install -U 'dask[array]==2024.8.0'
time python dask-slowdown-min.py
0
1
2
python dask-slowdown-min.py 0.62s user 0.13s system 99% cpu 0.752 total
```
**Anything else we need to know?**:
I ran a git bisect and it looks like the problem was introduced in 1d771959509d09c34195fa19d9ae8446ae3a8726 (#11320).
I'm not sure what the underlying problem is, but I noticed that the slow version is compiling Numba functions many times compared to the older version:
```shell
# Dask latest version
NUMBA_DEBUG=1 python dask-slowdown-min.py | grep 'DUMP inc' | wc -l
152
# Dask 2024.8.0
NUMBA_DEBUG=1 python dask-slowdown-min.py | grep 'DUMP inc' | wc -l
8
```
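The jump in compile counts (8 → 152, roughly once per chunk) is what you would expect if the function is now rebuilt per task rather than once. A purely illustrative stdlib sketch of that effect (the names here are made up; this is not Dask's or Numba's actual code path):

```python
compile_count = {"n": 0}

def deserialize_task_payload():
    # Stand-in for rebuilding a @guvectorize function from its serialized
    # form: every rebuild triggers a fresh JIT compile.
    compile_count["n"] += 1
    return lambda x: x + 1

# Old behaviour: deserialize once, reuse the compiled function for all tasks.
shared = deserialize_task_payload()
old_results = [shared(i) for i in range(10)]

# Suspected new behaviour: rebuild (and recompile) once per task.
new_results = [deserialize_task_payload()(i) for i in range(10)]

assert old_results == new_results   # same answers either way
print(compile_count["n"])           # 1 shared + 10 per-task = 11
```

If something like this is what changed in #11320, it would explain the wall-clock gap without any change in the computed results.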
**Environment**:
- Dask version: 2024.8.1 and later
- Python version: Python 3.11
- Operating System: macOS
- Install method (conda, pip, source): pip
| closed | 2024-10-07T15:55:42Z | 2024-10-08T10:17:48Z | https://github.com/dask/dask/issues/11416 | [
"needs triage"
] | tomwhite | 2 |
CorentinJ/Real-Time-Voice-Cloning | python | 513 | i am confuse with the places of folders? | Hello i did downloaded the LibriSpeech/train-clean-100 (tar.gz 5gb) and extract it to the root of the app as LibriSpeech/
but i am getting this.
python3.7 demo_toolbox.py --low_mem -d LibriSpeech/
Arguments:
datasets_root: LibriSpeech
enc_models_dir: encoder/saved_models
syn_models_dir: synthesizer/saved_models
voc_models_dir: vocoder/saved_models
low_mem: True
seed: None
Warning: you do not have any of the recognized datasets in LibriSpeech.
The recognized datasets are:
LibriSpeech/dev-clean
LibriSpeech/dev-other
LibriSpeech/test-clean
LibriSpeech/test-other
LibriSpeech/train-clean-100
LibriSpeech/train-clean-360
LibriSpeech/train-other-500
LibriTTS/dev-clean
LibriTTS/dev-other
LibriTTS/test-clean
LibriTTS/test-other
LibriTTS/train-clean-100
LibriTTS/train-clean-360
LibriTTS/train-other-500
LJSpeech-1.1
VoxCeleb1/wav
VoxCeleb1/test_wav
VoxCeleb2/dev/aac
VoxCeleb2/test/aac
VCTK-Corpus/wav48
Feel free to add your own. You can still use the toolbox by recording samples yourself.
[AVAudioResampleContext @ 0x5639cd9da6c0] Invalid input channel layout: 0
[AVAudioResampleContext @ 0x5639cd9cf340] Invalid input channel layout: 0
Expression 'ret' failed in 'src/hostapi/alsa/pa_linux_alsa.c', line: 1736
Expression 'AlsaOpen( hostApi, parameters, streamDir, &pcm )' failed in 'src/hostapi/alsa/pa_linux_alsa.c', line: 1768
Expression 'ret' failed in 'src/hostapi/alsa/pa_linux_alsa.c', line: 1736
Expression 'AlsaOpen( hostApi, parameters, streamDir, &pcm )' failed in 'src/hostapi/alsa/pa_linux_alsa.c', line: 1768
Expression 'ret' failed in 'src/hostapi/alsa/pa_linux_alsa.c', line: 1736
Expression 'AlsaOpen( hostApi, parameters, streamDir, &pcm )' failed in 'src/hostapi/alsa/pa_linux_alsa.c', line: 1768
| closed | 2020-08-28T07:24:42Z | 2020-09-04T05:12:43Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/513 | [] | afantasialiberal | 6 |
ray-project/ray | python | 51,275 | [core][gpu-objects] Exception handling | ### Description
* Application-level exceptions
* System-level exceptions: if the sender actor dies, the receiver actor should raise an error instead of hanging in `recv`
### Use case
_No response_ | open | 2025-03-11T22:33:38Z | 2025-03-11T22:42:35Z | https://github.com/ray-project/ray/issues/51275 | [
"enhancement",
"P1",
"core",
"gpu-objects"
] | kevin85421 | 1 |
fastapiutils/fastapi-utils | fastapi | 68 | [BUG] @repeat_every will have no stack trace in terminal | **Describe the bug**
No error will show
**To Reproduce**
```
from fastapi import FastAPI
from fastapi_utils.tasks import repeat_every
app = FastAPI()
items = {}
@app.on_event("startup")
@repeat_every(seconds=60)
async def startup_event():
raise Exception
items["foo"] = {"name": "Fighters"}
items["bar"] = {"name": "Tenders"}
@app.get("/items/{item_id}")
async def read_items(item_id: str):
return items[item_id]
```
`uvicorn app.main:app --reload`: the `raise Exception` should produce a stack trace in the terminal, but none appears.
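A plausible mechanism, offered as an assumption rather than a verified reading of fastapi-utils internals: `repeat_every` presumably runs the wrapped coroutine as a background asyncio task, and an exception raised inside a task that nobody awaits is stored on the task object instead of propagating. A minimal stdlib sketch of that silent behaviour:

```python
import asyncio

async def repeated_job():
    raise Exception("boom")

async def main():
    task = asyncio.create_task(repeated_job())
    await asyncio.sleep(0)  # let the task run; no traceback is printed
    # The failure is sitting on the task object, silently.
    return task.done(), type(task.exception()).__name__

print(asyncio.run(main()))  # (True, 'Exception')
```

Nothing is printed for the failure itself; the traceback only surfaces if the task is awaited or `task.exception()` is logged, which would match the empty terminal seen here.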
**Environment:**
- Windows
```Python
Python 3.7.7 (tags/v3.7.7:d7c567b08f, Mar 10 2020, 10:41:24) [MSC v.1900 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import fastapi_utils
>>> import fastapi
>>> import pydantic.utils
>>> print(fastapi_utils.__version__)
0.2.1
>>> print(fastapi.__version__)
0.58.0
>>> print(pydantic.utils.version_info())
pydantic version: 1.5.1
pydantic compiled: True
install path: D:\git\testFastapiutils\venv\Lib\site-packages\pydantic
python version: 3.7.7 (tags/v3.7.7:d7c567b08f, Mar 10 2020, 10:41:24) [MSC v.1900 64 bit (AMD64)]
platform: Windows-10-10.0.18362-SP0
optional deps. installed: []
```
| closed | 2020-06-20T10:32:40Z | 2021-01-01T02:18:11Z | https://github.com/fastapiutils/fastapi-utils/issues/68 | [
"bug"
] | shizidushu | 2 |
ageitgey/face_recognition | python | 633 | face_detection: command not found | * face_recognition version: Latest
* Python version: Don't know
* Operating System: Linux
### Description
Paste the command(s) you ran and the output.
```
$ face_detection
face_detection: command not found
```

If there was a crash, please include the traceback here.
| open | 2018-09-28T10:48:22Z | 2019-10-18T14:12:45Z | https://github.com/ageitgey/face_recognition/issues/633 | [] | adv-ai-tech | 3 |
thtrieu/darkflow | tensorflow | 979 | what are the classes in VOC2007 and how many labels it contains | What are the classes in VOC2007, and how many labels does it contain?
| closed | 2019-02-01T16:56:34Z | 2019-02-03T01:55:52Z | https://github.com/thtrieu/darkflow/issues/979 | [] | Francisobiagwu | 0 |
ResidentMario/missingno | data-visualization | 61 | Add continuous integration | Once #46 and #47 are resolved, it will be invaluable for you to set up continuous integration via [Travis-CI](https://travis-ci.org) or related services. That way every commit, PR, etc. will automatically be tested for you.
If you're unsure how to add CI to a GitHub repo, see the [GitHub docpage](https://github.com/marketplace/category/continuous-integration) or ping me on this issue. These services are free for public repos. | closed | 2018-01-31T03:20:48Z | 2020-04-09T21:14:42Z | https://github.com/ResidentMario/missingno/issues/61 | [] | rhiever | 0 |
davidteather/TikTok-Api | api | 137 | [INSTALLATION] - All "api.___ " function is taking forever to execute" | **Describe the error**
All api.___() functions are taking forever to execute
**The buggy code**
```python
from TikTokApi import TikTokApi

api = TikTokApi()
results = 10
trending = api.trending(count=results)

for tiktok in trending:
    print(tiktok['desc'])

print(len(trending))
```
**Desktop (please complete the following information):**
- OS: [e.g. Windows 10]
- Browser [e.g. chrome]
- Version [e.g. 82]
**Additional context**
I'm not receiving any error message. This section of the code just takes forever to execute; the kernel keeps running for hours at this step.
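One way to tell a genuine hang from a merely slow call is a generic timeout wrapper; a stdlib sketch (here `slow_call` is a placeholder for something like `api.trending(count=10)`, not TikTokApi code):

```python
import time
from concurrent.futures import ThreadPoolExecutor, TimeoutError

def slow_call():
    time.sleep(0.2)  # placeholder for the real API call
    return "trending data"

with ThreadPoolExecutor(max_workers=1) as pool:
    future = pool.submit(slow_call)
    try:
        print(future.result(timeout=5))  # trending data
    except TimeoutError:
        print("no result within 5s: the call is hanging, not just slow")
```

Note that the timeout only unblocks the caller; a truly stuck worker thread keeps running in the background.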
| closed | 2020-06-11T06:12:08Z | 2020-09-06T06:54:21Z | https://github.com/davidteather/TikTok-Api/issues/137 | [
"bug",
"installation_help"
] | JoyceJiang73 | 7 |
chezou/tabula-py | pandas | 396 | Problem in output table characters | ### Summary
It seems to output a table with the correct dimensions, but the content of each cell is ??????
### Did you read the FAQ?
- [X] I have read the FAQ
### Did you search GitHub issues?
- [X] I have searched the issues
### Did you search GitHub Discussions?
- [X] I have searched the discussions
### (Optional) PDF URL
https://drive.google.com/file/d/1OW0EBy_HpTJFyMpA3CyD7_G-3tQeX1su/view?usp=sharing
### About your environment
```markdown
tabula.environment_info()
Python version:
3.10.5 (tags/v3.10.5:f377153, Jun 6 2022, 16:14:13) [MSC v.1929 64 bit (AMD64)]
Java version:
java version "18.0.1.1" 2022-04-22
Java(TM) SE Runtime Environment (build 18.0.1.1+2-6)
Java HotSpot(TM) 64-Bit Server VM (build 18.0.1.1+2-6, mixed mode, sharing)
tabula-py version: 2.9.3
platform: Windows-10-10.0.22000-SP0
```
### What did you do when you faced the problem?

### Code
```python
import tabula

df = tabula.read_pdf("sktech.pdf", pages='all')
df[0]
```
### Expected behavior
A dataframe containing the following items:
| سلام | ستون ۲ |
| --- | --- |
| ۲۵۵۴۶.۰ | محتوای یک |
| ۸۵۹۴۳۱۵ | محتوای دو |
| ۱۰۰۰۰۲۳ | محتوای سه |
| ۳۵۴ | محتوای ۴ |
### Actual behavior
The cell contents all come out as ???????
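A run of `?` is the classic symptom of Unicode text being forced through an encoding (or a PDF font mapping) that has no entry for those characters. A stdlib illustration of the general mechanism, not tabula-py's actual code path:

```python
text = "سلام"  # the Persian header from the expected table above
print(text.encode("ascii", errors="replace"))  # b'????'
```

In PDFs this usually points at the document's font lacking a usable ToUnicode mapping, in which case no extractor can recover the original characters (hedged: whether that applies to this particular file would need checking with another extractor).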
### Related issues
_No response_ | closed | 2024-08-01T14:26:10Z | 2024-08-01T15:36:39Z | https://github.com/chezou/tabula-py/issues/396 | [
"not a bug"
] | DsDastgheib | 1 |
dpgaspar/Flask-AppBuilder | flask | 1,839 | Reverse selection ( NOT ) | 
Hello all, is it possible to have a search that reverses the selection on a relationship table?
I am looking for results that do not match the tags.

```python
tags = relationship("Tags", secondary=assoc_tags_item, backref="item")
```
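For clarity, here is the intended semantics of the reverse selection as a plain in-memory filter (item names and tag sets are made up; this is not Flask-AppBuilder API):

```python
items = {
    "item1": {"red", "blue"},
    "item2": {"green"},
    "item3": set(),  # untagged items arguably match a NOT filter too
}
excluded = {"red"}

# Keep items that share no tag with the excluded set.
matches = sorted(name for name, tags in items.items() if not tags & excluded)
print(matches)  # ['item2', 'item3']
```

In SQLAlchemy terms this is usually written by negating `any()` over the relationship, along the lines of `session.query(Item).filter(~Item.tags.any(Tags.name.in_(selected)))` (adjust the names to your models; untested against FAB's search widget).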
Have a nice day. | open | 2022-05-01T13:09:14Z | 2022-05-03T19:22:47Z | https://github.com/dpgaspar/Flask-AppBuilder/issues/1839 | [
"question"
] | Th4nat0s | 3 |
babysor/MockingBird | deep-learning | 518 | n_fft parameter settings | The n_fft parameter is 800 in the Synthesizer but 1024 in HiFi-GAN, so the mel spectrograms computed from the same waveform file should be different. Why does HiFi-GAN still work? | open | 2022-04-26T09:23:34Z | 2022-04-26T09:23:34Z | https://github.com/babysor/MockingBird/issues/518 | [] | 1nlplearner | 0 |
aiortc/aiortc | asyncio | 1,176 | Lazy decoding of video frames | Hi,
First of all - thank you for all the effort you have put into building this useful package!
I was about to create a PR with the change I need but I thought I'd ask first to ensure it's aligned with your vision.
I would like to be able to prevent frames from being automatically decoded, and instead decode them only when I need them. Our pipeline drops frames when it can't keep up, so it does not make sense in our case to decode every frame. Would you accept a PR for a feature like this?
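For concreteness, a sketch of the lazy pattern being proposed (class and attribute names are hypothetical, not aiortc's API): keep the encoded payload and only pay for the decode when the frame is actually used.

```python
from functools import cached_property

class LazyFrame:
    def __init__(self, encoded: bytes):
        self.encoded = encoded
        self.decode_calls = 0

    @cached_property
    def decoded(self) -> bytes:
        self.decode_calls += 1     # the expensive decode happens here, once
        return self.encoded[::-1]  # stand-in for a real video decode

frames = [LazyFrame(b"pkt%d" % i) for i in range(3)]
kept = frames[0]          # pipeline keeps one frame, drops the other two
_ = kept.decoded          # decoded on demand
_ = kept.decoded          # served from the cache, no second decode
print([f.decode_calls for f in frames])  # [1, 0, 0]
```

Dropped frames never touch the decoder, which is the saving the proposal is after.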
Thanks,
Grzegorz | closed | 2024-10-17T12:35:15Z | 2024-10-24T20:43:32Z | https://github.com/aiortc/aiortc/issues/1176 | [] | grzegorz-roboflow | 0 |
Gozargah/Marzban | api | 622 | Connecting a node behind HAProxy | Hi, and thanks for the great Marzban project.
The problem is that I have set up the main server behind HAProxy on a single port, but I cannot connect the node server, and there is no tutorial for this at all. Please help with this, as there is no other resource on the topic.
Following the article here:
https://gozargah.github.io/marzban/examples/all-on-one-port
I placed all of the central server's cfg settings on the node server.
The question is: do the panel domain or the other settings need to change, or not?
And how should these changes be applied in the Marzban config?
Thanks. | closed | 2023-11-07T05:55:08Z | 2023-12-31T11:06:22Z | https://github.com/Gozargah/Marzban/issues/622 | [
"Feature"
] | reza3723 | 1 |
onnx/onnx | deep-learning | 6,751 | test source distribution / test_sdist_preview (3.10, arm64) fails | ```
2025-03-02T08:54:42.9539070Z Current runner version: '2.322.0'
2025-03-02T08:54:42.9555000Z ##[group]Operating System
2025-03-02T08:54:42.9555480Z macOS
2025-03-02T08:54:42.9555800Z 14.7.2
2025-03-02T08:54:42.9556110Z 23H311
2025-03-02T08:54:42.9556410Z ##[endgroup]
2025-03-02T08:54:42.9556810Z ##[group]Runner Image
2025-03-02T08:54:42.9557170Z Image: macos-14-arm64
2025-03-02T08:54:42.9557490Z Version: 20250120.774
2025-03-02T08:54:42.9558160Z Included Software: https://github.com/actions/runner-images/blob/macos-14-arm64/20250120.774/images/macos/macos-14-arm64-Readme.md
2025-03-02T08:54:42.9559260Z Image Release: https://github.com/actions/runner-images/releases/tag/macos-14-arm64%2F20250120.774
2025-03-02T08:54:42.9560030Z ##[endgroup]
2025-03-02T08:54:42.9560610Z ##[group]Runner Image Provisioner
2025-03-02T08:54:42.9561220Z 2.0.422.1+55c30c14fe2a0a1547db1b656933ae07d97649a9
2025-03-02T08:54:42.9561680Z ##[endgroup]
2025-03-02T08:54:42.9562380Z ##[group]GITHUB_TOKEN Permissions
2025-03-02T08:54:42.9563650Z Contents: read
2025-03-02T08:54:42.9564010Z Metadata: read
2025-03-02T08:54:42.9564360Z ##[endgroup]
2025-03-02T08:54:42.9566350Z Secret source: Actions
2025-03-02T08:54:42.9566860Z Prepare workflow directory
2025-03-02T08:54:42.9968780Z Prepare all required actions
2025-03-02T08:54:43.0012360Z Uses: onnx/onnx/.github/workflows/preview_source_dist_test.yml@refs/heads/main (b3e627c5208a22eace1e3005210c21965025f093)
2025-03-02T08:54:43.0015540Z ##[group] Inputs
2025-03-02T08:54:43.0015930Z publish_pypi_weekly: no
2025-03-02T08:54:43.0016310Z publish_testpypi_weekly: yes
2025-03-02T08:54:43.0016700Z os: macos
2025-03-02T08:54:43.0017020Z ##[endgroup]
2025-03-02T08:54:43.0017490Z Complete job name: test source distribution / test_sdist_preview (3.10, arm64)
2025-03-02T08:54:43.0394610Z ##[group]Run python -m pip uninstall -y onnx-weekly
2025-03-02T08:54:43.0395210Z [36;1mpython -m pip uninstall -y onnx-weekly[0m
2025-03-02T08:54:43.0395650Z [36;1mpython -m pip install setuptools[0m
2025-03-02T08:54:43.0396270Z [36;1mpython -m pip install --use-deprecated=legacy-resolver --no-binary onnx-weekly onnx-weekly[0m
2025-03-02T08:54:43.0396970Z [36;1mpython -m pip install pytest[0m
2025-03-02T08:54:43.0397340Z [36;1mpytest[0m
2025-03-02T08:54:43.1693480Z shell: /bin/bash -e {0}
2025-03-02T08:54:43.1694010Z env:
2025-03-02T08:54:43.1694310Z MACOSX_DEPLOYMENT_TARGET: 12.0
2025-03-02T08:54:43.1694650Z ##[endgroup]
2025-03-02T08:54:44.2118560Z WARNING: Skipping onnx-weekly as it is not installed.
2025-03-02T08:54:44.6424510Z Collecting setuptools
2025-03-02T08:54:44.6897880Z Downloading setuptools-75.8.2-py3-none-any.whl.metadata (6.7 kB)
2025-03-02T08:54:44.7080010Z Downloading setuptools-75.8.2-py3-none-any.whl (1.2 MB)
2025-03-02T08:54:44.7575410Z ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.2/1.2 MB 43.2 MB/s eta 0:00:00
2025-03-02T08:54:44.7605170Z Installing collected packages: setuptools
2025-03-02T08:54:45.2648890Z Successfully installed setuptools-75.8.2
2025-03-02T08:54:45.3600570Z
2025-03-02T08:54:45.3602010Z [notice] A new release of pip is available: 24.3.1 -> 25.0.1
2025-03-02T08:54:45.3602580Z [notice] To update, run: python -m pip install --upgrade pip
2025-03-02T08:54:45.6656510Z Collecting onnx-weekly
2025-03-02T08:54:45.7016110Z Downloading onnx_weekly-1.18.0.dev20250224.tar.gz (12.2 MB)
2025-03-02T08:54:45.7940460Z ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 12.2/12.2 MB 168.1 MB/s eta 0:00:00
2025-03-02T08:54:47.5032010Z Installing build dependencies: started
2025-03-02T08:54:48.7136980Z Installing build dependencies: finished with status 'done'
2025-03-02T08:54:48.7145570Z Getting requirements to build wheel: started
2025-03-02T08:54:48.7467990Z Getting requirements to build wheel: finished with status 'done'
2025-03-02T08:54:48.7661930Z
2025-03-02T08:54:48.7662310Z [notice] A new release of pip is available: 24.3.1 -> 25.0.1
2025-03-02T08:54:48.7662730Z [notice] To update, run: python -m pip install --upgrade pip
2025-03-02T08:54:48.7717590Z ERROR: Exception:
2025-03-02T08:54:48.7718010Z Traceback (most recent call last):
2025-03-02T08:54:48.7719390Z File "/Library/Frameworks/Python.framework/Versions/3.13/lib/python3.13/site-packages/pip/_internal/cli/base_command.py", line 105, in _run_wrapper
2025-03-02T08:54:48.7719950Z status = _inner_run()
2025-03-02T08:54:48.7720490Z File "/Library/Frameworks/Python.framework/Versions/3.13/lib/python3.13/site-packages/pip/_internal/cli/base_command.py", line 96, in _inner_run
2025-03-02T08:54:48.7721040Z return self.run(options, args)
2025-03-02T08:54:48.7721260Z ~~~~~~~~^^^^^^^^^^^^^^^
2025-03-02T08:54:48.7721760Z File "/Library/Frameworks/Python.framework/Versions/3.13/lib/python3.13/site-packages/pip/_internal/cli/req_command.py", line 67, in wrapper
2025-03-02T08:54:48.7722280Z return func(self, options, args)
2025-03-02T08:54:48.7722810Z File "/Library/Frameworks/Python.framework/Versions/3.13/lib/python3.13/site-packages/pip/_internal/commands/install.py", line 379, in run
2025-03-02T08:54:48.7723320Z requirement_set = resolver.resolve(
2025-03-02T08:54:48.7723690Z reqs, check_supported_wheels=not options.target_dir
2025-03-02T08:54:48.7723960Z )
2025-03-02T08:54:48.7724460Z File "/Library/Frameworks/Python.framework/Versions/3.13/lib/python3.13/site-packages/pip/_internal/resolution/legacy/resolver.py", line 181, in resolve
2025-03-02T08:54:48.7725060Z discovered_reqs.extend(self._resolve_one(requirement_set, req))
2025-03-02T08:54:48.7725360Z ~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^
2025-03-02T08:54:48.7725930Z File "/Library/Frameworks/Python.framework/Versions/3.13/lib/python3.13/site-packages/pip/_internal/resolution/legacy/resolver.py", line 505, in _resolve_one
2025-03-02T08:54:48.7727540Z dist = self._get_dist_for(req_to_install)
2025-03-02T08:54:48.7728220Z File "/Library/Frameworks/Python.framework/Versions/3.13/lib/python3.13/site-packages/pip/_internal/resolution/legacy/resolver.py", line 458, in _get_dist_for
2025-03-02T08:54:48.7728850Z dist = self.preparer.prepare_linked_requirement(req)
2025-03-02T08:54:48.7729490Z File "/Library/Frameworks/Python.framework/Versions/3.13/lib/python3.13/site-packages/pip/_internal/operations/prepare.py", line 527, in prepare_linked_requirement
2025-03-02T08:54:48.7730120Z return self._prepare_linked_requirement(req, parallel_builds)
2025-03-02T08:54:48.7730450Z ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^
2025-03-02T08:54:48.7731060Z File "/Library/Frameworks/Python.framework/Versions/3.13/lib/python3.13/site-packages/pip/_internal/operations/prepare.py", line 642, in _prepare_linked_requirement
2025-03-02T08:54:48.7731590Z dist = _get_prepared_distribution(
2025-03-02T08:54:48.7731790Z req,
2025-03-02T08:54:48.7731990Z ...<3 lines>...
2025-03-02T08:54:48.7732160Z self.check_build_deps,
2025-03-02T08:54:48.7732370Z )
2025-03-02T08:54:48.7734120Z File "/Library/Frameworks/Python.framework/Versions/3.13/lib/python3.13/site-packages/pip/_internal/operations/prepare.py", line 72, in _get_prepared_distribution
2025-03-02T08:54:48.7734750Z abstract_dist.prepare_distribution_metadata(
2025-03-02T08:54:48.7735020Z ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
2025-03-02T08:54:48.7735270Z finder, build_isolation, check_build_deps
2025-03-02T08:54:48.7735570Z ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-03-02T08:54:48.7735800Z )
2025-03-02T08:54:48.7735930Z ^
2025-03-02T08:54:48.7736470Z File "/Library/Frameworks/Python.framework/Versions/3.13/lib/python3.13/site-packages/pip/_internal/distributions/sdist.py", line 56, in prepare_distribution_metadata
2025-03-02T08:54:48.7737050Z self._install_build_reqs(finder)
2025-03-02T08:54:48.7737280Z ~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^
2025-03-02T08:54:48.7737830Z File "/Library/Frameworks/Python.framework/Versions/3.13/lib/python3.13/site-packages/pip/_internal/distributions/sdist.py", line 126, in _install_build_reqs
2025-03-02T08:54:48.7738380Z build_reqs = self._get_build_requires_wheel()
2025-03-02T08:54:48.7739510Z File "/Library/Frameworks/Python.framework/Versions/3.13/lib/python3.13/site-packages/pip/_internal/distributions/sdist.py", line 103, in _get_build_requires_wheel
2025-03-02T08:54:48.7740100Z return backend.get_requires_for_build_wheel()
2025-03-02T08:54:48.7740360Z ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^
2025-03-02T08:54:48.7741870Z File "/Library/Frameworks/Python.framework/Versions/3.13/lib/python3.13/site-packages/pip/_internal/utils/misc.py", line 701, in get_requires_for_build_wheel
2025-03-02T08:54:48.7742480Z return super().get_requires_for_build_wheel(config_settings=cs)
2025-03-02T08:54:48.7742800Z ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^
2025-03-02T08:54:48.7743410Z File "/Library/Frameworks/Python.framework/Versions/3.13/lib/python3.13/site-packages/pip/_vendor/pyproject_hooks/_impl.py", line 166, in get_requires_for_build_wheel
2025-03-02T08:54:48.7744010Z return self._call_hook('get_requires_for_build_wheel', {
2025-03-02T08:54:48.7744300Z ~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-03-02T08:54:48.7744570Z 'config_settings': config_settings
2025-03-02T08:54:48.7744800Z ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-03-02T08:54:48.7745020Z })
2025-03-02T08:54:48.7745160Z ^^
2025-03-02T08:54:48.7745630Z File "/Library/Frameworks/Python.framework/Versions/3.13/lib/python3.13/site-packages/pip/_vendor/pyproject_hooks/_impl.py", line 321, in _call_hook
2025-03-02T08:54:48.7746180Z raise BackendUnavailable(data.get('traceback', ''))
2025-03-02T08:54:48.7746590Z pip._vendor.pyproject_hooks._impl.BackendUnavailable: Traceback (most recent call last):
2025-03-02T08:54:48.7748950Z File "/Library/Frameworks/Python.framework/Versions/3.13/lib/python3.13/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 77, in _build_backend
2025-03-02T08:54:48.7749600Z obj = import_module(mod_path)
2025-03-02T08:54:48.7750080Z File "/Library/Frameworks/Python.framework/Versions/3.13/lib/python3.13/importlib/__init__.py", line 88, in import_module
2025-03-02T08:54:48.7750630Z return _bootstrap._gcd_import(name[level:], package, level)
2025-03-02T08:54:48.7750970Z ~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-03-02T08:54:48.7751300Z File "<frozen importlib._bootstrap>", line 1387, in _gcd_import
2025-03-02T08:54:48.7751700Z File "<frozen importlib._bootstrap>", line 1360, in _find_and_load
2025-03-02T08:54:48.7752090Z File "<frozen importlib._bootstrap>", line 1324, in _find_and_load_unlocked
2025-03-02T08:54:48.7752440Z ModuleNotFoundError: No module named 'backend'
2025-03-02T08:54:48.7752610Z
2025-03-02T08:54:49.2236550Z ##[error]Process completed with exit code 2.
2025-03-02T08:54:49.2306180Z Cleaning up orphan processes
``` | closed | 2025-03-02T09:43:34Z | 2025-03-03T20:52:34Z | https://github.com/onnx/onnx/issues/6751 | [
"bug"
] | andife | 2 |
junyanz/pytorch-CycleGAN-and-pix2pix | pytorch | 1,175 | Batch size > 1 during testing | Sorry to bother you. I would like to set batch size > 1 during testing. The current version assumes batch size == 1. Could you tell me what to modify?
I have made some modifications to make it work, but there are still some problems. I trained a pix2pix model with batch normalization and batch size == 1. It works fine when I set batch size == 1 during testing (using --model pix2pix or test; both give fine results). But keeping the other settings the same, when I set batch size > 1 during testing, the results are not as good as with batch size == 1.
In Section 3.3 of the paper, it is mentioned: "This approach to batch normalization, when the batch size is set to 1, has been termed "instance normalization" and has been demonstrated to be effective at image generation tasks [54]."
So should I train a model with instance normalization and batch size > 1, say batch size == 4, and then test with the same batch size?
Could you give me some advice?
Thank you so much! | open | 2020-11-06T03:16:46Z | 2020-11-06T08:05:00Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1175 | [] | wangjialuu | 0 |
keras-team/keras | pytorch | 20,397 | Need to reenable coverage test on CI | I had to disable coverage testing here: https://github.com/keras-team/keras/blob/master/.github/workflows/actions.yml#L87-L101
because it was causing a failure on the torch backend CI. The failure was related to coverage file merging.
This does not appear to have been caused by a specific commit. None of our dependencies were updated around the time it started failing, as best I can tell. The cause is entirely unclear.
We need to debug it and then reenable coverage testing. | closed | 2024-10-23T03:37:42Z | 2024-10-23T14:47:30Z | https://github.com/keras-team/keras/issues/20397 | [] | fchollet | 0 |
mwaskom/seaborn | data-visualization | 3,428 | Seaborn 0.12.2 and NumPy 1.24.3 causing Python crash upon Ctrl+C (reproduces across multiple Windows PCs) | # Overview
Ok, here's a really puzzling and interesting bug:
With the latest Seaborn version, if the NumPy version is 1.24.3: **At any time after importing Seaborn into Python, pressing CTRL+C *reproducibly* causes the Python process to forcibly terminate.**
# Observed and Expected Behaviors
Below is a minimum reproducible example. Observed behavior:
```
Microsoft Windows [Version 10.0.22621.2070]
(c) Microsoft Corporation. All rights reserved.
C:\Users\peter>python
Python 3.11.4 | packaged by Anaconda, Inc. | (main, Jul 5 2023, 13:38:37) [MSC v.1916 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import seaborn
>>> (Note: user presses CTRL+C here)
forrtl: error (200): program aborting due to control-C event
Image PC Routine Line Source
libifcoremd.dll 00007FFE8EA9DF54 Unknown Unknown Unknown
KERNELBASE.dll 00007FFF6CF86E57 Unknown Unknown Unknown
KERNEL32.DLL 00007FFF6E9A26AD Unknown Unknown Unknown
ntdll.dll 00007FFF6F70AA68 Unknown Unknown Unknown
C:\Users\peter>
```
Expected behavior, as seen if only Seaborn's dependencies are imported:
```
C:\Users\peter>python
Python 3.11.4 | packaged by Anaconda, Inc. | (main, Jul 5 2023, 13:38:37) [MSC v.1916 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy, scipy, matplotlib
>>> (Note: user presses CTRL+C here)
KeyboardInterrupt
>>>
```
-----
This reproducibly occurs across multiple tested Windows 11 PCs if Seaborn 0.12.2 (latest) and NumPy 1.24.3 are both installed. It occurs both in the Python interpreter as well as IPython. The Python version seems not to matter (the bug reproduces on Python 3.9 and 3.11). NumPy 1.25.0 and higher do not result in a crash (on any of the tested machines). No other Seaborn versions have been tested. It is unknown if lower NumPy versions exhibit the same behavior with the latest Seaborn version. It appears to happen on Windows 11 PCs, but not on Ubuntu PCs (even with the same Seaborn/NumPy version).
While this bug might appear to be purely a NumPy bug, it's hard to believe that it is, since importing NumPy alone and then pressing CTRL+C doesn't cause a crash. The bug is due to something about how Seaborn and NumPy are interacting, but only on these specific versions of each package.
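One observation on the trace, offered as a hypothesis rather than a confirmed diagnosis: `libifcoremd.dll` is the Intel Fortran runtime, which ships with MKL-linked builds of the scientific stack and is known to install its own console Ctrl+C handler that aborts the process. A commonly reported mitigation is to disable that handler before the DLL loads:

```python
import os

# Must be set BEFORE the Fortran runtime DLL is loaded, i.e. before the
# numpy/seaborn imports. Treat this as a workaround to test, not a fix.
os.environ["FOR_DISABLE_CONSOLE_CTRL_HANDLER"] = "1"

# import numpy, seaborn  # only import after the variable is set
print(os.environ["FOR_DISABLE_CONSOLE_CTRL_HANDLER"])  # 1
```

If that makes Ctrl+C behave, it would also explain the version sensitivity: whichever import first pulls in an MKL/Fortran-linked extension installs the handler.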
# Version Info
- Microsoft Windows 11
- Python 3.11.4 (Anaconda distribution)
- Seaborn 0.12.2 (latest)
- NumPy 1.24.3
- SciPy 1.10.1
- Matplotlib 3.7.1 | closed | 2023-07-27T00:27:38Z | 2023-08-02T11:21:39Z | https://github.com/mwaskom/seaborn/issues/3428 | [] | peterdsharpe | 3 |
waditu/tushare | pandas | 923 | Make the earnings-forecast API's calling convention consistent with the other financial-data APIs | The required parameter for the earnings-forecast (forecast) API is currently ann_date or period (one of the two), but the required parameters of every other financial-data API are, or at least include, ts_code. Please change the earnings-forecast API's required parameters to be consistent with the other APIs: add ts_code as a required parameter, and allow passing only ts_code to return the forecasts for that code. Since ann_date cannot be known in advance and period cannot be pinpointed precisely, setting these two parameters effectively requires already knowing those two fields, which conflicts somewhat with how forecasts are actually published. | closed | 2019-02-14T14:16:19Z | 2019-02-17T13:39:56Z | https://github.com/waditu/tushare/issues/923 | [] | ProV1denCEX | 1 |
mars-project/mars | pandas | 3,235 | [BUG] Dataframe setitem bug | <!--
Thank you for your contribution!
Please review https://github.com/mars-project/mars/blob/master/CONTRIBUTING.rst before opening an issue.
-->
**Describe the bug**
Assigning a list of columns that mixes existing and new columns (`df[columns] = df2[columns]`) works in pandas but fails on a Mars DataFrame, as the repro below shows.
```python
import pandas as pd
import mars
import mars.dataframe as md
def calc_pandas():
print("calc pandas")
df = pd.DataFrame(
[list(range(5))] * 10,
columns=["cc" + str(i) for i in range(5)],
index=["i" + str(i) for i in range(10)],
)
df2 = df.apply(lambda x: x * 2, axis=1)
columns = ["dd" + str(i) for i in range(5)]
columns[1] = "cc1"
df2.columns = columns
df[columns] = df2[columns]
print(df)
def calc_mars():
print("calc mars")
mars.new_session()
data = pd.DataFrame(
[list(range(5))] * 10,
columns=["cc" + str(i) for i in range(5)],
index=["i" + str(i) for i in range(10)],
)
df = md.DataFrame(data, chunk_size=3)
df2 = df.apply(lambda x: x * 2, axis=1)
columns = ["dd" + str(i) for i in range(5)]
columns[1] = "cc1"
df2.columns = columns
df[columns] = df2[columns]
print(df.execute())
if __name__ == "__main__":
calc_pandas()
calc_mars()
```
calc_pandas() works fine, but calc_mars() fails.
```python
/home/admin/.pyenv/versions/3.8.13/bin/python home/admin/mars/t3.py
calc pandas
cc0 cc1 cc2 cc3 cc4 dd0 dd2 dd3 dd4
i0 0 2 2 3 4 0 4 6 8
i1 0 2 2 3 4 0 4 6 8
i2 0 2 2 3 4 0 4 6 8
i3 0 2 2 3 4 0 4 6 8
i4 0 2 2 3 4 0 4 6 8
i5 0 2 2 3 4 0 4 6 8
i6 0 2 2 3 4 0 4 6 8
i7 0 2 2 3 4 0 4 6 8
i8 0 2 2 3 4 0 4 6 8
i9 0 2 2 3 4 0 4 6 8
calc mars
Web service started at http://0.0.0.0:50685
100%|██████████| 100.0/100 [00:00<00:00, 257.55it/s]
Traceback (most recent call last):
File "home/admin/mars/t3.py", line 40, in <module>
calc_mars()
File "home/admin/mars/t3.py", line 35, in calc_mars
print(df.execute())
File "home/admin/mars/mars/core/entity/tileables.py", line 462, in execute
result = self.data.execute(session=session, **kw)
File "home/admin/mars/mars/core/entity/executable.py", line 144, in execute
return execute(self, session=session, **kw)
File "home/admin/mars/mars/deploy/oscar/session.py", line 1890, in execute
return session.execute(
File "home/admin/mars/mars/deploy/oscar/session.py", line 1684, in execute
execution_info: ExecutionInfo = fut.result(
File "/home/admin/.pyenv/versions/3.8.13/lib/python3.8/concurrent/futures/_base.py", line 444, in result
return self.__get_result()
File "/home/admin/.pyenv/versions/3.8.13/lib/python3.8/concurrent/futures/_base.py", line 389, in __get_result
raise self._exception
File "home/admin/mars/mars/deploy/oscar/session.py", line 1870, in _execute
await execution_info
File "home/admin/mars/mars/deploy/oscar/session.py", line 105, in wait
return await self._aio_task
File "home/admin/mars/mars/deploy/oscar/session.py", line 953, in _run_in_background
raise task_result.error.with_traceback(task_result.traceback)
File "home/admin/mars/mars/services/task/supervisor/processor.py", line 368, in run
async for stage_args in self._iter_stage_chunk_graph():
File "home/admin/mars/mars/services/task/supervisor/processor.py", line 158, in _iter_stage_chunk_graph
chunk_graph = await self._get_next_chunk_graph(chunk_graph_iter)
File "home/admin/mars/mars/services/task/supervisor/processor.py", line 149, in _get_next_chunk_graph
chunk_graph = await fut
File "home/admin/mars/mars/lib/aio/_threads.py", line 36, in to_thread
return await loop.run_in_executor(None, func_call)
File "/home/admin/.pyenv/versions/3.8.13/lib/python3.8/concurrent/futures/thread.py", line 57, in run
result = self.fn(*self.args, **self.kwargs)
File "home/admin/mars/mars/services/task/supervisor/processor.py", line 144, in next_chunk_graph
return next(chunk_graph_iter)
File "home/admin/mars/mars/services/task/supervisor/preprocessor.py", line 197, in tile
for chunk_graph in chunk_graph_builder.build():
File "home/admin/mars/mars/core/graph/builder/chunk.py", line 440, in build
yield from self._build()
File "home/admin/mars/mars/core/graph/builder/chunk.py", line 434, in _build
graph = next(tile_iterator)
File "home/admin/mars/mars/services/task/supervisor/preprocessor.py", line 74, in _iter_without_check
to_update_tileables = self._iter()
File "home/admin/mars/mars/core/graph/builder/chunk.py", line 317, in _iter
self._tile(
File "home/admin/mars/mars/core/graph/builder/chunk.py", line 211, in _tile
need_process = next(tile_handler)
File "home/admin/mars/mars/core/graph/builder/chunk.py", line 183, in _tile_handler
tiled_tileables = yield from handler.tile(tiled_tileables)
File "home/admin/mars/mars/core/entity/tileables.py", line 79, in tile
tiled_result = yield from tile_handler(op)
File "home/admin/mars/mars/dataframe/indexing/setitem.py", line 224, in tile
c.index[0], target_index_to_value[c.index[1]]
KeyError: 1
```
**To Reproduce**
To help us reproduce this bug, please provide the information below:
1. Your Python version: 3.8.13
2. The version of Mars you use: latest master
3. Versions of crucial packages (such as numpy, scipy and pandas): pandas==1.4.3
4. Full stack of the error: included above.
5. Minimized code to reproduce the error: included above.
| closed | 2022-08-29T09:05:20Z | 2022-09-05T04:52:00Z | https://github.com/mars-project/mars/issues/3235 | [
"type: bug"
] | fyrestone | 0 |
kizniche/Mycodo | automation | 1368 | Request to add DFRobot i2c Fermion: LWLP5000 Differential Pressure Sensor (±500pa) | To keep negative pressure in the grow tent, box, or room when using intake and exhaust fans in the setup.
Or filter clogging detection.
link to product: [https://www.dfrobot.com/product-2096.html](https://www.dfrobot.com/product-2096.html)
link to wiki page: [https://wiki.dfrobot.com/Differential_Pressure_Sensor_%C2%B1500pa_SKU_SEN0343](https://wiki.dfrobot.com/Differential_Pressure_Sensor_%C2%B1500pa_SKU_SEN0343)
link to datasheet: [https://dfimg.dfrobot.com/nobody/wiki/d1aa97db1d9b73780d3f8493be74ac54.pdf](https://dfimg.dfrobot.com/nobody/wiki/d1aa97db1d9b73780d3f8493be74ac54.pdf)
| open | 2024-03-02T18:47:49Z | 2024-03-02T18:47:49Z | https://github.com/kizniche/Mycodo/issues/1368 | [] | silverhawk1983 | 0 |
encode/databases | sqlalchemy | 208 | aiopg engine raises ResourceWarning in transactions | Steps to reproduce:
Python 3.7.7
```python
import asyncio

from databases import Database

url = "postgresql+aiopg://localhost:5432"


async def generate_series(db, *args):
    async with db.connection() as conn:
        async for row in conn.iterate(  # implicitly starts transaction
            f"select generate_series({', '.join(f'{a}' for a in args)}) as i"
        ):
            yield row["i"]


async def main():
    async with Database(url) as db:
        print([i async for i in generate_series(db, 1, 10)])
        async with db.transaction():
            print(await db.fetch_val("select 1"))


asyncio.run(main())
```
Result:
```
.../python3.7/site-packages/aiopg/connection.py:256: ResourceWarning: You can only have one cursor per connection. The cursor for connection will be closed forcibly <aiopg.connection::Connection isexecuting=False, closed=False, echo=False, cursor=<aiopg.cursor::Cursor name=None, closed=False>>. ' {!r}.').format(self), ResourceWarning)
[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
.../python3.7/site-packages/aiopg/connection.py:256: ResourceWarning: You can only have one cursor per connection. The cursor for connection will be closed forcibly <aiopg.connection::Connection isexecuting=False, closed=False, echo=False, cursor=<aiopg.cursor::Cursor name=None, closed=False>>. ' {!r}.').format(self), ResourceWarning)
1
``` | open | 2020-05-20T10:55:25Z | 2021-03-16T16:59:38Z | https://github.com/encode/databases/issues/208 | [] | nkoshell | 3 |
deezer/spleeter | deep-learning | 82 | [Bug] OMP: Warning #190 when executing on Docker | ## Description
Spleeter fails to separate music when executed via Docker.
## Step to reproduce
1. Build Docker image according to this page (https://github.com/deezer/spleeter/wiki/2.-Getting-started#using-docker-image)
2. Run Docker image
3. Got `OMP: Warning #190`
4. No file produced in output folder
## Output
```
INFO:spleeter:Downloading model archive https://github.com/deezer/spleeter/releases/download/v1.4.0/2stems.tar.gz
INFO:spleeter:Extracting downloaded 2stems archive
INFO:spleeter:2stems model file(s) extracted
OMP: Info #212: KMP_AFFINITY: decoding x2APIC ids.
OMP: Info #213: KMP_AFFINITY: x2APIC ids not unique - decoding legacy APIC ids.
OMP: Info #149: KMP_AFFINITY: Affinity capable, using global cpuid info
OMP: Info #154: KMP_AFFINITY: Initial OS proc set respected: 0,1
OMP: Info #156: KMP_AFFINITY: 2 available OS procs
OMP: Info #157: KMP_AFFINITY: Uniform topology
OMP: Info #159: KMP_AFFINITY: 2 packages x 1 cores/pkg x 1 threads/core (2 total cores)
OMP: Info #214: KMP_AFFINITY: OS proc to physical thread map:
OMP: Info #171: KMP_AFFINITY: OS proc 0 maps to package 0
OMP: Info #171: KMP_AFFINITY: OS proc 1 maps to package 1
OMP: Info #250: KMP_AFFINITY: pid 1 tid 1 thread 0 bound to OS proc set 0
OMP: Info #250: KMP_AFFINITY: pid 1 tid 47 thread 1 bound to OS proc set 1
INFO:spleeter:Loading audio b'/input/input.mp3' from 0.0 to 600.0
INFO:spleeter:Audio data loaded successfully
OMP: Info #250: KMP_AFFINITY: pid 1 tid 46 thread 2 bound to OS proc set 0
OMP: Info #250: KMP_AFFINITY: pid 1 tid 67 thread 3 bound to OS proc set 1
OMP: Warning #190: Forking a process while a parallel region is active is potentially unsafe.
OMP: Warning #190: Forking a process while a parallel region is active is potentially unsafe.
OMP: Warning #190: Forking a process while a parallel region is active is potentially unsafe.
OMP: Warning #190: Forking a process while a parallel region is active is potentially unsafe.
OMP: Warning #190: Forking a process while a parallel region is active is potentially unsafe.
OMP: Warning #190: Forking a process while a parallel region is active is potentially unsafe.
```
## Environment
|                   |       |
| ----------------- | ----- |
| OS                | MacOS |
| Hardware spec     | CPU   |
| closed | 2019-11-13T06:47:02Z | 2019-11-15T10:24:45Z | https://github.com/deezer/spleeter/issues/82 | [
"bug",
"help wanted",
"MacOS",
"docker"
] | jaus14 | 3 |
tensorly/tensorly | numpy | 479 | Error when importing tensorly (version 0.8.0) in Python 3.7 | In Python 3.7, importing TensorLy with `import tensorly as tl` produces the following error message:
```
from .generalised_inner_product import inner
  File "<fstring>", line 1
    (tensor1.shape=)
                   ^
SyntaxError: invalid syntax
```
The problem should be caused by the updates in the following lines
https://github.com/tensorly/tensorly/blob/8ff7e2950ca939c64641bfe0169e455f7cc59b56/tensorly/tenalg/core_tenalg/generalised_inner_product.py#L34
https://github.com/tensorly/tensorly/blob/8ff7e2950ca939c64641bfe0169e455f7cc59b56/tensorly/tenalg/core_tenalg/generalised_inner_product.py#L48
which use the `f"{expr=}"` syntax and therefore only work in Python 3.8 and later. Will TensorLy stop supporting Python 3.7 and 3.6?
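For reference, the Python 3.8-only construct can be rewritten without the `=` specifier. A minimal sketch of the general fix (using a stand-in variable, not TensorLy's actual code):

```python
tensor1_shape = (3, 4)  # hypothetical stand-in for tensor1.shape

# Python 3.8+ only: f"{expr=}" echoes the expression itself.
# msg = f"{tensor1_shape=}"

# Python 3.6/3.7-compatible equivalent: spell the label out by hand.
msg = f"tensor1_shape={tensor1_shape}"
print(msg)
```

The same rewrite applied to the two linked lines would restore Python 3.7 compatibility.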
| closed | 2023-01-17T17:44:30Z | 2023-01-24T00:54:02Z | https://github.com/tensorly/tensorly/issues/479 | [] | shuo-zhou | 3 |
tqdm/tqdm | pandas | 891 | Tqdm.Notebook does not show nested bars with get_fdata | - [ ] I have marked all applicable categories:
+ [ ] exception-raising bug
+ [x] visual output bug
+ [ ] documentation request (i.e. "X is missing from the documentation." If instead I want to ask "how to use X?" I understand [StackOverflow#tqdm] is more appropriate)
+ [ ] new feature request
- [x] I have visited the [source website], and in particular
read the [known issues]
- [x] I have searched through the [issue tracker] for duplicates
- [x] I have mentioned version numbers, operating system and
environment, where applicable:
```
4.42.1 3.6.9 (default, Nov 7 2019, 10:44:02)
[GCC 8.3.0] linux
```
[source website]: https://github.com/tqdm/tqdm/
[known issues]: https://github.com/tqdm/tqdm/#faq-and-known-issues
[issue tracker]: https://github.com/tqdm/tqdm/issues?q=
[StackOverflow#tqdm]: https://stackoverflow.com/questions/tagged/tqdm
To get rid of the following Deprecation Warning:
```
DeprecationWarning: get_data() is deprecated in favor of get_fdata(), which has a more predictable return type. To obtain get_data() behavior going forward, use numpy.asanyarray(img.dataobj).
```
I swapped all my get_data() to get_fdata().

However, as a result, I am not seeing any of the nested progress bars from my parallel processing.
With get_data():


With get_fdata():

| open | 2020-02-10T03:33:51Z | 2020-08-11T06:21:35Z | https://github.com/tqdm/tqdm/issues/891 | [
"invalid ⛔",
"question/docs ‽",
"need-feedback 📢"
] | NeoShin96 | 2 |
nicodv/kmodes | scikit-learn | 75 | KModes module cannot be found | Hello,
Even though I downloaded the whole package and installed it according to the instructions, I cannot use it. I want to use the K-prototypes algorithm, but I cannot even import k-modes in the first place. When I import k-modes like this:
`from kmodes.KModes import KModes`
I have this error:
> ModuleNotFoundError: No module named 'kmodes.KModes'
I even placed this package's .py files into the same folder as my workspace for my code. What could be the reason? How can I fix it?
| closed | 2018-06-01T22:47:25Z | 2025-03-14T23:42:23Z | https://github.com/nicodv/kmodes/issues/75 | [
"question"
] | bendiste | 8 |
plotly/dash | data-science | 3,199 | Update dcc.Markdown for react 18 compatibility | > @T4rk1n
> I just noticed that `dcc.Mardown` generates a warning in the console - which is weird because it's not a function component.
>Warning: v: Support for defaultProps will be removed from function components in a future major release. Use JavaScript default parameters instead. Error Component Stack
>
> ```
>
> from dash import Dash, dcc
>
> app=Dash()
> app.layout = dcc.Markdown("I have a console warning")
>
> if __name__ == "__main__":
> app.run(debug=True)
>
>
>
> ```
_Originally posted by @AnnMarieW in [#3148](https://github.com/plotly/dash/issues/3148#issuecomment-2686404238)_ | open | 2025-03-06T19:12:02Z | 2025-03-06T20:51:40Z | https://github.com/plotly/dash/issues/3199 | [
"feature",
"P1"
] | T4rk1n | 0 |
InstaPy/InstaPy | automation | 6,674 | i will try it | open | 2023-01-07T18:57:27Z | 2023-01-07T18:57:27Z | https://github.com/InstaPy/InstaPy/issues/6674 | [] | mohamedibrahimmorsi | 0 | |
pywinauto/pywinauto | automation | 835 | if absolute bug? | Is this a bug?
https://github.com/pywinauto/pywinauto/blob/69eedc759d9327c64305f95ba1b3f1593b0ebd14/pywinauto/controls/hwndwrapper.py#L1676-L1677
Should it be `if not absolute`? | closed | 2019-10-14T22:29:48Z | 2019-10-20T13:14:12Z | https://github.com/pywinauto/pywinauto/issues/835 | [
"duplicate"
] | Akababa | 3 |
erdewit/ib_insync | asyncio | 444 | How to get localSymbol from the API to qualify an Option Future contract? | closed | 2022-02-17T15:33:31Z | 2022-04-28T12:12:07Z | https://github.com/erdewit/ib_insync/issues/444 | [] | WesleySantosMaxtelll | 1 |
hankcs/HanLP | nlp | 1886 | Import error caused by phraseTree |
**Describe the bug**
In Python 3.9+, `cgi.escape` was removed in favor of `html.escape`. Newer versions of the nltk library have already made this change, but this project references an older phraseTree that has not, so calling the `pretty_print` method in a Python 3.9+ environment raises an error.
Could phraseTree be replaced with `nltk.tree` throughout to resolve this?
**Code to reproduce the issue**
```python
import hanlp
from hanlp_common.document import Document


def merge_pos_into_con(doc: Document):
    flat = isinstance(doc['pos'][0], str)  # fixed typo: was "isinstancse"
    if flat:
        doc = Document((k, [v]) for k, v in doc.items())
    for tree, tags in zip(doc['con'], doc['pos']):
        offset = 0
        for subtree in tree.subtrees(lambda t: t.height() == 2):
            tag = subtree.label()
            if tag == '_':
                subtree.set_label(tags[offset])
            offset += 1
    if flat:
        doc = doc.squeeze()
    return doc


con = hanlp.load('CTB9_CON_FULL_TAG_ELECTRA_SMALL')
tok = hanlp.load(hanlp.pretrained.tok.COARSE_ELECTRA_SMALL_ZH)
pos = hanlp.load(hanlp.pretrained.pos.CTB9_POS_ELECTRA_SMALL)
nlp = hanlp.pipeline().append(pos, input_key='tok', output_key='pos') \
    .append(con, input_key='tok', output_key='con')
doc = nlp(tok=["2021年", "HanLPv2.1", "带来", "最", "先进", "的", "多", "语种", "NLP", "技术", "。"])['con']
doc.pretty_print()
```
**System information**
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04):
- Python version:3.10
- HanLP version:2.1.0b56
**Other info / logs**

* - [x] I've completed this form and searched the web for solutions.
 | closed | 2024-03-22T08:57:55Z | 2024-03-25T01:37:36Z | https://github.com/hankcs/HanLP/issues/1886 | [
"bug"
] | oasis-0927 | 2 |
vitalik/django-ninja | pydantic | 490 | TestClient does not use data as query parameters in a get request | Why does TestClient use the `data` parameter only for POST requests? For a GET request, it is not added as query parameters.
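A possible workaround in the meantime (an assumption about the intent, shown with only the standard library: encode the data into the URL yourself, mimicking what Django's own test client does for GET requests):

```python
from urllib.parse import urlencode


def with_query(path: str, data: dict) -> str:
    """Append `data` to `path` as a hand-built query string."""
    return f"{path}?{urlencode(data)}" if data else path


url = with_query("/items", {"page": 2, "q": "abc"})
print(url)  # /items?page=2&q=abc
```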
| open | 2022-06-29T14:13:52Z | 2024-05-05T17:42:46Z | https://github.com/vitalik/django-ninja/issues/490 | [] | iforvard | 3 |
alteryx/featuretools | scikit-learn | 2,151 | Replace tmpdir fixtures with tmp_path fixtures | Pytest recommends using `tmp_path` over `tmpdir`:
https://docs.pytest.org/en/latest/how-to/tmp_path.html#the-tmpdir-and-tmpdir-factory-fixtures | closed | 2022-06-27T15:33:06Z | 2022-10-31T20:45:32Z | https://github.com/alteryx/featuretools/issues/2151 | [] | rwedge | 0 |
schemathesis/schemathesis | graphql | 2,576 | [FEATURE] Support examples as hypotheses | ### Is your feature request related to a problem? Please describe
I would like the ability to add request examples to hypotheses to ensure there are accurate/valid tests conducted. This would reduce the amount of effort manually writing tests.
### Describe the solution you'd like
An additional flag in the CLI to instruct schemathesis to include examples in specs i.e. `--include-spec-examples`
### Describe alternatives you've considered
I'm not aware of any workarounds to accomplish this, other than manually writing tests.
| closed | 2024-11-13T19:36:56Z | 2024-11-13T20:37:53Z | https://github.com/schemathesis/schemathesis/issues/2576 | [
"Status: Needs Triage",
"Type: Feature"
] | depopry | 3 |
huggingface/datasets | machine-learning | 7,087 | Unable to create dataset card for Lushootseed language | ### Feature request
While I was creating the dataset which contained all documents from the Lushootseed Wikipedia, the dataset card asked me to enter which language the dataset was in. Since Lushootseed is a critically endangered language, it was not available as one of the options. Is it possible to allow entering languages that aren't available in the options?
### Motivation
I'd like to add more information about my dataset in the dataset card, and the language is one of the most important pieces of information, since the entire dataset is primarily concerned collecting Lushootseed documents.
### Your contribution
I can submit a pull request | closed | 2024-08-04T14:27:04Z | 2024-08-06T06:59:23Z | https://github.com/huggingface/datasets/issues/7087 | [
"enhancement"
] | vaishnavsudarshan | 2 |
deezer/spleeter | tensorflow | 156 | [Discussion] What's the difference between Xstems and Xstems-finetune models ? | The GitHub release links to two versions of each stems model (2/4/5): Xstems and Xstems-finetune.
What's the difference between the two kinds of models, and how/why would you use one over the other?
| closed | 2019-12-01T19:36:00Z | 2019-12-02T13:20:05Z | https://github.com/deezer/spleeter/issues/156 | [
"question"
] | divideconcept | 1 |
strawberry-graphql/strawberry | asyncio | 3,699 | codegen doesn't properly handle postponed type annotations |
`strawberry codegen --schema schema.graphql` generates models that reference postponed types directly in their class-variable annotations instead of using strings so that evaluation is deferred.
## Describe the Bug
From schema, the following code has been generated. In this example, AccessCredentialFilter is referencing itself inside the model
```python
@strawberry.input
class AccessCredentialFilter:
    and_: list[AccessCredentialFilter | None] | None = strawberry.field(name="and")
```
However, it should rather be
```python
@strawberry.input
class AccessCredentialFilter:
    and_: list[Union["AccessCredentialFilter", None]] | None = strawberry.field(name="and")
```
or
```python
@strawberry.input
class AccessCredentialFilter:
    and_: list[Optional["AccessCredentialFilter"]] | None = strawberry.field(name="and")
```
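The underlying Python mechanism the generated code needs is a deferred (string) annotation. A minimal stdlib illustration of a self-referential class (plain dataclasses here, not Strawberry types):

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Filter:
    # The quoted name defers evaluation, so the class can reference itself.
    and_: Optional[List[Optional["Filter"]]] = None


f = Filter(and_=[Filter(), None])
print(len(f.and_))
```

Codegen emitting the quoted form (or a `from __future__ import annotations` at the top of the generated module) would make the self-reference valid at class-definition time.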
## System Information
- Operating system: Ubuntu 22.04
- Strawberry version (if applicable): 0.248.1
## Additional Context
It seems there was a related discussion: https://github.com/strawberry-graphql/strawberry/issues/769
but not sure if it was discussed in the context of codegen.
I'd be happy to contribute if I can be directed to the right part of the code. | open | 2024-11-14T22:19:42Z | 2025-03-20T15:56:56Z | https://github.com/strawberry-graphql/strawberry/issues/3699 | [
"bug"
] | jbkoh | 0 |
modelscope/data-juicer | streamlit | 398 | Heavy dependency of Data-Juicer | As the title says, Data-Juicer's dependency set is heavy: I have to install the entire environment even if I only want to use a single OP.
TODO: provide per-OP installation, for example `pip install .[OP_NAME]` | closed | 2024-08-22T07:45:51Z | 2024-09-25T06:18:36Z | https://github.com/modelscope/data-juicer/issues/398 | [
"enhancement"
] | BeachWang | 4 |
KevinMusgrave/pytorch-metric-learning | computer-vision | 104 | Is there any filtering functionalities on triplet sampling? | I'd like to ask a question about how to implement sampling with this package in the following situations.
Now I have the following data.
- samples with the label "A"
- samples with the label "B".
- samples with the label "C".
- samples with the label "not A".
"not A" means "B or C, or an unknown label other than A~C".
The sampling I want to implement is when a sample labeled "A" is an anchor, only the sample labeled "not A" is used for the negative.
In other words, I don't want to use samples labeled "not A" as anchor.
I want to use them only as negative.
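One way to express this constraint in plain Python, independent of the library's miner API (a sketch; all names here are made up for illustration):

```python
ANCHOR_LABELS = {"A", "B", "C"}  # samples labeled "not A" never anchor


def valid_triplets(labels):
    """Yield (anchor, positive, negative) index triplets.

    Anchors come only from the named classes; for an "A" anchor the
    negative must carry the "not A" label.
    """
    for a, la in enumerate(labels):
        if la not in ANCHOR_LABELS:
            continue
        for p, lp in enumerate(labels):
            if p == a or lp != la:
                continue
            for n, ln in enumerate(labels):
                ok = (ln == "not A") if la == "A" else (ln != la)
                if ok:
                    yield (a, p, n)


labels = ["A", "A", "B", "not A"]
triplets = list(valid_triplets(labels))
print(triplets)
```

A filter like this could then drive whichever miner or loss is used.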
In this case, how should I implement it using pytorch-metric-learning? | closed | 2020-05-17T09:16:40Z | 2020-06-20T05:28:34Z | https://github.com/KevinMusgrave/pytorch-metric-learning/issues/104 | [
"question"
] | knao124 | 2 |
aimhubio/aim | data-visualization | 3,143 | Using Aim package on FIPS compatible machine results in Error. | ## 🐛 Bug
Running Aim with any script on a FIPS-enabled server results in errors like the ones below, making it unusable.
```
TypeError: 'digest_size' is an invalid keyword argument for openssl_blake2b()
TypeError: 'digest_size' is an invalid keyword argument for openssl_blake2b()
Exception ignored in: 'aim.storage.hashing.hashing.hash_object'
Traceback (most recent call last):
File "/usr/local/lib64/python3.11/site-packages/aim/storage/context.py", line 40, in _calc_hash
return hash_auto(self._context)
^^^^^^^^^^^^^^^^^^^^^^^^
```
### To reproduce
Install Aim on a FIPS-enabled machine and run any script with it; the error seems to be 100% reproducible on our end.
### Expected behavior
AIM hash function to generate hash without any error.
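A possible direction for a fix (my assumption, not Aim's actual implementation): fall back to a FIPS-approved digest whenever the OpenSSL-backed `blake2b` rejects `digest_size`:

```python
import hashlib


def digest64(data: bytes) -> bytes:
    """Return a stable 8-byte digest, tolerating FIPS builds.

    On FIPS-enabled interpreters hashlib.blake2b is backed by
    openssl_blake2b, which raises TypeError for digest_size; fall
    back to a truncated SHA-256 (FIPS-approved) in that case.
    """
    try:
        return hashlib.blake2b(data, digest_size=8).digest()
    except (TypeError, ValueError):
        return hashlib.sha256(data).digest()[:8]


d = digest64(b"aim")
print(len(d))
```

Note the trade-off: hashes produced on FIPS and non-FIPS machines would differ, so such a fallback would have to be applied consistently per repository.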
### Environment
- Aim Version (e.g., 3.0.1) - 3.19.3
- Python version - 3.11
- pip version
- OS (e.g., Linux) - Linux
- Any other relevant information
### Additional context
The problem seems to be stemming from python library `hashlib`, on a FIPS enabled server the `_hashlib.get_fips_mode()` returns `1`
```
python3.11
>>> import _hashlib
>>> _hashlib.get_fips_mode()
1
```
And API call like this fails.
```
>>> import hashlib
>>> hashlib.blake2b(digest_size=256 //8)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: 'digest_size' is an invalid keyword argument for openssl_blake2b()
```
While on other server it works,
```
python3.11
>>> import _hashlib
>>> _hashlib.get_fips_mode()
0
```
```
>>> import hashlib
>>> hashlib.blake2b(digest_size=256 //8)
<_blake2.blake2b object at 0x7fe3f5d0cc70>
``` | open | 2024-05-06T13:55:19Z | 2024-09-11T11:29:42Z | https://github.com/aimhubio/aim/issues/3143 | [
"type / bug",
"help wanted"
] | dushyantbehl | 3 |
dpgaspar/Flask-AppBuilder | flask | 1367 | name column resizing in ab_view_menu table | While working with apache-airflow with RBAC enabled, all the Flask-UI-related tables are added to the metadata. The system then tries to insert rows into the ab_view_menu table for every available DAG. A dag_id can be up to 250 chars, but the corresponding name column in ab_view_menu is only 100 chars, so the value gets truncated. Inserting another name that truncates to the same value then raises an IntegrityError, and eventually the database goes down.
So, to fix this, the name column needs to stay in sync with the dag_id length; for that, I need to change the column size from 100 to 250. | closed | 2020-05-05T19:33:37Z | 2020-05-21T09:57:10Z | https://github.com/dpgaspar/Flask-AppBuilder/issues/1367 | [] | guptakumartanuj | 1 |
tflearn/tflearn | data-science | 246 | Architecture Err when extending Titanic tutorial | I completed the Quickstart Titanic tutorial successfully and then experimented further.
I am predicting a float target from 8 float inputs; after modifying parts of the tutorial I get:
'ValueError: Cannot feed value of shape (64,) for Tensor u'TargetsData/Y:0', which has shape '(?, 1)''
# Build neural network
net = tflearn.input_data(shape=[None, 8])
net = tflearn.fully_connected(net, 32)
net = tflearn.fully_connected(net, 32)
net = tflearn.fully_connected(net, 1, activation='relu')
net = tflearn.regression(net)
# Define model
model = tflearn.DNN(net, tensorboard_verbose=0)
# Start training (apply gradient descent algorithm)
model.fit(data, mfe, n_epoch=100)  # Err occurs
Could somebody kindly help me:
1. What do shape `(64,)` and shape `(?, 1)` stand for?
2. How can I fix this architecture error?
3. Could you recommend some materials for learning neural network architecture?
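Regarding questions 1 and 2, a likely explanation (an assumption based on the error message alone): the target array `mfe` is 1-D with batch size 64, i.e. shape `(64,)`, while the output layer `fully_connected(net, 1)` expects one column per sample, i.e. shape `(?, 1)`. Reshaping each target into a one-element row fixes the feed; `numpy.reshape(mfe, (-1, 1))` does the same thing as this stdlib sketch:

```python
def to_column(targets):
    # (n,) -> (n, 1): wrap every scalar target in its own row,
    # matching a fully_connected(net, 1) output layer.
    return [[t] for t in targets]


mfe = [0.1, 0.7, 0.3]  # hypothetical 1-D targets, shape (3,)
col = to_column(mfe)   # shape (3, 1)
print(col)
```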
Thanks & Regards,
Simon
| open | 2016-07-30T02:14:52Z | 2016-11-27T11:54:07Z | https://github.com/tflearn/tflearn/issues/246 | [] | forhonourlx | 5 |
biolab/orange3 | pandas | 6,860 | Path processing in the Formula widget | **What's your use case?**
This would be most useful for _Spectroscopy_ and similar workflows.
Files always come with a path that carries information about how they are organized. In `Formula` it is possible to do plenty of math but not much text processing (except for the obvious string operations). However, it would be awesome to be able to extract the path and the filename using the `os` python module for example.
**What's your proposed solution?**
Make path handling functions available in `Formula`.
**Are there any alternative solutions?**
One can use string functions but that can become a bit awkward.
| open | 2024-07-23T08:00:43Z | 2024-07-23T08:00:43Z | https://github.com/biolab/orange3/issues/6860 | [] | borondics | 0 |
minimaxir/textgenrnn | tensorflow | 1 | Python 2 UnicodeDecode Error | Most of Python 2 works with the script, but need a noncomplicated way of handling UnicodeDecodeErrors when a Unicode character is chosen as the `next_char`. | closed | 2017-08-07T02:31:32Z | 2018-04-19T06:38:38Z | https://github.com/minimaxir/textgenrnn/issues/1 | [
"bug"
] | minimaxir | 1 |
ymcui/Chinese-LLaMA-Alpaca-2 | nlp | 231 | How can I increase the response length of chinese-llama2-Alpaca-2-16K (7B)? | ### Pre-submission checklist
- [X] Please make sure you are using the latest code from the repository (git pull); some problems have already been resolved and fixed.
- [X] I have read the [project documentation](https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/wiki) and the [FAQ section](https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/wiki/常见问题), and I have searched the existing Issues without finding a similar problem or solution.
- [X] Third-party plugin problems: e.g. [llama.cpp](https://github.com/ggerganov/llama.cpp), [LangChain](https://github.com/hwchase17/langchain), [text-generation-webui](https://github.com/oobabooga/text-generation-webui), etc.; it is also recommended to look for solutions in the corresponding projects.
### Issue type
Model output quality
### Base model
Chinese-Alpaca-2-16K (7B/13B)
### Operating system
Linux
### Detailed description
Compared with ChatGLM, chinese-llama2's answers are too brief; can this be fixed? Also, could the official authors publish the template used for the training data, so that users can build question templates in the same format? (The upper screenshot is chinese-llama2 and the lower one is chatglm2-6b, with the same local knowledge base and embedding model in both cases.)


### Dependencies (required for code-related issues)
_No response_
### Run logs or screenshots
```
# Paste your run log here (inside this code block)
``` | closed | 2023-09-04T09:51:46Z | 2023-09-22T22:04:37Z | https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/issues/231 | [
"stale"
] | dkw-wkd | 24 |
ageitgey/face_recognition | python | 674 | [WinError 3] The system cannot find the path specified: 'knn_examples/train' | I created the train directory but it gives out an error: [WinError 3] The system cannot find the path specified: 'knn_examples/train'
| closed | 2018-11-17T02:36:39Z | 2018-11-17T03:08:45Z | https://github.com/ageitgey/face_recognition/issues/674 | [] | nhangox22 | 0 |
explosion/spaCy | deep-learning | 13625 | Cannot install spaCy 3.8 in python3.8 environment |
## How to reproduce the behaviour
On a clean environment with Python 3.8 and pip, run `pip install spacy==3.8`.
## Your Environment
* Operating System: windows
* Python Version Used: 3.8
* spaCy Version Used: 3.8
* Environment Information: clean environment with python 3.8 and pip

| open | 2024-09-12T23:06:39Z | 2024-10-25T04:20:08Z | https://github.com/explosion/spaCy/issues/13625 | [
"deprecated",
"resolved"
] | jianlins | 6 |
lepture/authlib | django | 377 | httpx 0.18.x upgrade breaks addition of "Authorization" header | **Describe the bug**
An application was upgraded to httpx 0.18.2 and we discovered that the "Authorization" header was no longer being included in the request.
**Error Stacks**
403 Errors from downstream services.
Trace Logging revealed that the Authorization header was missing.
Using httpx 0.18.2
```
TRACE [2021-08-20 15:40:12] httpcore._async.connection - create_connection socket=<httpcore._backends.anyio.SocketStream object at 0x108c02250> http_version='HTTP/1.1'
TRACE [2021-08-20 15:40:12] httpcore._async.connection - connection.handle_async_request method=b'GET' url=(b'https', b'textcat-doc-classifier-v1-0-0-tst.kfserving.ms-stg.ai.locusdev.net', None, b'/v1/models') headers=[(b'Host', b'textcat-doc-classifier-v1-0-0-tst.kfserving.ms-stg.ai.locusdev.net'), (b'Accept', b'*/*'), (b'Accept-Encoding', b'gzip, deflate'), (b'Connection', b'keep-alive'), (b'User-Agent', b'python-httpx/0.18.2')]
TRACE [2021-08-20 15:40:12] httpcore._async.http11 - send_request method=b'GET' url=(b'https', b'textcat-doc-classifier-v1-0-0-tst.kfserving.ms-stg.ai.locusdev.net', None, b'/v1/models') headers=[(b'Host', b'textcat-doc-classifier-v1-0-0-tst.kfserving.ms-stg.ai.locusdev.net'), (b'Accept', b'*/*'), (b'Accept-Encoding', b'gzip, deflate'), (b'Connection', b'keep-alive'), (b'User-Agent', b'python-httpx/0.18.2')]
DEBUG [2021-08-20 15:40:12] httpx._client - HTTP Request: GET https://textcat-doc-classifier-v0-0-0-tst.kfserving.ms-stg.ai.locusdev.net/v1/models "HTTP/1.1 403 Forbidden"
```
Using httpx 0.17.1
```
TRACE [2021-08-20 15:54:10] httpcore._async.connection - create_connection socket=<httpcore._backends.asyncio.SocketStream object at 0x1083243a0> http_version='HTTP/1.1'
TRACE [2021-08-20 15:54:10] httpcore._async.connection - connection.arequest method=b'GET' url=(b'https', b'textcat-doc-classifier-v0-0-0-tst.kfserving.ms-stg.ai.locusdev.net', None, b'/v1/models') headers=[(b'Host', b'textcat-doc-classifier-v0-0-0-tst.kfserving.ms-stg.ai.locusdev.net'), (b'Accept', b'*/*'), (b'Accept-Encoding', b'gzip, deflate'), (b'Connection', b'keep-alive'), (b'User-Agent', b'python-httpx/0.17.1'), (b'Authorization', b'Bearer <redacted>')]
TRACE [2021-08-20 15:54:10] httpcore._async.http11 - send_request method=b'GET' url=(b'https', b'textcat-doc-classifier-v0-0-0-tst.kfserving.ms-stg.ai.locusdev.net', None, b'/v1/models') headers=[(b'Host', b'textcat-doc-classifier-v0-0-0-tst.kfserving.ms-stg.ai.locusdev.net'), (b'Accept', b'*/*'), (b'Accept-Encoding', b'gzip, deflate'), (b'Connection', b'keep-alive'), (b'User-Agent', b'python-httpx/0.17.1'), (b'Authorization', b'Bearer <redacted>')]
DEBUG [2021-08-20 15:54:10] httpx._client - HTTP Request: GET https://cies-ner-pt-info-v0-5-0-dev.kfserving.ms-stg.ai.locusdev.net/v1/models "HTTP/1.1 200 OK"
```
**To Reproduce**
We use auth0 for our auth provider so have a custom JWT that sets the 'audience' parameter.
```python
from typing import Dict, Tuple  # added: Dict/Tuple were referenced but not imported

from authlib.common.urls import add_params_to_qs
from authlib.integrations.httpx_client import AsyncOAuth2Client
from authlib.oauth2 import ClientAuth
from authlib.oauth2.rfc6750.parameters import add_to_headers
from app.settings import (
    DEFAULT_TIMEOUT,
    OIDC_AUDIENCE,
    OIDC_CLIENT_ID,
    OIDC_CLIENT_SECRET,
    OIDC_HOST,
)


class JWTWithAudience:
    name: str = "client_secret_post_with_audience"

    def __init__(self, audience: str):
        self.audience = audience

    def __call__(
        self,
        auth: ClientAuth,
        method: str,
        uri: str,
        headers: Dict[str, str],
        body: str,
    ) -> Tuple[str, Dict[str, str], str]:
        return self.encode_client_secret_post_audience(auth, method, uri, headers, body)

    def encode_client_secret_post_audience(
        self,
        auth: ClientAuth,
        method: str,
        uri: str,
        headers: Dict[str, str],
        body: str,
    ) -> Tuple[str, Dict[str, str], str]:
        body = add_params_to_qs(
            body or "",
            [
                ("client_id", auth.client_id),
                ("client_secret", auth.client_secret or ""),
                ("audience", self.audience or ""),
            ],
        )
        if "Content-Length" in headers:
            headers["Content-Length"] = str(len(body))
        return uri, headers, body


async def setup_client() -> AsyncOAuth2Client:
    token_endpoint = OIDC_HOST + "/oauth/token"
    client = AsyncOAuth2Client(
        OIDC_CLIENT_ID,
        OIDC_CLIENT_SECRET,
        token_endpoint_auth_method=JWTWithAudience.name,
        audience=OIDC_AUDIENCE,
        grant_type="client_credentials",
        token_endpoint=token_endpoint,
        timeout=DEFAULT_TIMEOUT,
    )
    client.register_client_auth_method(JWTWithAudience(client.metadata["audience"]))
    await client.fetch_token()
    return client
```
Prior to our httpx 0.18 upgrade, the code above produced a functional OAuth2 client.
As a workaround, we are able to fix it with:
```python
async def setup_client() -> AsyncOAuth2Client:
token_endpoint = OIDC_HOST + "/oauth/token"
client = AsyncOAuth2Client(
OIDC_CLIENT_ID,
OIDC_CLIENT_SECRET,
token_endpoint_auth_method=JWTWithAudience.name,
audience=OIDC_AUDIENCE,
grant_type="client_credentials",
token_endpoint=token_endpoint,
timeout=DEFAULT_TIMEOUT,
)
client.register_client_auth_method(JWTWithAudience(client.metadata["audience"]))
token = await client.fetch_token()
if "Authorization" not in client.headers:
# Ensure access token is added to headers to avoid 403 errors
access_token = token["access_token"]
headers = add_to_headers(access_token)
client.headers.update(headers)
return client
```
**Expected behavior**
The OAuth2Client should handle adding this header as it did previously.
**Environment:**
- OS: Linux/Mac
- Python Version: 3.8.9
- Authlib Version: 0.5.4
**Additional context**
| closed | 2021-08-20T21:27:19Z | 2021-10-18T12:17:17Z | https://github.com/lepture/authlib/issues/377 | [
"bug"
] | timothyjlaurent | 4 |
betodealmeida/shillelagh | sqlalchemy | 212 | asyncio-compatible dialect | SQLAlchemy 1.4 now supports Python asyncio. In order to take advantage of this, the [dialect must be "asyncio-compatible"](https://docs.sqlalchemy.org/en/14/orm/extensions/asyncio.html#asynchronous-i-o-asyncio). It would be great to have a version of the base `Dialect` that can be used for this.
A discussion for what that entails: https://github.com/sqlalchemy/sqlalchemy/discussions/7854.
Currently the `APSWDialect` class subclasses `SQLiteDialect`, which is not async. There is a `SQLiteDialect_aiosqlite` that could potentially be used. The goal here is not necessarily to have async operations vis a vis sqlite but rather to allow async operations when connecting to the APIs.
This might be tricky as you would want this to be an async iterator:
https://github.com/betodealmeida/shillelagh/blob/97197bd564e96a23c5587be5c9e315f7c0e693ea/src/shillelagh/backends/apsw/db.py#L221
and then likewise have the `get_rows` call be an async iterator:
https://github.com/betodealmeida/shillelagh/blob/97197bd564e96a23c5587be5c9e315f7c0e693ea/src/shillelagh/backends/apsw/vt.py#L439-L441
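For illustration, a minimal sketch (not from the shillelagh codebase; names are my own) of bridging a blocking row iterator into an async iterator, so a network-bound `get_rows` could be consumed without stalling the event loop:

```python
import asyncio
from typing import Any, AsyncIterator, Iterator, Tuple

_SENTINEL = object()

async def aiter_rows(rows: Iterator[Tuple[Any, ...]]) -> AsyncIterator[Tuple[Any, ...]]:
    """Pull rows from a blocking iterator in a worker thread,
    yielding them asynchronously to the event loop."""
    while True:
        row = await asyncio.to_thread(next, rows, _SENTINEL)
        if row is _SENTINEL:
            return
        yield row

async def demo() -> None:
    rows = iter([(1, "a"), (2, "b")])
    print([row async for row in aiter_rows(rows)])  # [(1, 'a'), (2, 'b')]

asyncio.run(demo())
```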
I filed this upstream though I am not 100% sure this is the right way to solve the ask: https://github.com/rogerbinns/apsw/issues/325 | open | 2022-03-24T20:47:58Z | 2023-10-27T15:46:24Z | https://github.com/betodealmeida/shillelagh/issues/212 | [
"enhancement"
] | cancan101 | 2 |
tableau/server-client-python | rest-api | 1,050 | Type hint issues | **Describe the bug**
Recently tried to bump tableauserverclient from version 0.16.0 to 0.18.0
**Versions**
Details of your environment, including:
- Tableau Server version : Tableau Online
- Python version: 3.9
- TSC library version: 0.18.0
- mypy version: mypy 0.942
**To Reproduce**
Create this simplistic file:
```python
#!/usr/bin/env/python
import os
import tableauserverclient as TSC
tableau_auth = TSC.TableauAuth(os.environ["USERNAME"], os.environ["PASSWORD"], site_id=os.environ["SITE_ID"])
server = TSC.Server("https://dub01.online.tableau.com")
my_group = TSC.GroupItem(os.environ["GROUP_ITEM"])
```
Run `mypy test_script.py --ignore-missing-imports --strict`
**Results**
```
mypy test_script.py --ignore-missing-imports --strict
test_script.py:8: error: Module has no attribute "TableauAuth"
test_script.py:9: error: Module has no attribute "Server"
test_script.py:10: error: Module has no attribute "GroupItem"
```
=> this is simplistic and mypy complains
I suspect the typed.py file is not in the right place or the imports are not being set correctly in the `__init__.py` file.
Maybe https://github.com/tableau/server-client-python/issues/991 was not implemented correctly. | closed | 2022-05-31T10:56:14Z | 2024-09-19T21:41:02Z | https://github.com/tableau/server-client-python/issues/1050 | [
"bug",
"help wanted"
] | pierresouchay | 6 |
Esri/arcgis-python-api | jupyter | 1,359 | Job failed when publishing JSON file | I have a JSON file that I want to publish from a notebook (conda) used on maps.arcgis.com. As an example and for testing, I replaced the JSON file directly with a dictionary but I have the same problem with a file. It looks like the problem doesn't come from the data, as I have tried many different ones.
This is my code:
```
from arcgis.gis import GIS
from IPython.display import display
gis = GIS(username="XX", password="XX")
data = { "type": "FeatureCollection", "features": [ { "type": "Feature", "geometry": { "type": "Point", "coordinates": [20, 10] }, "properties": { "name": "null island" } } ] }
item_properties_dict = {"type": "GeoJson",
'title': 'Test',
'tags': 'Test',
'snippet': 'Test',
"text":data}
item = gis.content.add(item_properties = item_properties_dict)
item.publish()
```
When I execute this code, I have the following error:
```
---> 43 item.publish()
File /opt/conda/lib/python3.9/site-packages/arcgis/gis/__init__.py:13921, in Item.publish(self, publish_parameters, address_fields, output_type, overwrite, file_type, build_initial_cache, item_id, geocode_service)
13919 return Item(self._gis, ret[0]["serviceItemId"])
13920 else:
> 13921 serviceitem_id = self._check_publish_status(ret, folder)
13922 return Item(self._gis, serviceitem_id)
File /opt/conda/lib/python3.9/site-packages/arcgis/gis/__init__.py:14273, in Item._check_publish_status(self, ret, folder)
14271 # print(str(job_response))
14272 if job_response.get("status") in ("esriJobFailed", "failed"):
> 14273 raise Exception("Job failed.")
14274 elif job_response.get("status") == "esriJobCancelled":
14275 raise Exception("Job cancelled.")
Exception: Job failed.
```
Do you have any idea where the problem might be coming from or any solutions to solve it? | closed | 2022-10-12T05:27:48Z | 2022-10-24T10:48:13Z | https://github.com/Esri/arcgis-python-api/issues/1359 | [
"help wanted"
] | jrmyyy | 5 |
ccxt/ccxt | api | 25,219 | XT.com - ccxt has no function for alter stop limit (/future/trade/v1/entrust/update-profit-stop) | ### Operating System
Debian 12
### Programming Languages
Python
### CCXT Version
4.4.50
### Description
Hi folks,
does anyone know how to use the defined API “/future/trade/v1/entrust/update-profit-stop” in xt.py?
Unfortunately I have not found a way to update a set stop loss or take profit with ccxt.
Thanks for any advice.
### Code
```
'post': {
'future/trade/v1/entrust/cancel-all-plan': 1,
'future/trade/v1/entrust/cancel-all-profit-stop': 1,
'future/trade/v1/entrust/cancel-plan': 1,
'future/trade/v1/entrust/cancel-profit-stop': 1,
'future/trade/v1/entrust/create-plan': 1,
'future/trade/v1/entrust/create-profit': 1,
'future/trade/v1/entrust/update-profit-stop': 1,
'future/trade/v1/order/cancel': 1,
'future/trade/v1/order/cancel-all': 1,
'future/trade/v1/order/create': 1,
'future/trade/v1/order/create-batch': 1,
```
| closed | 2025-02-07T13:44:26Z | 2025-02-09T09:23:18Z | https://github.com/ccxt/ccxt/issues/25219 | [] | AltePfeife | 2 |
sunscrapers/djoser | rest-api | 248 | Guidance needed - user activation - how to POST | I am using DRF and Djoser for Authentication and User Registration. When a new user registers, Djoser sends an activation email with a link that does a GET request. In order to activate, I need to extract uid and token from the activation URL and make a POST request for Djoser to be able to activate the user.
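One hedged sketch of that flow (the activation endpoint path and field names here are assumptions based on a typical Djoser setup, not taken from this issue): handle the emailed GET link in a plain Django view, then build and send the POST yourself:

```python
import json
from urllib import request

# Assumed Djoser activation endpoint; adjust to your URL configuration.
ACTIVATION_URL = "http://localhost:8000/auth/users/activate/"

def build_activation_request(uid: str, token: str) -> request.Request:
    """Build the POST request Djoser expects to activate an account."""
    payload = json.dumps({"uid": uid, "token": token}).encode()
    return request.Request(
        ACTIVATION_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Inside the Django view handling the GET link you would do roughly:
#   req = build_activation_request(uid, token)
#   with request.urlopen(req) as resp:
#       ...  # Djoser returns 204 on success
req = build_activation_request("MQ", "abc-123")
print(req.method, req.full_url)
```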
My environment is Python 3 and Django 1.11, Djoser 1.0.1.
Any help on how to handle this in Django / Djoser?
What I would like to do is to handle the get request in Django, extract uid and token and then make a POST request. I have extracted uid and token and would like to make a POST (within this GET request). I do not know how to make this POST request in the background. | open | 2017-11-07T11:58:07Z | 2022-02-16T12:43:36Z | https://github.com/sunscrapers/djoser/issues/248 | [] | leferaycloud | 28 |
flasgger/flasgger | flask | 603 | Query parameter not getting picked up in `parsed_data` after 0.9.5 | I'm using OpenAPI 3 documentation like this for `GET` requests with `parse=True`:
```
parameters:
- name: feature
required: false
in: query
schema:
type: string
explode: true
```
I've noticed since the upgrade to 0.9.7 that `request.parsed_data` only contains `{'path': {}}`, whereas previously it contained also `args`.
Digging through the code I found that in 0.9.5, parsers were constructed here based on the `parameters` in the openapi docs:
https://github.com/flasgger/flasgger/blob/0.9.5/flasgger/base.py#L667
In the current version of the code, this has been moved to the method `update_schemas_parsers`, which is fine, but here it's only executed if the openapi version is 2:
https://github.com/flasgger/flasgger/blob/3c16b776f4848813209f2704b18cba81762ac030/flasgger/base.py#L789
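For context, a simplified sketch (hypothetical, not flasgger's actual implementation) of building parameter parsers for both spec versions — in OpenAPI 3 the type sits under `schema`, while in version 2 it is inline:

```python
def build_param_parsers(doc: dict, openapi_version: str) -> dict:
    """Collect {location: {name: type}} from a spec's `parameters` list,
    regardless of whether the spec is OpenAPI 2 or 3."""
    parsers: dict = {}
    for param in doc.get("parameters", []):
        if openapi_version.startswith("3"):
            ptype = param.get("schema", {}).get("type", "string")
        else:
            ptype = param.get("type", "string")
        parsers.setdefault(param.get("in", "query"), {})[param["name"]] = ptype
    return parsers

doc = {"parameters": [{"name": "feature", "in": "query",
                       "schema": {"type": "string"}}]}
print(build_param_parsers(doc, "3.0.2"))  # {'query': {'feature': 'string'}}
```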
I'm not sure if I'm misunderstanding something here, but shouldn't parsers for `parameters` be constructed for both openapi version 2 and 3?
If I'm reading the code correctly, then for openapi version 3 only parsers for the body would get constructed? | open | 2024-01-04T10:49:32Z | 2024-01-04T10:49:32Z | https://github.com/flasgger/flasgger/issues/603 | [] | totycro | 0 |
mwaskom/seaborn | data-visualization | 3,706 | UNITS of colorbar in 2D KDE plot? | Can someone please tell me what would be the unit of the colorbar on the 2D KDE plot? The documentation says that it is the normalized density. Normalized by what? And what would be the unit in that case? | closed | 2024-06-03T14:07:53Z | 2024-06-06T07:55:06Z | https://github.com/mwaskom/seaborn/issues/3706 | [] | ven1996 | 4 |
gee-community/geemap | jupyter | 448 | Impossible to load 1 band from a multi-band raster using add_raster method | ### Environment Information
- geemap version: 0.8.15
- Python version: 3.6.9
- Operating System: Linux Debian
### Description
When using a multi-band raster where bands 1, 2 and 3 do not represent satellite bands but grid information (in my case the first band is the magnitude of change in the pixel during a monitoring period and the second one is the day of the break in Julian days),
what I'm doing is using the `add_raster` method as follows:
```python
Map.add_raster( image_path, bands=1, layer_name='magnitude', colormap='viridis')
```
But my map is always displayed in RGB flavour.
Looking at the code I found this `if` statement https://github.com/giswqs/geemap/blob/681d757d29f278a370882646a3ab5f7f0ce5bc28/geemap/geemap.py#L3201 :
```python
multi_band = False
if len(da.band) > 1:
multi_band = True
if bands is None:
bands = [3, 2, 1]
else:
bands = 1
if multi_band:
da = da.rio.write_nodata(0)
else:
da = da.rio.write_nodata(np.nan)
da = da.sel(band=bands)
```
So in my case as the file is multiband the `multi_band` bool will end up as `True`. Problem is that in the call to xarrayleaflet, the `rgb_dim="band"`will be used instead of `colormap`
| closed | 2021-04-29T12:14:50Z | 2021-04-29T12:37:49Z | https://github.com/gee-community/geemap/issues/448 | [
"bug"
] | 12rambau | 1 |
yunjey/pytorch-tutorial | pytorch | 24 | beam search support | Hi, is there a code version with beam search? Currently the LSTM uses greedy output. | closed | 2017-04-21T05:53:12Z | 2017-04-21T09:43:16Z | https://github.com/yunjey/pytorch-tutorial/issues/24 | [] | sujayr91 | 1 |
jupyterhub/repo2docker | jupyter | 521 | Determine python version support | Split off from #520. What versions of python do we support for running repo2docker? Note that this is different from what version of python repositories being built with repo2docker can request. | open | 2018-12-18T18:21:55Z | 2019-05-21T07:55:30Z | https://github.com/jupyterhub/repo2docker/issues/521 | [
"needs: discussion"
] | yuvipanda | 3 |
matterport/Mask_RCNN | tensorflow | 2,966 | Multi Class - Imbalance Dataset | I am working on an instance segmentation task with the SFPI dataset, which has imbalanced data.
These are the losses I arrived at:
loss: 0.5265
mrcnn_bbox_loss: 0.09073
mrcnn_class_loss: 0.1468
mrcnn_mask_loss: 0.2024
rpn_bbox_loss: 0.07144
rpn_class_loss: 0.01511
val_loss: 0.5339
val_mrcnn_bbox_loss: 0.09229
val_mrcnn_class_loss: 0.1499
val_mrcnn_mask_loss: 0.2025
val_rpn_bbox_loss: 0.07311
val_rpn_class_loss: 0.0161
When visualized, the results are very poor. mAP is also 0.
Can anyone help?
PS: I am using tf2 | open | 2023-06-30T04:53:00Z | 2023-07-18T05:03:44Z | https://github.com/matterport/Mask_RCNN/issues/2966 | [] | avinash-218 | 1 |
holoviz/panel | plotly | 7,427 | Cookiecutter/cruft for Panel extensions | I'd like to see some standard boilerplate of creating Panel extension repos, where directory, CI, documentation are automatically set up, and users just have to implement their feature and tweak a few settings to get it hosted.
Just as an example [prefect-collection-template](https://github.com/PrefectHQ/prefect-collection-template) automatically sets up the tools necessary to build and maintain a collection:
- [mkdocs](https://www.mkdocs.org/) for automatic documentation generation
- [black](https://github.com/psf/black), [isort](https://github.com/PyCQA/isort), and [flake8](https://flake8.pycqa.org/en/latest/) for automatic code formatting and linting
- [pytest](https://docs.pytest.org/en/7.1.x/) for unit testing
- [interrogate](https://interrogate.readthedocs.io/en/latest/) for documentation coverage analysis
- [Coverage.py](https://coverage.readthedocs.io/en/6.3.2/) for code coverage analysis
- [pre-commit](https://pre-commit.com/) to automatically run code formatting and linting prior to git commit
- [versioneer](https://github.com/python-versioneer/python-versioneer) for automatic package versioning

(some of these tools can be replaced with pixi / ruff / sphinx / etc.)
I also recommend cruft over cookiecutter:
> cruft is different. It automates the creation of new projects like the others, but then it also helps you to manage the boilerplate through the life of the project. cruft makes sure your code stays in-sync with the template it came from for you.
cc @MarcSkovMadsen | open | 2024-10-21T18:11:54Z | 2024-10-25T13:00:56Z | https://github.com/holoviz/panel/issues/7427 | [] | ahuang11 | 9 |
pyg-team/pytorch_geometric | deep-learning | 9,808 | onnx problem when edge_index is [2, 0] | ### 🐛 Describe the bug
#### my model:
```
class MultiHeadAttentionPool(torch.nn.Module):
def __init__(self, input_dim, num_heads):
super(MultiHeadAttentionPool, self).__init__()
self.num_heads = num_heads
self.att_mlp = torch.nn.Linear(input_dim, num_heads)
def forward(self, x, batch):
alpha = self.att_mlp(x)
alpha = F.leaky_relu(alpha)
alpha = F.softmax(alpha, dim=0)
out = 0
for head in range(self.num_heads):
out += alpha[:, head].unsqueeze(-1) * x
res = torch.zeros((torch.unique(batch).shape[0], out.shape[-1]), dtype=out.dtype).to(out.device)
return torch.scatter_add(res, 0, batch.unsqueeze(-1).expand(-1, out.shape[-1]), out)
class GraphTransformerClassifierAttentionPool(torch.nn.Module):
def __init__(self, input_dim, hidden_dim, num_classes, heads=4, device='cpu', use_patch_loss=True):
super(GraphTransformerClassifierAttentionPool, self).__init__()
self.device = device
self.conv1 = TransformerConv(input_dim, hidden_dim, heads=heads, concat=False, dropout=0.1)
self.conv2 = TransformerConv(hidden_dim, hidden_dim, heads=heads, concat=False, dropout=0.1)
self.att_pool = MultiHeadAttentionPool(hidden_dim, 8)
def forward(self, x, edge_index, batch):
x = x.to(self.device)
batch = batch.to(dtype=torch.int64).to(self.device)
edge_index = edge_index.to(self.device)
x = self.conv1(x, edge_index)
x = F.relu(x)
x = self.conv2(x, edge_index)
x = F.relu(x)
patch_emb = self.att_pool(x, batch)
return patch_emb
```
#### onnx wrapper:
```
class ONNXWrapper_inner(nn.Module):
def __init__(self, ori_model):
super().__init__()
self.model = ori_model
self.model.eval()
def forward(self, x, edge_index):
# x: [M, 256]
# edge_index: [2, k]
patch_feature, _ = self.model(x, edge_index, torch.zeros((x.shape[0])))
return patch_feature
```
### onnx convert code:
```
from torch_geometric.nn import knn_graph
model_inner = ONNXWrapper_inner(GraphTransformerClassifierAttentionPool(256, 256, 20, heads=8, device='cuda'))
dummy_x = torch.randn((random.randint(1, 100), 256))
dummy_edge_index = knn_graph(F.normalize(dummy_x, p=2, dim=-1), k=10, loop=True).to(torch.int64).to('cuda')
torch.onnx.export(
model_inner,
(dummy_x, dummy_edge_index),
'model_inner.onnx',
export_params=True,
opset_version=16,
do_constant_folding=True,
input_names=['x', 'edge_index'],
output_names=['patch_feature'],
dynamic_axes={
'x': {0: 'cell_num', 1: 'cell_dia'},
'edge_index': {1: 'edge_num'},
'patch_feature': {0: 'patch_num', 1: 'dia'},
},
verbose=True,
)
```
### onnx runtime test code:
```
session1 = onnxruntime.InferenceSession('./tmp/model_inner.onnx')
input_name1 = session1.get_inputs()[0].name
input_name2 = session1.get_inputs()[1].name
print(input_name1, input_name2)
output_name = session1.get_outputs()[0].name
print(output_name)
for _ in range(10):
x = torch.randn((random.randint(1, 2), 256))
patch_list.append(x)
x = F.normalize(x, p=2, dim=-1)
edge_index = knn_graph(x, k=10, loop=False)
print(x.shape, edge_index.shape)
outputs = session1.run([e.name for e in session1.get_outputs()], {input_name1: x.numpy(), input_name2: edge_index.numpy()})
```
In the runtime test loop (10 iterations), if the shape[0] of the randomly generated **x** is more than 1, the ONNX model runs without problems.
**But** when the shape[0] of x is 1 (in this case, edge_index is [2, 0]), an ONNXRuntimeError happens:
```
2024-11-27 10:59:23.162258055 [E:onnxruntime:, sequential_executor.cc:516 ExecuteKernel] Non-zero status code returned while running Expand node. Name:'/model/conv1/Expand' Status Message: invalid expand shape
return self._sess.run(output_names, input_feed, run_options)
onnxruntime.capi.onnxruntime_pybind11_state.InvalidArgument: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Non-zero status code returned while running Expand node. Name:'/model/conv1/Expand' Status Message: invalid expand shape
```
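One hedged workaround on the caller side (my own suggestion, not an official fix) is to guarantee the graph always has at least one edge before inference, e.g. by falling back to self-loops when `knn_graph` returns an empty `edge_index`:

```python
import torch

def ensure_edges(x: torch.Tensor, edge_index: torch.Tensor) -> torch.Tensor:
    """If edge_index is empty ([2, 0]), substitute self-loops so the
    exported graph never hits the zero-size Expand node."""
    if edge_index.numel() == 0:
        loops = torch.arange(x.shape[0], dtype=torch.int64)
        return torch.stack([loops, loops], dim=0)
    return edge_index

x = torch.randn(1, 256)
edge_index = torch.empty((2, 0), dtype=torch.int64)
print(ensure_edges(x, edge_index))  # a [2, 1] self-loop edge_index
```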
### Versions
the part packages version of my environment:
```
torch 2.4.1
torch_cluster 1.6.3
torch-geometric 2.6.1
torch_scatter 2.1.2
torch_sparse 0.6.18
onnx 1.17.0
numpy 1.26.4
``` | open | 2024-11-27T03:14:12Z | 2024-11-27T03:40:36Z | https://github.com/pyg-team/pytorch_geometric/issues/9808 | [
"bug"
] | LQY404 | 0 |
JoeanAmier/TikTokDownloader | api | 106 | Where are files saved after downloading? | I couldn't find the save location. | open | 2023-12-22T20:43:14Z | 2023-12-26T08:47:21Z | https://github.com/JoeanAmier/TikTokDownloader/issues/106 | [] | iapeng | 6 |
2noise/ChatTTS | python | 97 | How can I switch to a female voice? | ```
spk_stat = torch.load('ChatTTS/asset/spk_stat.pt')
rand_spk = torch.randn(768) * spk_stat.chunk(2)[0] + spk_stat.chunk(2)[1]
params_infer_code = {'spk_emb' : rand_spk, 'temperature':.3}
params_refine_text = {'prompt':'[oral_2][laugh_0][break_4]'}
```
1. Does this model only have male voices, with no female voices?
2. If I want to switch to a female voice, how should the code be written? | closed | 2024-05-30T11:34:27Z | 2024-07-17T04:01:43Z | https://github.com/2noise/ChatTTS/issues/97 | [
"stale"
] | CodeCat-maker | 4 |
freqtrade/freqtrade | python | 10,589 | Error Using Whitelisted Pairs on BloFin | ## Describe your environment
* Operating system: Debian Linux
* Python Version: 3.11.2
* CCXT version: _____ (`pip freeze | grep ccxt`) - This command returned nothing
* Freqtrade Version: 2024.7.1
Note: All issues other than enhancement requests will be closed without further comment if the above template is deleted or not filled out.
## Describe the problem:
When attempting to do a dry run using a custom strategy on the BloFin exchange, every single pair that I have whitelisted says it is not available on the exchange. I have verified that several pairs it has told me to remove are indeed available on BloFin Spot. Examples: BCH/USDT | BTC/USDT.
### Steps to reproduce:
1. Whitelist these pairs in config file:
```
"BTC/USDT",
"BCH/USDT",
"ETH/USDT",
"LINK/USDT",
"LTC/USDT",
"SOL/USDT",
"BNB/USDT",
"XRP/USDT",
"ADA/USDT",
"DOT/USDT",
"ETC/USDT",
"ALGO/USDT",
"LUNA/USDT"
```
2. Activate a dry run of freqtrade using exchange "BloFin".
3. View logs to see that, one by one, the system rejects all of these pairs, citing that they are not available on BloFin.
### Observed Results:
* Log indicates failure to find pair on exchange.
### Relevant code exceptions or logs
Note: Please copy/paste text of the messages, no screenshots of logs please.
```
2024-08-27 04:01:49,918 - freqtrade.exchange.exchange - INFO - Instance is running with dry_run enabled
2024-08-27 04:01:49,918 - freqtrade.exchange.exchange - INFO - Using CCXT 4.3.65
2024-08-27 04:01:49,923 - freqtrade.exchange.exchange - INFO - Using Exchange "BloFin"
2024-08-27 04:01:50,327 - freqtrade - ERROR - Pair ETH/USDT is not available on BloFin spot. Please remove ETH/USDT from your whitelist.
2024-08-27 04:02:54,618 - freqtrade.loggers - INFO - Verbosity set to 0
2024-08-27 04:02:54,618 - freqtrade.configuration.configuration - INFO - Runmode set to dry_run.
2024-08-27 04:02:54,618 - freqtrade.configuration.configuration - INFO - Parameter --db-url detected ...
2024-08-27 04:02:54,618 - freqtrade.configuration.configuration - INFO - Dry run is enabled
2024-08-27 04:02:54,618 - freqtrade.configuration.configuration - INFO - Using DB: "sqlite:////freqtrade/user_data/tradesv3.sqlite"
2024-08-27 04:02:54,618 - freqtrade.configuration.configuration - INFO - Using max_open_trades: 5 ...
2024-08-27 04:02:54,624 - freqtrade.configuration.configuration - INFO - Using user-data directory: /freqtrade/user_data ...
2024-08-27 04:02:54,624 - freqtrade.configuration.configuration - INFO - Using data directory: /freqtrade/user_data/data/blofin ...
2024-08-27 04:02:54,625 - freqtrade.exchange.check_exchange - INFO - Checking exchange...
2024-08-27 04:02:54,628 - freqtrade.exchange.check_exchange - WARNING - Exchange "blofin" is known to the ccxt library, available for the bot, but not officially supported by the Freqtrade development team. It may work flawlessly (please report back) or have serious issues. Use it at your own discretion.
2024-08-27 04:02:54,629 - freqtrade.configuration.configuration - INFO - Using pairlist from configuration.
2024-08-27 04:02:54,646 - freqtrade.resolvers.iresolver - INFO - Using resolved strategy ichiV1 from '/freqtrade/user_data/strategies/ichiV1.py'...
2024-08-27 04:02:54,646 - freqtrade.strategy.hyper - INFO - Found no parameter file.
2024-08-27 04:02:54,647 - freqtrade.resolvers.strategy_resolver - WARNING - DEPRECATED: Using 'sell_profit_only' moved to 'exit_profit_only'.
2024-08-27 04:02:54,647 - freqtrade.resolvers.strategy_resolver - WARNING - DEPRECATED: Using 'use_sell_signal' moved to 'use_exit_signal'.
2024-08-27 04:02:54,647 - freqtrade.resolvers.strategy_resolver - WARNING - DEPRECATED: Using 'ignore_roi_if_buy_signal' moved to 'ignore_roi_if_entry_signal'.
2024-08-27 04:02:54,647 - freqtrade.resolvers.strategy_resolver - INFO - Override strategy 'stake_currency' with value in config file: USDT.
2024-08-27 04:02:54,647 - freqtrade.resolvers.strategy_resolver - INFO - Override strategy 'stake_amount' with value in config file: unlimited.
2024-08-27 04:02:54,647 - freqtrade.resolvers.strategy_resolver - INFO - Override strategy 'unfilledtimeout' with value in config file: {'entry': 10, 'exit': 10, 'exit_timeout_count': 0, 'unit': 'minutes'}.
2024-08-27 04:02:54,647 - freqtrade.resolvers.strategy_resolver - INFO - Override strategy 'max_open_trades' with value in config file: 5.
2024-08-27 04:02:54,647 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using minimal_roi: {'0': 0.059, '10': 0.037, '41': 0.012, '114': 0}
2024-08-27 04:02:54,647 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using timeframe: 5m
2024-08-27 04:02:54,648 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using stoploss: -0.275
2024-08-27 04:02:54,648 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using trailing_stop: False
2024-08-27 04:02:54,648 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using trailing_stop_positive_offset: 0.0
2024-08-27 04:02:54,648 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using trailing_only_offset_is_reached: False
2024-08-27 04:02:54,648 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using use_custom_stoploss: False
2024-08-27 04:02:54,648 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using process_only_new_candles: False
2024-08-27 04:02:54,648 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using order_types: {'entry': 'limit', 'exit': 'limit', 'stoploss': 'limit', 'stoploss_on_exchange': False, 'stoploss_on_exchange_interval': 60}
2024-08-27 04:02:54,648 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using order_time_in_force: {'entry': 'GTC', 'exit': 'GTC'}
2024-08-27 04:02:54,648 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using stake_currency: USDT
2024-08-27 04:02:54,648 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using stake_amount: unlimited
2024-08-27 04:02:54,648 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using protections: []
2024-08-27 04:02:54,648 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using startup_candle_count: 96
2024-08-27 04:02:54,648 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using unfilledtimeout: {'entry': 10, 'exit': 10, 'exit_timeout_count': 0, 'unit': 'minutes'}
2024-08-27 04:02:54,648 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using use_exit_signal: True
2024-08-27 04:02:54,648 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using exit_profit_only: False
2024-08-27 04:02:54,648 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using ignore_roi_if_entry_signal: False
2024-08-27 04:02:54,649 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using exit_profit_offset: 0.0
2024-08-27 04:02:54,649 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using disable_dataframe_checks: False
2024-08-27 04:02:54,649 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using ignore_buying_expired_candle_after: 0
2024-08-27 04:02:54,649 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using position_adjustment_enable: False
2024-08-27 04:02:54,649 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using max_entry_position_adjustment: -1
2024-08-27 04:02:54,649 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using max_open_trades: 5
2024-08-27 04:02:54,649 - freqtrade.configuration.config_validation - INFO - Validating configuration ...
2024-08-27 04:02:54,651 - freqtrade.resolvers.exchange_resolver - INFO - No Blofin specific subclass found. Using the generic class instead.
2024-08-27 04:02:54,651 - freqtrade.exchange.exchange - INFO - Instance is running with dry_run enabled
2024-08-27 04:02:54,651 - freqtrade.exchange.exchange - INFO - Using CCXT 4.3.65
2024-08-27 04:02:54,656 - freqtrade.exchange.exchange - INFO - Using Exchange "BloFin"
2024-08-27 04:02:55,064 - freqtrade - ERROR - Pair ETH/USDT is not available on BloFin spot. Please remove ETH/USDT from your whitelist.
```
| closed | 2024-08-27T04:07:10Z | 2024-08-27T16:21:28Z | https://github.com/freqtrade/freqtrade/issues/10589 | [
"unsupported exchange"
] | MadMaximusJB | 2 |
CTFd/CTFd | flask | 1,981 | Deployment instructions for AWS Elastic Beanstalk | We've tried Procfiles and .ebextensions for WSGI. Are there known instructions for deploying to elastic beanstalk?
Procfile used:
`web: gunicorn --bind :8000 --workers 3 --threads 2 project.wsgi:application`
.ebextensions/python.config file used:
```yaml
option_settings:
  "aws:elasticbeanstalk:container:python":
    WSGIPath: wsgi:app
``` | open | 2021-08-31T12:20:00Z | 2021-08-31T12:20:00Z | https://github.com/CTFd/CTFd/issues/1981 | [] | bitshiftnetau | 0 |
InstaPy/InstaPy | automation | 6,148 | User biography | ## Expected Behavior
I want to get user biography when call `grab_following` function
## Current Behavior
Only return usernames
| closed | 2021-04-13T04:20:44Z | 2021-04-16T06:19:40Z | https://github.com/InstaPy/InstaPy/issues/6148 | [] | masihmoloodian | 0 |
ansible/ansible | python | 84,492 | Aws Ansible on prem server | ### Summary
Hello, can Ansible on AWS help me install some tools on a server machine (on-prem) or not?
### Issue Type
Documentation Report
### Component Name
nothing
### Ansible Version
```console
$ ansible --version
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
```
### OS / Environment
Aws, Linux
### Additional Information
thanks
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct | closed | 2024-12-25T10:58:06Z | 2024-12-25T21:05:46Z | https://github.com/ansible/ansible/issues/84492 | [] | HamzaLghali | 2 |
pnkraemer/tueplots | matplotlib | 38 | ICML tutorial | With the ICML deadline approaching, it might be a good idea to have a minimal working example about tueplots-for-icml somewhere.
Presumably the readme? A notebook would also be neat, but a readme is easier to update in the future (i.e., after the conference). Deleting a file (a notebook) might be less fun. | closed | 2021-12-14T07:49:21Z | 2021-12-14T08:38:53Z | https://github.com/pnkraemer/tueplots/issues/38 | [] | pnkraemer | 0 |
albumentations-team/albumentations | machine-learning | 1,628 | [Tech debt] Improve interface for RandomSnow | Right now in the transform we have separate parameters for `snow_point_lower` and `snow_point_higher`
Better would be to have one parameter `snow_point_range = [snow_point_lower, snow_point_higher]`
=>
We can update the transform to use the new signature, keep the old one working, but mark it as deprecated.
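A hedged sketch of that parameter handling (names follow the issue; the defaults are assumptions, not the actual Albumentations implementation): accept both the deprecated pair and the new range, warning on the old form:

```python
import warnings

def resolve_snow_point_range(snow_point_range=None,
                             snow_point_lower=None,
                             snow_point_higher=None):
    """Merge the deprecated lower/higher pair and the new range
    parameter into a single (lower, higher) tuple."""
    if snow_point_lower is not None or snow_point_higher is not None:
        warnings.warn(
            "snow_point_lower/snow_point_higher are deprecated; "
            "use snow_point_range=[lower, higher] instead.",
            DeprecationWarning,
        )
        lower = 0.1 if snow_point_lower is None else snow_point_lower
        higher = 0.3 if snow_point_higher is None else snow_point_higher
        return (lower, higher)
    if snow_point_range is None:
        snow_point_range = (0.1, 0.3)  # assumed defaults
    return tuple(snow_point_range)

print(resolve_snow_point_range(snow_point_lower=0.2))  # (0.2, 0.3)
print(resolve_snow_point_range([0.15, 0.45]))          # (0.15, 0.45)
```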
----
PR could be similar to https://github.com/albumentations-team/albumentations/pull/1704 | closed | 2024-04-05T18:31:53Z | 2024-05-20T21:57:26Z | https://github.com/albumentations-team/albumentations/issues/1628 | [
"good first issue",
"Tech debt"
] | ternaus | 4 |