repo_name (string) | topic (string) | issue_number (int64) | title (string) | body (string) | state (string) | created_at (string) | updated_at (string) | url (string) | labels (list) | user_login (string) | comments_count (int64) |
|---|---|---|---|---|---|---|---|---|---|---|---|
noirbizarre/flask-restplus | flask | 569 | Model "required=True" is ignored on POST request | I have a model defined like this:
```python
model = api.model('User', {
'username': fields.String(required=True, description='Username for the user'),
'password': fields.String(required=True, description='Password for the user')
})
```
I am using a route defined like this:
```python
@api.route('/users')
class UsersResource(Resource):
@api.expect(model)
@api.marshal_with(model)
def post(self):
'''Create a new user'''
data = api.payload
return {'username': data['username'], 'password': data['password']}
```
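For reference, flask-restplus only enforces the model's `required=True` flags when payload validation is enabled, either per endpoint with `@api.expect(model, validate=True)` or globally via `app.config['RESTPLUS_VALIDATE'] = True`. The required-field check amounts to the following plain-Python sketch (not the library's actual code):

```python
def missing_required(payload, model_fields):
    """Names of required fields absent from the request payload."""
    return [name for name, spec in model_fields.items()
            if spec.get('required') and name not in payload]

fields = {'username': {'required': True}, 'password': {'required': True}}
print(missing_required({'username': 'alice'}, fields))  # ['password']
```

Without the validation flag, no such check runs and the handler body sees whatever keys the client actually sent.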
When I use Swagger UI to test my resource with a POST, I expected a validation error response (400, 422, etc.), but instead the method body runs, hits an error, and returns 500 to Swagger UI:
```bash
backend_1 | File "./api/__init__.py", line 19, in post
backend_1 | return {'username': data['username'], 'password': data['password']}
backend_1 | KeyError: 'password'
```
Is my understanding of the documentation incorrect? | closed | 2019-01-08T17:59:00Z | 2019-01-09T14:11:42Z | https://github.com/noirbizarre/flask-restplus/issues/569 | [] | MelonFunction | 2 |
deepspeedai/DeepSpeed | pytorch | 6,005 | [BUG] Circular import error with PyTorch nightly | **Describe the bug**
Circular import error with PyTorch nightly. If I uninstall deepspeed it works fine.
```
Traceback (most recent call last):
File "/test/oss.py", line 322, in <module>
mp.spawn(
File "/opt/conda/envs/ptca/lib/python3.10/site-packages/torch/multiprocessing/spawn.py", line 283, in spawn
return start_processes(fn, args, nprocs, join, daemon, start_method="spawn")
File "/opt/conda/envs/ptca/lib/python3.10/site-packages/torch/multiprocessing/spawn.py", line 239, in start_processes
while not context.join():
File "/opt/conda/envs/ptca/lib/python3.10/site-packages/torch/multiprocessing/spawn.py", line 190, in join
raise ProcessRaisedException(msg, error_index, failed_process.pid)
torch.multiprocessing.spawn.ProcessRaisedException:
-- Process 0 terminated with the following error:
Traceback (most recent call last):
File "/opt/conda/envs/ptca/lib/python3.10/site-packages/torch/multiprocessing/spawn.py", line 77, in _wrap
fn(i, *args)
File "/test/oss.py", line 160, in train
model = DDP(model, device_ids=device_ids, find_unused_parameters=False) # type: ignore
File "/opt/conda/envs/ptca/lib/python3.10/site-packages/torch/nn/parallel/distributed.py", line 900, in __init__
optimize_ddp = torch._dynamo.config._get_optimize_ddp_mode()
File "/opt/conda/envs/ptca/lib/python3.10/site-packages/torch/__init__.py", line 2561, in __getattr__
return importlib.import_module(f".{name}", __name__)
File "/opt/conda/envs/ptca/lib/python3.10/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "/opt/conda/envs/ptca/lib/python3.10/site-packages/torch/_dynamo/__init__.py", line 3, in <module>
from . import convert_frame, eval_frame, resume_execution
File "/opt/conda/envs/ptca/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 50, in <module>
from . import config, exc, trace_rules
File "/opt/conda/envs/ptca/lib/python3.10/site-packages/torch/_dynamo/trace_rules.py", line 45, in <module>
from .variables import (
File "/opt/conda/envs/ptca/lib/python3.10/site-packages/torch/_dynamo/variables/__init__.py", line 96, in <module>
from .torch import TorchCtxManagerClassVariable, TorchInGraphFunctionVariable
File "/opt/conda/envs/ptca/lib/python3.10/site-packages/torch/_dynamo/variables/torch.py", line 137, in <module>
torch._dynamo.external_utils.is_compiling: True,
AttributeError: partially initialized module 'torch._dynamo' has no attribute 'external_utils' (most likely due to a circular import)
```
**To Reproduce**
Steps to reproduce the behavior:
1. Install Pytorch nightly
2. Install Deepspeed
3. Clone fairscale repo: https://github.com/facebookresearch/fairscale
4. cd benchmarks/oss.py
5. Run python oss.py
**Expected behavior**
The script should run without any errors.
**ds_report output**
--------------------------------------------------
DeepSpeed C++/CUDA extension op report
--------------------------------------------------
NOTE: Ops not installed will be just-in-time (JIT) compiled at
runtime if needed. Op compatibility means that your system
meet the required dependencies to JIT install the op.
--------------------------------------------------
JIT compiled ops requires ninja
ninja .................. [OKAY]
--------------------------------------------------
op name ................ installed .. compatible
--------------------------------------------------
async_io ............... [NO] ....... [OKAY]
fused_adam ............. [NO] ....... [OKAY]
cpu_adam ............... [NO] ....... [OKAY]
cpu_adagrad ............ [NO] ....... [OKAY]
cpu_lion ............... [NO] ....... [OKAY]
[WARNING] Please specify the CUTLASS repo directory as environment variable $CUTLASS_PATH
evoformer_attn ......... [NO] ....... [NO]
[WARNING] NVIDIA Inference is only supported on Ampere and newer architectures
[WARNING] FP Quantizer is using an untested triton version (3.0.0+dedb7bdf33), only 2.3.0 and 2.3.1 are known to be compatible with these kernels
fp_quantizer ........... [NO] ....... [NO]
fused_lamb ............. [NO] ....... [OKAY]
fused_lion ............. [NO] ....... [OKAY]
inference_core_ops ..... [NO] ....... [OKAY]
cutlass_ops ............ [NO] ....... [OKAY]
transformer_inference .. [NO] ....... [OKAY]
quantizer .............. [NO] ....... [OKAY]
ragged_device_ops ...... [NO] ....... [OKAY]
ragged_ops ............. [NO] ....... [OKAY]
random_ltd ............. [NO] ....... [OKAY]
[WARNING] sparse_attn requires a torch version >= 1.5 and < 2.0 but detected 2.5
[WARNING] using untested triton version (3.0.0+dedb7bdf33), only 1.0.0 is known to be compatible
sparse_attn ............ [NO] ....... [NO]
spatial_inference ...... [NO] ....... [OKAY]
transformer ............ [NO] ....... [OKAY]
stochastic_transformer . [NO] ....... [OKAY]
--------------------------------------------------
DeepSpeed general environment info:
torch install path ............... ['/opt/conda/envs/ptca/lib/python3.10/site-packages/torch']
torch version .................... 2.5.0.dev20240815+cu118
deepspeed install path ........... ['/opt/conda/envs/ptca/lib/python3.10/site-packages/deepspeed']
deepspeed info ................... 0.14.5, unknown, unknown
torch cuda version ............... 11.8
torch hip version ................ None
nvcc version ..................... 11.8
deepspeed wheel compiled w. ...... torch 2.5, cuda 11.8
shared memory (/dev/shm) size .... 330.54 GB
**Screenshots**
If applicable, add screenshots to help explain your problem.
**System info (please complete the following information):**
- OS: Ubuntu 20.04
- GPU count and types: 1 Node, with 8 V100
- Interconnects (if applicable): No
- Python version: 3.10.14
- Any other relevant info about your setup
**Launcher context**
Are you launching your experiment with the `deepspeed` launcher, MPI, or something else?
No
**Docker context**
Are you using a specific docker image that you can share?
**Additional context**
Add any other context about the problem here.
| open | 2024-08-16T00:13:25Z | 2024-12-18T19:23:41Z | https://github.com/deepspeedai/DeepSpeed/issues/6005 | [
"bug",
"training"
] | ajindal1 | 4 |
amdegroot/ssd.pytorch | computer-vision | 58 | cuda Runtime Error (77): an illegal memory access was encountered |
```
iter 510 || Loss: 6.8001 || THCudaCheck FAIL file=/opt/conda/conda-bld/pytorch_1502009910772/work/torch/lib/THC/generated/../THCReduceAll.cuh line=334 error=77 : an illegal memory access was encountered
Traceback (most recent call last):
  File "train.py", line 231, in <module>
    train()
  File "train.py", line 183, in train
    loss_l, loss_c = criterion(out, targets)
  File "/users/gpu/utkrsh/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/nn/modules/module.py", line 224, in __call__
    result = self.forward(*input, **kwargs)
  File "/users/gpu/utkrsh/code/ssd.pytorch/layers/modules/multibox_loss.py", line 137, in forward
    conf_p = conf_data[(pos_idx+neg_idx).gt(0)].view(-1, self.num_classes)
  File "/users/gpu/utkrsh/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/autograd/variable.py", line 72, in __getitem__
    return MaskedSelect.apply(self, key)
  File "/users/gpu/utkrsh/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/autograd/_functions/tensor.py", line 468, in forward
    return tensor.masked_select(mask)
RuntimeError: cuda runtime error (77) : an illegal memory access was encountered at /opt/conda/conda-bld/pytorch_1502009910772/work/torch/lib/THC/generated/../THCReduceAll.cuh:334
```
I am trying to train the network with a slight modification to the localization loss in `multibox_loss.py`. I keep getting this error message for the same line of code. Also, when training starts, there is a warning:
```
/users/gpu/utkrsh/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/autograd/_functions/tensor.py:450: UserWarning: mask is not broadcastable to self, but they have the same number of elements. Falling back to deprecated pointwise behavior.
  return tensor.masked_fill_(mask, value)
```
I am training with `batch_size=32` in train.py and everything else at the default values. I have tried to modify the code, but there is no impact on the warning and I keep getting this error.
Also, if I use a larger `batch_size` in `train.py`, like 40, I get this illegal memory access error much earlier than with size 32.
Any suggestions for what might be wrong? | closed | 2017-08-24T05:46:50Z | 2017-12-12T08:38:00Z | https://github.com/amdegroot/ssd.pytorch/issues/58 | [] | chauhan-utk | 7 |
mouredev/Hello-Python | fastapi | 381 | Best High-Stakes Online Casinos of 2025 | Best High-Stakes Online Casinos of 2025
Welcome to the fascinating world of high-stakes online casinos, where your big bets can bring you even bigger returns. If you are a player who enjoys high-stakes wagers and generous bonuses, now is the time to step into the high-roller world of online betting companies and use Warner to explore top high-roller casinos. You can also consult our Warner casino rankings to enhance your gaming journey: your go-to resource for discovering the best casinos tailored to high rollers!
The Warner online betting platform offers live baccarat, live dragon-tiger, VIP baccarat, VIP dragon-tiger, niuniu, zhajinhua, big/small, odd/even, tongzi... 7-8 major game categories and 72 tables, the richest game lineup on the whole web!
Gaming website: [376838.com](http://376838.com/)
Account-opening manager WeChat: xiaolu460570
Telegram: lc15688 platform,
Warner's payout service is flexible, supporting bank card, Alipay, WeChat, and USDT deposits and withdrawals; headquarters customer service is online 24 hours a day, and 20 billion in prize money puts you at ease. | closed | 2025-02-09T05:01:05Z | 2025-02-11T07:20:12Z | https://github.com/mouredev/Hello-Python/issues/381 | [] | shiaclylbcpttj | 0 |
clovaai/donut | computer-vision | 216 | Questions for Label of dataset customizing | Hi, I read your paper and found it really inspiring. I have some questions as follows:
1. About making my own dataset: do I still need to label the bounding box of each text area?
2. Does the model abandon predicting text bounding boxes, unlike image detection? | open | 2023-06-27T07:45:21Z | 2023-10-16T07:57:21Z | https://github.com/clovaai/donut/issues/216 | [] | Major1994 | 1 |
jacobgil/pytorch-grad-cam | computer-vision | 215 | about cam for object detection |
```
File "H:/codefile/personal/yolov5/pytorch-grad-cam-master/my_cam.py", line 144, in <module>
    reshape_transform=None)
TypeError: EigenCAM() takes no arguments
```
I found that `fasterrcnn_reshape_transform` was only defined but never used when I tried to implement Faster R-CNN. Then this error came out. | closed | 2022-03-18T02:07:20Z | 2022-04-06T10:18:07Z | https://github.com/jacobgil/pytorch-grad-cam/issues/215 | [] | JumeLin | 4 |
sanic-org/sanic | asyncio | 3,011 | Allow exception handlers to accept BaseExceptions | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Is your feature request related to a problem? Please describe.
I've recently upgraded from Sanic 21.12.0 -> 23.12.0, and over this version boundary the behaviour of the exception handler has changed (commit `d255d1a`, file `sanic/mixins/exceptions.py`) to accept `Union[Exception, List[Exception]]`.
We're using the exception handler to catch `asyncio`'s `CancelledError`, which inherits from `BaseException` (see https://github.com/python/cpython/blob/3.9/Lib/asyncio/exceptions.py#L9). However, with the Sanic upgrade, Mypy raises errors saying we cannot use the exception handler to handle `CancelledError`. I've investigated ways of fixing this, but they all seem to be limited by the argument types of `fn exception` in `sanic/mixins/exceptions.py`.
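The root of the type clash: since Python 3.8, `asyncio.CancelledError` inherits directly from `BaseException` rather than `Exception`, so an `Exception`-typed handler signature cannot cover it. A quick check:

```python
import asyncio

print(issubclass(asyncio.CancelledError, Exception))      # False on Python 3.8+
print(issubclass(asyncio.CancelledError, BaseException))  # True
```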
### Describe the solution you'd like
Allow the exception handler to handle `BaseException`s.
### Additional context
_No response_ | open | 2024-11-26T15:25:15Z | 2024-12-12T16:16:45Z | https://github.com/sanic-org/sanic/issues/3011 | [
"feature request"
] | nicholas-watson-hx | 1 |
mitmproxy/mitmproxy | python | 7,377 | Failed to load HAR file with following error: Casting to number results in NaN | #### Problem Description
I generated a .har file with `mitmdump --set hardump=record.har --mode socks5`. I am able to import it in Firefox, but importing it in Chrome gives the error in the title.
#### Steps to reproduce the behavior:
Unsure, other .har files can be imported in chrome without this problem.
#### System Information
```
Mitmproxy: 10.2.2
Python: 3.12.6
OpenSSL: OpenSSL 3.1.0 14 Mar 2023
Platform: Linux-6.6.52-200.fc39.x86_64-x86_64-with-glibc2.38
```
| closed | 2024-12-07T14:43:53Z | 2025-01-09T08:33:10Z | https://github.com/mitmproxy/mitmproxy/issues/7377 | [
"kind/triage"
] | powellnorma | 1 |
polakowo/vectorbt | data-visualization | 117 | cryptocurrency inverse contract | Thank you very much for your amazing project.
I am trading cryptocurrency, and I believe you are too, but I am trading inverse contracts (coin-margined contracts; sorry, I am not sure how to say it in English). So the portfolio calculation is quite different from the existing one in vectorbt. Could you show us a way to switch gears, or does it need more development?
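For context, inverse (coin-margined) contracts settle PnL in the base coin rather than the quote currency, which is why the stock-style portfolio math differs. A minimal sketch of long-position PnL (illustrative only, not vectorbt code):

```python
def inverse_long_pnl(contracts_usd, entry_price, exit_price):
    """PnL in coin for a long inverse-contract position."""
    return contracts_usd * (1.0 / entry_price - 1.0 / exit_price)

# 10,000 USD of contracts, entry at 20,000, exit at 25,000:
print(inverse_long_pnl(10_000, 20_000, 25_000))  # ~0.1 coin
```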
Really thanks!! | closed | 2021-03-29T07:06:50Z | 2024-03-16T09:24:24Z | https://github.com/polakowo/vectorbt/issues/117 | [] | XieXiaonan | 2 |
dask/dask | numpy | 11,414 | Failing test `test_groupby_value_counts_all_na_partitions` for pandas nightly build | https://github.com/dask/dask/actions/runs/11209570425/job/31154974473
```python-traceback
[gw1] linux -- Python 3.12.7 /home/runner/miniconda3/envs/test-environment/bin/python3.12
def test_groupby_value_counts_all_na_partitions():
size = 100
na_size = 90
npartitions = 10
df = pd.DataFrame(
{
"A": np.random.randint(0, 2, size=size, dtype=bool),
"B": np.append(np.nan * np.zeros(na_size), np.random.randn(size - na_size)),
}
)
ddf = dd.from_pandas(df, npartitions=npartitions)
> assert_eq(
ddf.groupby("A")["B"].value_counts(),
df.groupby("A")["B"].value_counts(),
)
dask/dataframe/tests/test_groupby.py:3757:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
dask/dataframe/utils.py:585: in assert_eq
a = _check_dask(
dask/dataframe/utils.py:477: in _check_dask
result = dsk.compute(scheduler=scheduler)
../../../miniconda3/envs/test-environment/lib/python3.12/site-packages/dask_expr/_collection.py:481: in compute
return DaskMethodsMixin.compute(out, **kwargs)
dask/base.py:372: in compute
(result,) = compute(self, traverse=False, **kwargs)
dask/base.py:660: in compute
results = schedule(dsk, keys, **kwargs)
../../../miniconda3/envs/test-environment/lib/python3.12/site-packages/dask_expr/_groupby.py:293: in chunk
return _apply_chunk(df, *by, **kwargs)
dask/dataframe/groupby.py:459: in _apply_chunk
return func(g[columns], **kwargs)
dask/dataframe/groupby.py:3143: in _value_counts
return x.value_counts(**kwargs)
properties.pyx:36: in pandas._libs.properties.CachedProperty.__get__
???
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <pandas.core.groupby.ops.BaseGrouper object at 0x7f6a2573e720>
levels = [Index([False, True], dtype='bool'), Index([], dtype='float64')]
codes = [array([0, 1, 0, 1, 1, 1, 0, 0, 0, 1]), array([-1, -1, -1, -1, -1, -1, -1, -1, -1, -1])]
names = ['A', 'B'], sorts = [True, False]
def _ob_index_and_ids(
self,
levels: list[Index],
codes: list[npt.NDArray[np.intp]],
names: list[Hashable],
sorts: list[bool],
) -> tuple[MultiIndex, npt.NDArray[np.intp]]:
consistent_sorting = all(sorts[0] == sort for sort in sorts[1:])
sort_in_compress = sorts[0] if consistent_sorting else False
shape = tuple(len(level) for level in levels)
group_index = get_group_index(codes, shape, sort=True, xnull=True)
ob_ids, obs_group_ids = compress_group_index(group_index, sort=sort_in_compress)
ob_ids = ensure_platform_int(ob_ids)
ob_index_codes = decons_obs_group_ids(
ob_ids, obs_group_ids, shape, codes, xnull=True
)
ob_index = MultiIndex(
levels=levels,
codes=ob_index_codes,
names=names,
verify_integrity=False,
)
if not consistent_sorting:
# Sort by the levels where the corresponding sort argument is True
n_levels = len(sorts)
drop_levels = [
n_levels - idx
for idx, sort in enumerate(reversed(sorts), 1)
if not sort
]
if len(drop_levels) > 0:
sorter = ob_index._drop_level_numbers(drop_levels).argsort()
else:
sorter = ob_index.argsort()
ob_index = ob_index.take(sorter)
_, index = np.unique(sorter, return_index=True)
> ob_ids = np.where(ob_ids == -1, -1, index.take(ob_ids))
E IndexError: cannot do a non-empty take from an empty axes.
``` | closed | 2024-10-07T09:40:24Z | 2024-10-08T10:17:17Z | https://github.com/dask/dask/issues/11414 | [
"tests",
"upstream"
] | fjetter | 3 |
pydantic/FastUI | fastapi | 188 | GoToEvent Class: url disregarded when query is present | GoToEvent class ignores url when query is provided, defaulting to '/?id=1'.
```py
c.FireEvent(event=GoToEvent(url='/users', query={'id': 1}))
``` | open | 2024-02-13T12:53:22Z | 2024-02-13T12:53:22Z | https://github.com/pydantic/FastUI/issues/188 | [] | arrshad | 0 |
pydata/xarray | numpy | 9,489 | DataTree: Copy during assignment to child nodes? | I think both of these should have the same behavior:
``` python
bart = xr.DataTree(name="Bart")
lisa = xr.DataTree(name="Lisa")
homer = xr.DataTree(name="Homer", children={"Bart": bart, "Lisa": lisa})
print(list(lisa.siblings))
homer2 = xr.DataTree(name="Homer")
homer2.children = {"Bart": bart, "Lisa": lisa}
print(list(lisa.siblings))
```
``` python
[]
['Bart']
```
@TomNicholas
_Originally posted by @flamingbear in https://github.com/pydata/xarray/issues/9033#issuecomment-2346526772_
| open | 2024-09-12T18:20:50Z | 2024-09-12T21:29:25Z | https://github.com/pydata/xarray/issues/9489 | [
"bug",
"topic-DataTree"
] | TomNicholas | 2 |
Evil0ctal/Douyin_TikTok_Download_API | fastapi | 465 | cookie问题的一点发现 | 跟很多人反馈的一样,我也困扰于明明填了最新的cookie,但仍返回空的问题
[刚才又尝试了一下,从]()
这个地方获取了完整的cookie,填入配置文件后发现居然可以用了
怀疑抖音后台对cookie的检测更严格了,从network那边获得的cookie片段检测不通过了
不过也不确定,还请大家再测试一下 | closed | 2024-08-10T16:43:47Z | 2024-10-09T07:16:43Z | https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/465 | [] | supersaiyan2019 | 3 |
Miserlou/Zappa | django | 1,240 | Failed to generate or install certificate! | **My Domain SSL has expired. So I tried to create a new SSL key by doing**
> $ openssl genrsa 2048 > account.key;
**And Added my Domain name and Path to zappa_settings.json file.
I tried to run**
> $ zappa certify production
**This is what the Error I am getting**
> $ zappa certify production
Calling certify for stage production..
Are you sure you want to certify? [y/n] y
Certifying domain xxxx.domain.com ..
Error registering: 400 {
"type": "urn:acme:error:malformed",
"detail": "Provided agreement URL [https://letsencrypt.org/documents/LE-SA-v1.1.1-August-1-2016.pdf] does not match current agreement URL [https://letsencrypt.org/documents/LE-SA-v1.2-November-15-2017.pdf]",
"status": 400
}
Failed to generate or install certificate! :(
==============
I find that the link [https://letsencrypt.org/documents/LE-SA-v1.2-November-15-2017.pdf](https://letsencrypt.org/documents/LE-SA-v1.2-November-15-2017.pdf) returns nothing but a "404 Not Found" error. What is the problem? Also, do we have to run `$ zappa certify` every time we need to renew the certificate?
| closed | 2017-11-15T10:56:42Z | 2017-11-16T21:19:24Z | https://github.com/Miserlou/Zappa/issues/1240 | [] | sandeshnaroju | 1 |
python-gitlab/python-gitlab | api | 2,976 | ProjectFileManager does not implement head method to meet API spec | ## Description of the problem, including code/CLI snippet
Attempting to use a HEAD request to retrieve file metadata from a project repository ([Gitlab REST API docs](https://docs.gitlab.com/ee/api/repository_files.html#get-file-metadata-only)) fails with a 404
```python
dest_file_get_args = {
'file_path': file_def['dest_filepath'],
'ref': 'HEAD',
}
existing_file = project.files.head(**dest_file_get_args)
```
## Expected Behavior
Should return 200 and file metadata. The same request as a GET request returns 200, and they follow the same URL path and parameter requirements. Sending a GET request logs this URL path:
`
DEBUG:urllib3.connectionpool:https://gitlab.com:443 "GET /api/v4/projects/1/repository/files/somefile.txt?ref=HEAD HTTP/11" 200 418
`
## Actual Behavior
A HTTP HEAD request is sent with the following URL path:
`DEBUG:urllib3.connectionpool:https://gitlab.com:443 "HEAD /api/v4/projects/1/repository/files?file_path=somefile.txt&ref=HEAD HTTP/11" 404 0
`
## Specifications
- python-gitlab version: ~~3.1.43~~ 4.9.0
- API version you are using (v3/v4): v4
- Gitlab server version (or gitlab.com): [v16.7.4-ee](https://gitlab.com/gitlab-org/gitlab/-/tags/v16.7.4-ee)
| closed | 2024-09-11T15:05:35Z | 2024-09-14T16:31:36Z | https://github.com/python-gitlab/python-gitlab/issues/2976 | [] | holysoles | 12 |
yzhao062/pyod | data-science | 165 | 3D visualization for outlier detection(performance is ok.) | Hello.
I coded the 3D outlier detection below (from a pyod example):
```python
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt
%matplotlib inline
import matplotlib.font_manager

from pyod.models.abod import ABOD
from pyod.models.knn import KNN
from pyod.utils.data import generate_data, get_outliers_inliers

X_train, Y_train = generate_data(n_train=200, train_only=True, n_features=3)

outlier_fraction = 0.1
x_outliers, x_inliers = get_outliers_inliers(X_train, Y_train)
n_inliers = len(x_inliers)
n_outliers = len(x_outliers)

F1 = X_train[:, [0]].reshape(-1, 1)
F2 = X_train[:, [1]].reshape(-1, 1)
F3 = X_train[:, [2]].reshape(-1, 1)

xx, yy, zz = np.meshgrid(np.linspace(-10, 10, 200),
                         np.linspace(-10, 10, 200),
                         np.linspace(-10, 10, 200))
xxx, yyy = np.meshgrid(np.linspace(-10, 10, 200), np.linspace(-10, 10, 200))

from mpl_toolkits.mplot3d import Axes3D  # noqa: F401 unused import

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(F1, F2, F3)
ax.set_xlabel('X Label')
ax.set_ylabel('Y Label')
ax.set_zlabel('Z Label')

classifiers = {
    'K Nearest Neighbors (KNN)': KNN(contamination=outlier_fraction),
    'Angle-based Outlier Detector (ABOD)': ABOD(contamination=outlier_fraction)
}

plt.figure(figsize=(10, 10))
for i, (clf_name, clf) in enumerate(classifiers.items()):
    # fit the dataset to the model
    clf.fit(X_train)
    scores_pred = clf.decision_function(X_train) * -1
    y_pred = clf.predict(X_train)
    n_errors = (y_pred != Y_train).sum()
    print('No of Errors : ', clf_name, n_errors)
    threshold = stats.scoreatpercentile(scores_pred, 100 * outlier_fraction)
```
It works, but when I add the next line:
```python
Z = clf.decision_function(np.c_[xx.ravel(), yy.ravel(), zz.ravel()]) * -1
```
it stalls...
Can I get the visualization part for getting the decision region?
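One likely culprit, as a hedged note: a 200-point-per-axis grid in 3-D means 200³ = 8,000,000 query points for `decision_function`, which is enough to stall KNN or ABOD scoring. A coarser grid scored in batches keeps things tractable (stand-in scorer shown; swap in the fitted `clf.decision_function`):

```python
import numpy as np

print(200 ** 3)  # 8000000 grid points, far too many to score at once

# A coarser grid keeps the query count manageable:
xx, yy, zz = np.meshgrid(*[np.linspace(-10, 10, 30)] * 3)
grid = np.c_[xx.ravel(), yy.ravel(), zz.ravel()]
print(grid.shape)  # (27000, 3)

def score(batch):
    # placeholder for clf.decision_function(batch) * -1
    return np.linalg.norm(batch, axis=1)

# Score in batches to bound memory:
Z = np.concatenate([score(b) for b in np.array_split(grid, 100)])
print(Z.shape)  # (27000,)
```

The scores can then be visualized as a 3-D scatter colored by `Z`, since matplotlib has no direct 3-D decision-boundary contour.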
| closed | 2020-02-24T04:40:36Z | 2020-03-11T06:51:11Z | https://github.com/yzhao062/pyod/issues/165 | [] | NeighborhoodCoding | 2 |
plotly/dash-table | dash | 928 | [BUG]: Fixed columns overlay non-fixed columns when using horizontal scrollbar, merged_duplicate_headers and fixed_columns | ## Description
On a table with two levels of headers with `merge_duplicate_headers=True`, a horizontal scrollbar and first two columns fixed, the fixed columns overlay some of the non-fixed columns, as shown in the gif. The numbers in the first and second header names should match, and "Sub Level Header 3" and "Sub Level Header 4" disappeared. The blank headers of the first row appear to have been merged in such a way that the entire row is shifted to the left. In addition, the first header is enlarged and does not seem to respond to changed settings.

## Expected Behavior
The table is expected to behave similarly to the second gif with `merged_duplicate_headers=False`.

```python
import dash
import dash_html_components as html
import dash_table
import pandas as pd
df = pd.DataFrame(
{
'column_0': [1],
'column_1': [1],
'column_2': [1],
'column_3': [1],
'column_4': [1],
'column_5': [1],
'column_6': [1],
'column_7': [1],
'column_8': [1],
'column_9': [1],
'column_10': [1],
'column_11': [1],
'column_12': [1],
'column_13': [1],
'column_14': [1],
'column_15': [1],
'column_16': [1],
'column_17': [1],
'column_18': [1],
'column_19': [1]
}
)
columns = [
{'name': ['', 'Sub Level Header 1'], 'id': 'column_0', 'type': 'numeric', 'selectable': True},
{'name': ['', 'Sub Level Header 2'], 'id': 'column_1', 'type': 'numeric', 'selectable': True},
{'name': ['', 'Sub Level Header 3'], 'id': 'column_2', 'type': 'numeric', 'selectable': True},
{'name': ['Top Level Header 4', 'Sub Level Header 4'], 'id': 'column_3', 'type': 'numeric', 'selectable': True},
{'name': ['Top Level Header 5', 'Sub Level Header 5'], 'id': 'column_4', 'type': 'numeric', 'selectable': True},
{'name': ['Top Level Header 6', 'Sub Level Header 6'], 'id': 'column_5', 'type': 'numeric', 'selectable': True},
{'name': ['Top Level Header 7', 'Sub Level Header 7'], 'id': 'column_6', 'type': 'numeric', 'selectable': True},
{'name': ['Top Level Header 8', 'Sub Level Header 8'], 'id': 'column_7', 'type': 'numeric', 'selectable': True},
{'name': ['Top Level Header 9', 'Sub Level Header 9'], 'id': 'column_8', 'type': 'numeric', 'selectable': True},
{'name': ['Top Level Header 10', 'Sub Level Header 10'], 'id': 'column_9', 'type': 'numeric', 'selectable': True},
{'name': ['Top Level Header 11', 'Sub Level Header 11'], 'id': 'column_10', 'type': 'numeric', 'selectable': True},
{'name': ['Top Level Header 12', 'Sub Level Header 12'], 'id': 'column_11', 'type': 'numeric', 'selectable': True},
{'name': ['Top Level Header 13', 'Sub Level Header 13'], 'id': 'column_12', 'type': 'numeric', 'selectable': True},
{'name': ['Top Level Header 14', 'Sub Level Header 14'], 'id': 'column_13', 'type': 'numeric', 'selectable': True},
{'name': ['Top Level Header 15', 'Sub Level Header 15'], 'id': 'column_14', 'type': 'numeric', 'selectable': True},
{'name': ['Top Level Header 16', 'Sub Level Header 16'], 'id': 'column_15', 'type': 'numeric', 'selectable': True},
{'name': ['Top Level Header 17', 'Sub Level Header 17'], 'id': 'column_16', 'type': 'numeric', 'selectable': True},
{'name': ['Top Level Header 18', 'Sub Level Header 18'], 'id': 'column_17', 'type': 'numeric', 'selectable': True},
{'name': ['Top Level Header 19', 'Sub Level Header 19'], 'id': 'column_18', 'type': 'numeric', 'selectable': True},
{'name': ['Top Level Header 20', 'Sub Level Header 20'], 'id': 'column_19', 'type': 'numeric', 'selectable': True}
]
app = dash.Dash(__name__)
demo_table = dash_table.DataTable(
id='demo_table',
columns=columns,
data=df.to_dict('records'),
merge_duplicate_headers=True,
fixed_columns={'headers':True, 'data':2},
style_table={
'overflowX': 'scroll',
'minWidth': '100%'
},
style_cell={
'minWidth': '150px', 'width': '150px', 'maxWidth': '150px'
}
)
app.layout = html.Div([
html.Div("Demonstration Table:"),
demo_table
])
if __name__ == '__main__':
app.run_server(debug=True, host='0.0.0.0', port=8000)
```
## Context
OS: macOS Big Sur 11.4
Browser: Firefox 90.0.2 (64-bit)
dash 1.21.0
dash-bootstrap-components 0.10.7
dash-core-components 1.17.1
dash-extensions 0.0.51
dash-html-components 1.1.4
dash-renderer 1.9.1
dash-table 4.12.0 | open | 2021-07-28T15:05:38Z | 2022-12-20T13:58:24Z | https://github.com/plotly/dash-table/issues/928 | [] | FabianHi | 2 |
mwaskom/seaborn | pandas | 3,475 | This is the excel plot, the labels are perfectly aligned under bars | 
| closed | 2023-09-15T17:14:18Z | 2023-09-15T21:41:15Z | https://github.com/mwaskom/seaborn/issues/3475 | [] | Utsav-2301 | 0 |
oegedijk/explainerdashboard | dash | 27 | Use ScatterGL to improve performance | Hi Oege,
first of all thanks for the amazing work! Is it possible to use WebGL Scatter Plots (https://plotly.com/python/webgl-vs-svg/) instead of the default ones to improve the performance of the vanilla "Feature Dependence" Plot for large datasets (say >10k)? | closed | 2020-11-26T10:37:18Z | 2020-12-01T16:07:33Z | https://github.com/oegedijk/explainerdashboard/issues/27 | [] | hkoppen | 4 |
jackzhenguo/python-small-examples | data-science | 37 | [99] 乘法表加个一行代码的版本吧,哈哈 | ```python
print("\n".join("\t".join(["%s*%s=%s" % (x, y, x * y) for y in range(1, x + 1)]) for x in range(1, 10)))
``` | closed | 2020-06-06T08:43:36Z | 2020-06-06T10:46:20Z | https://github.com/jackzhenguo/python-small-examples/issues/37 | [] | CLannadZSY | 1 |
dsdanielpark/Bard-API | api | 262 | Google Official API | # This Bard API Package is unofficial Python package.
As you may know, this repository was provided for testing certain functionalities due to the delayed release of Google Bard's official API (which has not been released yet). In the early stages, the cookie values were static, allowing for stable usage. However, it has become unstable recently due to frequent changes in cookie values within 15-20 minutes or even faster.
Furthermore, since Bard Advanced API is expected to be publicly available in 2024, this repository will become a public archive upon the official API's release. The [Bard website](https://bard.google.com/chat) is currently presenting responses randomly from a range of diverse AI models, such as PaLM2 and Gemini. You can test the functionalities through the web UI on the Bard website.
**I hope that this package has been of some help to other developers and that it has contributed to open-source projects, even if only slightly. I sincerely thank all the contributors and community participants. I wish you all the best and hope that the new year brings only good things.**
### To ensure stable usage, we strongly recommend replacing your code with Google's official API, which we introduce as follows.
<br><br>
***
<br><br>
# Google API Console
- Web: https://console.cloud.google.com/apis/library?pli=1
<br>
# Google PaLM API
You can explore information about various generative AI models by Google. Although the palm2 API seems to be under preparation, you can check out demos related to palm2 on the demo page.
## PaLM API
Try demo at https://makersuite.google.com/app/prompts/new_text.
```
who are you?
>> I am powered by PaLM 2, which stands for Pathways Language Model 2, a large language model from Google AI.
```
Google Generative AI
- Official Page: https://blog.google/technology/ai/google-palm-2-ai-large-language-model/
- GitHub: https://github.com/GoogleCloudPlatform/generative-ai
- Try Demo: https://makersuite.google.com/app/prompts/new_text.
- Official Library: https://makersuite.google.com/app/library
- Get API Key: https://makersuite.google.com/app/apikey
- Quick Start Tutorial: https://developers.generativeai.google/tutorials/text_quickstart
### Quick Start
```
$ pip install -q google-generativeai
```
```python
import pprint
import google.generativeai as palm
palm.configure(api_key='YOUR_API_KEY')
models = [m for m in palm.list_models() if 'generateText' in m.supported_generation_methods]
model = models[0].name
print(model)
prompt = "Who are you?"
completion = palm.generate_text(
model=model,
prompt=prompt,
temperature=0,
# The maximum length of the response
max_output_tokens=800,
)
print(completion.result)
```
## Google PaLM 2
- Paper: https://arxiv.org/abs/2305.10403
- Web: https://blog.google/technology/ai/google-palm-2-ai-large-language-model/
- Official API On PaLM API.
<br><br>
# Google Gemini
- Paper: https://storage.googleapis.com/deepmind-media/gemini/gemini_1_report.pdf
- Web: https://blog.google/technology/ai/google-gemini-ai/#capabilities
- Code Guide: https://ai.google.dev/tutorials/python_quickstart
- Official API On Google AI Studio.
## Google AI Studio
Google AI Studio creates a new Google Cloud project for each new API key. You can also create an API key in an existing Google Cloud project. All projects are subject to the [Google Cloud Platform Terms of Service](https://cloud.google.com/terms).
- Web: https://makersuite.google.com/app/apikey
Note: The Gemini API is currently in public preview. Production applications are not supported yet.
<br>
<br><br>
# Google Bard
- Web: https://bard.google.com/chat
- Update News: https://bard.google.com/updates
- The official API is not available yet.
<br><br>
# Google Generative AI Github
- Github: https://github.com/GoogleCloudPlatform/generative-ai | open | 2024-01-13T18:33:34Z | 2024-01-13T18:52:40Z | https://github.com/dsdanielpark/Bard-API/issues/262 | [
"notice"
] | dsdanielpark | 0 |
google-research/bert | nlp | 984 | token_type_id has only two possible values, yes? | I read the code and found this in the comments:
token_type_ids = tf.constant([[0, 0, 1], [0, 2, 0]])
But I think token_type_ids can only be 1 or 0, am I wrong? | open | 2020-01-08T07:50:02Z | 2020-03-28T14:02:25Z | https://github.com/google-research/bert/issues/984 | [] | novas-meng | 1 |
Nekmo/amazon-dash | dash | 107 | FreeNAS (BSD) support | Put an `x` into all the boxes [ ] relevant to your *issue* (like this: `[x]`)
### What is the purpose of your *issue*?
- [ ] Bug report (encountered problems with amazon-dash)
- [ X ] Feature request (request for a new functionality)
- [ X ] Question
- [ ] Other
* amazon-dash version: 1.3.1
* Python version: 3.6.6
* Pip & Setuptools version: 18.1
* Operating System: FreeNAS 11.2
- Using iocage jail to run amazon-dash
* iocage jail OS: FreeBSD 11.2
- [ X ] The `pip install` or `setup install` command has been completed without errors
- [ ] The `python -m amazon_dash.install` command has been completed without errors
- [ X ] The `amazon-dash discovery` command works without errors
- [ X ] I have created/edited the configuration file
- [ X ] *Amazon-dash service* or `amazon-dash --debug run` works
#### Description
Create a [FreeNAS plugin](https://doc.freenas.org/11.2/plugins.html#plugins) for amazon-dash
#### What I Did
pkg install python36
python3.6 -m ensurepip
pip install amazon-dash
cp /usr/local/lib/python3.6/site-packages/amazon_dash/install/amazon-dash.yml /root/amazon-dash.yml
amazon-dash run
Everything seems to be working fine
```
python3.6 -m amazon_dash.install
Executing all install scripts for Amazon-Dash
[OK] config has been installed successfully
ps: illegal option -- -
usage: ps [-aCcdefHhjlmrSTuvwXxZ] [-O fmt | -o fmt] [-G gid[,gid...]]
[-J jid[,jid...]] [-M core] [-N system]
[-p pid[,pid...]] [-t tty[,tty...]] [-U user[,user...]]
ps [-L]
Traceback (most recent call last):
File "/usr/local/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/usr/local/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/usr/local/lib/python3.6/site-packages/amazon_dash/install/__main__.py", line 3, in <module>
catch(cli)()
File "/usr/local/lib/python3.6/site-packages/amazon_dash/install/__init__.py", line 47, in wrap
return fn(*args, **kwargs)
File "/usr/local/lib/python3.6/site-packages/click/core.py", line 764, in __call__
return self.main(*args, **kwargs)
File "/usr/local/lib/python3.6/site-packages/click/core.py", line 717, in main
rv = self.invoke(ctx)
File "/usr/local/lib/python3.6/site-packages/click/core.py", line 1137, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/usr/local/lib/python3.6/site-packages/click/core.py", line 956, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/usr/local/lib/python3.6/site-packages/click/core.py", line 555, in invoke
return callback(*args, **kwargs)
File "/usr/local/lib/python3.6/site-packages/amazon_dash/install/__init__.py", line 152, in all
has_service = has_service or (service().install() and
File "/usr/local/lib/python3.6/site-packages/amazon_dash/install/__init__.py", line 71, in install
self.is_installable()
File "/usr/local/lib/python3.6/site-packages/amazon_dash/install/__init__.py", line 107, in is_installable
if get_init_system() != 'systemd' or not get_systemd_services_path():
File "/usr/local/lib/python3.6/site-packages/amazon_dash/install/__init__.py", line 30, in get_init_system
return check_output(['ps', '--no-headers', '-o', 'comm', '1']).strip(b'\n ').decode('utf-8')
File "/usr/local/lib/python3.6/subprocess.py", line 336, in check_output
**kwargs).stdout
File "/usr/local/lib/python3.6/subprocess.py", line 418, in run
output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['ps', '--no-headers', '-o', 'comm', '1']' returned non-zero exit status 1.
```
I am wondering what the `amazon_dash.install` command does, as amazon-dash seems to be working fine without it. It seems like it may just be copying the config file into place and checking for systemd.
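For what it's worth, the crash comes from `get_init_system` using the GNU-specific `--no-headers` flag, which BSD `ps` rejects. A portable probe (just a sketch, not tested on FreeNAS itself) could stick to POSIX options:

```python
import subprocess

def get_init_system() -> str:
    """Return the name of PID 1, or "unknown" if ps is unavailable."""
    try:
        # POSIX ps: "-o comm=" sets an empty column header, which
        # suppresses the heading line entirely, so the call works on both
        # Linux (procps) and FreeBSD, unlike GNU-only "--no-headers".
        out = subprocess.check_output(["ps", "-p", "1", "-o", "comm="])
    except (OSError, subprocess.CalledProcessError):
        return "unknown"
    return out.strip().decode("utf-8", errors="replace")

print(get_init_system())
```

On FreeBSD this would print `init` rather than `systemd`, so the systemd service install would still be skipped, which matches how the jail actually runs.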
| closed | 2018-11-18T03:11:31Z | 2019-03-20T22:26:50Z | https://github.com/Nekmo/amazon-dash/issues/107 | [
"Documentation"
] | tprelog | 6 |
Lightning-AI/pytorch-lightning | deep-learning | 20,332 | Add a Chinese version of README | ### 📚 Documentation
These are the reasons why I want to add a Chinese version of the README:
1、Reduce language barriers and expand user base: Chinese is one of the most widely spoken languages in the world, and providing a Chinese version of the README will help a large number of Chinese developers and researchers get up to speed with PyTorch Lightning, especially those who are not familiar with English, thereby attracting more people to participate and use the project.
2、Increase the international reach of open source projects: Adding multilingual support, especially Chinese, will help PyTorch Lightning spread globally, especially in academia and industry in China and other Chinese-speaking regions. This will greatly enhance the project's user base and number of contributors.
3、Accelerate community contributions: By providing Chinese documentation, Chinese developers can better understand the project, which makes it easier to participate in the development and contribution of the project, and promotes the activity and growth of the open source community.
4、Improve learning efficiency: Providing Chinese users with native language versions of documents can significantly shorten their learning curve, allowing them to focus on the technology itself instead of spending extra time on language understanding. This will improve the efficiency of learning and research.
cc @borda | open | 2024-10-10T08:18:47Z | 2024-10-10T08:19:08Z | https://github.com/Lightning-AI/pytorch-lightning/issues/20332 | [
"docs",
"needs triage"
] | nocoding03 | 0 |
datadvance/DjangoChannelsGraphqlWs | graphql | 28 | asyncio.exceptions.CancelledError when connecting | I'm just trying to get DjangoChannelsGraphqlWs up and running, but whenever I connect to the websocket with `wscat` I just get a `CancelledError` exception.
```
WebSocket HANDSHAKING /ws/graphql/ [172.19.0.1:56906]
Exception in callback AsyncioSelectorReactor.callLater.<locals>.run() at /usr/local/lib/python3.8/site-packages/twisted/internet/asyncioreactor.py:287
handle: <TimerHandle when=312902.170321491 AsyncioSelectorReactor.callLater.<locals>.run() at /usr/local/lib/python3.8/site-packages/twisted/internet/asyncioreactor.py:287>
Traceback (most recent call last):
File "/usr/local/lib/python3.8/asyncio/events.py", line 81, in _run
self._context.run(self._callback, *self._args)
File "/usr/local/lib/python3.8/site-packages/twisted/internet/asyncioreactor.py", line 290, in run
f(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/daphne/server.py", line 270, in application_checker
exception = application_instance.exception()
asyncio.exceptions.CancelledError
WebSocket DISCONNECT /ws/graphql/ [172.19.0.1:56906]
```
I haven't tried any other WebSocket clients but I also have a simple Echo consumer at `/ws/echo/` which works fine.
Additionally I'm getting a similar error in my tests.
```
pytest -k channels
=============================================================== test session starts ===============================================================
platform linux -- Python 3.8.0, pytest-5.2.1, py-1.8.0, pluggy-0.13.0
Django settings: corso.settings (from environment variable)
rootdir: /app
plugins: mock-1.11.1, django-3.6.0, asyncio-0.10.0
collected 55 items / 54 deselected / 1 selected
knowledge_tree/tests/test_channels/test_topic_tree_subscriptions.py F [100%]
==================================================================== FAILURES =====================================================================
____________________________________________________________ test_basic_graphql_query _____________________________________________________________
pyfuncitem = <Function test_basic_graphql_query>
@pytest.mark.tryfirst
def pytest_pyfunc_call(pyfuncitem):
"""
Run asyncio marked test functions in an event loop instead of a normal
function call.
"""
for marker_name, fixture_name in _markers_2_fixtures.items():
if marker_name in pyfuncitem.keywords \
and not getattr(pyfuncitem.obj, 'is_hypothesis_test', False):
event_loop = pyfuncitem.funcargs[fixture_name]
funcargs = pyfuncitem.funcargs
testargs = {arg: funcargs[arg]
for arg in pyfuncitem._fixtureinfo.argnames}
> event_loop.run_until_complete(
asyncio.ensure_future(
pyfuncitem.obj(**testargs), loop=event_loop))
/usr/local/lib/python3.8/site-packages/pytest_asyncio/plugin.py:156:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <_UnixSelectorEventLoop running=False closed=False debug=False>
future = <Task cancelled name='Task-1' coro=<test_basic_graphql_query() done, defined at /app/knowledge_tree/tests/test_channels/test_topic_tree_subscriptions.py:23>>
def run_until_complete(self, future):
"""Run until the Future is done.
If the argument is a coroutine, it is wrapped in a Task.
WARNING: It would be disastrous to call run_until_complete()
with the same coroutine twice -- it would wrap it in two
different Tasks and that can't be good.
Return the Future's result, or raise its exception.
"""
self._check_closed()
new_task = not futures.isfuture(future)
future = tasks.ensure_future(future, loop=self)
if new_task:
# An exception is raised if the future didn't complete, so there
# is no need to log the "destroy pending task" message
future._log_destroy_pending = False
future.add_done_callback(_run_until_complete_cb)
try:
self.run_forever()
except:
if new_task and future.done() and not future.cancelled():
# The coroutine raised a BaseException. Consume the exception
# to not log a warning, the caller doesn't have access to the
# local task.
future.exception()
raise
finally:
future.remove_done_callback(_run_until_complete_cb)
if not future.done():
raise RuntimeError('Event loop stopped before Future completed.')
> return future.result()
E asyncio.exceptions.CancelledError
/usr/local/lib/python3.8/asyncio/base_events.py:608: CancelledError
================================================================ warnings summary =================================================================
/usr/local/lib/python3.8/site-packages/aiohttp/helpers.py:107
/usr/local/lib/python3.8/site-packages/aiohttp/helpers.py:107: DeprecationWarning: "@coroutine" decorator is deprecated since Python 3.8, use "async def" instead
def noop(*args, **kwargs): # type: ignore
-- Docs: https://docs.pytest.org/en/latest/warnings.html
================================================== 1 failed, 54 deselected, 1 warnings in 2.57s ==================================================
```
### Relevant files:
My test is pretty simple:
```python
"""Tests for subscriptions to Topic tree changes."""
import channels_graphql_ws.testing
import pytest
from boltons.iterutils import get_path
from corso.routing import application
@pytest.fixture
def wsclient(db, request):
"""Make GraphQL testing client.""" # noqa: D202
def constructor():
transport = channels_graphql_ws.testing.GraphqlWsTransport(
application=application, path="ws/graphql/"
)
return channels_graphql_ws.testing.GraphqlWsClient(transport)
yield constructor
@pytest.mark.asyncio
async def test_basic_graphql_query(wsclient):
"""Test raw connection to GraphqlWsConsumer."""
client = wsclient()
await client.connect_and_init()
response = await client.execute("query { ping }")
pong = get_path(response, ["data", "ping"])
assert pong == "pong"
await client.finalize()
```
`settings.py` relevant bits
```python
...
ASGI_APPLICATION = "corso.routing.application"
...
# Channels
CHANNEL_LAYERS = dict(default=dict(BACKEND="channels.layers.InMemoryChannelLayer"))
```
`corso/schema.py` has my Graphene schema and Consumer classes
```python
"""GraphQL API aggregator."""
import channels
import channels_graphql_ws
import graphene
import knowledge_tree.mutations
import knowledge_tree.schema
import knowledge_tree.subscriptions
class Query(knowledge_tree.schema.Query, graphene.ObjectType):
"""
Queries from multiple apps.
- Knowledge Tree
"""
class Mutation(knowledge_tree.mutations.Mutation, graphene.ObjectType):
"""
Mutations from multiple apps.
- Knowledge Tree
"""
class Subscription(knowledge_tree.subscriptions.Subscription, graphene.ObjectType):
"""
Subscriptions from multiple apps.
- Knowledge Tree
"""
schema = graphene.Schema(query=Query, mutation=Mutation, subscription=Subscription)
class GraphqlWsConsumer(channels_graphql_ws.GraphqlWsConsumer):
"""Channels WebSocket consumer which provides GraphQL API."""
async def on_connect(self, payload):
"""Handle new websocket connection."""
"""
Use auxiliary Channels function `get_user` to replace an instance of
`channels.auth.UserLazyObject` with a native Django user object (user model
instance or `AnonymousUser`) It is not necessary, but it helps to keep resolver
code simpler. Cause in both HTTP/WebSocket requests they can use
`info.context.user`, but not a wrapper. For example objects of type Graphene
Django type `DjangoObjectType` does not accept `channels.auth.UserLazyObject`
instances.
https://github.com/datadvance/DjangoChannelsGraphqlWs/issues/23
"""
self.scope["user"] = await channels.auth.get_user(self.scope)
schema = schema
# strict_ordering = True
send_keepalive_every = 11
```
`corso/routing.py` has my ASGI application
```python
"""Channels routing for corso project."""
from channels.auth import AuthMiddlewareStack
from channels.generic.websocket import WebsocketConsumer
from channels.routing import ProtocolTypeRouter, URLRouter
from django.urls import path
import corso.schema
class EchoConsumer(WebsocketConsumer):
"""Just echo back all messages."""
def connect(self):
"""Accept all connections."""
self.accept()
    def disconnect(self, close_code):
        """Do nothing."""
def receive(self, text_data: str):
"""Echo back data."""
self.send(text_data=text_data)
application = ProtocolTypeRouter(
dict(
websocket=AuthMiddlewareStack(
# https://channels.readthedocs.io/en/latest/topics/authentication.html
URLRouter(
[
path("ws/graphql/", corso.schema.GraphqlWsConsumer),
path("ws/echo/", EchoConsumer),
]
)
)
)
)
```
Haven't gotten to the point of calling any of my subscriptions, so that's a bit offtopic for now.
The "ping" I'm testing looks like this:
```python
class Query:
"""Knowledge Tree GraphQL query root.""
...
ping = graphene.Field(graphene.String)
def resolve_ping(self, info, **kwargs):
"""Return a pong."""
return "pong"
...
``` | closed | 2019-10-18T20:13:45Z | 2020-12-10T17:47:58Z | https://github.com/datadvance/DjangoChannelsGraphqlWs/issues/28 | [] | cellofellow | 6 |
roboflow/supervision | deep-learning | 732 | The project could not run with python ultralytics_example.py. The error report is: FileNotFoundError: /root/autodl-tmp/Roboflow/supervision/examples/speed_estimation/[[[172 146 126]...... | ### Search before asking
- [X] I have searched the Supervision [issues](https://github.com/roboflow/supervision/issues) and found no similar bug report.
### Bug

### Environment
_No response_
### Minimal Reproducible Example
python ultralytics_example.py \
> --source_video_path data/vehicles.mp4 \
> --target_video_path data/vehicles-result.mp4 \
> --confidence_threshold 0.3 \
> --iou_threshold 0.5
### Additional
_No response_
### Are you willing to submit a PR?
- [ ] Yes I'd like to help by submitting a PR! | closed | 2024-01-16T07:40:09Z | 2024-01-16T16:43:31Z | https://github.com/roboflow/supervision/issues/732 | [
"bug"
] | spring-wreath | 9 |
ultralytics/ultralytics | machine-learning | 18,869 | Yolo11-pose how to post-process output | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
I inference model in ort web and processed nms
I have some questions about output.
1. Output shape is [1, 100, 56], 100 is mean 100 object are detect?
2. I set iou_threshold = 0.45, conf_threshold = 0.35 why are there still 100 bounding box, is it normal?
3. how to post-process this 100 bounding box?
4. I used the same nms as the yolo11 object detect model, is it correct?
### Additional
_No response_ | closed | 2025-01-24T17:01:20Z | 2025-02-08T17:16:44Z | https://github.com/ultralytics/ultralytics/issues/18869 | [
"question",
"pose"
] | nomi30701 | 6 |
miguelgrinberg/python-socketio | asyncio | 1,113 | BadNamespaceError / is not a connected namespace | **Describe the bug**
Sometimes it comes back to me
```
File "/usr/local/lib/python3.10/dist-packages/socketio/client.py", line 393, in emit
socketio.exceptions.BadNamespaceError / is not a connected namespace
```
**To Reproduce**
I've been trying various ways
First attempt:
```python
sio = socketio.Client(logger=False,ssl_verify=False,reconnection=True)
sio.connect('https://domain:3001' ,transports="polling")
sio.emit('debug', {'room': 'hi','msg':'Hi mate'}) #works
sio.sleep(5)
sio.emit('debug', {'room': 'hi','msg':'Hi again'}) #works again
... other tasks ...
sio.emit('debug', {'room': 'hi','msg':'Tasks finish'}) #error
```
Second attempt:
```python
sio = socketio.Client(logger=False,ssl_verify=False,reconnection=True)
sio.connect('https://domain:3001' ,transports="polling")
sio.emit('debug', {'room': 'hi','msg':'Hi mate'}) #works
sio.sleep(5)
sio.start_background_task(task, 123)  # task() runs the long-running jobs
sio.wait()
```
**Expected behavior**
Send message at the end of a long task
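What I would like is something that waits for the client to be (re)connected before that final emit. A sketch of the helper I have in mind; it only assumes the documented `sio.connected` attribute and that reconnection is enabled:

```python
import time

def wait_until(predicate, timeout=30.0, interval=0.1):
    """Poll until predicate() returns True or the timeout expires.

    Intended use before the final emit, e.g.:
        if wait_until(lambda: sio.connected):
            sio.emit('debug', {'room': 'hi', 'msg': 'Tasks finish'})
    """
    deadline = time.time() + timeout
    while not predicate():
        if time.time() >= deadline:
            return False
        time.sleep(interval)
    return True
```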
**Logs**
```
Sending packet MESSAGE data 2["debug",{"room":"hi","msg":"Hi again"}]
Received packet PING data
Sending packet PONG data
Sending polling GET request to https://domain:3001/socket.io/?transport=polling&EIO=4&sid=m_JKfkEHmf3Apu_8AAAU
Received packet CLOSE data
Sending packet CLOSE data None
disconnected from server
Waiting for write loop task to end
Unexpected status code 400 in server response, aborting
Exiting write loop task
Exiting read loop task
```
| closed | 2023-01-03T22:37:14Z | 2024-01-04T20:06:55Z | https://github.com/miguelgrinberg/python-socketio/issues/1113 | [
"question"
] | nouser000 | 4 |
SALib/SALib | numpy | 216 | Refactor command line interface | ## Problem
The command line commands used to run a sample or analysis function involves a lot of code duplication. Each module contains an `if __name__ == '__main__'` section containing an argument parser, often with duplicate arguments.
## Recommendation
Write one central command-line interface argument parser which interrogates the available methods and exposes the arguments on the command line. An an entry point for an `salib` command.
For example
```bash
$ salib sample morris
SALib: Sensitivity Analysis Library
Morris Method requires the following arguments
-n <int> number of trajectories
-o <str> filepath to write sample
...
$ salib sample morris -n 10 -o my_sample.csv
``` | closed | 2018-10-19T13:55:12Z | 2019-03-14T15:46:28Z | https://github.com/SALib/SALib/issues/216 | [
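A sketch of what the central parser could look like with stdlib `argparse` subcommands; the `registry` shape (mapping actions to methods to argument specs) is hypothetical and would be built by interrogating the available modules:

```python
import argparse

def build_parser(registry):
    """Assemble a `salib <action> <method>` CLI from a method registry.

    `registry` is a hypothetical mapping such as:
        {"sample": {"morris": [("-n", dict(type=int, help="trajectories")),
                               ("-o", dict(help="output path"))]}}
    """
    parser = argparse.ArgumentParser(prog="salib")
    actions = parser.add_subparsers(dest="action", required=True)
    for action, methods in registry.items():
        method_subs = actions.add_parser(action).add_subparsers(
            dest="method", required=True)
        for name, arg_specs in methods.items():
            method_parser = method_subs.add_parser(name)
            for flag, kwargs in arg_specs:
                method_parser.add_argument(flag, **kwargs)
    return parser

registry = {"sample": {"morris": [("-n", dict(type=int)), ("-o", dict())]}}
args = build_parser(registry).parse_args(["sample", "morris", "-n", "10"])
print(args.action, args.method, args.n)  # sample morris 10
```

Each method would then declare its own arguments exactly once, and the `salib` entry point dispatches on `args.action`/`args.method`, removing the per-module `if __name__ == '__main__'` duplication.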
"enhancement"
] | willu47 | 12 |
vaexio/vaex | data-science | 1,973 | [FEATURE-REQUEST/ HELP REQUIRED] Getting first and last members of a group | **Description**
Hi, I would like to be able to retrieve the first and last members of a group after a groupby operation, which is straightforward to do using pandas, but I seem to be unable to do this using Vaex. Looking through the docs, I have encountered the aggregators `vaex.agg.first` and `vaex.agg.last` which appeared as though they would solve this problem, but I can't get these to work at all. However, this may be user error, as I can't find documentation or examples detailing their usage anywhere.
As a motivating example, consider the following DataFrame:

I would like to retrieve the content for the first and last rows in each 'Class'. To get the first item, I would expect that you should be able to do something along the lines of:
`df.groupby('Class').agg({'content': vaex.agg.first(df.content, df.idx)})`
or
`df.groupby('Class', agg=vaex.agg.first('content', 'idx'))`
unfortunately I can't get anything like this to work.
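For comparison, here is the pandas version of what I am trying to express, as a sketch with toy data:

```python
import pandas as pd

df = pd.DataFrame({
    "Class": ["a", "a", "b", "b"],
    "idx": [0, 1, 2, 3],
    "content": ["a0", "a1", "b0", "b1"],
})

# Order by idx, then take the first/last content per Class.
out = (df.sort_values("idx")
         .groupby("Class")["content"]
         .agg(first="first", last="last"))
print(out.loc["a", "first"], out.loc["b", "last"])  # a0 b1
```

I would like the vaex groupby to produce the same `first`/`last` columns, ordered by `idx`.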
The version that I am using is
```
{'vaex': '4.8.0',
'vaex-core': '4.8.0',
'vaex-viz': '0.5.1',
'vaex-hdf5': '0.12.0',
'vaex-server': '0.8.1',
'vaex-astro': '0.9.0',
'vaex-jupyter': '0.7.0',
'vaex-ml': '0.17.0'}
```
If this is possible, any help would be much appreciated, and an example should be added to the docs. Alternatively, I believe that this would be useful functionality to implement.
Thanks | closed | 2022-03-16T17:07:07Z | 2022-04-11T09:35:08Z | https://github.com/vaexio/vaex/issues/1973 | [] | Chris-hughes10 | 7 |
robotframework/robotframework | automation | 5,197 | Different behaviour in local machine and pipeline | I am working on one project where i ahve setup everything and running my test cases using Pycharm which is perfectly running in my local machine and when i am trying to run that test cases on pipeline some of those xpath are not found. Why this is happening on pipeline? what is the best solution to fix that issues? is there any straight way to do that?
This is happening only on some xpath. I have tried to fix using different ways (running without headless mode) still not working.
Thanks in advance | closed | 2024-09-03T11:58:07Z | 2024-09-04T11:33:33Z | https://github.com/robotframework/robotframework/issues/5197 | [] | shantosh123 | 3 |
quokkaproject/quokka | flask | 29 | about random Content | Hi, I want to get a random Content.
How do I do that?
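The approach I had in mind, as a sketch; it assumes a MongoEngine-style queryset like Quokka's `Content.objects`, exposing `count()`, `skip()` and `first()`:

```python
import random

def random_item(queryset):
    """Pick one random element via a random offset.

    Fine for small collections; for large ones, sampling on a random
    indexed field would scale better.
    """
    total = queryset.count()
    if total == 0:
        return None
    return queryset.skip(random.randrange(total)).first()
```

So something like `random_item(Content.objects)` is what I am after.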
| closed | 2013-08-27T07:32:27Z | 2015-07-16T02:56:57Z | https://github.com/quokkaproject/quokka/issues/29 | [] | javalurker | 2 |
iMerica/dj-rest-auth | rest-api | 367 | Password reset email template not found. | I recently upgraded from version `2.1.4` to version `2.2.2`. Everything works fine except my custom password reset email template is not found anymore. Instead of this template, the default password reset email template is used. The path to the template is correct, as it is detected correctly in version `2.1.4`.
Here is my CustomPasswordResetSerializer which is registered in `settings.py`:
```python
from dj_rest_auth.serializers import PasswordResetSerializer
class CustomPasswordResetSerializer(PasswordResetSerializer):
def get_email_options(self):
return {
"html_email_template_name": "registration/password_reset_message.html",
}
``` | open | 2022-01-27T10:20:54Z | 2022-10-17T06:02:24Z | https://github.com/iMerica/dj-rest-auth/issues/367 | [] | gurumaxi | 5 |
mwouts/itables | jupyter | 285 | UnicodeDecodeError when setting "connected=False" | I encountered a problem the first time I ran itables in a Jupyter notebook; the notebook contains only:
```python
from itables import init_notebook_mode
init_notebook_mode(all_interactive=True, connected=False)
```
Error:
```bash
UnicodeDecodeError Traceback (most recent call last)
Cell In[3], [line 2](vscode-notebook-cell:?execution_count=3&line=2)
[1] from itables import init_notebook_mode
----> [2] init_notebook_mode(all_interactive=True, connected=False)
[...]
File ~\AppData\Local\Programs\Python\Python310\lib\encodings\cp1250.py:23, in IncrementalDecoder.decode(self, input, final)
[22] def decode(self, input, final=False):
---> [23] return codecs.charmap_decode(input,self.errors,decoding_table)[0]
UnicodeDecodeError: 'charmap' codec can't decode byte 0x83 in position 194869: character maps to <undefined>
```
When "connected=True" is set, everything works properly.
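Digging a little: byte 0x83 (the one in the traceback) is simply undefined in cp1250, the Windows locale codec here, so any UTF-8 asset read without an explicit encoding trips on it. Until the bundled files are opened with `encoding="utf-8"`, forcing CPython's UTF-8 mode (PEP 540, e.g. `set PYTHONUTF8=1` or `python -X utf8`) looks like a workaround; the snippet below only reproduces the codec failure itself:

```python
# b"\x83" stands in for the offending byte from the traceback.
try:
    b"\x83".decode("cp1250")   # what open(...) without encoding= attempts
except UnicodeDecodeError as err:
    print(err.reason)          # character maps to <undefined>
```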
itables version 2.1.0 | closed | 2024-06-07T11:52:38Z | 2024-06-09T09:36:13Z | https://github.com/mwouts/itables/issues/285 | [] | Samox1 | 5 |
DistrictDataLabs/yellowbrick | scikit-learn | 1,141 | ImportError: cannot import name 'safe_indexing' | I am trying to install 'LearningCurve' from the 'yellowbrick.model_selection' module.
However, I'm facing the following problem:
```
from yellowbrick.model_selection import LearningCurve
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
<ipython-input-30-c6ff95ce49ee> in <module>
47
48
---> 49 from yellowbrick.model_selection import LearningCurve
~/cdap/lib64/python3.6/site-packages/yellowbrick/__init__.py in <module>
37 from .anscombe import anscombe
38 from .datasaurus import datasaurus
---> 39 from .classifier import ROCAUC, ClassBalance, ClassificationScoreVisualizer
40
41 # from .classifier import crplot, rocplot
~/cdap/lib64/python3.6/site-packages/yellowbrick/classifier/__init__.py in <module>
28 from .confusion_matrix import ConfusionMatrix, confusion_matrix
29 from .rocauc import ROCAUC, roc_auc
---> 30 from .threshold import DiscriminationThreshold, discrimination_threshold
31 from .prcurve import PrecisionRecallCurve, PRCurve, precision_recall_curve
32
~/cdap/lib64/python3.6/site-packages/yellowbrick/classifier/threshold.py in <module>
28 from sklearn.model_selection import ShuffleSplit
29 from sklearn.metrics import precision_recall_curve
---> 30 from sklearn.utils import indexable, safe_indexing
31 from sklearn.utils.multiclass import type_of_target
32
ImportError: cannot import name 'safe_indexing'
```
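A compatibility shim that seems to work, assuming the cause is that scikit-learn made `safe_indexing` private (`_safe_indexing`) and dropped the public name in 0.24:

```python
import numpy as np

try:
    from sklearn.utils import safe_indexing               # scikit-learn < 0.24
except ImportError:
    from sklearn.utils import _safe_indexing as safe_indexing  # >= 0.24

print(safe_indexing(np.array([10, 20, 30]), [0, 2]))      # [10 30]
```

(The cleaner fix is presumably upgrading yellowbrick to a release built against the newer scikit-learn, or pinning `scikit-learn<0.24`.)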
I'm not sure why this is happening. Any help would be appreciated. Thank you! | closed | 2021-01-05T15:52:48Z | 2021-01-05T16:05:55Z | https://github.com/DistrictDataLabs/yellowbrick/issues/1141 | [] | heydibyendu | 1 |
jupyter-book/jupyter-book | jupyter | 2,160 | top left hamburger button not working | ### Describe the bug
**context**
I have noticed that in my recent builds with jb1 the "toggle primary sidebar" button no longer works; clicking it does almost nothing
**expectation**
clicking the button used to collapse the left hand side toc pane, leaving more space for the main content

instead I am getting this

**problem**
This is a problem for teachers who use the left-hand side area to move between sections, but who need the extra space the rest of the time
### Reproduce the bug
I am not sure exactly when this started to happen
I just build my pages the same way as usual, and apparently all my books now exhibit the same behaviour
this being said they all share the same settings and theme, so this is not conclusive
otoh I have at least reproduced the issue after I removed all my styling, so I believe the problem is upstream
I'll keep on investigating though, in order to rule out any misconfiguration on my end
any feedback from others, whether they experience the same or not with a recently created virtualenv with all the latest versions, would be helpful to me in this respect
### List your environment
jupyter-book --version
```bash
Jupyter Book : 1.0.0
External ToC : 1.0.1
MyST-Parser : 2.0.0
MyST-NB : 1.1.0
Sphinx Book Theme : 1.1.2
Jupyter-Cache : 1.0.0
NbClient : 0.10.0
``` | open | 2024-06-11T09:42:43Z | 2024-06-12T14:35:42Z | https://github.com/jupyter-book/jupyter-book/issues/2160 | [
"bug"
] | parmentelat | 6 |
mwaskom/seaborn | pandas | 3,232 | Displot histogram overlapping despite multiple="stack" when common_bins=False | I have been trying to split what would have been a very long `histplot` into several columns using `displot`.
When all columns share the same y axis (which, in this use case, is not really suitable) or when `common_bins=True`, all bins are properly generated. However, passing the parameter `common_bins=False` (to make sure each column will not display the entire y axis) results in what seems to be a glitch in the drawing of the bins, with some colors overlapping others for reasons that I fail to understand.
Here is a MWE (a bit long, sorry):
```python
import matplotlib
from matplotlib import pyplot as plt
import seaborn as sns
import pandas
import numpy
matplotlib.use("webagg")
### creating some data simulating a survey (responders and choices)
choices_list = ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H']
responders_list = ['ax', 'bx', 'cx', 'dx', 'ex', 'fx', 'gx', 'hx']
### creating DataFrame
choices = list(numpy.hstack([numpy.repeat(choices_list[i], len(choices_list)-i) for i in range(0, len(choices_list))]))
responders = list(numpy.tile(responders_list, 5))[0:len(choices)]
data = pandas.DataFrame({"choice": choices, "responder": responders})
### getting choices in order of frequency
choices_freq_sorted = list(data["choice"].value_counts().index)
data["choice"] = pandas.Categorical(data["choice"], categories=choices_freq_sorted, ordered=True)
### also putting responders in order
data["responder"] = pandas.Categorical(data["responder"], categories=responders_list, ordered=True)
### Adding an indicator for splitting the data into two columns based on frequency count
data["split"] = numpy.nan
split_group_size = int(numpy.ceil(len(data["choice"].unique())/2))
for i in range(1, data["choice"].count()-1, split_group_size):
data.loc[data["choice"].isin(choices_freq_sorted[i-1:i+split_group_size]), "split"] = f"{i}-{i+split_group_size-1}"
# end of for loop
data["split"] = pandas.Categorical(data["split"])
### graph
graph1 = sns.displot(data=data, y="choice", col="split", col_wrap=2, height=8, aspect=0.5, multiple="stack", hue="responder", palette=sns.color_palette("hls", 8), alpha=0.6, common_bins=True, discrete=True, facet_kws=dict(sharey=False, sharex=True))
graph1.fig.tight_layout(rect=[0, 0, 1, 0.95])
graph1.fig.suptitle("common_bins=True", size=14, y=0.97)
graph2 = sns.displot(data=data, y="choice", col="split", col_wrap=2, height=4, aspect=0.9, multiple="stack", hue="responder", palette=sns.color_palette("hls", 8), alpha=0.6, common_bins=False, discrete=True, facet_kws=dict(sharey=False, sharex=True))
graph2.fig.tight_layout(rect=[0, 0, 1, 0.91])
graph2.fig.suptitle("common_bins=False", size=14, y=0.95)
plt.show()
```
In the first graph generated with `common_bins=True` , the coloring seems fine, but `sharey=False` is ineffective.

In the second graph generated with `common_bins=False`, `sharey=False` is taken into account, but a strange overlap appears for the bins of "D" and "F".

| open | 2023-01-23T15:29:45Z | 2023-09-01T11:38:26Z | https://github.com/mwaskom/seaborn/issues/3232 | [
"bug",
"mod:distributions"
] | aufildelanuit | 9 |
ultralytics/yolov5 | deep-learning | 12,655 | Fatal Python error: Aborted | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
Thread 0x000047b8 (most recent call first):
File "D:\web_ruanjian\py_ruanjian\Anaconda\Lib\threading.py", line 320 in wait
File "D:\web_ruanjian\py_ruanjian\Anaconda\Lib\multiprocessing\queues.py", line 231 in _feed
File "D:\web_ruanjian\py_ruanjian\Anaconda\Lib\threading.py", line 975 in run
File "D:\web_ruanjian\py_ruanjian\Anaconda\Lib\threading.py", line 1038 in _bootstrap_inner
File "D:\web_ruanjian\py_ruanjian\Anaconda\Lib\threading.py", line 995 in _bootstrap
Thread 0x00004734 (most recent call first):
File "D:\web_ruanjian\py_ruanjian\Anaconda\Lib\threading.py", line 324 in wait
File "D:\web_ruanjian\py_ruanjian\Anaconda\Lib\threading.py", line 622 in wait
File "D:\web_ruanjian\py_ruanjian\Anaconda\Lib\site-packages\tqdm\_monitor.py", line 60 in run
File "D:\web_ruanjian\py_ruanjian\Anaconda\Lib\threading.py", line 1038 in _bootstrap_inner
File "D:\web_ruanjian\py_ruanjian\Anaconda\Lib\threading.py", line 995 in _bootstrap
Main thread:
Current thread 0x000033f0 (most recent call first):
File "<__array_function__ internals>", line 200 in dot
File "C:\Users\12920\AppData\Roaming\Python\Python311\site-packages\matplotlib\transforms.py", line 2436 in get_affine
File "C:\Users\12920\AppData\Roaming\Python\Python311\site-packages\matplotlib\transforms.py", line 2409 in transform_affine
File "C:\Users\12920\AppData\Roaming\Python\Python311\site-packages\matplotlib\_api\deprecation.py", line 297 in wrapper
File "C:\Users\12920\AppData\Roaming\Python\Python311\site-packages\matplotlib\transforms.py", line 1495 in transform
File "C:\Users\12920\AppData\Roaming\Python\Python311\site-packages\matplotlib\transforms.py", line 468 in transformed
File "C:\Users\12920\AppData\Roaming\Python\Python311\site-packages\matplotlib\axis.py", line 2515 in get_tick_space
File "C:\Users\12920\AppData\Roaming\Python\Python311\site-packages\matplotlib\ticker.py", line 2086 in _raw_ticks
File "C:\Users\12920\AppData\Roaming\Python\Python311\site-packages\matplotlib\ticker.py", line 2148 in tick_values
File "C:\Users\12920\AppData\Roaming\Python\Python311\site-packages\matplotlib\ticker.py", line 2140 in __call__
File "C:\Users\12920\AppData\Roaming\Python\Python311\site-packages\matplotlib\axis.py", line 1495 in get_majorticklocs
File "C:\Users\12920\AppData\Roaming\Python\Python311\site-packages\matplotlib\axis.py", line 1275 in _update_ticks
File "C:\Users\12920\AppData\Roaming\Python\Python311\site-packages\matplotlib\axis.py", line 1424 in get_majorticklabels
File "C:\Users\12920\AppData\Roaming\Python\Python311\site-packages\matplotlib\axis.py", line 1467 in get_ticklabels
File "C:\Users\12920\AppData\Roaming\Python\Python311\site-packages\matplotlib\axes\_base.py", line 73 in wrapper
File "C:\Users\12920\AppData\Roaming\Python\Python311\site-packages\seaborn\axisgrid.py", line 1346 in __init__
File "C:\Users\12920\AppData\Roaming\Python\Python311\site-packages\seaborn\axisgrid.py", line 2119 in pairplot
File "D:\web_ruanjian\py_ruanjian\yolov5\yolov5-master\utils\plots.py", line 305 in plot_labels
File "D:\web_ruanjian\py_ruanjian\Anaconda\Lib\contextlib.py", line 81 in inner
File "D:\web_ruanjian\py_ruanjian\yolov5\yolov5-master\utils\loggers\__init__.py", line 175 in on_pretrain_routine_end
File "D:\web_ruanjian\py_ruanjian\yolov5\yolov5-master\utils\callbacks.py", line 73 in run
File "d:\web_ruanjian\py_ruanjian\yolov5\yolov5-master\train.py", line 292 in train
File "d:\web_ruanjian\py_ruanjian\yolov5\yolov5-master\train.py", line 616 in main
File "d:\web_ruanjian\py_ruanjian\yolov5\yolov5-master\train.py", line 836 in <module>
File "D:\web_ruanjian\py_ruanjian\Anaconda\Lib\site-packages\spyder_kernels\py3compat.py", line 356 in compat_exec
File "D:\web_ruanjian\py_ruanjian\Anaconda\Lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 473 in exec_code
File "D:\web_ruanjian\py_ruanjian\Anaconda\Lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 615 in _exec_file
File "D:\web_ruanjian\py_ruanjian\Anaconda\Lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 528 in runfile
File "C:\Users\12920\AppData\Local\Temp\ipykernel_1368\575683251.py", line 1 in <module>
Restarting kernel...
### Additional
_No response_ | closed | 2024-01-21T03:12:30Z | 2024-10-20T19:37:54Z | https://github.com/ultralytics/yolov5/issues/12655 | [
"question",
"Stale"
] | lhq15606085117 | 3 |
localstack/localstack | python | 11,519 | bug: invalid region doesn't return error | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
1. create a lambda in eu-west-1
2. create a lambda in us-east-1
3. using awslocal, run `awslocal lambda list-functions --region eu-west-1`: it correctly returns the first lambda
4. using awslocal, run `awslocal lambda list-functions --region us-east-1`: it correctly returns the second lambda
5. using awslocal, run `awslocal lambda list-functions --region asodifj`: it always returns the second lambda, even for other regions that are not enabled

The output is the same when using the standard awscli instead of awslocal.
### Expected Behavior
Return an error for invalid/non-existing regions
### How are you starting LocalStack?
With a docker-compose file
### Steps To Reproduce
#### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`)
```yaml
version: "3.8"
services:
localstack:
container_name: "${LOCALSTACK_DOCKER_NAME}"
image: localstack/localstack
ports:
- "127.0.0.1:4566:4566" # LocalStack Gateway
- "127.0.0.1:4510-4559:4510-4559" # external services port range
environment:
- EAGER_SERVICE_LOADING=1
- DEBUG=${DEBUG-}
- PERSISTENCE=${PERSISTENCE-}
- DOCKER_HOST=unix:///var/run/docker.sock
volumes:
- "/var/run/docker.sock:/var/run/docker.sock"
```
#### Client commands (e.g., AWS SDK code snippet, or sequence of "awslocal" commands)
1. using awslocal, run `awslocal lambda list-functions --region eu-west-1`: it correctly returns the first lambda
2. using awslocal, run `awslocal lambda list-functions --region us-east-1`: it correctly returns the second lambda
3. using awslocal, run `awslocal lambda list-functions --region asodifj`: it always returns the second lambda, even for other regions that are not enabled
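As a toy illustration of expected versus observed behaviour (this is not LocalStack's actual code; the store layout, function names, and the validating branch are all assumptions made for the sketch), a region store that silently falls back to a default region reproduces the symptom, while validating against a known-region set would surface the error the reporter expects:

```python
# Toy model of a per-region function store (hypothetical, not LocalStack internals).
KNOWN_REGIONS = {"eu-west-1", "us-east-1"}
STORES = {
    "eu-west-1": ["lambda-eu"],
    "us-east-1": ["lambda-us"],
}

def list_functions(region: str, validate: bool = False) -> list[str]:
    """Return the functions visible in `region`.

    With validate=False an unknown region silently falls back to the default
    store, matching the observed behaviour; with validate=True it raises,
    matching the expected behaviour.
    """
    if validate and region not in KNOWN_REGIONS:
        raise ValueError(f"invalid region: {region}")
    return STORES.get(region, STORES["us-east-1"])  # silent fallback

print(list_functions("eu-west-1"))  # ['lambda-eu']
print(list_functions("asodifj"))    # ['lambda-us'] -- the silent fallback
```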
### Environment
```markdown
- OS: Mac, Sonoma
- LocalStack:
LocalStack version: latest
LocalStack Docker image sha:
LocalStack build date:
LocalStack build git hash:
```
### Anything else?
_No response_ | closed | 2024-09-15T13:07:24Z | 2024-11-08T08:34:47Z | https://github.com/localstack/localstack/issues/11519 | [
"type: bug",
"aws:lambda",
"status: backlog"
] | notdodo | 7 |
lyhue1991/eat_tensorflow2_in_30_days | tensorflow | 50 | Typo in 1-1,结构化数据建模流程范例 |
1-1,结构化数据建模流程范例 (1-1, Structured Data Modeling Workflow Example)
plot_metric(history,"AUC")
should be
plot_metric(history,"auc")
| open | 2020-06-08T12:18:57Z | 2020-06-08T12:18:57Z | https://github.com/lyhue1991/eat_tensorflow2_in_30_days/issues/50 | [] | lianghong | 0 |
mljar/mercury | jupyter | 127 | Better loading of executed notebook | Please show notebook smoothly after execution. Please show the notebook after iframe is fully loaded. | closed | 2022-07-08T19:33:30Z | 2023-02-13T14:49:12Z | https://github.com/mljar/mercury/issues/127 | [
"enhancement"
] | pplonski | 1 |
graphql-python/graphene | graphql | 1,217 | When raise exception I got graphql error | I just move from using `responder` library to `starlette`. I got the graphql error when I raise exception.
Server:
```python
app = Starlette(debug=True, routes=[
Route('/', show_version),
Route('/healthz', check_health),
Route('/graphql', GraphQLApp(schema=schema)),
])
if __name__ == '__main__':
uvicorn.run(app, host='0.0.0.0', port=9090)
```
Schema:
```
...
raise Exception('Something wrong')
...
```
Error:

What I expect is it return error message `Something wrong`. instead of error above.
- Version: 2.1.3
- Platform: Linux (Ubuntu)
| closed | 2020-06-26T11:12:40Z | 2020-07-08T10:58:28Z | https://github.com/graphql-python/graphene/issues/1217 | [
"🐛 bug"
] | Bunlong | 2 |
CorentinJ/Real-Time-Voice-Cloning | deep-learning | 728 | Error while running the app | I get this error `'Tensor' object has no attribute 'repeat_interleave'`.
I have installed Pytorch and here is the command `conda install pytorch==1.0.1 torchvision==0.2.2 cudatoolkit=10.0 -c pytorch` | closed | 2021-04-07T18:36:28Z | 2021-09-25T17:29:22Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/728 | [] | 63Amir | 2 |
onnx/onnxmltools | scikit-learn | 515 | Onnx file change input size | Is it possible to change the input layout of an ONNX file from BCHW to BHWC in order to have the same inputs as the TensorFlow model? | open | 2021-11-03T10:49:20Z | 2021-11-03T10:49:20Z | https://github.com/onnx/onnxmltools/issues/515 | [] | AlexisPapaioannou | 0 |
python-gitlab/python-gitlab | api | 2,858 | API for get and add MR to merge train is missing | ## Description of the problem, including code/CLI snippet
The APIs for both "new" merge-train features are missing:
* [Get the status of a merge request on a merge train](https://docs.gitlab.com/16.9/ee/api/merge_trains.html#get-the-status-of-a-merge-request-on-a-merge-train)
* [Add a merge request to a merge train](https://docs.gitlab.com/16.9/ee/api/merge_trains.html#add-a-merge-request-to-a-merge-train)
Could you please add them?
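Until wrappers exist, one possible stopgap is calling the documented REST paths directly; the path builder below is only a sketch based on the endpoint URLs in the GitLab docs linked above:

```python
# Sketch only: builds the REST path documented for the two missing endpoints.
# Per the linked docs, GET on this path returns the MR's merge-train status,
# and POST on the same path adds the MR to the merge train.
def merge_train_mr_path(project_id: int, mr_iid: int) -> str:
    return f"/projects/{project_id}/merge_trains/merge_requests/{mr_iid}"

print(merge_train_mr_path(42, 7))
# /projects/42/merge_trains/merge_requests/7
```

With a `gitlab.Gitlab` instance `gl`, the path could then be passed to python-gitlab's low-level helpers, e.g. `gl.http_get(...)` / `gl.http_post(...)` (hedged: this bypasses the typed object layer that this issue asks for).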
## Specifications
- python-gitlab version: 4.4.0
- API version you are using (v3/v4): v4
- Gitlab server version (or gitlab.com): >= 15.11
| open | 2024-05-07T20:43:47Z | 2024-11-03T19:52:17Z | https://github.com/python-gitlab/python-gitlab/issues/2858 | [] | Viatorus | 5 |
pydata/xarray | pandas | 10,065 | More robust checks for cftime input to xr.date_range | ### What is your issue?
I recently ran into a case where the logic in `xr.date_range` isn't quite robust enough (details below).

This is something we use in a downstream package, and while I can definitely add some additional checks there, I figured I'd log an issue here and see if you all might be open to addressing this in Xarray. Happy to work on a PR if that's helpful (though I'm not terribly familiar with the code).
The following example should replicate the failure:
```
import xarray as xr
times = xr.cftime_range(start='2020-01-01', end='2021-12-31', freq='D')
xr.date_range(start=times[0], end=times[-1], freq='D', calendar=times.calendar)
```
Resulting in:
```
TypeError: Cannot convert input [2020-01-01 00:00:00] of type <class 'cftime._cftime.DatetimeGregorian'> to Timestamp
```
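A toy sketch of the dispatch pattern behind this error (hypothetical names, not Xarray's actual code): the standard-calendar branch only falls back to the cftime path for the specific exception it anticipates, so a `TypeError` raised by a cftime input escapes instead:

```python
# Toy reconstruction of the failure mode -- all names are invented.
def to_timestamp(value):
    if not isinstance(value, str):
        # mirrors: Cannot convert input [...] of type <cftime...> to Timestamp
        raise TypeError(f"Cannot convert input {value!r} to Timestamp")
    return ("pandas", value)

def to_cftime(value):
    return ("cftime", str(value))

def date_range(start, calendar="standard"):
    if calendar in ("standard", "gregorian"):
        try:
            return to_timestamp(start)  # pandas path
        except ValueError:              # only this error triggers the fallback
            pass
        # a TypeError raised above propagates -- the reported failure
    return to_cftime(start)

print(date_range("2020-01-01"))  # ('pandas', '2020-01-01')
```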
Because of the "standard" calendar and an error that doesn't match the one Xarray is looking for it uses the pd.date_range rather than the cftime equivalent and then fails. | open | 2025-02-21T00:44:36Z | 2025-02-23T13:36:02Z | https://github.com/pydata/xarray/issues/10065 | [
"topic-cftime"
] | kafitzgerald | 1 |
yeongpin/cursor-free-vip | automation | 332 | [Bug]: bug, account blocked | ### Pre-submission Checklist

- [x] I understand that Issues are for feedback and problem solving, not a comment section for venting, and I will provide as much information as possible to help resolve the problem.
- [x] I have checked the pinned Issues and searched the existing [open Issues](https://github.com/yeongpin/cursor-free-vip/issues) and [closed Issues](https://github.com/yeongpin/cursor-free-vip/issues?q=is%3Aissue%20state%3Aclosed%20), and found no similar problem.
- [x] I have written a short, clear, and specific title so that developers can quickly identify the general problem when browsing the Issue list, rather than something like "a suggestion" or "it's stuck".

### Platform
Windows x32
### Version
0.47.8
### Error Description
Your request has been blocked as our system has detected suspicious activity from your account/ip address. If you believe this is a mistake, please contact us at hi@cursor.com.You can sign in with google, github or oauth to avoid the suspicious activity checks.
### Relevant Log Output
```shell
Your request has been blocked as our system has detected suspicious activity from your account/ip address. If you believe this is a mistake, please contact us at hi@cursor.com.You can sign in with google, github or oauth to avoid the suspicious activity checks.
```
### Additional Information
_No response_ | closed | 2025-03-20T14:12:30Z | 2025-03-22T11:48:46Z | https://github.com/yeongpin/cursor-free-vip/issues/332 | [
"bug"
] | ifkd111 | 2 |
noirbizarre/flask-restplus | api | 200 | default_mediatype does not work | Hi,
I've just tested your example (https://github.com/noirbizarre/flask-restplus/blob/master/examples/xml_representation.py) and it returns json by default. I can't configure the api to respond with xml as the default.

What is more, it would be good to have the possibility to specify the default mediatype per resource: I can't force my clients to send an "Accept" header, but I would like to respond with json as the default in some cases and with xml in others.
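A minimal, framework-agnostic sketch of the negotiation being asked for (plain Python, not flask-restplus internals): each resource keeps its own default mediatype, which is used only when the client sends no matching `Accept` header:

```python
# Toy content negotiation with a per-resource default mediatype.
def negotiate(accept_header, supported, default):
    """Pick a mediatype from `supported`, falling back to `default`."""
    if accept_header:
        for part in accept_header.split(","):
            mediatype = part.split(";")[0].strip()
            if mediatype in supported:
                return mediatype
    return default

SUPPORTED = {"application/json", "application/xml"}
print(negotiate(None, SUPPORTED, default="application/xml"))
# application/xml  -- no Accept header: this resource's default wins
print(negotiate("application/json", SUPPORTED, default="application/xml"))
# application/json -- an explicit Accept header still wins
```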
| open | 2016-09-09T14:39:48Z | 2016-09-19T12:25:12Z | https://github.com/noirbizarre/flask-restplus/issues/200 | [] | cmichal | 2 |
ydataai/ydata-profiling | data-science | 1,623 | Unable to run the pandas profiling | C:\Ananconda4\Lib\site-packages\ydata_profiling\model\correlations.py:66: UserWarning: There was an attempt to calculate the auto correlation, but this failed.
To hide this warning, disable the calculation
(using `df.profile_report(correlations={"auto": {"calculate": False}})`
If this is problematic for your use case, please report this as an issue:
https://github.com/ydataai/ydata-profiling/issues
(include the error message: 'could not convert string to float: 'Phillipines'')
warnings.warn( | closed | 2024-07-13T04:11:57Z | 2024-07-15T17:25:19Z | https://github.com/ydataai/ydata-profiling/issues/1623 | [
"needs-triage"
] | kpn25 | 1 |
pywinauto/pywinauto | automation | 1,087 | Getting TypeError: UIAElementInfo object can be initialized with integer or IUIAutomationElement instance only! | Hi,
I am new to pywinauto and I am getting the error below.
## Short Example of Code to Demonstrate the Problem
```python
from pywinauto.application import Application
import time
import pyautogui
app=Application(backend='uia').start(r"C:\Program Files (x86)\blue8\client\B8Analyser.exe")
time.sleep(5)
app=Application(backend='uia').connect(title='Main Window',timeout=10)
w1 = app.window(best_match='Main Window').wrapper_object()
w2 = app.window(handle=w1)
ctrl = w2['TreeView']
c1 = ctrl.get_item([u'xd Explorer', u'Annotation_AugCrime']).click_input(button="left", double=True)
c2 = c1.sub_elements()
```
I get the following error at the `c1` line:
```python
TypeError: UIAElementInfo object can be initialized with integer or IUIAutomationElement instance only!
```
## Specifications
- Pywinauto version: 0.6.8
- Python version and bitness: 3.8.2
- Platform and OS: windows 10
| open | 2021-06-16T13:50:30Z | 2022-02-20T12:35:12Z | https://github.com/pywinauto/pywinauto/issues/1087 | [
"bug"
] | bharatrajs | 9 |
iterative/dvc | machine-learning | 10,177 | Exponential memory allocation caused by YAML parameter files with reused anchors | # Bug Report
YAML parameter files with enough [reused anchors](https://yaml.org/spec/1.2.2/#alias-nodes) will cause DVC to exhaust the system resources by exponentially appending duplicate elements to `list` objects.
## Minimal reproducible example
```console
$ python <<< "from dvc.repo import Repo; print(Repo().params.show())"
```
> [!NOTE]
> If you prefer [porcelain](https://git-scm.com/book/en/v2/Git-Internals-Plumbing-and-Porcelain), feel free to run `dvc params diff` or call [`dvc.api.params_show()`](https://dvc.org/doc/api-reference/params_show) instead.
### Sample project files
#### `dvc.yaml`
```yaml
stages:
example:
foreach: [0, 1, 2, 3]
do:
cmd: test
params:
- custom.yaml: [references]
- custom.yaml: [references]
- custom.yaml: [references]
- custom.yaml: [references]
```
#### `custom.yaml`
```yaml
values: &anchor [0, 1, 2, 3]
references:
first: *anchor
second: *anchor
third: *anchor
fourth: *anchor
```
#### `.dvc/config`
```toml
[core]
```
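The growth mechanism can be sketched in plain Python (an illustration of the described symptom, not DVC's actual code): if a YAML alias is materialized as one shared list object and each reference merges into it in place, every visit doubles the shared data:

```python
# Four aliases of the same anchor, as in custom.yaml above.
shared = [0, 1, 2, 3]
references = {"first": shared, "second": shared, "third": shared, "fourth": shared}

for key in references:
    # An in-place merge appends a full copy into the *shared* list,
    # so the next reference sees (and re-copies) twice as much data.
    references[key].extend(list(references[key]))

print(len(shared))  # 64: doubled once per alias, 4 -> 8 -> 16 -> 32 -> 64
```

With n aliases the shared list grows by a factor of 2**n, which matches the exponential resource exhaustion reported here.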
## Environment information
```console
$ dvc doctor
DVC version: 3.33.4 (pip)
-------------------------
Platform: Python 3.10.8 on Linux-6.2.0-1018-azure-x86_64-with-glibc2.31
Subprojects:
dvc_data = 2.24.0
dvc_objects = 2.0.1
dvc_render = 1.0.0
dvc_task = 0.3.0
scmrepo = 1.6.0
Supports:
azure (adlfs = 2023.10.0, knack = 0.11.0, azure-identity = 1.15.0),
gdrive (pydrive2 = 1.18.0),
gs (gcsfs = 2023.12.2.post1),
hdfs (fsspec = 2023.12.2, pyarrow = 14.0.1),
http (aiohttp = 3.9.1, aiohttp-retry = 2.8.3),
https (aiohttp = 3.9.1, aiohttp-retry = 2.8.3),
oss (ossfs = 2023.12.0),
s3 (s3fs = 2023.12.2, boto3 = 1.33.13),
ssh (sshfs = 2023.10.0),
webdav (webdav4 = 0.9.8),
webdavs (webdav4 = 0.9.8),
webhdfs (fsspec = 2023.12.2)
Config:
Global: /home/codespace/.config/dvc
System: /etc/xdg/dvc
Cache types: <https://error.dvc.org/no-dvc-cache>
Caches: local
Remotes: None
Workspace directory: ext4 on /dev/loop3
Repo: dvc, git
Repo.site_cache_dir: /var/tmp/dvc/repo/847c03a095ad0b42b23e20ceb6c386f0
``` | closed | 2023-12-16T06:54:27Z | 2023-12-16T14:23:46Z | https://github.com/iterative/dvc/issues/10177 | [
"bug",
"A: params"
] | 0x2b3bfa0 | 0 |
desec-io/desec-stack | rest-api | 318 | password blacklist | Such as locally hosted haveibeenpwned | open | 2020-03-23T16:13:29Z | 2024-10-07T17:11:25Z | https://github.com/desec-io/desec-stack/issues/318 | [
"enhancement",
"api",
"prio: low"
] | peterthomassen | 2 |
pyppeteer/pyppeteer | automation | 167 | Page freezes when webpage contains JS exception | Initially I encountered the page hanging after navigating a handful of links. I fixed that by setting `dumpio=True`, as suggested in https://github.com/miyakogi/pyppeteer/issues/167. However, the page now completely hangs if the webpage contains a JS exception, and times out with `InvalidStateError`
URL trying to access: http://www4.law.cornell.edu/uscode/42/3604.html
JS exception:
```
Uncaught TypeError: Cannot read property 'getElementsByClassName' of null
at HTMLDocument.<anonymous> (lii_scriptinator.min.js:1)
at l (jquery-3.3.1.min.js:2)
at c (jquery-3.3.1.min.js:2)
```
### Pyppeteer exception
```
[0828/164428.989667:WARNING:cors_url_loader_factory.cc(212)] |cors_exempt_headers| contains unexpected key: Purpose
[0828/164428.989994:WARNING:cors_url_loader_factory.cc(212)] |cors_exempt_headers| contains unexpected key: Purpose
[0828/164429.169955:INFO:CONSOLE(1)] "Uncaught TypeError: Cannot read property 'getElementsByTagName' of undefined", source: https://www.law.cornell.edu/staticsite_scripts/lii_scriptinator.min.js (1)
[0828/164429.333329:INFO:CONSOLE(1)] "set lii_fundraiser disabled", source: https://www.law.cornell.edu/staticsite_scripts/lii_scriptinator.min.js (1)
[0828/164429.333329:INFO:CONSOLE(1)] "set lii_fundraiser disabled", source: https://www.law.cornell.edu/staticsite_scripts/lii_scriptinator.min.js (1)
Failed to load url: http://www4.law.cornell.edu/uscode/42/3604.html. Navigation Timeout Exceeded: 30000 ms exceeded.
[E:pyppeteer.connection] connection unexpectedly closed
Task exception was never retrieved
future: <Task finished coro=<Connection._async_send() done, defined at /opt/python/pyppeteer/connection.py:69> exception=InvalidStateError('invalid state')>
Traceback (most recent call last):
File "/opt/python/websockets/protocol.py", line 827, in transfer_data
message = await self.read_message()
File "/opt/python/websockets/protocol.py", line 895, in read_message
frame = await self.read_data_frame(max_size=self.max_size)
File "/opt/python/websockets/protocol.py", line 971, in read_data_frame
frame = await self.read_frame(max_size)
File "/opt/python/websockets/protocol.py", line 1051, in read_frame
extensions=self.extensions,
File "/opt/python/websockets/framing.py", line 105, in read
data = await reader(2)
File "/var/lang/lib/python3.7/asyncio/streams.py", line 677, in readexactly
raise IncompleteReadError(incomplete, n)
asyncio.streams.IncompleteReadError: 0 bytes read on a total of 2 expected bytes
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/python/pyppeteer/connection.py", line 73, in _async_send
await self.connection.send(msg)
File "/opt/python/websockets/protocol.py", line 555, in send
await self.ensure_open()
File "/opt/python/websockets/protocol.py", line 803, in ensure_open
raise self.connection_closed_exc()
websockets.exceptions.ConnectionClosedError: code = 1006 (connection closed abnormally [internal]), no reason
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/python/pyppeteer/connection.py", line 79, in _async_send
await self.dispose()
File "/opt/python/pyppeteer/connection.py", line 170, in dispose
await self._on_close()
File "/opt/python/pyppeteer/connection.py", line 153, in _on_close
f'Protocol error {cb.method}: Target closed.', # type: ignore
asyncio.base_futures.InvalidStateError: invalid state
``` | closed | 2020-08-28T17:32:35Z | 2020-08-31T16:30:40Z | https://github.com/pyppeteer/pyppeteer/issues/167 | [] | csreddy | 1 |
httpie/cli | api | 650 | how to run this payload? | http POST :8088/registration payload:='{ email=email@email, password=secret }' | closed | 2018-02-02T13:38:29Z | 2018-02-22T11:26:02Z | https://github.com/httpie/cli/issues/650 | [] | carvalhoviniciusluiz | 1 |
mljar/mljar-supervised | scikit-learn | 270 | Stacking available only for cross-validation | We should disable stacking for:
- validation with split
- and for repeated validation | closed | 2020-12-10T09:43:21Z | 2020-12-11T11:53:06Z | https://github.com/mljar/mljar-supervised/issues/270 | [
"enhancement"
] | pplonski | 1 |
MagicStack/asyncpg | asyncio | 441 | Document exceptions | Please add the asyncpg exceptions to the documentation. I have no trouble reading source code, so I was pleasantly surprised to find how rich the included exception classes are.
```python
>>> exceptions = sorted((o for o in vars(asyncpg).values() if isinstance(o, type) and issubclass(o, Exception)), key=lambda o: o.__name__)
>>> for exc in exceptions:
... if getattr(exc, 'sqlstate', None): continue # looong list otherwise
... print(exc.__name__)
... if exc.__doc__: print(exc.__doc__)
... print()
...
FatalPostgresError
A fatal error that should result in server disconnection.
InterfaceError
An error caused by improper use of asyncpg API.
InterfaceWarning
A warning caused by an improper use of asyncpg API.
InternalClientError
All unexpected errors not classified otherwise.
OutdatedSchemaCacheError
A value decoding error caused by a schema change before row fetching.
PostgresError
Base class for all Postgres errors.
ProtocolError
Unexpected condition in the handling of PostgreSQL protocol input.
UnknownPostgresError
An error with an unknown SQLSTATE code.
>>> # generated exceptions for all possible sqlstate values
...
>>> sum(1 for exc in exceptions if getattr(exc, 'sqlstate', None))
240
``` | open | 2019-05-08T15:26:39Z | 2019-05-08T15:26:39Z | https://github.com/MagicStack/asyncpg/issues/441 | [] | mjpieters | 0 |
pallets-eco/flask-sqlalchemy | sqlalchemy | 496 | How do I declare a unique constraint on multiple columns? | closed | 2017-05-09T23:13:41Z | 2020-12-05T20:21:46Z | https://github.com/pallets-eco/flask-sqlalchemy/issues/496 | [] | xiangfeidongsc | 2 |
minimaxir/textgenrnn | tensorflow | 55 | Generate text, how to specify keywords | open | 2018-07-27T06:14:46Z | 2018-07-27T06:14:46Z | https://github.com/minimaxir/textgenrnn/issues/55 | [] | tryinggo | 0 | |
holoviz/panel | jupyter | 7,531 | "global-loading-spinner" doesn't work with pn.serve | "--global-loading-spinner" works as expected when I serve multiple apps with the command:
panel serve app1.py app2.py app3.py --index=index.html --static-dirs assets=./assets thumbnails=./assets/thumbnails --global-loading-spinner
But when I try to serve my apps with pn.serve such as:
pn.serve(ROUTES, port=5006, index=file_path_for_index, global_loading_spinner=True, static_dirs={'assets': file_path_for_assets,'thumbnails': file_path_for_thumbnails})
it doesn't work. Everything else works fine. Is there a missing part here? I don't get any error about this issue. | closed | 2024-12-03T19:57:24Z | 2025-01-20T21:40:30Z | https://github.com/holoviz/panel/issues/7531 | [] | sansal54 | 1 |
TracecatHQ/tracecat | fastapi | 33 | Create dynamic KV pair UI component | # Motivation
1. Need a nice KV UI component for better object/dict UX.
2. This can support action forms
3. This can support adding secrets in general and for integrations
# Scope
Only needs to support flat KV structures. Don't need anything more complex for now.
| closed | 2024-04-10T22:49:31Z | 2024-04-12T04:24:35Z | https://github.com/TracecatHQ/tracecat/issues/33 | [
"enhancement",
"frontend"
] | daryllimyt | 1 |
deepspeedai/DeepSpeed | deep-learning | 6,945 | DeepSpeed Installation Fails During Docker Build (NVML Initialization Issue) | Hello,
I encountered an issue while building a Docker image for deep learning model training, specifically when attempting to install DeepSpeed.
**Issue**
When building the Docker image, the DeepSpeed installation fails with a warning that NVML initialization is not possible.
However, if I create a container from the same image and install DeepSpeed inside the container, the installation works without any issues.
**Environment**
Base Image: `nvcr.io/nvidia/pytorch:23.01-py3`
DeepSpeed Version: `0.16.2`
**Build Log**
[docker_build.log](https://github.com/user-attachments/files/18396187/docker_build.log)
**Additional Context**
The problem does not occur with the newer base image `nvcr.io/nvidia/pytorch:24.05-py3`.
Thank you. | closed | 2025-01-13T11:39:39Z | 2025-01-24T00:32:39Z | https://github.com/deepspeedai/DeepSpeed/issues/6945 | [] | asdfry | 8 |
aminalaee/sqladmin | sqlalchemy | 131 | Ability to choose format of datetimes in list/table view | ### Checklist
- [X] There are no similar issues or pull requests for this yet.
### Is your feature related to a problem? Please describe.
Currently the values for datetime columns seem to be rendered as ISO-8601 with microsecond precision, which is unnecessary (e.g. `2022-04-04 22:43:30.027151`).
### Describe the solution you would like.
It'd be great if we had a way to pass some sort of custom formatter, ideally a callable that takes the DB value and returns the value that should be passed to the renderer.
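A sketch of the kind of per-column formatter callable this could enable (the names here are hypothetical, not sqladmin's actual API):

```python
from datetime import datetime

# Hypothetical formatter: takes the raw DB value, returns the string the list view renders.
def format_datetime(value: datetime) -> str:
    return value.strftime("%Y-%m-%d %H:%M")  # drops the sub-second noise

# e.g. a mapping the admin view could consult before rendering each cell:
column_formatters = {"created_at": format_datetime}

print(column_formatters["created_at"](datetime(2022, 4, 4, 22, 43, 30, 27151)))
# -> 2022-04-04 22:43
```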
### Describe alternatives you considered
_No response_
### Additional context
_No response_ | closed | 2022-04-06T19:00:02Z | 2022-05-01T06:59:30Z | https://github.com/aminalaee/sqladmin/issues/131 | [
"enhancement",
"good first issue"
] | lovetoburnswhen | 1 |
scrapy/scrapy | web-scraping | 6,418 | Why are Scrapy logs outputted as ERROR on Google Cloud Batch? | Hi, I tried to run `scrapy crawl xx` on Google Cloud Batch.
However, all of the logs are being output as ERROR logs, as shown below:
```
severity: "ERROR".
```
Did I misconfigure something?
<img width="657" alt="image" src="https://github.com/scrapy/scrapy/assets/90693309/919bcdc4-8bf2-4ddb-b075-f226835d3f19">
<img width="605" alt="image" src="https://github.com/scrapy/scrapy/assets/90693309/8154992a-5a4c-4c45-8f54-f4d859a410e0">
| closed | 2024-06-27T09:29:12Z | 2024-06-27T09:33:00Z | https://github.com/scrapy/scrapy/issues/6418 | [] | liaocyintl | 1 |
iperov/DeepFaceLab | machine-learning | 837 | Preview window is frozen | The preview window is not updating. I'm using Quick96 since I'm running a GTX 970 with 4GB of VRAM, and it seems to be working fine, except that the preview window is stuck. If I save and reboot, the window updates and displays the stats from where I last saved, but then it stays there without updating. Did I skip a setting somewhere, or is it just bugged on my PC? | closed | 2020-07-17T23:20:57Z | 2020-07-18T00:03:08Z | https://github.com/iperov/DeepFaceLab/issues/837 | [] | olejohanovreas | 1 |
zappa/Zappa | flask | 1,200 | Remove patron link from README | Currently the zappa README contains a [Patrons](https://github.com/zappa/Zappa#patrons) section and link maintained by the original author of the package.
Since he has left the project, this issue proposes to remove the Patrons section in its entirety.
| closed | 2022-12-01T10:40:01Z | 2023-05-19T06:06:33Z | https://github.com/zappa/Zappa/issues/1200 | [
"documentation",
"next-release-candidate"
] | monkut | 3 |
tqdm/tqdm | jupyter | 775 | "leave=False" in tqdm_notebook doesn't work on Google Colab | "leave=False" in tqdm_notebook and in tqdm doesn't work on Google Colab when multithreading is used.
See the screenshots:
Here I use two nested loops, two tqdm_notebook bars, and **leave=False** in the last tqdm_notebook:

Here I use the architucture and **leave=True** in the last tqdm_notebook:

When using tqdm instead of tqdm_notebook it gets worse:

The same but with **leave=True**

| open | 2019-07-18T09:39:18Z | 2019-07-18T22:21:44Z | https://github.com/tqdm/tqdm/issues/775 | [
"help wanted 🙏",
"invalid ⛔",
"p2-bug-warning ⚠",
"submodule-notebook 📓"
] | IaroslavS | 1 |
PokeAPI/pokeapi | graphql | 943 | Missing 'order' entry for Gen 9 Pokémon moves | The 'order' data is missing for Pokémon in the SV main games, despite being specified for Pokémon in the DLC that learn multiple moves at the same level.
Due to this omission, the move data for lower level Pokémon does not display as it does in the actual game.
Does the data already exist but has not been entered?
I've noticed that the move orders on Serebii.net seem accurate. Should I manually input this and submit a pull request? | open | 2023-10-17T07:46:41Z | 2023-10-28T07:15:37Z | https://github.com/PokeAPI/pokeapi/issues/943 | [] | 14239 | 1 |
rio-labs/rio | data-visualization | 211 | Forms Generation | Why don't you add form generation for tables, like Django does? It's a main feature of a batteries-included framework. | open | 2025-01-28T02:53:55Z | 2025-02-02T19:53:17Z | https://github.com/rio-labs/rio/issues/211 | [
"needs response"
] | gokulakkannan | 4 |
chaoss/augur | data-visualization | 2,633 | Issues New metric API | The canonical definition is here: https://chaoss.community/?p=3634 | open | 2023-11-30T18:06:02Z | 2024-05-28T21:40:52Z | https://github.com/chaoss/augur/issues/2633 | [
"API",
"first-timers-only"
] | sgoggins | 1 |
AutoGPTQ/AutoGPTQ | nlp | 289 | [BUG] Building from source `hip_runtime_api.h` is not found | `hip_runtime_api.h` isn't found during compilation, even though it's part of `hip-runtime-amd` and is located at `/opt/rocm/hip/include/hip/hip_runtime_api.h`.
```
EXTENSION_NAME=autogptq_cuda_64 -D_GLIBCXX_USE_CXX11_ABI=1 -std=c++17
In file included from /usr/include/c10/hip/HIPGraphsC10Utils.h:4,
from /usr/include/c10/hip/HIPCachingAllocator.h:5,
from /usr/include/c10/hip/impl/HIPGuardImpl.h:10,
from /usr/include/ATen/hip/impl/HIPGuardImplMasqueradingAsCUDA.h:6,
from /home/luna/stuff/Models/AutoGPTQ/autogptq_extension/hip_64/autogptq_hip_64.cpp:4:
/usr/include/c10/hip/HIPStream.h:7:10: fatal error: hip/hip_runtime_api.h: No such file or directory
7 | #include <hip/hip_runtime_api.h>
| ^~~~~~~~~~~~~~~~~~~~~~~
compilation terminated.
```
**Hardware details**
CPU: AMD 3900X
GPU: AMD 7900 XTX
RAM: 4 sticks, 8Gb each
**Software version**
OS: Arch Linux
Python: 3.11.3 inside a venv from automatic's local install
Pytorch: python-pytorch-opt-rocm 2.0.1-7 system package
**To Reproduce**
1. Clone and cd into the repo
2. Run `env ROCM_VERSION=5.6 pip install -v .`
**Expected behavior**
Compiled AutoGPTQ
| closed | 2023-08-26T12:58:04Z | 2023-09-03T08:55:28Z | https://github.com/AutoGPTQ/AutoGPTQ/issues/289 | [
"bug"
] | Vintodrimmer | 14 |
mirumee/ariadne | graphql | 687 | Add Hacktoberfest label | I think it would be cool for this project to take part in the global Hacktoberfest event. Hacktoberfest celebrates open source and encourages participation in the open source community.
More details here: https://hacktoberfest.digitalocean.com/ | closed | 2021-10-01T19:02:20Z | 2021-10-18T08:56:25Z | https://github.com/mirumee/ariadne/issues/687 | [] | KristobalJunta | 1 |
gunthercox/ChatterBot | machine-learning | 1,927 | "AttributeError: 'str' object has no attribute 'isoformat'" when using chatterbot with MySQL database and charset set to utf8 | Hello. When trying to use ChatterBot with the Persian language and the charset set to UTF-8, I get this error when using MySQL instead of SQLite:
```
Traceback (most recent call last):
File "example.py", line 5, in <module>
print(C.get_response("سلام"))
File "/home/farooqkz/.local/lib/python3.6/site-packages/chatterbot/chatterbot.py", line 121, in get_response
response = self.generate_response(input_statement, additional_response_selection_parameters)
File "/home/farooqkz/.local/lib/python3.6/site-packages/chatterbot/chatterbot.py", line 154, in generate_response
output = adapter.process(input_statement, additional_response_selection_parameters)
File "/home/farooqkz/.local/lib/python3.6/site-packages/chatterbot/logic/best_match.py", line 29, in process
closest_match = next(search_results, input_statement)
File "/home/farooqkz/.local/lib/python3.6/site-packages/chatterbot/search.py", line 74, in search
for statement in statement_list:
File "/home/farooqkz/.local/lib/python3.6/site-packages/chatterbot/storage/sql_storage.py", line 175, in filter
yield self.model_to_object(statement)
File "/home/farooqkz/.local/lib/python3.6/site-packages/chatterbot/storage/sql_storage.py", line 71, in model_to_object
return StatementObject(**statement.serialize())
File "/home/farooqkz/.local/lib/python3.6/site-packages/chatterbot/conversation.py", line 33, in serialize
'created_at': self.created_at.isoformat().split('+', 1)[0],
AttributeError: 'str' object has no attribute 'isoformat'
```
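The last frame shows `isoformat()` being called on a `str`, i.e. `created_at` came back from MySQL as text rather than a `datetime`. A minimal illustration of the type mismatch, plus a defensive coercion (a sketch only, not a fix for the root cause):

```python
from datetime import datetime

raw = "2020-03-06 09:07:38"  # a str timestamp; raw.isoformat() raises AttributeError

def ensure_datetime(value):
    """Coerce str timestamps back to datetime before calling isoformat()."""
    return datetime.fromisoformat(value) if isinstance(value, str) else value

print(ensure_datetime(raw).isoformat())  # -> 2020-03-06T09:07:38
```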
Here's my code:
```python
import chatterbot
db_uri = "mysql://user:password@localhost/mydb?charset=utf8"
adapter = "chatterbot.storage.SQLStorageAdapter"
C = chatterbot.ChatBot("Joe", database_uri = db_uri, storage_adapter = adapter)
print(C.get_response("سلام"))
``` | closed | 2020-03-06T09:07:38Z | 2025-03-09T18:54:50Z | https://github.com/gunthercox/ChatterBot/issues/1927 | [] | farooqkz | 1 |
TracecatHQ/tracecat | fastapi | 271 | [FEATURE IDEA] Multi-client integrations within same workflow | **Is your feature request related to a problem? Please describe.**
I want to use multiple instances (i.e. different inputs / credentials) of the same type of integration (e.g. list SIEM alerts) in a workflow.
> Example: real world example would be that I am an M365 customer, and I acquire another business that is also an M365 customer. We are very likely not going to be able to transition all mailboxes and users off of that environment on day 1, but if I'm using a workflow where on a certain type of alert I want to reset a user's password and then contact them, how would I interact with both M365 environments in the same workflow since all users won't exist in one or the other?
_Originally posted by @mattdurant in https://github.com/TracecatHQ/tracecat/issues/268#issuecomment-2251099046_
| closed | 2024-07-25T20:40:01Z | 2024-09-09T01:09:57Z | https://github.com/TracecatHQ/tracecat/issues/271 | [
"enhancement",
"engine",
"triage",
"priority:medium"
] | topher-lo | 1 |
mouredev/Hello-Python | fastapi | 544 | What to do when an online platform's withdrawal is stuck in "review/maintenance" | Scammed and unable to withdraw? Contact our experienced fund-recovery team. If you've been scammed, reach out and we'll help minimize your losses. WeChat: zdn200, phone 18469854912. No upfront fees; the team recovers the funds first and charges afterwards, with no fee if unsuccessful.
What should you do when you can't withdraw from the site?
First, stay calm. If your withdrawal is rejected, don't keep clicking, and don't get into any dispute with the platform's customer service or agents; once you anger them, your account is very likely to be frozen and the problem becomes much harder to resolve. For now, act as if you believe whatever reason they give, and move quickly once the other side has been reassured.
Then immediately contact a professional recovery team, which withdraws the money in batches using techniques such as hiding balances and locking cards, and the problem is resolved smoothly. If you currently can't withdraw your winnings, contact a professional recovery team right away ↓↓↓↓↓
 | closed | 2025-03-19T06:18:36Z | 2025-03-19T06:31:44Z | https://github.com/mouredev/Hello-Python/issues/544 | [] | wdbhgzmb | 0 |
pydantic/FastUI | pydantic | 131 | `FastAPI` dependency | In the `Readme.md` we have this:
> While it works well with [FastAPI](https://fastapi.tiangolo.com/) it doesn't depend on FastAPI, and most of it could be used with any python web framework.
But in reality, we can't run the project when `fastapi` is not installed:

I'm working on a built-in admin-panel for the [Panther](https://github.com/alirn76/panther), and it would be awesome if we could fix this requirement,
and of course I'm ready to help you with this issue, but I'm not sure what approach you have in mind.
- We can have our own
1. `UploadFile` model instead of `ds.UploadFile`
2. `Depends` model instead of `fastapi.params.Depends`
3. `HTTPException` instead of `fastapi.HTTPException`
4. `FormData` instead of `ds.FormData`
5. And in the `fastui.forms.fastui_form()` we can get the final form data instead of request, so we can use this function with any type of form in other frameworks
- Or we can just change the location of the imports so these things won't load on the init
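The second option (moving the imports) can be done with a PEP 562 module-level `__getattr__`, so the heavy dependency is only imported when actually touched. A self-contained sketch, using `json` as a stand-in for `fastapi` and a hypothetical module name:

```python
import sys
import types

# A tiny "compat" module whose attributes import their backing library lazily,
# so merely importing the package does not require the optional dependency.
compat = types.ModuleType("fastui_compat_demo")  # hypothetical module name

def _lazy_getattr(name):
    if name == "loads":
        import json  # stand-in for the optional `fastapi` import
        return json.loads
    raise AttributeError(name)

compat.__getattr__ = _lazy_getattr  # PEP 562: called when normal lookup fails
sys.modules["fastui_compat_demo"] = compat

import fastui_compat_demo
print(fastui_compat_demo.loads('{"ok": true}'))  # -> {'ok': True}
```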
Btw, I think this is a useful project and I would be happy if I could help with it ✋🏽
| closed | 2023-12-27T23:35:15Z | 2023-12-28T00:22:30Z | https://github.com/pydantic/FastUI/issues/131 | [] | AliRn76 | 2 |
MaxHalford/chime | streamlit | 28 | Path Issue For Default Themes | Hello! I ran into an issue using this package with Hatch on Darwin (macOS).
Hatch would install my package to a directory like `/Users/<user>/Library/Application Support/hatch/env/...`. This unfortunately breaks the calls to `afplay`, as two paths are passed to it. It seems like the `{path}` values in the templates below could be wrapped in single quotes so the code would work for paths that have spaces in them.
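For reference, one common way to make such shell templates whitespace-safe is to quote the interpolated path before building the command string (a sketch; the `afplay` command mirrors the usage described above):

```python
import shlex

path = "/Users/me/Library/Application Support/hatch/env/chime/clap.wav"

# Unquoted, the space splits the path into two arguments -- the reported failure:
broken = f"afplay {path}"
# Quoted, the whole path survives as a single argument:
fixed = f"afplay {shlex.quote(path)}"

print(shlex.split(fixed))
# -> ['afplay', '/Users/me/Library/Application Support/hatch/env/chime/clap.wav']
```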
https://github.com/MaxHalford/chime/blob/083654b8886463acaff64f72224081a10260373b/chime.py#L123-L145 | open | 2025-02-02T03:52:42Z | 2025-02-02T03:52:42Z | https://github.com/MaxHalford/chime/issues/28 | [] | cameres | 0 |
dropbox/PyHive | sqlalchemy | 257 | hive LDAP with SSL not working | hi Guys,
I have two environments, Dev and PROD. In Dev we don't have SSL enabled for LDAP, so I am able to connect and run queries.
However, in PROD we have LDAP with SSL, and whenever I try to connect it gives me an error.
**packages**
```
sasl==0.2.1
thrift==0.11.0
thrift-sasl==0.3.0
PyHive==0.6.1
```
**Code:**
```
hive.connect(host='Host',
port=10000,
database='default',
username='gaurang.shah',
password="MyPassword",
auth='LDAP')
```
Following is part of the configuration.
```
<property>
<name>hive.server2.transport.mode</name>
<value>binary</value>
</property>
<property>
<name>hive.server2.use.SSL</name>
<value>true</value>
</property>
```
Following is the **stacktrace**:
```
Traceback (most recent call last):
File "/Users/gaurang.shah/Documents/ctc/code/emsr/libs/hive.py", line 63, in <module>
Hive().connect()
File "/Users/gaurang.shah/Documents/ctc/code/emsr/libs/hive.py", line 45, in connect
auth='LDAP'
File "/Users/gaurang.shah/Documents/personal/code/migration/.venv/lib/python2.7/site-packages/pyhive/hive.py", line 94, in connect
return Connection(*args, **kwargs)
File "/Users/gaurang.shah/Documents/personal/code/migration/.venv/lib/python2.7/site-packages/pyhive/hive.py", line 192, in __init__
self._transport.open()
File "/Users/gaurang.shah/Documents/personal/code/migration/.venv/lib/python2.7/site-packages/thrift_sasl/__init__.py", line 87, in open
status, payload = self._recv_sasl_message()
File "/Users/gaurang.shah/Documents/personal/code/migration/.venv/lib/python2.7/site-packages/thrift_sasl/__init__.py", line 108, in _recv_sasl_message
payload = self._trans.readAll(length)
File "/Users/gaurang.shah/Documents/personal/code/migration/.venv/lib/python2.7/site-packages/thrift/transport/TTransport.py", line 60, in readAll
chunk = self.read(sz - have)
File "/Users/gaurang.shah/Documents/personal/code/migration/.venv/lib/python2.7/site-packages/thrift/transport/TSocket.py", line 132, in read
message='TSocket read 0 bytes')
thrift.transport.TTransport.TTransportException: TSocket read 0 bytes
```
| open | 2018-11-28T18:37:30Z | 2020-04-23T16:42:13Z | https://github.com/dropbox/PyHive/issues/257 | [] | Gaurang033 | 5 |
SALib/SALib | numpy | 111 | build without git repo | it seems salib does not build outside of git
we'd like to build from the tarball for integration in anaconda, is it possible ?
https://github.com/conda-forge/staged-recipes/pull/1753 | closed | 2016-11-04T18:21:10Z | 2019-03-15T09:49:54Z | https://github.com/SALib/SALib/issues/111 | [
"question"
] | jschueller | 11 |
KaiyangZhou/deep-person-reid | computer-vision | 146 | install error | Hi,
When I ran the install command "python3 setup.py install", the following error occurred. Can you tell me what to do to install torchreid properly?
user@user:~/deep-person-reid$ sudo python3 setup.py install
[sudo] password for user:
/home/user/deep-person-reid/torchreid/metrics/rank.py:17: UserWarning: Cython evaluation (very fast so highly recommended) is unavailable, now use python evaluation.
'Cython evaluation (very fast so highly recommended) is '
Traceback (most recent call last):
File "setup.py", line 39, in <module>
version=find_version(),
File "setup.py", line 16, in find_version
exec(compile(f.read(), version_file, 'exec'))
File "torchreid/__init__.py", line 8, in <module>
from torchreid import (
File "/home/user/deep-person-reid/torchreid/__init__.py", line 8, in <module>
from torchreid import (
File "/home/user/deep-person-reid/torchreid/engine/__init__.py", line 4, in <module>
from .engine import Engine
File "/home/user/deep-person-reid/torchreid/engine/engine.py", line 20, in <module>
class Engine(object):
File "/home/user/deep-person-reid/torchreid/engine/engine.py", line 205, in Engine
use_metric_cuhk03=False, ranks=[1, 5, 10, 20]):
TypeError: 'no_grad' object is not callable
I have already installed pytorch and cython by "sudo apt-get install cython".
Thanks! | closed | 2019-04-09T07:59:21Z | 2022-11-17T12:54:05Z | https://github.com/KaiyangZhou/deep-person-reid/issues/146 | [] | zhenght17 | 8 |
flairNLP/flair | pytorch | 3,274 | [Question]: AttributeError: 'tuple' object has no attribute 'get_labels' | ### Question
N_WAY = 12 # Number of classes in a task
N_SHOT = 5 # Number of images per class in the support set
N_QUERY = 10 # Number of images per class in the query set
N_EVALUATION_TASKS = 100
# The sampler needs a dataset with a "get_labels" method. Check the code if you have any doubt!
test_set.get_labels = lambda: [
instance[1] for instance in test_set._flat_character_images
]
test_sampler = TaskSampler(
test_set, n_way=N_WAY, n_shot=N_SHOT, n_query=N_QUERY, n_tasks=N_EVALUATION_TASKS
)
test_loader = DataLoader(
test_set,
batch_sampler=test_sampler,
num_workers=12,
pin_memory=True,
collate_fn=test_sampler.episodic_collate_fn,
)
After running the above code in a cell in a Jupyter notebook, I get the error **"AttributeError: 'tuple' object has no attribute 'get_labels'"** | open | 2023-06-21T13:39:23Z | 2023-06-26T09:11:47Z | https://github.com/flairNLP/flair/issues/3274 | [
"question"
] | DebamSahaCS | 1 |
huggingface/peft | pytorch | 1,963 | Training with Multiple LoRAs | Hi @BenjaminBossan and other contributors or maintainers, I would like to train one backbone model with two LoRAs like this:
```python
class MyModel(nn.Module):
def __init__(...):
...
self.model = AutoModel.from_pretrained('...')
self.encoder = get_peft_model(self.model, lora_config)
self.decoder = get_peft_model(self.model, lora_config)
def forward(...):
hidden_states = self.encoder(...)
output = self.decoder(hidden_states, ...)
return output
```
However, I found that the encoder and decoder share the same LoRA during training. Are there any solutions?
Thanks!
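For context on why they end up shared (my understanding, shown as a torch/peft-free illustration): `get_peft_model` injects the LoRA modules into the model you pass it in place, so wrapping the same backbone twice means both wrappers mutate one set of layers:

```python
class Backbone:
    """Stand-in for the HF model; holds the layers adapters get injected into."""
    def __init__(self):
        self.adapters = []

class PeftLike:
    """Stand-in for the wrapper object returned by get_peft_model."""
    def __init__(self, model):
        self.base_model = model

def get_peft_like(model, adapter_name):
    model.adapters.append(adapter_name)  # in-place injection, like LoRA layers
    return PeftLike(model)

base = Backbone()
enc = get_peft_like(base, "encoder_lora")
dec = get_peft_like(base, "decoder_lora")

print(enc is dec, enc.base_model is dec.base_model)  # -> False True
print(base.adapters)  # -> ['encoder_lora', 'decoder_lora']
```

One direction worth checking (not verified here) is PEFT's named-adapter API (`add_adapter` / `set_adapter`), or deep-copying the backbone before the second `get_peft_model` call.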
| closed | 2024-07-29T07:32:24Z | 2024-09-19T15:04:05Z | https://github.com/huggingface/peft/issues/1963 | [] | Jyonn | 5 |
InstaPy/InstaPy | automation | 5,831 | Couldn't follow 'liveTv'! ~user is inaccessible | session.set_do_follow(enabled=True, percentage=100, times=1)
session.follow_by_tags(["bmw"], amount=4)
Couldn't follow 'Dubai'! ~user is inaccessible
| closed | 2020-10-16T10:30:19Z | 2020-12-13T01:49:20Z | https://github.com/InstaPy/InstaPy/issues/5831 | [
"wontfix"
] | mohsen6492003 | 1 |
scrapy/scrapy | python | 6,739 | Add count to spider_exceptions stats | ## Summary
Today, we only increment the specific exception counter under `spider_exceptions`, e.g., [`spider_exceptions/AttributeError`](https://github.com/scrapy/scrapy/blob/master/scrapy/core/scraper.py#L250). We should also have a total count like we have for the downloader, e.g., [`downloader/exception_count`](https://github.com/scrapy/scrapy/blob/master/scrapy/downloadermiddlewares/stats.py#L80).
## Motivation
This information is useful when creating visualizations of stats.
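A sketch of the proposed change, using a dict-based stand-in for the stats collector (the real increment would sit next to the existing `spider_exceptions/<name>` line in `scrapy/core/scraper.py`):

```python
# Dict-based stand-in for Scrapy's stats collector:
stats = {}

def inc_value(key, count=1):
    stats[key] = stats.get(key, 0) + count

def record_spider_exception(exc):
    # existing behavior: per-exception-class counter
    inc_value(f"spider_exceptions/{type(exc).__name__}")
    # proposed addition: a total, mirroring downloader/exception_count
    inc_value("spider_exceptions/count")

record_spider_exception(AttributeError("boom"))
record_spider_exception(KeyError("missing"))
print(stats["spider_exceptions/count"])  # -> 2
```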
| closed | 2025-03-19T14:40:33Z | 2025-03-20T10:49:45Z | https://github.com/scrapy/scrapy/issues/6739 | [
"enhancement",
"good first issue"
] | Laerte | 0 |
pyro-ppl/numpyro | numpy | 1,208 | Request for Euler Maruyama features in numpyro | Dear Numpyro developers,
Please develop Euler-Maruyama features in numpyro, similar to the features found in PyMC.
Thanks a lot. | closed | 2021-10-30T05:58:18Z | 2022-12-16T23:00:26Z | https://github.com/pyro-ppl/numpyro/issues/1208 | [
"good first issue",
"Tutorials/Examples"
] | GMCobraz | 12 |
LAION-AI/Open-Assistant | python | 3,702 | Can't open new chat. | Tried opening a new chat. Nothing happens. Cleared cache and cookies, several different browsers, no change. | closed | 2023-09-27T01:38:19Z | 2023-10-01T14:40:16Z | https://github.com/LAION-AI/Open-Assistant/issues/3702 | [] | NK-UT | 1 |
encode/httpx | asyncio | 2,670 | Percent sign not encoded when present in query params value | Since httpx 0.24 the behavior regarding query params changed, and it possibly contains a bug.
Use case: passing a pre-signed storage url to some webservice which then downloads the file.
The presigned url will contain some percent-escaped sequences.
With the new httpx 0.24, when something like `client.get('http://webservice', params={"u": "http://example.com?q=foo%2Fa"}) ` is called, the query params are handed to httpx._urls.urlencode:
```
from httpx._urls import urlencode
urlencode((('u', "http://example.com?q=foo%2Fa"),))
# -> 'u=http%3A//example.com%3Fq%3Dfoo%2Fa'
```
The resulting query string still has %2F in it, which should have been percent encoded as %252F
Here is the same thing using python urllib.parse (which was used pre-0.24)
```
from urllib.parse import urlencode
urlencode((('u', "http://example.com?q=foo%2Fa"),))
# -> 'u=http%3A%2F%2Fexample.com%3Fq%3Dfoo%252Fa'
```
Here the resulting query string is correctly encoded.
Another way to look at it is as follows:
```
from urllib.parse import parse_qs
from httpx._urls import urlencode
parse_qs( urlencode((('u', "http://example.com?q=foo%2Fa%20b%2Ac"),)))
# -> {'u': ['http://example.com?q=foo/a b*c']}
# Incorrect url decoded from the query string
from urllib.parse import urlencode
parse_qs( urlencode((('u', "http://example.com?q=foo%2Fa%20b%2Ac"),)))
# -> {'u': ['http://example.com?q=foo%2Fa%20b%2Ac']}
# Correct original url decoded from the query string
``` | closed | 2023-04-19T08:26:05Z | 2023-05-21T11:54:19Z | https://github.com/encode/httpx/issues/2670 | [] | rslinckx | 7 |
redis/redis-om-python | pydantic | 228 | RediSearch and RedisJson checks do not fire against redis:latest | When testing with `make test_oss` we get `TypeError: 'NoneType' object is not subscriptable`, with the traceback in redis-py, not in redis-om:
```
tests_sync/test_json_model.py:29: in <module>
if not has_redis_json():
redis_om/checks.py:17: in has_redis_json
command_exists = check_for_command(conn, "json.set")
redis_om/checks.py:9: in check_for_command
cmd_info = conn.execute_command("command", "info", cmd)
../../.cache/pypoetry/virtualenvs/redis-om-2-PyqfHp-py3.8/lib/python3.8/site-packages/redis/client.py:1218: in execute_command
return conn.retry.call_with_retry(
../../.cache/pypoetry/virtualenvs/redis-om-2-PyqfHp-py3.8/lib/python3.8/site-packages/redis/retry.py:45: in call_with_retry
return do()
../../.cache/pypoetry/virtualenvs/redis-om-2-PyqfHp-py3.8/lib/python3.8/site-packages/redis/client.py:1219: in <lambda>
lambda: self._send_command_parse_response(
../../.cache/pypoetry/virtualenvs/redis-om-2-PyqfHp-py3.8/lib/python3.8/site-packages/redis/client.py:1195: in _send_command_parse_response
return self.parse_response(conn, command_name, **options)
../../.cache/pypoetry/virtualenvs/redis-om-2-PyqfHp-py3.8/lib/python3.8/site-packages/redis/client.py:1240: in parse_response
return self.response_callbacks[command_name](response, **options)
../../.cache/pypoetry/virtualenvs/redis-om-2-PyqfHp-py3.8/lib/python3.8/site-packages/redis/client.py:553: in parse_command
cmd_name = str_if_bytes(command[0])
TypeError: 'NoneType' object is not subscriptable
```
You can replicate this against redis:latest with
```python
import redis
redis.StrictRedis().execute_command("command", "info", "json.set")
```
There is a problem in redis-py with parsing the response from redis. (It's nill)
Maybe it's related to: https://github.com/redis/redis-py/blob/09a52dba48221353eafa8188d73ab97e8f4ccc49/redis/commands/core.py#L740-L743
```python
def command_info(self, **kwargs) -> None:
raise NotImplementedError(
"COMMAND INFO is intentionally not implemented in the client."
)
```
So when I try to use redis-om against `redis:latest`:
```python
from redis_om import HashModel, Migrator, get_redis_connection, Field
class TestModel(HashModel):
test_field: int = Field(index=True)
class Meta:
database = get_redis_connection(port=6381) # redis:latest
Migrator().run()
TestModel.find()
```
I get this:
```
Traceback (most recent call last):
File "/home/lamar/PycharmProjects/redis-om-python-flask-skeleton-app/check.py", line 12, in <module>
TestModel.find()
File "/home/lamar/.env/redis-om-python-flask-skeleton-app/lib/python3.8/site-packages/redis_om/model/model.py", line 1168, in find
return FindQuery(expressions=expressions, model=cls)
File "/home/lamar/.env/redis-om-python-flask-skeleton-app/lib/python3.8/site-packages/redis_om/model/model.py", line 347, in __init__
if not has_redisearch(model.db()):
File "/home/lamar/.env/redis-om-python-flask-skeleton-app/lib/python3.8/site-packages/redis_om/checks.py", line 25, in has_redisearch
if has_redis_json(conn):
File "/home/lamar/.env/redis-om-python-flask-skeleton-app/lib/python3.8/site-packages/redis_om/checks.py", line 17, in has_redis_json
command_exists = check_for_command(conn, "json.set")
...
File "/home/lamar/.env/redis-om-python-flask-skeleton-app/lib/python3.8/site-packages/redis/client.py", line 530, in parse_command
cmd_name = str_if_bytes(command[0])
TypeError: 'NoneType' object is not subscriptable
```
instead of https://github.com/redis/redis-om-python/blob/00ed537fcaf43d93ea26e0332d5cb2f1a4c1c4a1/aredis_om/model/model.py#L347-L352
So maybe we can test with
```python
try:
db = redis.StrictRedis()
db.json().set("tmp_key", ".", {})
db.delete("tmp_key")
except redis.ResponseError:
raise RedisModelError("...")
```
here https://github.com/redis/redis-om-python/blob/00ed537fcaf43d93ea26e0332d5cb2f1a4c1c4a1/aredis_om/checks.py#L14 and something similar for RediSearch here https://github.com/redis/redis-om-python/blob/00ed537fcaf43d93ea26e0332d5cb2f1a4c1c4a1/aredis_om/checks.py#L22
| open | 2022-05-02T21:03:09Z | 2022-05-07T07:26:22Z | https://github.com/redis/redis-om-python/issues/228 | [
"bug"
] | moznuy | 2 |
ultralytics/ultralytics | deep-learning | 19,642 | xywhr in OBB result have changed | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar bug report.
### Ultralytics YOLO Component
Predict
### Bug
Recently I updated my environment to the latest version of ultralytics and found that the xywhr result for the OBB task is somehow broken. I was using it to get the centre position of each detection, the width and height and the rotation. Before the update I always had the width as the largest dimension and the height as the smaller, and the rotation relative to the width.
After the update, I found that sometimes the width is the smaller dimension and the height is the larger one, so the rotation is also skewed by 90 degrees.
Going back to an older version fixes the problem.
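In case it helps others hitting this, a normalization that restores the old invariant (width as the long side, rotation measured against it) can be applied per box. A plain-Python sketch (in practice the values come from the `pts.xywhr` tensors):

```python
import math

def normalize_xywhr(x, y, w, h, r):
    """Restore the old invariant: width >= height, angle relative to the long side."""
    if w >= h:
        return x, y, w, h, r
    r += math.pi / 2  # swapping the axes shifts the angle by 90 degrees
    if r >= math.pi:  # keep the angle within [0, pi)
        r -= math.pi
    return x, y, h, w, r

x, y, w, h, r = normalize_xywhr(322, 796, 24, 104, 0.7707)
print(w, h)  # -> 104 24 (long side first again)
```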
### Environment
Ultralytics 8.3.87 🚀 Python-3.12.7 torch-2.6.0+cpu CPU (Intel Core(TM) i7-10850H 2.70GHz)
Setup complete ✅ (12 CPUs, 31.7 GB RAM, 609.9/952.5 GB disk)
OS Windows-11-10.0.22631-SP0
Environment Windows
Python 3.12.7
Install git
Path C:\Users\my_user\Desktop\yolo_test\.venv\Lib\site-packages\ultralytics
RAM 31.73 GB
Disk 609.9/952.5 GB
CPU Intel Core(TM) i7-10850H 2.70GHz
CPU count 12
GPU None
GPU count None
CUDA None
numpy ✅ 2.1.1<=2.1.1,>=1.23.0
matplotlib ✅ 3.10.1>=3.3.0
opencv-python ✅ 4.11.0.86>=4.6.0
pillow ✅ 11.1.0>=7.1.2
pyyaml ✅ 6.0.2>=5.3.1
requests ✅ 2.32.3>=2.23.0
scipy ✅ 1.15.2>=1.4.1
torch ✅ 2.6.0>=1.8.0
torch ✅ 2.6.0!=2.4.0,>=1.8.0; sys_platform == "win32"
torchvision ✅ 0.21.0>=0.9.0
tqdm ✅ 4.67.1>=4.64.0
psutil ✅ 7.0.0
py-cpuinfo ✅ 9.0.0
pandas ✅ 2.2.3>=1.1.4
seaborn ✅ 0.13.2>=0.11.0
ultralytics-thop ✅ 2.0.14>=2.0.0
### Minimal Reproducible Example
Input image

# Ultralitics 8.3.87
With ultralytics package version 8.3.87 run this code:
```
from pathlib import Path
import torch
from ultralytics import YOLO
model_file = Path("yolo11x-obb.pt")
model = YOLO(model_file, task="OBB")
img = Path("./images/P0006.png")
results = model.predict(
source=img,
conf=0.8,
imgsz=640,
device=torch.cuda.current_device() if torch.cuda.device_count() > 0 else "CPU",
)
for res in results:
for i, pts in enumerate(res.obb):
if int(pts.xywhr[0][2]) < int(pts.xywhr[0][3]):
print(
f"{i})\tcenter x {int(pts.xywhr[0][0])}\tcenter y {int(pts.xywhr[0][1])}\twidth {int(pts.xywhr[0][2])}\theight {int(pts.xywhr[0][3])}\trotation {pts.xywhr[0][4]}"
)
```
## Output:
```
# 3) center x 322 center y 796 width 24 height 104 rotation 0.7707133889198303
# 9) center x 329 center y 428 width 25 height 96 rotation 0.8094407916069031
# 11) center x 325 center y 470 width 24 height 95 rotation 0.8357727527618408
# 12) center x 324 center y 835 width 25 height 96 rotation 0.7769593000411987
# 16) center x 329 center y 387 width 26 height 96 rotation 0.7910935282707214
# 17) center x 586 center y 361 width 24 height 102 rotation 0.8455972075462341
# 19) center x 582 center y 1246 width 24 height 93 rotation 0.8130463361740112
# 20) center x 332 center y 345 width 25 height 95 rotation 0.8040628433227539
# 21) center x 593 center y 394 width 25 height 104 rotation 0.8151286244392395
# 22) center x 575 center y 1212 width 24 height 101 rotation 0.808992326259613
# 23) center x 325 center y 674 width 24 height 96 rotation 0.8091808557510376
# 24) center x 592 center y 673 width 25 height 102 rotation 0.8232788443565369
# 25) center x 591 center y 798 width 25 height 104 rotation 0.8026906847953796
# 35) center x 321 center y 878 width 25 height 93 rotation 0.787294864654541
# 40) center x 585 center y 763 width 24 height 103 rotation 0.7789315581321716
# 41) center x 590 center y 839 width 25 height 102 rotation 0.8089277148246765
# 44) center x 318 center y 920 width 26 height 103 rotation 0.8125594854354858
# 47) center x 592 center y 435 width 24 height 101 rotation 0.8140760660171509
# 50) center x 325 center y 512 width 25 height 100 rotation 0.8281809091567993
# 51) center x 312 center y 964 width 25 height 97 rotation 0.7953415513038635
# 54) center x 333 center y 1306 width 24 height 99 rotation 0.854654848575592
# 55) center x 317 center y 758 width 25 height 101 rotation 0.7970958948135376
# 60) center x 318 center y 1119 width 25 height 104 rotation 0.8262746930122375
# 66) center x 329 center y 708 width 24 height 94 rotation 0.8186021447181702
# 68) center x 316 center y 1160 width 24 height 102 rotation 0.8195444345474243
# 69) center x 583 center y 1286 width 24 height 100 rotation 0.8252146244049072
# 77) center x 585 center y 722 width 24 height 103 rotation 0.7846349477767944
# 79) center x 315 center y 1200 width 24 height 101 rotation 0.8189680576324463
# 81) center x 147 center y 1238 width 21 height 58 rotation 1.5592427253723145
# 82) center x 857 center y 1059 width 22 height 58 rotation 1.5689480304718018
# 84) center x 312 center y 1282 width 24 height 98 rotation 0.8073341846466064
# 88) center x 316 center y 1240 width 24 height 99 rotation 0.802503764629364
# 89) center x 313 center y 1004 width 26 height 105 rotation 0.8169794082641602
# 90) center x 582 center y 1163 width 23 height 98 rotation 0.8214647173881531
# 91) center x 148 center y 1295 width 22 height 60 rotation 1.564757227897644
# 92) center x 850 center y 1001 width 21 height 52 rotation 1.564372181892395
# 95) center x 588 center y 881 width 26 height 103 rotation 0.7998103499412537
# 96) center x 586 center y 1041 width 24 height 103 rotation 0.8159875273704529
# 98) center x 853 center y 1029 width 23 height 60 rotation 1.5595061779022217
# 100) center x 587 center y 959 width 25 height 103 rotation 0.8381613492965698
# 103) center x 583 center y 1084 width 23 height 101 rotation 0.8027064204216003
# 107) center x 603 center y 1310 width 24 height 101 rotation 0.8306283354759216
# 108) center x 589 center y 920 width 26 height 104 rotation 0.8245478272438049
# 109) center x 586 center y 1001 width 24 height 101 rotation 0.8359718322753906
# 110) center x 319 center y 1038 width 24 height 102 rotation 0.8157163858413696
# 113) center x 161 center y 364 width 22 height 66 rotation 1.5607739686965942
# 114) center x 851 center y 1088 width 22 height 61 rotation 1.5662956237792969
# 115) center x 581 center y 1124 width 23 height 101 rotation 0.8175578713417053
# 116) center x 316 center y 1080 width 25 height 103 rotation 0.8084856271743774
```
# Ultralytics 8.3.32
## yolo checks
Ultralytics 8.3.32 🚀 Python-3.12.7 torch-2.6.0+cpu CPU (Intel Core(TM) i7-10850H 2.70GHz)
Setup complete ✅ (12 CPUs, 31.7 GB RAM, 613.0/952.5 GB disk)
OS Windows-11-10.0.22631-SP0
Environment Windows
Python 3.12.7
Install git
RAM 31.73 GB
Disk 613.0/952.5 GB
CPU Intel Core(TM) i7-10850H 2.70GHz
CPU count 12
GPU None
GPU count None
CUDA None
numpy ✅ 2.1.1>=1.23.0
matplotlib ✅ 3.10.1>=3.3.0
opencv-python ✅ 4.11.0.86>=4.6.0
pillow ✅ 11.1.0>=7.1.2
pyyaml ✅ 6.0.2>=5.3.1
requests ✅ 2.32.3>=2.23.0
scipy ✅ 1.15.2>=1.4.1
torch ✅ 2.6.0>=1.8.0
torchvision ✅ 0.21.0>=0.9.0
tqdm ✅ 4.67.1>=4.64.0
psutil ✅ 7.0.0
py-cpuinfo ✅ 9.0.0
pandas ✅ 2.2.3>=1.1.4
seaborn ✅ 0.13.2>=0.11.0
ultralytics-thop ✅ 2.0.14>=2.0.0
numpy ✅ 2.1.1<2.0.0; sys_platform == "darwin"
torch ✅ 2.6.0!=2.4.0,>=1.8.0; sys_platform == "win32"
Running the same code as above produces no output, i.e. there are no results with width < height, as expected.
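For anyone post-processing these results: the two behaviors are equivalent up to a 90-degree rotation, since an oriented box (w, h, r) describes the same rectangle as (h, w, r +/- pi/2). Below is a generic normalization sketch (not part of Ultralytics; `normalize_xywhr` is a hypothetical helper):

```python
import math

def normalize_xywhr(x, y, w, h, r):
    """Return an equivalent oriented box with width >= height.

    A rotated rectangle (w, h, r) covers the same pixels as
    (h, w, r +/- pi/2), so boxes reported with w < h can be
    canonicalized by swapping the sides and shifting the angle.
    """
    if w < h:
        w, h = h, w
        r = r + math.pi / 2 if r < math.pi / 2 else r - math.pi / 2
    return x, y, w, h, r

# Box 3 from the 8.3.87 output above: width 24 < height 104.
print(normalize_xywhr(322, 796, 24, 104, 0.7707133889198303))
```

Applying this to the 8.3.87 output would put the boxes in the same w >= h convention that 8.3.32 appears to use.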
### Additional
_No response_
### Are you willing to submit a PR?
- [ ] Yes I'd like to help by submitting a PR! | closed | 2025-03-11T14:21:07Z | 2025-03-15T13:03:36Z | https://github.com/ultralytics/ultralytics/issues/19642 | [
"bug",
"OBB"
] | MarcelloCuoghi | 7 |
automl/auto-sklearn | scikit-learn | 1,467 | How can I get/export a production model from a trained model (after refit) with autosklearn? | Hello team,
I'm doing some tests with autosklearn and I would like to know how I can get/export a trained autosklearn model for production use.
Could you please explain which process I have to follow?
Currently I am doing this:
```python
automl = autosklearn.classification.AutoSklearnClassifier(
    time_left_for_this_task=3600,
    per_run_time_limit=1800,
    n_jobs=2,
    tmp_folder='/workspace/automl/unito/tmp',
    delete_tmp_folder_after_terminate=False,
    memory_limit=15240,
    resampling_strategy='cv',
    resampling_strategy_arguments={'folds': 10},
    metric=autosklearn.metrics.f1_macro,
    scoring_functions=[
        autosklearn.metrics.accuracy,
        autosklearn.metrics.precision,
        autosklearn.metrics.recall,
        autosklearn.metrics.f1_macro,
    ],
    include={'classifier': ['random_forest']},
)

# (...)

automl.fit(X_train, y_train, X_test, y_test, dataset_name='mydataset')

y_hat = automl.predict(X_test)
print("Accuracy score", sklearn.metrics.accuracy_score(y_test, y_hat))
print("F1 score", sklearn.metrics.f1_score(y_test, y_hat, average='macro'))

automl.refit(X, Y)
y_hat = automl.predict(X_test)
print("F1 score", sklearn.metrics.f1_score(y_test, y_hat, average='macro'))
print("Accuracy score", sklearn.metrics.accuracy_score(y_test, y_hat))
```
And then I'm using "automl" as a plain sklearn model and I'm logging it to MLflow, but it is rather big (>1 GB).
Is this the right way, or are there better ways?
For now I don't need an automatic way; a manual process is sufficient for my tests.
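For what it's worth, a fitted `AutoSklearnClassifier` is picklable like any scikit-learn estimator, so the usual save/load pattern applies. A minimal sketch of that pattern with a stand-in class (`TinyModel` is hypothetical; in practice you would dump `automl` itself after `fit()`/`refit()`):

```python
import os
import pickle
import tempfile

class TinyModel:
    """Stand-in for a fitted estimator; the real automl object pickles the same way."""

    def __init__(self, threshold):
        self.threshold = threshold

    def predict(self, xs):
        return [1 if x >= self.threshold else 0 for x in xs]

model = TinyModel(threshold=0.5)

path = os.path.join(tempfile.gettempdir(), "automl_model.pkl")
with open(path, "wb") as f:
    pickle.dump(model, f)       # export step, run after training

with open(path, "rb") as f:
    restored = pickle.load(f)   # load step, run in the production service

print(restored.predict([0.2, 0.9]))  # -> [0, 1]
```

Note that unpickling requires the same auto-sklearn/scikit-learn versions that produced the file, and the object carries the whole fitted ensemble, which is likely why it is so large.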
Thanks a lot
Sergio
# System Details

* auto-sklearn 0.14.6
* Linux (CentOS 7)
| open | 2022-05-10T08:34:09Z | 2022-08-09T11:14:45Z | https://github.com/automl/auto-sklearn/issues/1467 | [
"enhancement",
"question"
] | seraus | 3 |
tartiflette/tartiflette | graphql | 320 | Provide a manylinux and macos bdist_wheel | Modify the setup.py so it produces a bdist_wheel for manylinux that includes libgraphqlparser.so/.a.
Modify the build process so this wheel is uploaded to PyPI.
Modify the cffi part of tartiflette/parser so it loads this bundled lib instead of the local one.
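A sketch of how the loader could prefer a library bundled inside the wheel before falling back to the system one (the function name and file layout here are hypothetical, not tartiflette's actual code):

```python
import ctypes.util
import os

def bundled_lib_path(package_dir, name="graphqlparser"):
    """Return the path of a shared library shipped next to the package,
    falling back to whatever the system linker can find."""
    for ext in (".so", ".dylib", ".dll"):
        candidate = os.path.join(package_dir, "lib" + name + ext)
        if os.path.exists(candidate):
            return candidate
    return ctypes.util.find_library(name)  # may be None if nothing is installed
```

The cffi `dlopen` call would then take this resolved path instead of a hard-coded local one.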
| closed | 2019-10-09T18:25:47Z | 2021-08-03T08:26:37Z | https://github.com/tartiflette/tartiflette/issues/320 | [
"enhancement",
"help wanted",
"good first issue"
] | abusi | 4 |
RobertCraigie/prisma-client-py | pydantic | 142 | Support raw query methods with MongoDB | # Problem
The internal `executeRaw` and `queryRaw` methods are not available when using MongoDB, so we should not include them in the generated client. | open | 2021-11-22T20:53:13Z | 2023-08-10T12:12:53Z | https://github.com/RobertCraigie/prisma-client-py/issues/142 | [
"kind/improvement",
"level/advanced",
"priority/medium"
] | RobertCraigie | 2 |
ShishirPatil/gorilla | api | 914 | [BFCL] Multi-turn evaluation problem. | For `multi_turn_base_186` at turn 3, the user's question is
> There has been a problem with my booking and I previously reached out to support without any feedback yet. Kindly contact customer support on my behalf, emphasizing the smooth facilitation of my travel arrangements.
The model is supposed to call the `contact_customer_support` function with a specific `message`. I didn't find any definition of a standard message; as I understand it, the `message` parameter is generated by the model. However, the evaluation process considers only the ground-truth message as correct. I'm wondering whether my generated message should also count as correct.
My result:
```
"contact_customer_support(booking_id='3426812',message='I am experiencing an issue with my booking and require immediate assistance to ensure the smooth facilitation of my travel arrangements. Please address this matter urgently.')"
```
ground_truth:
```
"contact_customer_support(booking_id='3426812', message='No feedback yet on my inquiry regarding my flight arrangements. Please expedite the process.')"
```
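If the checker did treat `message` as free text, a comparison along the following lines would accept both calls; this is a toy sketch of lenient matching, not BFCL's actual evaluation code:

```python
def calls_match(pred, gold, free_text_params=("message",)):
    """Toy matcher: require the exact function name and structured
    arguments, but accept any non-empty string for parameters that
    are declared free text."""
    if pred["name"] != gold["name"]:
        return False
    for key, want in gold["args"].items():
        got = pred["args"].get(key)
        if key in free_text_params:
            if not isinstance(got, str) or not got.strip():
                return False
        elif got != want:
            return False
    return True

gold = {"name": "contact_customer_support",
        "args": {"booking_id": "3426812", "message": "No feedback yet ..."}}
pred = {"name": "contact_customer_support",
        "args": {"booking_id": "3426812", "message": "I am experiencing an issue ..."}}
print(calls_match(pred, gold))  # -> True
```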
| open | 2025-02-20T06:54:42Z | 2025-02-20T23:21:44Z | https://github.com/ShishirPatil/gorilla/issues/914 | [
"BFCL-Dataset"
] | lucenzhong | 2 |
zappa/Zappa | django | 974 | Error loading psycopg2 module: No module named 'psycopg2._psycopg' | Python 3.8
I installed:
psycopg2==2.8.6
psycopg2-binary==2.8.6
The database uses:
'ENGINE': 'django.db.backends.postgresql_psycopg2',
After deploying zappa I keep getting this error with zappa tail dev | closed | 2021-05-16T12:10:55Z | 2023-01-20T12:31:53Z | https://github.com/zappa/Zappa/issues/974 | [] | viktor-idenfy | 2 |
ivy-llc/ivy | pytorch | 28,383 | Fix Ivy Failing Test: paddle - shape.shape__ge__ | closed | 2024-02-22T03:51:15Z | 2024-02-22T07:32:29Z | https://github.com/ivy-llc/ivy/issues/28383 | [
"Sub Task"
] | fnhirwa | 0 | |
babysor/MockingBird | pytorch | 451 | 'pip' is not recognized as an internal or external command, operable program or batch file | **Summary**

`pip install -r requirements.txt` does not work.
**Env & To Reproduce**

C:\Users\1\Desktop\al\MockingBird-main>pip install -r requirements.txt
'pip' is not recognized as an internal or external command, operable program or batch file.
**Screenshots**

![image](https://user-images.githubusercontent.com/38412588/158025574-b4aca96c-3512-4f25-a925-ef1dbeb3bf7a.png)
| open | 2022-03-12T14:51:00Z | 2022-03-17T12:16:20Z | https://github.com/babysor/MockingBird/issues/451 | [] | WUKELEI | 2 |
microsoft/MMdnn | tensorflow | 697 | tensorflow tf.layers.batch_normalization can't convert successful | Hi,
When I convert a TensorFlow model to IR, I get an error. I referenced https://github.com/Microsoft/MMdnn/wiki/Tensorflow-parser-problem-(Slim-and-tf.layers) and switched to tf.layers.batch_normalization, but the error is still there. Detailed information follows.

Platform: Windows 10
Python version: 3.6.5
MMdnn version: 0.2.5
Source framework: TensorFlow 1.8 with GPU (running on CPU)
Destination framework: Keras 2.1.3

I used the code at https://github.com/jackyfaster/convert_model/blob/master/model.py to create and save the model in TensorFlow.

I am converting the saved TensorFlow model to Keras. At the first step, 'python -m mmdnn.conversion._script.convertToIR -f tensorflow -w xxx.ckpt -n xxx.ckpt.meta -o xxx -node Relu_18', I get the following error:
Parse file [D:\Pycharm_workspace\MMdnn_Play\tensorflow2keras\model\bn\vgg19.ckpt.meta] with binary format successfully.
Tensorflow model file [D:\Pycharm_workspace\MMdnn_Play\tensorflow2keras\model\bn\vgg19.ckpt.meta] loaded successfully.
Tensorflow checkpoint file [D:\Pycharm_workspace\MMdnn_Play\tensorflow2keras\model\bn\vgg19.ckpt] loaded successfully. [244] variables loaded.
2019-07-18 11:12:14.259916: I T:\src\github\tensorflow\tensorflow\tools\graph_transforms\transform_graph.cc:264] Applying fold_constants
2019-07-18 11:12:14.375731: I T:\src\github\tensorflow\tensorflow\core\platform\cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization/cond/Switch].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization_1/cond/Switch].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization_2/cond/Switch].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization_3/cond/Switch].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization_4/cond/Switch].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization_5/cond/Switch].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization_6/cond/Switch].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization_7/cond/Switch].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization_8/cond/Switch].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization_9/cond/Switch].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization_10/cond/Switch].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization_11/cond/Switch].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization_12/cond/Switch].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization_13/cond/Switch].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization_14/cond/Switch].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization_15/cond/Switch].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization/cond/FusedBatchNorm/Switch_1].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization/cond/FusedBatchNorm_1/Switch_1].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization/cond/FusedBatchNorm/Switch_2].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization/cond/FusedBatchNorm_1/Switch_2].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization/cond/FusedBatchNorm_1/Switch_3].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization/cond/FusedBatchNorm_1/Switch_4].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization_1/cond/FusedBatchNorm/Switch_1].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization_1/cond/FusedBatchNorm_1/Switch_1].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization_1/cond/FusedBatchNorm/Switch_2].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization_1/cond/FusedBatchNorm_1/Switch_2].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization_1/cond/FusedBatchNorm_1/Switch_3].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization_1/cond/FusedBatchNorm_1/Switch_4].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization_2/cond/FusedBatchNorm/Switch_1].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization_2/cond/FusedBatchNorm_1/Switch_1].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization_2/cond/FusedBatchNorm/Switch_2].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization_2/cond/FusedBatchNorm_1/Switch_2].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization_2/cond/FusedBatchNorm_1/Switch_3].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization_2/cond/FusedBatchNorm_1/Switch_4].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization_3/cond/FusedBatchNorm/Switch_1].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization_3/cond/FusedBatchNorm_1/Switch_1].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization_3/cond/FusedBatchNorm/Switch_2].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization_3/cond/FusedBatchNorm_1/Switch_2].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization_3/cond/FusedBatchNorm_1/Switch_3].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization_3/cond/FusedBatchNorm_1/Switch_4].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization_4/cond/FusedBatchNorm/Switch_1].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization_4/cond/FusedBatchNorm_1/Switch_1].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization_4/cond/FusedBatchNorm/Switch_2].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization_4/cond/FusedBatchNorm_1/Switch_2].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization_4/cond/FusedBatchNorm_1/Switch_3].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization_4/cond/FusedBatchNorm_1/Switch_4].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization_5/cond/FusedBatchNorm/Switch_1].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization_5/cond/FusedBatchNorm_1/Switch_1].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization_5/cond/FusedBatchNorm/Switch_2].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization_5/cond/FusedBatchNorm_1/Switch_2].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization_5/cond/FusedBatchNorm_1/Switch_3].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization_5/cond/FusedBatchNorm_1/Switch_4].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization_6/cond/FusedBatchNorm/Switch_1].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization_6/cond/FusedBatchNorm_1/Switch_1].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization_6/cond/FusedBatchNorm/Switch_2].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization_6/cond/FusedBatchNorm_1/Switch_2].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization_6/cond/FusedBatchNorm_1/Switch_3].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization_6/cond/FusedBatchNorm_1/Switch_4].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization_7/cond/FusedBatchNorm/Switch_1].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization_7/cond/FusedBatchNorm_1/Switch_1].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization_7/cond/FusedBatchNorm/Switch_2].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization_7/cond/FusedBatchNorm_1/Switch_2].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization_7/cond/FusedBatchNorm_1/Switch_3].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization_7/cond/FusedBatchNorm_1/Switch_4].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization_8/cond/FusedBatchNorm/Switch_1].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization_8/cond/FusedBatchNorm_1/Switch_1].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization_8/cond/FusedBatchNorm/Switch_2].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization_8/cond/FusedBatchNorm_1/Switch_2].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization_8/cond/FusedBatchNorm_1/Switch_3].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization_8/cond/FusedBatchNorm_1/Switch_4].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization_9/cond/FusedBatchNorm/Switch_1].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization_9/cond/FusedBatchNorm_1/Switch_1].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization_9/cond/FusedBatchNorm/Switch_2].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization_9/cond/FusedBatchNorm_1/Switch_2].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization_9/cond/FusedBatchNorm_1/Switch_3].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization_9/cond/FusedBatchNorm_1/Switch_4].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization_10/cond/FusedBatchNorm/Switch_1].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization_10/cond/FusedBatchNorm_1/Switch_1].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization_10/cond/FusedBatchNorm/Switch_2].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization_10/cond/FusedBatchNorm_1/Switch_2].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization_10/cond/FusedBatchNorm_1/Switch_3].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization_10/cond/FusedBatchNorm_1/Switch_4].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization_11/cond/FusedBatchNorm/Switch_1].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization_11/cond/FusedBatchNorm_1/Switch_1].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization_11/cond/FusedBatchNorm/Switch_2].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization_11/cond/FusedBatchNorm_1/Switch_2].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization_11/cond/FusedBatchNorm_1/Switch_3].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization_11/cond/FusedBatchNorm_1/Switch_4].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization_12/cond/FusedBatchNorm/Switch_1].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization_12/cond/FusedBatchNorm_1/Switch_1].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization_12/cond/FusedBatchNorm/Switch_2].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization_12/cond/FusedBatchNorm_1/Switch_2].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization_12/cond/FusedBatchNorm_1/Switch_3].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization_12/cond/FusedBatchNorm_1/Switch_4].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization_13/cond/FusedBatchNorm/Switch_1].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization_13/cond/FusedBatchNorm_1/Switch_1].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization_13/cond/FusedBatchNorm/Switch_2].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization_13/cond/FusedBatchNorm_1/Switch_2].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization_13/cond/FusedBatchNorm_1/Switch_3].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization_13/cond/FusedBatchNorm_1/Switch_4].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization_14/cond/FusedBatchNorm/Switch_1].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization_14/cond/FusedBatchNorm_1/Switch_1].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization_14/cond/FusedBatchNorm/Switch_2].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization_14/cond/FusedBatchNorm_1/Switch_2].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization_14/cond/FusedBatchNorm_1/Switch_3].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization_14/cond/FusedBatchNorm_1/Switch_4].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization_15/cond/FusedBatchNorm/Switch_1].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization_15/cond/FusedBatchNorm_1/Switch_1].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization_15/cond/FusedBatchNorm/Switch_2].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization_15/cond/FusedBatchNorm_1/Switch_2].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization_15/cond/FusedBatchNorm_1/Switch_3].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization_15/cond/FusedBatchNorm_1/Switch_4].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization/cond/FusedBatchNorm/Switch].
TensorflowEmitter has not supported operator [Switch] with name [batch_normalization/cond/FusedBatchNorm_1/Switch].
Traceback (most recent call last):
File "D:/Pycharm_workspace/MMdnn-master/mmdnn/conversion/_script/convertToIR.py", line 210, in <module>
_main()
File "D:/Pycharm_workspace/MMdnn-master/mmdnn/conversion/_script/convertToIR.py", line 204, in _main
ret = _convert(args)
File "D:/Pycharm_workspace/MMdnn-master/mmdnn/conversion/_script/convertToIR.py", line 115, in _convert
parser.run(args.dstPath)
File "D:\Pycharm_workspace\MMdnn-master\mmdnn\conversion\common\DataStructure\parser.py", line 22, in run
self.gen_IR()
File "D:\Pycharm_workspace\MMdnn-master\mmdnn\conversion\tensorflow\tensorflow_parser.py", line 425, in gen_IR
func(current_node)
File "D:\Pycharm_workspace\MMdnn-master\mmdnn\conversion\tensorflow\tensorflow_parser.py", line 801, in rename_FusedBatchNorm
self.set_weight(source_node.name, 'scale', self.ckpt_data[scale.name])
KeyError: 'batch_normalization/gamma/read'
Process finished with exit code 1
I think the problem is in the batch_normalization layer. I used TensorBoard to visualize the graph; the following link is a screenshot:
https://github.com/jackyfaster/convert_model/blob/master/printscreen.png

I am new to TensorFlow and machine learning, and I do not know how to fix this. Please help, thanks!
| open | 2019-07-18T06:12:03Z | 2019-07-18T06:12:03Z | https://github.com/microsoft/MMdnn/issues/697 | [] | jackyfaster | 0 |
simple-login/app | flask | 1,051 | [Feature Request] Connect multiple mailboxes to alias or domain | Hello,
I have structured this feature request using a user story:
**As a** user who often needs to forward incoming emails to multiple recipients
**I want to** be able to connect multiple mailboxes to an alias or domain
**So that** emails that are sent to an alias are automatically forwarded by SimpleLogin to multiple mailboxes and all mailboxes will be able to reply to the incoming message
**Acceptance criteria**
1. Be able to specify two or more mailboxes for each alias, or assigning two or more mailboxes to a domain (including all associated aliases)
2. All associated mailboxes can utilize the reverse-alias for two-way communication
3. Outgoing emails using the reverse-alias sent by one associated mailbox are also made available to the other associated mailboxes | closed | 2022-06-06T01:13:47Z | 2022-06-06T01:34:17Z | https://github.com/simple-login/app/issues/1051 | [] | ghost | 1 |
plotly/dash | jupyter | 2,351 | [BUG] block access to images |
```
dash = "^2.7.0"
dash-auth = "^1.4.1"
Flask = "^2.2.2"
Flask-Admin = "^1.6.0"
```
After applying `dash-auth`, the application blocks access to images.

Unable to view uploaded images in 'flask-admin'

| closed | 2022-12-03T20:29:48Z | 2023-07-13T14:16:04Z | https://github.com/plotly/dash/issues/2351 | [] | A-V-tor | 2 |
coleifer/sqlite-web | flask | 117 | New version (strict table support) | Hey, I'm using your web interface with a SQLite database that uses STRICT tables [1].
The current Docker image does not support them (it throws a parsing error on startup).
But your peewee library supports them. Currently I simply build the Docker container locally and push it to my own registry.
If you find time, you could build a new Docker container (using the new version of peewee) to support STRICT tables.
Regards
~ Mike
[1] https://www.sqlite.org/stricttables.html | closed | 2023-06-09T14:44:15Z | 2023-06-09T14:49:32Z | https://github.com/coleifer/sqlite-web/issues/117 | [] | Mikescher | 1 |
ultralytics/ultralytics | python | 19,688 | Yolo11 `export` to tflite/tfjs not working on Windows | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar bug report.
### Ultralytics YOLO Component
Export
### Bug
I'm trying to export a yolo11n model I trained earlier today on a custom training dataset. Running `detect` works and it's accurate in detecting the objects I've specified against test images.
When running `export`, even after a clean install following the first few simple steps, the `export` function fails for both `tflite` and `tfjs`. I suspect there might be a weird dependency issue, but as you can see, my environment setup is very basic.
### Environment
```
Ultralytics 8.3.89 🚀 Python-3.11.9 torch-2.6.0+cu126 CUDA:0 (NVIDIA GeForce RTX 4090 Laptop GPU, 16376MiB)
Setup complete ✅ (32 CPUs, 95.8 GB RAM, 842.4/1844.6 GB disk)
OS Windows-10-10.0.26100-SP0
Environment Windows
Python 3.11.9
Install pip
Path C:\Users\eric\source\yolo\venv\Lib\site-packages\ultralytics
RAM 95.77 GB
Disk 842.4/1844.6 GB
CPU Intel Core(TM) i9-14900HX
CPU count 32
GPU NVIDIA GeForce RTX 4090 Laptop GPU, 16376MiB
GPU count 1
CUDA 12.6
numpy ✅ 2.1.1<=2.1.1,>=1.23.0
matplotlib ✅ 3.10.1>=3.3.0
opencv-python ✅ 4.11.0.86>=4.6.0
pillow ✅ 11.0.0>=7.1.2
pyyaml ✅ 6.0.2>=5.3.1
requests ✅ 2.32.3>=2.23.0
scipy ✅ 1.15.2>=1.4.1
torch ✅ 2.6.0+cu126>=1.8.0
torch ✅ 2.6.0+cu126!=2.4.0,>=1.8.0; sys_platform == "win32"
torchvision ✅ 0.21.0+cu126>=0.9.0
tqdm ✅ 4.67.1>=4.64.0
psutil ✅ 7.0.0
py-cpuinfo ✅ 9.0.0
pandas ✅ 2.2.3>=1.1.4
seaborn ✅ 0.13.2>=0.11.0
ultralytics-thop ✅ 2.0.14>=2.0.0
```
### Minimal Reproducible Example
My initial setup is very simple,
- Python 3.11.9 64-bit for Windows
- Create a new virtual environment (`venv` for this example)
- Install pytorch with CUDA 12.6
- `pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu126`
- Install ultralytics
- `pip install ultralytics`
I can confirm my model is working properly and that I'm able to run inference against my model with a test image that has the object detected successfully:
```
(venv) C:\Users\eric\source\yolo>yolo detect predict model=c:\temp\best.pt source=c:\Users\eric\Desktop\test.jpg
Ultralytics 8.3.89 🚀 Python-3.11.9 torch-2.6.0+cu126 CUDA:0 (NVIDIA GeForce RTX 4090 Laptop GPU, 16376MiB)
YOLO12n summary (fused): 159 layers, 2,556,923 parameters, 0 gradients, 6.3 GFLOPs
image 1/1 c:\Users\eric\Desktop\test.jpg: 640x480 1 object, 35.9ms
Speed: 2.5ms preprocess, 35.9ms inference, 56.9ms postprocess per image at shape (1, 3, 640, 480)
Results saved to runs\detect\predict
💡 Learn more at https://docs.ultralytics.com/modes/predict
```
From this point, I have my model (`best.pt`), and I'm going to export it to `tflite` or `tfjs`.
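For completeness, the equivalent call through the Python API behaves the same way for me. This is just a sketch mirroring the CLI example (the path is from my setup; the wrapper function is only there so the snippet imports cleanly on a machine without Ultralytics installed):

```python
def export_to_tflite(weights: str = r"c:\temp\best.pt") -> None:
    """Python-API mirror of: yolo export model=<weights> format=tflite"""
    from ultralytics import YOLO  # deferred import so the sketch loads without ultralytics

    model = YOLO(weights)
    model.export(format="tflite")
```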
First Command: `yolo export model=c:\temp\best.pt format=tflite`
```
(venv) C:\Users\eric\source\yolo>yolo version
8.3.89
(venv) C:\Users\eric\source\yolo>yolo export model=c:\temp\best.pt format=tflite
Ultralytics 8.3.89 🚀 Python-3.11.9 torch-2.6.0+cu126 CPU (Intel Core(TM) i9-14900HX)
YOLO12n summary (fused): 159 layers, 2,556,923 parameters, 0 gradients, 6.3 GFLOPs
PyTorch: starting from 'c:\temp\best.pt' with input shape (1, 3, 640, 640) BCHW and output shape(s) (1, 5, 8400) (5.3 MB)
requirements: Ultralytics requirement ['tensorflow-cpu>=2.0.0'] not found, attempting AutoUpdate...
(venv) C:\Users\eric\source\yolo>ERROR: Pipe to stdout was broken
Exception ignored in: <_io.TextIOWrapper name='<stdout>' mode='w' encoding='cp1252'>
OSError: [Errno 22] Invalid argument
```
Second time running: `yolo export model=c:\temp\best.pt format=tflite`
```
(venv) C:\Users\eric\source\yolo>yolo export model=c:\temp\best.pt format=tflite
Ultralytics 8.3.89 🚀 Python-3.11.9 torch-2.6.0+cu126 CPU (Intel Core(TM) i9-14900HX)
YOLO12n summary (fused): 159 layers, 2,556,923 parameters, 0 gradients, 6.3 GFLOPs
PyTorch: starting from 'c:\temp\best.pt' with input shape (1, 3, 640, 640) BCHW and output shape(s) (1, 5, 8400) (5.3 MB)
requirements: Ultralytics requirements ['tf_keras', 'sng4onnx>=1.0.1', 'onnx_graphsurgeon>=0.3.26', 'onnx>=1.12.0', 'onnx2tf>1.17.5,<=1.26.3', 'onnxslim>=0.1.31', 'tflite_support', 'onnxruntime'] not found, attempting AutoUpdate...
(venv) C:\Users\eric\source\yolo>ERROR: Pipe to stdout was broken
Exception ignored in: <_io.TextIOWrapper name='<stdout>' mode='w' encoding='cp1252'>
OSError: [Errno 22] Invalid argument
```
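The repeated `ERROR: Pipe to stdout was broken` followed by `OSError: [Errno 22]` on a `<stdout> ... encoding='cp1252'>` wrapper makes me suspect a Windows console-encoding problem rather than a pure dependency one: the Ultralytics banner contains a 🚀 emoji, which the legacy `cp1252` codec cannot encode. A minimal stdlib-only sketch of that failure mode (the banner string here is an assumption modeled on the output above):

```python
# Suspected failure mode: the Ultralytics banner contains a rocket emoji,
# which the legacy Windows cp1252 codec cannot represent.
banner = "Ultralytics 8.3.89 🚀 Python-3.11.9"

def encodes_as(text: str, codec: str) -> bool:
    """Return True if `text` can be encoded with `codec`."""
    try:
        text.encode(codec)
        return True
    except UnicodeEncodeError:
        return False

print(encodes_as(banner, "cp1252"))  # False: UnicodeEncodeError on the emoji
print(encodes_as(banner, "utf-8"))   # True
```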
After manually installing the required packages via `pip`:
`pip install --no-cache-dir "tf_keras" "sng4onnx>=1.0.1" "onnx_graphsurgeon>=0.3.26" "onnx>=1.12.0" "onnx2tf>1.17.5,<=1.26.3" "onnxslim>=0.1.31" "tflite_support" "onnxruntime" --extra-index-url https://pypi.ngc.nvidia.com`
When I run the export command again, the process dies quietly:
```
(venv) C:\Users\eric\source\yolo>yolo export model=c:\temp\best.pt format=tflite
Ultralytics 8.3.89 🚀 Python-3.11.9 torch-2.6.0+cu126 CPU (Intel Core(TM) i9-14900HX)
YOLO12n summary (fused): 159 layers, 2,556,923 parameters, 0 gradients, 6.3 GFLOPs
PyTorch: starting from 'c:\temp\best.pt' with input shape (1, 3, 640, 640) BCHW and output shape(s) (1, 5, 8400) (5.3 MB)
TensorFlow SavedModel: starting export with tensorflow 2.19.0...
(venv) C:\Users\eric\source\yolo>
```
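One thing worth trying, though I can't confirm it is the fix: force Python's stdio to UTF-8 before running the export, so the AutoUpdate subprocess can't die on the cp1252 pipe. On `cmd` that would be `set PYTHONUTF8=1` (PowerShell: `$env:PYTHONUTF8=1`) before `yolo export ...`. The same idea from Python, shown with a harmless child command standing in for the real `yolo export` call:

```python
import os
import subprocess
import sys

# Copy the current environment and force UTF-8 mode for the child process.
env = dict(os.environ, PYTHONUTF8="1", PYTHONIOENCODING="utf-8")

# In the real scenario the command would be:
#   ["yolo", "export", r"model=c:\temp\best.pt", "format=tflite"]
# Here a trivial child that prints a non-cp1252 character stands in for it.
result = subprocess.run(
    [sys.executable, "-c", "print('export banner: \\U0001F680')"],
    env=env,
    capture_output=True,
    encoding="utf-8",
)
print(result.returncode)       # 0
print(result.stdout.strip())   # export banner: 🚀
```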
I've reinstalled and tried this process over four times now with the exact same results.
### Additional
_No response_
### Are you willing to submit a PR?
- [ ] Yes I'd like to help by submitting a PR!

State: open | Created: 2025-03-14T00:18:27Z | Updated: 2025-03-16T09:15:55Z
URL: https://github.com/ultralytics/ultralytics/issues/19688
Labels: bug, dependencies, exports
Author: enusbaum | Comments: 8