| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
davidsandberg/facenet | tensorflow | 524 | Wrong file name format when validating on LFW | When I try to run the [validation on LFW](https://github.com/davidsandberg/facenet/wiki/Validate-on-lfw), it reports that no files were found.
The file names in the LFW dataset look like `Aaron_Eckhart_0001.jpg`, while after alignment they look like `Aaron_Eckhart_0001_0.jpg`. I guess the trailing number denotes different faces within one picture.
The script that produces this result is `src/align/align_dataset_mtcnn.py`.
But in `lfw.py`, the filename is assumed to be in the original format, so the validation script cannot fetch any file from the aligned directory, which leads to an error.
Possible fixes IMO are to enumerate the files in the folder rather than looking for a fixed filename, or to change the filename handling in `lfw.py`. If someone can confirm this issue, you have my gratitude; then we can fix it.
| closed | 2017-11-10T03:31:20Z | 2017-11-12T14:36:43Z | https://github.com/davidsandberg/facenet/issues/524 | [] | YF-Tung | 1 |
huggingface/datasets | deep-learning | 6,561 | Document YAML configuration with "data_dir" | See https://huggingface.co/datasets/uonlp/CulturaX/discussions/15#6597e83f185db94370d6bf50 for reference | open | 2024-01-05T14:03:33Z | 2024-01-05T14:06:18Z | https://github.com/huggingface/datasets/issues/6561 | [
"documentation"
] | severo | 1 |
huggingface/pytorch-image-models | pytorch | 2,284 | [BUG] SwinTransformer Padding Backwards in PatchMerge | **Describe the bug**
In [this line](https://github.com/huggingface/pytorch-image-models/blob/ee5b1e8217134e9f016a0086b793c34abb721216/timm/models/swin_transformer.py#L438) the padding for H/W is backwards. I found this out by passing in an image size of (648, 888) during validation, but it's also obvious from the torch docs and the code.
```python
from typing import Callable, Optional

import torch.nn as nn


class PatchMerging(nn.Module):
""" Patch Merging Layer.
"""
def __init__(
self,
dim: int,
out_dim: Optional[int] = None,
norm_layer: Callable = nn.LayerNorm,
):
"""
Args:
dim: Number of input channels.
out_dim: Number of output channels (or 2 * dim if None)
norm_layer: Normalization layer.
"""
super().__init__()
self.dim = dim
self.out_dim = out_dim or 2 * dim
self.norm = norm_layer(4 * dim)
        self.reduction = nn.Linear(4 * dim, self.out_dim, bias=False)

    def forward(self, x):
B, H, W, C = x.shape
pad_values = (0, 0, 0, W % 2, 0, H % 2) # Originally (0, 0, 0, H % 2, 0, W % 2) which is wrong
x = nn.functional.pad(x, pad_values)
_, H, W, _ = x.shape
x = x.reshape(B, H // 2, 2, W // 2, 2, C).permute(0, 1, 3, 4, 2, 5).flatten(3)
x = self.norm(x)
x = self.reduction(x)
return x
```
Since the input is B, H, W, C, the padding should be in reverse order like (C_front, C_back, W_front, W_back, H_front, H_back).
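A quick check of the ordering (assuming PyTorch is installed):

```python
import torch
import torch.nn.functional as F

# F.pad's tuple pads the *last* dimension first, so for a (B, H, W, C)
# tensor it reads (C_front, C_back, W_front, W_back, H_front, H_back).
x = torch.zeros(1, 3, 5, 2)                    # B=1, H=3, W=5, C=2; H and W odd
padded = F.pad(x, (0, 0, 0, 5 % 2, 0, 3 % 2))  # pad W, then H, up to even
```

With the corrected tuple, the padded shape comes out to (1, 4, 6, 2) as intended.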
Thanks,
-Collin
| closed | 2024-09-21T20:15:01Z | 2024-09-22T00:42:00Z | https://github.com/huggingface/pytorch-image-models/issues/2284 | [
"bug"
] | collinmccarthy | 2 |
ymcui/Chinese-LLaMA-Alpaca-2 | nlp | 387 | Problem when generating summaries with the full model | ### Check the following items before submitting
- [X] Make sure you are using the latest code from the repository (git pull); some issues have already been resolved and fixed.
- [X] I have read the [project documentation](https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/wiki) and the [FAQ section](https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/wiki/常见问题), and I have searched the existing issues without finding a similar problem or solution.
- [X] Third-party plugin issues: e.g. [llama.cpp](https://github.com/ggerganov/llama.cpp), [LangChain](https://github.com/hwchase17/langchain), [text-generation-webui](https://github.com/oobabooga/text-generation-webui); it is also recommended to look for solutions in the corresponding project.
### Issue type
Other issue
### Base model
Chinese-Alpaca-2 (7B/13B)
### Operating system
Windows
### Detailed description of the problem
Last time I was told the model was different, so I switched to the model shown below:

```
Used directly after downloading
# Tutorial
https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/wiki/langchain_zh
# The summarization example from it fails
python langchain_sum.py --model_path .\chinese-alpaca-2-7b-hf\ --file_path doc.txt --gpu_id 0
```
### Dependencies (must be provided for code-related issues)
```
peft 0.6.0.dev0
torch 2.1.0+cu121
transformers 4.31.0
sentencepiece 0.1.97
bitsandbytes 0.41.0
langchain 0.0.146
sentence-transformers 2.2.2
pydantic 1.10.8
faiss 1.7.1
```
### Run logs or screenshots
```
# Paste the run log here (inside this code block)
Namespace(file_path='mydoc.txt', model_path='chinese-alpaca-2-7b-hf', gpu_id='0', chain_type='refine')
loading LLM...
You are using the legacy behaviour of the <class 'transformers.models.llama.tokenization_llama.LlamaTokenizer'>. This means that tokens that come after special tokens will not be properly handled. We recommend you to read the related pull request available at https://github.com/huggingface/transformers/pull/24565
Loading checkpoint shards: 100%|██████████| 2/2 [01:19<00:00, 39.64s/it]
Xformers is not installed correctly. If you want to use memory_efficient_attention to accelerate training use the following command to install Xformers
pip install xformers.
Traceback (most recent call last):
File "C:\Users\zx-cent\PycharmProjects\pythonaimodel\my_test.py", line 54, in <module>
model = HuggingFacePipeline.from_model_id(model_id=model_path,
File "C:\Users\zx-cent\.conda\envs\pythonaivenv\lib\site-packages\langchain\llms\huggingface_pipeline.py", line 130, in from_model_id
return cls(
File "pydantic\main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for HuggingFacePipeline
pipeline_kwargs
extra fields not permitted (type=value_error.extra)
``` | closed | 2023-11-03T01:26:42Z | 2023-11-03T02:26:26Z | https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/issues/387 | [] | 1042312930 | 2 |
tensorpack/tensorpack | tensorflow | 981 | Sharp mAP decrease after applying DoReFa to the SSD-VGG16 network | Sorry for not writing according to the template, as this problem is more relevant to DoReFa quantization than to tensorpack itself and I cannot find a more suitable place to post.
I am trying to apply DoReFa quantization to the SSD-VGG16 network, using the VOC dataset, for the classification and regression tasks. The results, however, show an unreasonably sharp decrease (from a normal mAP of 77% to around 10%) under the 8/8/8 DoReFa configuration. I am wondering whether similar work has been done and whether others have encountered the same trouble.
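For reference, the DoReFa weight quantization I am applying can be sketched in NumPy as follows (an illustrative sketch of the formula from the DoReFa-Net paper, not the tensorpack implementation):

```python
import numpy as np

def quantize_k(x, k):
    # Uniform k-bit quantizer over [0, 1]: 2**k - 1 levels
    n = 2 ** k - 1
    return np.round(x * n) / n

def dorefa_weights(w, k):
    # DoReFa-Net weight quantization: squash with tanh, rescale to
    # [0, 1], quantize to k bits, then map back to [-1, 1]
    t = np.tanh(w)
    t = t / (2 * np.abs(t).max()) + 0.5
    return 2 * quantize_k(t, k) - 1
```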
| closed | 2018-11-15T01:52:22Z | 2018-11-15T03:04:23Z | https://github.com/tensorpack/tensorpack/issues/981 | [
"unrelated"
] | asjmasjm | 1 |
miguelgrinberg/python-socketio | asyncio | 928 | Threading error when using python-socketio for a few minutes | **Describe the bug**
Whenever I use python-socketio on a Raspberry Pi, after a few minutes of sending a lot of packets a threading error eventually occurs and crashes the bot, which is unacceptable when working with robots.
**Exception**
```
Exception in thread Thread-3:
Traceback (most recent call last)
File "/usr/lib/python3.7/threading.py...
engineio/client.py, line 367
RuntimeError: can't start new thread
packet queue is empty, aborting
```
Do note that this program sends at most 30 packets per second.
| closed | 2022-05-24T19:06:45Z | 2022-05-24T19:29:38Z | https://github.com/miguelgrinberg/python-socketio/issues/928 | [] | LeoDog896 | 1 |
twopirllc/pandas-ta | pandas | 398 | pandas-ta's VPVR shows a different result | **Which version are you running? The latest version is on GitHub. Pip is for major releases.**
```python
0.3.30b0
```
**Upgrade.**
```sh
$ pip install -U git+https://github.com/twopirllc/pandas-ta
```
**Describe the bug**
I have used the same time frame here, the same start and end dates, the same exchange (Binance), and the same number of columns. Here is the .csv file of OHLCV data starting from January 21, 2021: [BTC-USDT.csv](https://github.com/twopirllc/pandas-ta/files/7243057/BTC-USDT.csv).
**Python code that shows the VPVR of BTC-USDT**
```python
import pandas as pd
import numpy as np
from scipy import stats, signal
import plotly.express as px
import plotly.graph_objects as go
import pandas_ta as ta
# Fetch OHLCV data
data = pd.read_csv('ohlcv/BTC-USDT.csv')
vp = ta.vp(data['close'], data['volume'], 24)
# Set the index to mean_close and in ascending order
vp['mean_close'] = round(vp['mean_close'], 2)
vp.set_index('mean_close', inplace=True)
vp.sort_index(ascending=True, inplace=True)
#print("Sorted by mean_close\n", vp, "\n") # Visual Table Check
# Take the last three columns and plot them with horizontal bars
vp[vp.columns[-3:]].plot(
kind='barh',
figsize=(5, 8),
title="BTCUSDT",
color=['green', 'red', 'silver'],
alpha=0.45,
stacked=True,
)
```
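For context, my understanding of a fixed-bin volume profile like `ta.vp` is roughly the following (an illustrative sketch, not pandas-ta's actual code). TradingView's VPVR instead distributes each bar's volume across the bar's high-low range rather than assigning it all to the close price, which may be one source of the discrepancy:

```python
import numpy as np
import pandas as pd

def volume_profile(close, volume, bins=24):
    # Bucket each bar's volume into equal-width price bins spanning the
    # observed close range, then sum the volume that landed in each bin.
    edges = np.linspace(close.min(), close.max(), bins + 1)
    idx = np.clip(np.digitize(close, edges) - 1, 0, bins - 1)
    return pd.Series(np.asarray(volume)).groupby(idx).sum().reindex(range(bins), fill_value=0)
```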
**Screenshots of pandas-ta show 46k as the highest-volume level**

while on TradingView 34k is the highest-volume level:

| closed | 2021-09-28T10:26:49Z | 2023-11-22T05:06:39Z | https://github.com/twopirllc/pandas-ta/issues/398 | [
"enhancement",
"help wanted"
] | pwneddesal | 5 |
voila-dashboards/voila | jupyter | 1,047 | Deploying to Heroku not working on v0.3 | ## Description
I've been struggling to get Voila to deploy to Heroku recently, and have just figured out that I am only having issues when using version 0.3. For example, the template repo (https://github.com/voila-dashboards/voila-heroku) no longer works on Heroku, as it installs the 0.3 version (I had to update runtime.txt to use 3.8.10, since 3.7.5 isn't available on the default Heroku-20 stack).
Specifically, I get a timeout error in the Heroku logs. It looks like the Voila server starts up okay but doesn't manage to send anything when a page is requested:
```
2021-12-18T13:09:03.238707+00:00 heroku[web.1]: Starting process with command `voila --port=26507 --no-browser --template=material --enable_nbextensions=True notebooks/bqplot.ipynb`
2021-12-18T13:09:05.949936+00:00 app[web.1]: [Voila] Using /tmp to store connection files
2021-12-18T13:09:05.950308+00:00 app[web.1]: [Voila] Storing connection files in /tmp/voila_l56jo0cr.
2021-12-18T13:09:05.950375+00:00 app[web.1]: [Voila] Serving static files from /app/.heroku/python/lib/python3.8/site-packages/voila/static.
2021-12-18T13:09:06.180485+00:00 app[web.1]: [Voila] Voilà is running at:
2021-12-18T13:09:06.180486+00:00 app[web.1]: http://localhost:26507/
2021-12-18T13:09:13.000000+00:00 app[api]: Build succeeded
2021-12-18T13:10:03.342378+00:00 heroku[web.1]: Error R10 (Boot timeout) -> Web process failed to bind to $PORT within 60 seconds of launch
2021-12-18T13:10:03.368529+00:00 heroku[web.1]: Stopping process with SIGKILL
2021-12-18T13:10:03.528533+00:00 heroku[web.1]: Process exited with status 137
2021-12-18T13:10:03.609381+00:00 heroku[web.1]: State changed from starting to crashed
```
If I change the `requirements.txt` to `voila==0.2.16`, it works as expected. Running Heroku locally (`heroku local`) also works as expected, even with version 0.3. I tried setting the max boot time to 180 seconds and it still didn't work.
## Reproduce
To reproduce, follow the instructions in https://github.com/voila-dashboards/voila-heroku, but change the `runtime.txt` file to `python-3.8.10` (or anything else on Heroku-20).
| closed | 2021-12-18T13:16:27Z | 2021-12-19T13:04:16Z | https://github.com/voila-dashboards/voila/issues/1047 | [
"bug"
] | samharrison7 | 2 |
BlinkDL/RWKV-LM | pytorch | 239 | Flash Attention | Hi,
Thanks for releasing RWKV! I got an error saying RWKV doesn't support Flash Attention. Is Flash Attention support planned?
Thank you! | closed | 2024-04-21T21:55:15Z | 2024-04-24T22:54:57Z | https://github.com/BlinkDL/RWKV-LM/issues/239 | [] | fakerybakery | 2 |
ageitgey/face_recognition | python | 1,244 | [QUESTION] Recognize face with machine learning of *multiple* images of face? | * face_recognition version: Newest on Nov 11 2020
* Python version: 3
* Operating System: Ubuntu
Is it possible to load multiple images to get a better facial recognition result?
I want to load images of my face in different lighting conditions, looking in different directions etc. to see if I can achieve a better effect. | closed | 2020-11-12T00:14:14Z | 2020-11-13T15:45:57Z | https://github.com/ageitgey/face_recognition/issues/1244 | [] | Hat000 | 1 |
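For context, what I have in mind is roughly: compute one encoding per photo (e.g. with `face_recognition.face_encodings`), keep them all, and match an unknown face by minimum distance over the whole set. A sketch of the matching step (pure NumPy; the threshold and filenames would be up to me):

```python
import numpy as np

def best_match_distance(known_encodings, unknown_encoding):
    # Distance of the unknown encoding to each known encoding; keeping
    # several encodings of the same face (different lighting, angles)
    # and taking the minimum is a simple way to make matching more robust.
    known = np.asarray(known_encodings)
    return np.linalg.norm(known - np.asarray(unknown_encoding), axis=1).min()
```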
taverntesting/tavern | pytest | 683 | Documentation contains reference to non-existent --tavern-beta-new-traceback flag | See: https://tavern.readthedocs.io/en/latest/debugging.html?highlight=tavern-beta-new-traceback | closed | 2021-04-30T18:19:56Z | 2021-05-08T13:32:19Z | https://github.com/taverntesting/tavern/issues/683 | [] | jsfehler | 0 |
wkentaro/labelme | computer-vision | 810 | [BUG] OSError: cannot write mode RGBA as JPEG | When I go to the second image via the next button, or try to save the current one, Python crashes and labelme closes.
The error is:
in _save raise OSError(f"cannot write mode {im.mode} as JPEG") from e
OSError: cannot write mode RGBA as JPEG

Please help me soon, I have a deadline.
| closed | 2020-12-12T12:56:54Z | 2023-04-05T14:55:01Z | https://github.com/wkentaro/labelme/issues/810 | [
"issue::bug",
"priority: high"
] | ApoorvaOjha | 4 |
litestar-org/litestar | api | 3,356 | Docs: Update `usage/security/guards` chapter | ### Summary
Follow-up from discussion: https://github.com/orgs/litestar-org/discussions/3355
Suggesting changes to chapter: https://docs.litestar.dev/latest/usage/security/guards.html
* Guards take a `Request` object as first argument, not an `ASGIConnection`. Needs correction throughout the chapter.
* Add examples for how to access path parameters, query parameters, and query body from within a guard. | closed | 2024-04-09T10:18:04Z | 2025-03-20T15:54:34Z | https://github.com/litestar-org/litestar/issues/3356 | [
"Documentation :books:"
] | aranvir | 4 |
tflearn/tflearn | data-science | 961 | How to get tensorflow session by tflearn? | Hello,
I just want to get the running TensorFlow session via tflearn.
I saw this [code](https://github.com/tflearn/tflearn/blob/master/examples/basics/weights_persistence.py) at line 65 in the tflearn examples:
`with model.session.as_default()`
Is that `model.session` the same as the running TensorFlow session?
| open | 2017-11-20T02:02:56Z | 2017-11-20T02:02:56Z | https://github.com/tflearn/tflearn/issues/961 | [] | polar99 | 0 |
django-cms/django-cms | django | 7,571 | [DOC] The suggested aldryn-search package does not work anymore with django>=4 |
## Description
https://docs.django-cms.org/en/latest/topics/searchdocs.html
The above documentation page suggests the `aldryn-search` package, which no longer works with Django >= 4.0 (a bug with `Signal` using `providing_args`).
Since django-cms 3.11 is announced to be compatible with Django 4, the documentation should not recommend this package.
* [ ] Yes, I want to help fix this issue and I will join #workgroup-documentation on [Slack](https://www.django-cms.org/slack) to confirm with the team that a PR is welcome.
* [x] No, I only want to report the issue.
| closed | 2023-05-31T10:33:28Z | 2024-07-31T06:48:52Z | https://github.com/django-cms/django-cms/issues/7571 | [
"good first issues",
"component: documentation",
"needs contribution",
"Easy pickings"
] | fabien-michel | 15 |
piskvorky/gensim | nlp | 2,775 | xml.etree.cElementTree was deprecated and removed in Python 3.9 in favor of ElementTree | #### Problem description
xml.etree.cElementTree was deprecated and removed in Python 3.9 in favor of ElementTree
Ref : https://github.com/python/cpython/pull/19108
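For reference, the migration is a one-line import change; since Python 3.3, `xml.etree.ElementTree` already uses the C accelerator automatically:

```python
# Before (removed in Python 3.9):
# import xml.etree.cElementTree as ET

# After (works on all supported versions):
import xml.etree.ElementTree as ET

root = ET.fromstring("<doc><item>1</item></doc>")
```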
#### Versions
Python 3.9
I will raise a PR for this issue. | closed | 2020-03-29T14:43:18Z | 2020-04-24T19:54:33Z | https://github.com/piskvorky/gensim/issues/2775 | [] | tirkarthi | 0 |
CTFd/CTFd | flask | 2,148 | Return solves/fails in Challenge plugin class | I found a comment in a plugin that suggested that we should return the solve/fail object in Challenge plugins. I agree with this and it doesn't seem to introduce a breaking change so might as well do it.
We should return the solve object in this function:
https://github.com/CTFd/CTFd/blob/a2c81cb03a398f3ca1819642b8e8dba181dccb22/CTFd/plugins/challenges/__init__.py#L132 | open | 2022-06-24T16:00:55Z | 2022-06-24T16:00:55Z | https://github.com/CTFd/CTFd/issues/2148 | [] | ColdHeat | 0 |
graphql-python/graphene-django | django | 1,383 | TypeError: Object of type OperationType is not JSON serializable when there's an Exception | *My code has not changed, I simply upgraded Graphene (graphene-python, graphene-django)*
* **What is the current behavior?**
Since upgrading to the latest versions (on Django 3.2.16), I have started getting the following errors when an exception is raised in my resolvers and mutations:
```py
platform | Traceback (most recent call last):
platform | File "/usr/local/lib/python3.9/site-packages/django/core/handlers/exception.py", line 47, in inner
platform | response = get_response(request)
platform | File "/usr/local/lib/python3.9/site-packages/django/core/handlers/base.py", line 181, in _get_response
platform | response = wrapped_callback(request, *callback_args, **callback_kwargs)
platform | File "/usr/local/lib/python3.9/site-packages/django/views/decorators/csrf.py", line 54, in wrapped_view
platform | return view_func(*args, **kwargs)
platform | File "/usr/local/lib/python3.9/site-packages/django/views/generic/base.py", line 70, in view
platform | return self.dispatch(request, *args, **kwargs)
platform | File "/usr/local/lib/python3.9/site-packages/django/utils/decorators.py", line 43, in _wrapper
platform | return bound_method(*args, **kwargs)
platform | File "/usr/local/lib/python3.9/site-packages/django/utils/decorators.py", line 130, in _wrapped_view
platform | response = view_func(request, *args, **kwargs)
platform | File "/usr/local/lib/python3.9/site-packages/graphene_django/views.py", line 179, in dispatch
platform | result, status_code = self.get_response(request, data, show_graphiql)
platform | File "/usr/local/lib/python3.9/site-packages/graphene_django/views.py", line 224, in get_response
platform | result = self.json_encode(request, response, pretty=show_graphiql)
platform | File "/usr/local/lib/python3.9/site-packages/graphene_django/views.py", line 235, in json_encode
platform | return json.dumps(d, separators=(",", ":"))
platform | File "/usr/local/lib/python3.9/json/__init__.py", line 234, in dumps
platform | return cls(
platform | File "/usr/local/lib/python3.9/json/encoder.py", line 199, in encode
platform | chunks = self.iterencode(o, _one_shot=True)
platform | File "/usr/local/lib/python3.9/json/encoder.py", line 257, in iterencode
platform | return _iterencode(o, 0)
platform | File "/usr/local/lib/python3.9/json/encoder.py", line 179, in default
platform | raise TypeError(f'Object of type {o.__class__.__name__} '
platform | TypeError: Object of type OperationType is not JSON serializable
```
Everything was working fine before, until I upgraded to these versions:
```
Django==3.2.16
...
graphene==3.2.1
graphene-django==3.0.0
graphene-file-upload==1.3.0
graphql-core==3.2.3
graphql-relay==3.2.0
...
```
* **What is the expected behavior?**
I expected everything to work as it used to before since I didn't change application code.
* **What is the motivation / use case for changing the behavior?**
I was actually trying to get the latest version of the GraphiQL GUI when I started. But then I noticed that my dependencies were very outdated.
* **Please tell us about your environment:**
- Version: 3.0.0
- Platform: Python 3.10.9, macOS Ventura (Chip M1 Pro)
| open | 2023-01-17T06:44:00Z | 2023-04-25T18:32:27Z | https://github.com/graphql-python/graphene-django/issues/1383 | [
"🐛bug"
] | sithembiso | 1 |
netbox-community/netbox | django | 17,772 | My CI/CD breaks because tag v4.1.4 doesn't point to the PR merge commit on the master branch; it points to a commit in develop | ### Deployment Type
Self-hosted
### Triage priority
N/A
### NetBox Version
v4.1.3
### Python Version
3.12
### Steps to Reproduce
I have a GitHub workflow in my repo that tracks the NetBox master branch every day. It checks the tags and finds the latest release by parsing the latest tag. Then I review the code again and merge it into my local main branch.
It worked well for previous releases but stopped working with v4.1.4.
```
UPSTREAM_TAG=$(git ls-remote --tags | grep -h $(git rev-parse --short ${{ env.upstream_remote_name }}/${{ env.upstream_branch_name }} ) | awk 'END{print}' | awk '{print $2}' | cut -d'/' -f 3)
UPSTREAM_HEAD_COMMIT=$(git rev-parse --short ${{ env.upstream_remote_name }}/${{ env.upstream_branch_name }} )
```
The tag for the latest version, v4.1.4, doesn't point to the "Merge pull request" commit in the master branch; in other words, the latest commit in the master branch is not in the repo's tag list. There is an inconsistency.
The tags can be fetched with `git ls-remote --tags` or `git ls-remote --tags https://github.com/netbox-community/netbox.git`.
The tag v4.1.4 points to d2cbdfe7d742f0d2db7989ed27cde466c8366dea in the develop branch,
while the other tags point to "Merge pull request" commits in the master branch.
Tag v4.1.4 should point to 6ea0c0c in the master branch.
```
7bc0d34196323ac992d7ec80b1caa48e6094d88d refs/tags/v4.1.0
Merge pull request #17350 from netbox-community/develop <--- commit in master branch
0e34fba92223348e0bf4375b8d380324ff5e1beb refs/tags/v4.1.1
Merge pull request #17478 from netbox-community/develop <--- commit in master branch
ead6e637f4aecc4717b10c71f9140c94040da264 refs/tags/v4.1.2
Merge pull request #17626 from netbox-community/develop <--- commit in master branch
6ea0c0c3e910d1104fd0fbe5e6cd07198862d1fa refs/tags/v4.1.3
Merge pull request #17658 from netbox-community/develop <--- commit in master branch
d2cbdfe7d742f0d2db7989ed27cde466c8366dea refs/tags/v4.1.4
Release v4.1.4 <--- commit in develop branch
```
### Expected Behavior
Tag v4.1.4 should point to 6ea0c0c in master branch
### Observed Behavior
Tag v4.1.4 points to d2cbdfe7d742f0d2db7989ed27cde466c8366dea in develop branch | closed | 2024-10-16T10:05:01Z | 2024-10-16T11:52:28Z | https://github.com/netbox-community/netbox/issues/17772 | [] | marsteel | 0 |
AUTOMATIC1111/stable-diffusion-webui | deep-learning | 15,518 | [Bug]: Missing scrollbar on extra networks tabs with tree view enabled | ### Checklist
- [X] The issue exists after disabling all extensions
- [X] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [X] The issue exists in the current version of the webui
- [X] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
I have lots of Loras in various subfolders. With the dir view enabled ("Extra Networks directory view style" set to dirs), the folder names fill the entire available space on the Lora tab; names that don't fit on the screen are cut off, and the Lora thumbnails below aren't visible either. I can drag the resize handle down until all the folders and the thumbnails beneath them are visible, but this is cumbersome compared to the previous behavior.
I think this broke when the resize handle was introduced and the extra-network-pane divs were switched to flexbox.
I'm not good enough with CSS to fix this but removing the `height: calc(100vh - 24rem);` rule from `.extra-network-pane` in style.css alleviates the problem by making the extra networks pane big enough that all content fits without scrolling (but this is probably not ideal either since it can get very large).
Edit: edited the title of the issue and the description above to clarify that this happens only when "Extra Networks directory view style" is set to dirs. Tree view is fine.
### Steps to reproduce the problem
1. Have lots of Loras sorted into many folders (enough that the list of folders doesn't fit on a single screen on the Lora tab)
2. Go to Lora tab
3. Observe that list of folders is cut off, thumbnails below the folder list are not visible
### What should have happened?
Either the list of folders should be big enough so no folders are cut off or there should be a scrollbar. The thumbnails should be visible below the list of folders even if the list of folders takes up more than a screen.
### What browsers do you use to access the UI ?
Brave
### Sysinfo
[sysinfo.txt](https://github.com/AUTOMATIC1111/stable-diffusion-webui/files/14972152/sysinfo.txt)
### Console logs
```Shell
not relevant, it's a CSS styling issue
```
### Additional information
_No response_ | open | 2024-04-14T21:25:31Z | 2024-04-23T20:59:21Z | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/15518 | [
"bug"
] | thatfuckingbird | 5 |
Urinx/WeixinBot | api | 226 | WeChat Web now uses the mmtls protocol | Does that mean this program can no longer run? I am currently running the robot under _py3, and it can no longer fetch contact information: the API returns an error code and the content comes back empty. | closed | 2017-08-25T05:00:54Z | 2017-08-25T05:02:48Z | https://github.com/Urinx/WeixinBot/issues/226 | [] | daimon99 | 0 |
axnsan12/drf-yasg | django | 168 | Cache feature breaks when using the Redis cache backend | I followed the docs to set up `drf-yasg`, but I kept getting this error. I even tried removing all my paths, leaving only the schema paths, and I'm still getting the error.
Here are my deps:
```
celery[redis]==4.2.1
channels==2.1.2
channels_redis==2.2.1
django[argon2]==2.0.7
django-cors-headers==2.2.0
django-debug-toolbar==1.9.1
django-filter==2.0.0
django-redis==4.9.0
djangorestframework==3.8.2
djangorestframework_simplejwt==3.2.3
drf-yasg[validation]==1.9.1
gunicorn==19.9.0
Markdown==2.6.11
pika==0.12.0
psycopg2==2.7.5
Pygments==2.2.0
rethinkdb==2.3.0.post6
uvicorn==0.2.17
```
And here is the error log:
```
Environment:
Request Method: GET
Request URL: http://localhost/docs
Django Version: 2.0.7
Python Version: 3.6.5
Installed Applications:
['django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'channels',
'django_filters',
'drf_yasg',
'rest_framework',
'windspeed.taskapp.celery.CeleryConfig',
'windspeed.common.apps.CommonConfig',
'windspeed.accounts.apps.AccountsConfig',
'windspeed.authentication.apps.AuthenticationConfig',
'debug_toolbar']
Installed Middleware:
['django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
'debug_toolbar.middleware.DebugToolbarMiddleware']
Traceback:
File "/usr/local/lib/python3.6/site-packages/django/core/handlers/exception.py" in inner
35. response = get_response(request)
File "/usr/local/lib/python3.6/site-packages/django/core/handlers/base.py" in _get_response
158. response = self.process_exception_by_middleware(e, request)
File "/usr/local/lib/python3.6/site-packages/django/core/handlers/base.py" in _get_response
156. response = response.render()
File "/usr/local/lib/python3.6/site-packages/django/template/response.py" in render
108. newretval = post_callback(retval)
File "/usr/local/lib/python3.6/site-packages/django/utils/decorators.py" in callback
156. return middleware.process_response(request, response)
File "/usr/local/lib/python3.6/site-packages/django/middleware/cache.py" in process_response
102. lambda r: self.cache.set(cache_key, r, timeout)
File "/usr/local/lib/python3.6/site-packages/django/template/response.py" in add_post_render_callback
93. callback(self)
File "/usr/local/lib/python3.6/site-packages/django/middleware/cache.py" in <lambda>
102. lambda r: self.cache.set(cache_key, r, timeout)
File "/usr/local/lib/python3.6/site-packages/debug_toolbar/panels/cache.py" in wrapped
33. value = method(self, *args, **kwargs)
File "/usr/local/lib/python3.6/site-packages/debug_toolbar/panels/cache.py" in set
79. return self.cache.set(*args, **kwargs)
File "/usr/local/lib/python3.6/site-packages/django_redis/cache.py" in _decorator
32. return method(self, *args, **kwargs)
File "/usr/local/lib/python3.6/site-packages/django_redis/cache.py" in set
67. return self.client.set(*args, **kwargs)
File "/usr/local/lib/python3.6/site-packages/django_redis/client/default.py" in set
114. nvalue = self.encode(value)
File "/usr/local/lib/python3.6/site-packages/django_redis/client/default.py" in encode
326. value = self._serializer.dumps(value)
File "/usr/local/lib/python3.6/site-packages/django_redis/serializers/json.py" in dumps
14. return json.dumps(value, cls=DjangoJSONEncoder).encode()
File "/usr/local/lib/python3.6/json/__init__.py" in dumps
238. **kw).encode(obj)
File "/usr/local/lib/python3.6/json/encoder.py" in encode
199. chunks = self.iterencode(o, _one_shot=True)
File "/usr/local/lib/python3.6/json/encoder.py" in iterencode
257. return _iterencode(o, 0)
File "/usr/local/lib/python3.6/site-packages/django/core/serializers/json.py" in default
104. return super().default(o)
File "/usr/local/lib/python3.6/json/encoder.py" in default
180. o.__class__.__name__)
Exception Type: TypeError at /docs
Exception Value: Object of type 'Response' is not JSON serializable
```
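Looking at the traceback, django-redis is using its JSON serializer (`django_redis.serializers.json.JSONSerializer`), which cannot encode the rendered `Response` objects that Django's cache middleware stores. A hedged sketch of the settings change I am considering (hypothetical `LOCATION`; django-redis's default pickle serializer can store `Response` objects):

```python
# Settings sketch: drop the JSON serializer override for the cache that
# backs the page-cache middleware, so django-redis falls back to pickle.
CACHES = {
    "default": {
        "BACKEND": "django_redis.cache.RedisCache",
        "LOCATION": "redis://127.0.0.1:6379/1",  # hypothetical
        "OPTIONS": {
            "CLIENT_CLASS": "django_redis.client.DefaultClient",
            # "SERIALIZER": "django_redis.serializers.json.JSONSerializer",  # triggers this error
        },
    }
}
```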
Would appreciate any help! :) | closed | 2018-07-23T09:54:57Z | 2018-08-06T11:04:57Z | https://github.com/axnsan12/drf-yasg/issues/168 | [] | thomasjiangcy | 4 |
Asabeneh/30-Days-Of-Python | numpy | 296 | Typo in Intro | Right in the first paragraph you mention "month pythons..." the comedy skit. I believe you meant "Monty" | closed | 2022-08-24T12:08:53Z | 2023-07-08T22:16:19Z | https://github.com/Asabeneh/30-Days-Of-Python/issues/296 | [] | nickocruzm | 0 |
sinaptik-ai/pandas-ai | pandas | 1,408 | Docker Compose Build issue | ### System Info
Hey, I am trying to run the platform locally using the instructions given in this thread:
https://docs.pandas-ai.com/platform
When I build the containers with the docker compose command, I get an error and the service stops running.
pandabi-backend | ERROR: Application startup failed. Exiting.
### Platform Details:
Linux "Ubuntu 22.04.4 LTS"
Docker version 27.3.1
### 🐛 Describe the bug
### Build Logs
Creating network "pandas-ai_pandabi-network" with driver "bridge"
Creating pandas-ai_postgresql_1 ... done
Creating pandabi-frontend ... done
Creating pandabi-backend ... done
Attaching to pandabi-frontend, pandas-ai_postgresql_1, pandabi-backend
pandabi-backend | startup.sh: line 6: log: command not found
postgresql_1 |
postgresql_1 | PostgreSQL Database directory appears to contain a database; Skipping initialization
postgresql_1 |
postgresql_1 | 2024-10-24 09:58:10.417 UTC [1] LOG: starting PostgreSQL 14.2 on x86_64-pc-linux-musl, compiled by gcc (Alpine 10.3.1_git20211027) 10.3.1 20211027, 64-bit
postgresql_1 | 2024-10-24 09:58:10.417 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
postgresql_1 | 2024-10-24 09:58:10.417 UTC [1] LOG: listening on IPv6 address "::", port 5432
postgresql_1 | 2024-10-24 09:58:10.422 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
postgresql_1 | 2024-10-24 09:58:10.428 UTC [21] LOG: database system was shut down at 2024-10-24 06:37:08 UTC
postgresql_1 | 2024-10-24 09:58:10.433 UTC [1] LOG: database system is ready to accept connections
pandabi-frontend |
pandabi-frontend | > client@0.1.0 start
pandabi-frontend | > next start
pandabi-frontend |
pandabi-frontend | ⚠ You are using a non-standard "NODE_ENV" value in your environment. This creates inconsistencies in the project and is strongly advised against. Read more: https://nextjs.org/docs/messages/non-standard-node-env
pandabi-frontend | ▲ Next.js 14.2.3
pandabi-frontend | - Local: http://localhost:3000
pandabi-frontend |
pandabi-frontend | ✓ Starting...
pandabi-frontend | ✓ Ready in 747ms
pandabi-backend | Resolving dependencies...
pandabi-backend | Warning: The locked version 3.9.1 for matplotlib is a yanked version. Reason for being yanked: The Windows wheels, under some conditions, caused segfaults in unrelated user code. Due to this we deleted the Windows wheels to prevent these segfaults, however this caused greater disruption as pip then began to try (and fail) to build 3.9.1 from the sdist on Windows which impacted far more users. Yanking the whole release is the only tool available to eliminate these failures without changes to on the user side. The sdist, OSX wheel, and manylinux wheels are all functional and there are no critical bugs in the release. Downstream packagers should not yank their builds of Matplotlib 3.9.1. See https://github.com/matplotlib/matplotlib/issues/28551 for details.
pandabi-backend | poetry install
pandabi-backend | Installing dependencies from lock file
pandabi-backend |
pandabi-backend | No dependencies to install or update
pandabi-backend |
pandabi-backend | Installing the current project: pandasai-server (0.1.0)
pandabi-backend |
pandabi-backend | Warning: The current project could not be installed: No file/folder found for package pandasai-server
pandabi-backend | If you do not want to install the current project use --no-root.
pandabi-backend | If you want to use Poetry only for dependency management but not for packaging, you can disable package mode by setting package-mode = false in your pyproject.toml file.
pandabi-backend | In a future version of Poetry this warning will become an error!
pandabi-backend | wait-for-it.sh: 4: shift: can't shift that many
pandabi-backend | export DEBUG='1'
pandabi-backend | export ENVIRONMENT='development'
pandabi-backend | export GPG_KEY='A035C8C19219BA821ECEA86B64E628F8D684696D'
pandabi-backend | export HOME='/root'
pandabi-backend | export HOSTNAME='99c683c72737'
pandabi-backend | export LANG='C.UTF-8'
pandabi-backend | export MAKEFLAGS=''
pandabi-backend | export MAKELEVEL='1'
pandabi-backend | export MFLAGS=''
pandabi-backend | export PANDASAI_API_KEY='$2a$10$eLN.4Ut5vlqZi9V6OIQBkOAyMA42AxgV9lwgwkxnT5bCoWBSzFt/q'
pandabi-backend | export PATH='/root/.cache/pypoetry/virtualenvs/pandasai-server-9TtSrW0h-py3.11/bin:/root/.local/bin:/usr/local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'
pandabi-backend | export POSTGRES_URL='postgresql+asyncpg://pandasai:password123@postgresql:5432/pandasai-db'
pandabi-backend | export PS1='(pandasai-server-py3.11) '
pandabi-backend | export PWD='/app'
pandabi-backend | export PYTHON_SHA256='07a4356e912900e61a15cb0949a06c4a05012e213ecd6b4e84d0f67aabbee372'
pandabi-backend | export PYTHON_VERSION='3.11.10'
pandabi-backend | export SHLVL='1'
pandabi-backend | export SHOW_SQL_ALCHEMY_QUERIES='0'
pandabi-backend | export TEST_POSTGRES_URL='postgresql+asyncpg://pandasai:password123@postgresql:5432/pandasai-db'
pandabi-backend | export VIRTUAL_ENV='/root/.cache/pypoetry/virtualenvs/pandasai-server-9TtSrW0h-py3.11'
pandabi-backend | export VIRTUAL_ENV_PROMPT='pandasai-server-py3.11'
pandabi-backend | export _='/usr/bin/make'
pandabi-backend | poetry run alembic upgrade head
pandabi-backend | Traceback (most recent call last):
pandabi-backend | File "/root/.cache/pypoetry/virtualenvs/pandasai-server-9TtSrW0h-py3.11/bin/alembic", line 8, in <module>
pandabi-backend | sys.exit(main())
pandabi-backend | ^^^^^^
pandabi-backend | File "/root/.cache/pypoetry/virtualenvs/pandasai-server-9TtSrW0h-py3.11/lib/python3.11/site-packages/alembic/config.py", line 636, in main
pandabi-backend | CommandLine(prog=prog).main(argv=argv)
pandabi-backend | File "/root/.cache/pypoetry/virtualenvs/pandasai-server-9TtSrW0h-py3.11/lib/python3.11/site-packages/alembic/config.py", line 626, in main
pandabi-backend | self.run_cmd(cfg, options)
pandabi-backend | File "/root/.cache/pypoetry/virtualenvs/pandasai-server-9TtSrW0h-py3.11/lib/python3.11/site-packages/alembic/config.py", line 603, in run_cmd
pandabi-backend | fn(
pandabi-backend | File "/root/.cache/pypoetry/virtualenvs/pandasai-server-9TtSrW0h-py3.11/lib/python3.11/site-packages/alembic/command.py", line 406, in upgrade
pandabi-backend | script.run_env()
pandabi-backend | File "/root/.cache/pypoetry/virtualenvs/pandasai-server-9TtSrW0h-py3.11/lib/python3.11/site-packages/alembic/script/base.py", line 582, in run_env
pandabi-backend | util.load_python_file(self.dir, "env.py")
pandabi-backend | File "/root/.cache/pypoetry/virtualenvs/pandasai-server-9TtSrW0h-py3.11/lib/python3.11/site-packages/alembic/util/pyfiles.py", line 95, in load_python_file
pandabi-backend | module = load_module_py(module_id, path)
pandabi-backend | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
pandabi-backend | File "/root/.cache/pypoetry/virtualenvs/pandasai-server-9TtSrW0h-py3.11/lib/python3.11/site-packages/alembic/util/pyfiles.py", line 113, in load_module_py
pandabi-backend | spec.loader.exec_module(module) # type: ignore
pandabi-backend | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
pandabi-backend | File "<frozen importlib._bootstrap_external>", line 940, in exec_module
pandabi-backend | File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
pandabi-backend | File "/app/migrations/env.py", line 10, in <module>
pandabi-backend | from app.models import Base
pandabi-backend | ModuleNotFoundError: No module named 'app'
pandabi-backend | make: *** [Makefile:52: migrate] Error 1
pandabi-backend | export DEBUG='1'
pandabi-backend | export ENVIRONMENT='development'
pandabi-backend | export GPG_KEY='A035C8C19219BA821ECEA86B64E628F8D684696D'
pandabi-backend | export HOME='/root'
pandabi-backend | export HOSTNAME='99c683c72737'
pandabi-backend | export LANG='C.UTF-8'
pandabi-backend | export MAKEFLAGS=''
pandabi-backend | export MAKELEVEL='1'
pandabi-backend | export MFLAGS=''
pandabi-backend | export PANDASAI_API_KEY='$2a$10$eLN.4Ut5vlqZi9V6OIQBkOAyMA42AxgV9lwgwkxnT5bCoWBSzFt/q'
pandabi-backend | export PATH='/root/.cache/pypoetry/virtualenvs/pandasai-server-9TtSrW0h-py3.11/bin:/root/.local/bin:/usr/local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'
pandabi-backend | export POSTGRES_URL='postgresql+asyncpg://pandasai:password123@postgresql:5432/pandasai-db'
pandabi-backend | export PS1='(pandasai-server-py3.11) '
pandabi-backend | export PWD='/app'
pandabi-backend | export PYTHON_SHA256='07a4356e912900e61a15cb0949a06c4a05012e213ecd6b4e84d0f67aabbee372'
pandabi-backend | export PYTHON_VERSION='3.11.10'
pandabi-backend | export SHLVL='1'
pandabi-backend | export SHOW_SQL_ALCHEMY_QUERIES='0'
pandabi-backend | export TEST_POSTGRES_URL='postgresql+asyncpg://pandasai:password123@postgresql:5432/pandasai-db'
pandabi-backend | export VIRTUAL_ENV='/root/.cache/pypoetry/virtualenvs/pandasai-server-9TtSrW0h-py3.11'
pandabi-backend | export VIRTUAL_ENV_PROMPT='pandasai-server-py3.11'
pandabi-backend | export _='/usr/bin/make'
pandabi-backend | poetry run python main.py
pandabi-backend | INFO: Will watch for changes in these directories: ['/app']
pandabi-backend | INFO: Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
pandabi-backend | INFO: Started reloader process [58] using StatReload
pandabi-backend | INFO: Started server process [62]
pandabi-backend | INFO: Waiting for application startup.
pandabi-backend | 2024-10-24 09:58:31,386 INFO sqlalchemy.engine.Engine select pg_catalog.version()
pandabi-backend | 2024-10-24 09:58:31,386 INFO sqlalchemy.engine.Engine [raw sql] ()
pandabi-backend | 2024-10-24 09:58:31,388 INFO sqlalchemy.engine.Engine select current_schema()
pandabi-backend | 2024-10-24 09:58:31,388 INFO sqlalchemy.engine.Engine [raw sql] ()
pandabi-backend | 2024-10-24 09:58:31,389 INFO sqlalchemy.engine.Engine show standard_conforming_strings
pandabi-backend | 2024-10-24 09:58:31,389 INFO sqlalchemy.engine.Engine [raw sql] ()
pandabi-backend | 2024-10-24 09:58:31,391 INFO sqlalchemy.engine.Engine BEGIN (implicit)
pandabi-backend | 2024-10-24 09:58:31,400 INFO sqlalchemy.engine.Engine SELECT anon_1.id, anon_1.email, anon_1.first_name, anon_1.created_at, anon_1.password, anon_1.verified, anon_1.last_name, anon_1.features, organization_1.id AS id_1, organization_1.name, organization_1.url, organization_1.is_default, organization_1.settings, organization_membership_1.id AS id_2, organization_membership_1.user_id, organization_membership_1.organization_id, organization_membership_1.role, organization_membership_1.verified AS verified_1
pandabi-backend | FROM (SELECT "user".id AS id, "user".email AS email, "user".first_name AS first_name, "user".created_at AS created_at, "user".password AS password, "user".verified AS verified, "user".last_name AS last_name, "user".features AS features
pandabi-backend | FROM "user"
pandabi-backend | LIMIT $1::INTEGER OFFSET $2::INTEGER) AS anon_1 LEFT OUTER JOIN organization_membership AS organization_membership_1 ON anon_1.id = organization_membership_1.user_id LEFT OUTER JOIN organization AS organization_1 ON organization_1.id = organization_membership_1.organization_id
pandabi-backend | 2024-10-24 09:58:31,400 INFO sqlalchemy.engine.Engine [generated in 0.00031s] (1, 0)
postgresql_1 | 2024-10-24 09:58:31.401 UTC [28] ERROR: relation "user" does not exist at character 700
postgresql_1 | 2024-10-24 09:58:31.401 UTC [28] STATEMENT: SELECT anon_1.id, anon_1.email, anon_1.first_name, anon_1.created_at, anon_1.password, anon_1.verified, anon_1.last_name, anon_1.features, organization_1.id AS id_1, organization_1.name, organization_1.url, organization_1.is_default, organization_1.settings, organization_membership_1.id AS id_2, organization_membership_1.user_id, organization_membership_1.organization_id, organization_membership_1.role, organization_membership_1.verified AS verified_1
postgresql_1 | FROM (SELECT "user".id AS id, "user".email AS email, "user".first_name AS first_name, "user".created_at AS created_at, "user".password AS password, "user".verified AS verified, "user".last_name AS last_name, "user".features AS features
postgresql_1 | FROM "user"
postgresql_1 | LIMIT $1::INTEGER OFFSET $2::INTEGER) AS anon_1 LEFT OUTER JOIN organization_membership AS organization_membership_1 ON anon_1.id = organization_membership_1.user_id LEFT OUTER JOIN organization AS organization_1 ON organization_1.id = organization_membership_1.organization_id
pandabi-backend | 2024-10-24 09:58:31,402 INFO sqlalchemy.engine.Engine ROLLBACK
pandabi-backend | ERROR: Traceback (most recent call last):
pandabi-backend | File "/root/.cache/pypoetry/virtualenvs/pandasai-server-9TtSrW0h-py3.11/lib/python3.11/site-packages/sqlalchemy/dialects/postgresql/asyncpg.py", line 514, in _prepare_and_execute
pandabi-backend | prepared_stmt, attributes = await adapt_connection._prepare(
pandabi-backend | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
pandabi-backend | File "/root/.cache/pypoetry/virtualenvs/pandasai-server-9TtSrW0h-py3.11/lib/python3.11/site-packages/sqlalchemy/dialects/postgresql/asyncpg.py", line 760, in _prepare
pandabi-backend | prepared_stmt = await self._connection.prepare(
pandabi-backend | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
pandabi-backend | File "/root/.cache/pypoetry/virtualenvs/pandasai-server-9TtSrW0h-py3.11/lib/python3.11/site-packages/asyncpg/connection.py", line 636, in prepare
pandabi-backend | return await self._prepare(
pandabi-backend | ^^^^^^^^^^^^^^^^^^^^
pandabi-backend | File "/root/.cache/pypoetry/virtualenvs/pandasai-server-9TtSrW0h-py3.11/lib/python3.11/site-packages/asyncpg/connection.py", line 654, in _prepare
pandabi-backend | stmt = await self._get_statement(
pandabi-backend | ^^^^^^^^^^^^^^^^^^^^^^^^^^
pandabi-backend | File "/root/.cache/pypoetry/virtualenvs/pandasai-server-9TtSrW0h-py3.11/lib/python3.11/site-packages/asyncpg/connection.py", line 433, in _get_statement
pandabi-backend | statement = await self._protocol.prepare(
pandabi-backend | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
pandabi-backend | File "asyncpg/protocol/protocol.pyx", line 166, in prepare
pandabi-backend | asyncpg.exceptions.UndefinedTableError: relation "user" does not exist
pandabi-backend |
pandabi-backend | The above exception was the direct cause of the following exception:
pandabi-backend |
pandabi-backend | Traceback (most recent call last):
pandabi-backend | File "/root/.cache/pypoetry/virtualenvs/pandasai-server-9TtSrW0h-py3.11/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 1967, in _exec_single_context
pandabi-backend | self.dialect.do_execute(
pandabi-backend | File "/root/.cache/pypoetry/virtualenvs/pandasai-server-9TtSrW0h-py3.11/lib/python3.11/site-packages/sqlalchemy/engine/default.py", line 924, in do_execute
pandabi-backend | cursor.execute(statement, parameters)
pandabi-backend | File "/root/.cache/pypoetry/virtualenvs/pandasai-server-9TtSrW0h-py3.11/lib/python3.11/site-packages/sqlalchemy/dialects/postgresql/asyncpg.py", line 572, in execute
pandabi-backend | self._adapt_connection.await_(
pandabi-backend | File "/root/.cache/pypoetry/virtualenvs/pandasai-server-9TtSrW0h-py3.11/lib/python3.11/site-packages/sqlalchemy/util/_concurrency_py3k.py", line 132, in await_only
pandabi-backend | return current.parent.switch(awaitable) # type: ignore[no-any-return,attr-defined] # noqa: E501
pandabi-backend | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
pandabi-backend | File "/root/.cache/pypoetry/virtualenvs/pandasai-server-9TtSrW0h-py3.11/lib/python3.11/site-packages/sqlalchemy/util/_concurrency_py3k.py", line 196, in greenlet_spawn
pandabi-backend | value = await result
pandabi-backend | ^^^^^^^^^^^^
pandabi-backend | File "/root/.cache/pypoetry/virtualenvs/pandasai-server-9TtSrW0h-py3.11/lib/python3.11/site-packages/sqlalchemy/dialects/postgresql/asyncpg.py", line 550, in _prepare_and_execute
pandabi-backend | self._handle_exception(error)
pandabi-backend | File "/root/.cache/pypoetry/virtualenvs/pandasai-server-9TtSrW0h-py3.11/lib/python3.11/site-packages/sqlalchemy/dialects/postgresql/asyncpg.py", line 501, in _handle_exception
pandabi-backend | self._adapt_connection._handle_exception(error)
pandabi-backend | File "/root/.cache/pypoetry/virtualenvs/pandasai-server-9TtSrW0h-py3.11/lib/python3.11/site-packages/sqlalchemy/dialects/postgresql/asyncpg.py", line 784, in _handle_exception
pandabi-backend | raise translated_error from error
pandabi-backend | sqlalchemy.dialects.postgresql.asyncpg.AsyncAdapt_asyncpg_dbapi.ProgrammingError: <class 'asyncpg.exceptions.UndefinedTableError'>: relation "user" does not exist
pandabi-backend |
pandabi-backend | The above exception was the direct cause of the following exception:
pandabi-backend |
pandabi-backend | Traceback (most recent call last):
pandabi-backend | File "/root/.cache/pypoetry/virtualenvs/pandasai-server-9TtSrW0h-py3.11/lib/python3.11/site-packages/starlette/routing.py", line 671, in lifespan
pandabi-backend | async with self.lifespan_context(app):
pandabi-backend | File "/root/.cache/pypoetry/virtualenvs/pandasai-server-9TtSrW0h-py3.11/lib/python3.11/site-packages/starlette/routing.py", line 566, in __aenter__
pandabi-backend | await self._router.startup()
pandabi-backend | File "/root/.cache/pypoetry/virtualenvs/pandasai-server-9TtSrW0h-py3.11/lib/python3.11/site-packages/starlette/routing.py", line 648, in startup
pandabi-backend | await handler()
pandabi-backend | File "/app/core/server.py", line 145, in on_startup
pandabi-backend | await init_database()
pandabi-backend | File "/app/core/server.py", line 113, in init_database
pandabi-backend | user = await init_user()
pandabi-backend | ^^^^^^^^^^^^^^^^^
pandabi-backend | File "/app/core/server.py", line 81, in init_user
pandabi-backend | await controller.create_default_user()
pandabi-backend | File "/app/core/database/transactional.py", line 40, in decorator
pandabi-backend | raise exception
pandabi-backend | File "/app/core/database/transactional.py", line 27, in decorator
pandabi-backend | result = await self._run_required_new(
pandabi-backend | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
pandabi-backend | File "/app/core/database/transactional.py", line 53, in _run_required_new
pandabi-backend | result = await function(*args, **kwargs)
pandabi-backend | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
pandabi-backend | File "/app/app/controllers/user.py", line 21, in create_default_user
pandabi-backend | users = await self.get_all(limit=1, join_={"memberships"})
pandabi-backend | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
pandabi-backend | File "/app/core/controller/base.py", line 69, in get_all
pandabi-backend | response = await self.repository.get_all(skip, limit, join_)
pandabi-backend | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
pandabi-backend | File "/app/core/repository/base.py", line 48, in get_all
pandabi-backend | return await self._all_unique(query)
pandabi-backend | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
pandabi-backend | File "/app/core/repository/base.py", line 124, in _all_unique
pandabi-backend | result = await self.session.execute(query)
pandabi-backend | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
pandabi-backend | File "/root/.cache/pypoetry/virtualenvs/pandasai-server-9TtSrW0h-py3.11/lib/python3.11/site-packages/sqlalchemy/ext/asyncio/scoping.py", line 589, in execute
pandabi-backend | return await self._proxied.execute(
pandabi-backend | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
pandabi-backend | File "/root/.cache/pypoetry/virtualenvs/pandasai-server-9TtSrW0h-py3.11/lib/python3.11/site-packages/sqlalchemy/ext/asyncio/session.py", line 461, in execute
pandabi-backend | result = await greenlet_spawn(
pandabi-backend | ^^^^^^^^^^^^^^^^^^^^^
pandabi-backend | File "/root/.cache/pypoetry/virtualenvs/pandasai-server-9TtSrW0h-py3.11/lib/python3.11/site-packages/sqlalchemy/util/_concurrency_py3k.py", line 201, in greenlet_spawn
pandabi-backend | result = context.throw(*sys.exc_info())
pandabi-backend | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
pandabi-backend | File "/root/.cache/pypoetry/virtualenvs/pandasai-server-9TtSrW0h-py3.11/lib/python3.11/site-packages/sqlalchemy/orm/session.py", line 2351, in execute
pandabi-backend | return self._execute_internal(
pandabi-backend | ^^^^^^^^^^^^^^^^^^^^^^^
pandabi-backend | File "/root/.cache/pypoetry/virtualenvs/pandasai-server-9TtSrW0h-py3.11/lib/python3.11/site-packages/sqlalchemy/orm/session.py", line 2236, in _execute_internal
pandabi-backend | result: Result[Any] = compile_state_cls.orm_execute_statement(
pandabi-backend | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
pandabi-backend | File "/root/.cache/pypoetry/virtualenvs/pandasai-server-9TtSrW0h-py3.11/lib/python3.11/site-packages/sqlalchemy/orm/context.py", line 293, in orm_execute_statement
pandabi-backend | result = conn.execute(
pandabi-backend | ^^^^^^^^^^^^^
pandabi-backend | File "/root/.cache/pypoetry/virtualenvs/pandasai-server-9TtSrW0h-py3.11/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 1418, in execute
pandabi-backend | return meth(
pandabi-backend | ^^^^^
pandabi-backend | File "/root/.cache/pypoetry/virtualenvs/pandasai-server-9TtSrW0h-py3.11/lib/python3.11/site-packages/sqlalchemy/sql/elements.py", line 515, in _execute_on_connection
pandabi-backend | return connection._execute_clauseelement(
pandabi-backend | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
pandabi-backend | File "/root/.cache/pypoetry/virtualenvs/pandasai-server-9TtSrW0h-py3.11/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 1640, in _execute_clauseelement
pandabi-backend | ret = self._execute_context(
pandabi-backend | ^^^^^^^^^^^^^^^^^^^^^^
pandabi-backend | File "/root/.cache/pypoetry/virtualenvs/pandasai-server-9TtSrW0h-py3.11/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 1846, in _execute_context
pandabi-backend | return self._exec_single_context(
pandabi-backend | ^^^^^^^^^^^^^^^^^^^^^^^^^^
pandabi-backend | File "/root/.cache/pypoetry/virtualenvs/pandasai-server-9TtSrW0h-py3.11/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 1986, in _exec_single_context
pandabi-backend | self._handle_dbapi_exception(
pandabi-backend | File "/root/.cache/pypoetry/virtualenvs/pandasai-server-9TtSrW0h-py3.11/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 2353, in _handle_dbapi_exception
pandabi-backend | raise sqlalchemy_exception.with_traceback(exc_info[2]) from e
pandabi-backend | File "/root/.cache/pypoetry/virtualenvs/pandasai-server-9TtSrW0h-py3.11/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 1967, in _exec_single_context
pandabi-backend | self.dialect.do_execute(
pandabi-backend | File "/root/.cache/pypoetry/virtualenvs/pandasai-server-9TtSrW0h-py3.11/lib/python3.11/site-packages/sqlalchemy/engine/default.py", line 924, in do_execute
pandabi-backend | cursor.execute(statement, parameters)
pandabi-backend | File "/root/.cache/pypoetry/virtualenvs/pandasai-server-9TtSrW0h-py3.11/lib/python3.11/site-packages/sqlalchemy/dialects/postgresql/asyncpg.py", line 572, in execute
pandabi-backend | self._adapt_connection.await_(
pandabi-backend | File "/root/.cache/pypoetry/virtualenvs/pandasai-server-9TtSrW0h-py3.11/lib/python3.11/site-packages/sqlalchemy/util/_concurrency_py3k.py", line 132, in await_only
pandabi-backend | return current.parent.switch(awaitable) # type: ignore[no-any-return,attr-defined] # noqa: E501
pandabi-backend | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
pandabi-backend | File "/root/.cache/pypoetry/virtualenvs/pandasai-server-9TtSrW0h-py3.11/lib/python3.11/site-packages/sqlalchemy/util/_concurrency_py3k.py", line 196, in greenlet_spawn
pandabi-backend | value = await result
pandabi-backend | ^^^^^^^^^^^^
pandabi-backend | File "/root/.cache/pypoetry/virtualenvs/pandasai-server-9TtSrW0h-py3.11/lib/python3.11/site-packages/sqlalchemy/dialects/postgresql/asyncpg.py", line 550, in _prepare_and_execute
pandabi-backend | self._handle_exception(error)
pandabi-backend | File "/root/.cache/pypoetry/virtualenvs/pandasai-server-9TtSrW0h-py3.11/lib/python3.11/site-packages/sqlalchemy/dialects/postgresql/asyncpg.py", line 501, in _handle_exception
pandabi-backend | self._adapt_connection._handle_exception(error)
pandabi-backend | File "/root/.cache/pypoetry/virtualenvs/pandasai-server-9TtSrW0h-py3.11/lib/python3.11/site-packages/sqlalchemy/dialects/postgresql/asyncpg.py", line 784, in _handle_exception
pandabi-backend | raise translated_error from error
pandabi-backend | sqlalchemy.exc.ProgrammingError: (sqlalchemy.dialects.postgresql.asyncpg.ProgrammingError) <class 'asyncpg.exceptions.UndefinedTableError'>: relation "user" does not exist
pandabi-backend | [SQL: SELECT anon_1.id, anon_1.email, anon_1.first_name, anon_1.created_at, anon_1.password, anon_1.verified, anon_1.last_name, anon_1.features, organization_1.id AS id_1, organization_1.name, organization_1.url, organization_1.is_default, organization_1.settings, organization_membership_1.id AS id_2, organization_membership_1.user_id, organization_membership_1.organization_id, organization_membership_1.role, organization_membership_1.verified AS verified_1
pandabi-backend | FROM (SELECT "user".id AS id, "user".email AS email, "user".first_name AS first_name, "user".created_at AS created_at, "user".password AS password, "user".verified AS verified, "user".last_name AS last_name, "user".features AS features
pandabi-backend | FROM "user"
pandabi-backend | LIMIT $1::INTEGER OFFSET $2::INTEGER) AS anon_1 LEFT OUTER JOIN organization_membership AS organization_membership_1 ON anon_1.id = organization_membership_1.user_id LEFT OUTER JOIN organization AS organization_1 ON organization_1.id = organization_membership_1.organization_id]
pandabi-backend | [parameters: (1, 0)]
pandabi-backend | (Background on this error at: https://sqlalche.me/e/20/f405)
pandabi-backend |
pandabi-backend | ERROR: Application startup failed. Exiting. | closed | 2024-10-24T10:12:29Z | 2024-12-19T14:34:42Z | https://github.com/sinaptik-ai/pandas-ai/issues/1408 | [
"duplicate"
] | hamxahbhatti | 3 |
thewhiteh4t/pwnedOrNot | api | 18 | UnicodeEncodeError: 'ascii' codec can't encode character '\xe9' in position 36: ordinal not in range(128) | **Describe the bug**
An error is displayed in the latest version: `UnicodeEncodeError: 'ascii' codec can't encode character '\xe9' in position 36: ordinal not in range(128)`
**To Reproduce**
Steps to reproduce the behavior:
`docker run -it thewhiteh4t/pwnedornot ./pwnedornot.py -e test@yopmail.com`
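For context, the failure can be reproduced in plain Python whenever text containing `\xe9` ('é') goes through the ASCII codec; a minimal sketch (the breach name here is hypothetical, not taken from the real output):

```python
# Encoding text that contains 'é' (\xe9) with the ASCII codec raises the same
# UnicodeEncodeError seen above; UTF-8 handles the character fine.
text = "Acme Résumé Services"  # hypothetical breach name containing '\xe9'

try:
    text.encode("ascii")
    raised = False
except UnicodeEncodeError:
    raised = True

print(raised)  # True: ASCII cannot represent 'é'
print(text.encode("utf-8").decode("utf-8") == text)  # True: UTF-8 round-trips
```

Forcing a UTF-8 locale inside the container (e.g. `PYTHONIOENCODING=utf-8` or `LANG=C.UTF-8`) is a common mitigation for this class of error; treat that as a general hint rather than a confirmed fix for this image.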
**Expected behavior**
No errors.
**Desktop (please complete the following information):**
- Docker version: `18.09.5-ce`
- pwnedOrNot version: `1.1.7` | closed | 2019-04-18T10:42:00Z | 2019-04-22T16:54:24Z | https://github.com/thewhiteh4t/pwnedOrNot/issues/18 | [] | johackim | 8
widgetti/solara | jupyter | 913 | Compatible with pytest-playwright 0.6.2 | Because we pin playwright, we use an old pytest-playwright:
```
...
Collecting pytest-playwright (from pytest-ipywidgets==1.42.0->pytest-ipywidgets==1.42.0)
Downloading pytest_playwright-0.5.2-py3-none-any.whl.metadata (1.5 kB)
...
```
However, the latest version (0.6.2) is not compatible with our pytest-ipywidgets library. | open | 2024-12-06T13:47:44Z | 2025-01-30T09:57:23Z | https://github.com/widgetti/solara/issues/913 | [] | maartenbreddels | 1
saulpw/visidata | pandas | 2,138 | [sidebar] Sidebar crashes if screen is resized | **Small description**
Sidebar crashes if screen is resized
**Expected result**
No crashes.
**Actual result with screenshot**
```
Traceback (most recent call last):
File "/usr/bin/vd", line 6, in <module>
visidata.main.vd_cli()
File "/usr/lib/python3.9/site-packages/visidata/main.py", line 378, in vd_cli
rc = main_vd()
File "/usr/lib/python3.9/site-packages/visidata/main.py", line 338, in main_vd
run(vd.sheets[0])
File "/usr/lib/python3.9/site-packages/visidata/vdobj.py", line 33, in _vdfunc
return getattr(visidata.vd, func.__name__)(*args, **kwargs)
File "/usr/lib/python3.9/site-packages/visidata/extensible.py", line 65, in wrappedfunc
return oldfunc(*args, **kwargs)
File "/usr/lib/python3.9/site-packages/visidata/extensible.py", line 65, in wrappedfunc
return oldfunc(*args, **kwargs)
File "/usr/lib/python3.9/site-packages/visidata/mainloop.py", line 303, in run
ret = vd.mainloop(scr)
File "/usr/lib/python3.9/site-packages/visidata/extensible.py", line 65, in wrappedfunc
return oldfunc(*args, **kwargs)
File "/usr/lib/python3.9/site-packages/visidata/mainloop.py", line 178, in mainloop
self.draw_all()
File "/usr/lib/python3.9/site-packages/visidata/mainloop.py", line 129, in draw_all
vd.drawSidebar(vd.scrFull, vd.activeSheet)
File "/usr/lib/python3.9/site-packages/visidata/sidebar.py", line 45, in drawSidebar
return sheet.drawSidebarText(scr, text=sheet.current_sidebar, overflowmsg=overflowmsg, bottommsg=bottommsg)
File "/usr/lib/python3.9/site-packages/visidata/sidebar.py", line 110, in drawSidebarText
clipdraw(sidebarscr, h-1, winw-dispwidth(bottommsg)-4, '|'+bottommsg+'|[:]', cattr)
File "/usr/lib/python3.9/site-packages/visidata/cliptext.py", line 206, in clipdraw
assert x >= 0, x
AssertionError: -1
```
https://asciinema.org/a/U6nrWmSZxAVnMxPBeAaWG4I25
**Steps to reproduce with sample data and a .vd**
Open VisiData with the sidebar on, then resize the screen (in this case horizontally); it can produce this stack trace.
**Additional context**
VisiData version: latest develop. Python version: 3.9.2.
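For illustration, the assertion in the traceback fires because `winw - dispwidth(bottommsg) - 4` goes negative once the window is narrow enough; a clamped version of that computation (a sketch of the idea, not the actual fix) avoids passing a negative x:

```python
# Hypothetical sketch of the x computation that trips the assert in clipdraw:
# clamping at 0 keeps the draw position on-screen instead of passing -1.
def bottommsg_x(winw: int, msg_width: int) -> int:
    return max(0, winw - msg_width - 4)

print(bottommsg_x(80, 20))  # 56: normal-width terminal
print(bottommsg_x(10, 20))  # 0: after shrinking, clamped instead of -14
```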
| closed | 2023-11-27T01:02:20Z | 2023-11-28T00:04:09Z | https://github.com/saulpw/visidata/issues/2138 | [
"bug",
"fixed"
] | frosencrantz | 4 |
vitalik/django-ninja | pydantic | 1,347 | [BUG] Same path apis with different method and async sync are mixed then all considered as async when testing | **Describe the bug**
If operations with different methods on the same path mix async and sync handlers, they are all treated as async when testing.
I use async for the GET operation and sync for the POST, DELETE, PUT, and PATCH operations, but I got an error when testing.
Example code is below:
```python
from ninja import Router
from ninja.testing import TestClient


def test_bug():
    router = Router()

    @router.get("/test/")
    async def test_get(request):
        return {"test": "test"}

    @router.post("/test/")
    def test_post(request):
        return {"test": "test"}

    client = TestClient(router)
    response = client.post("/test/")
```
and it throws an error that says:
```
AttributeError sys:1: RuntimeWarning: coroutine 'PathView._async_view' was never awaited
```
So I found `PathView._async_view` via
```python
client.urls[0].callback # also for client.urls[1].callback
```
But I found that both callbacks are `PathView._async_view`, even for the sync view (POST method).
The reason is that when operations are added to a `Router()` for the same path, even one async operation makes them all considered async:
```python
class PathView:
    def __init__(self) -> None:
        self.operations: List[Operation] = []
        self.is_async = False  # if at least one operation is async - will become True <---------- Here
        self.url_name: Optional[str] = None

    def add_operation(
        self,
        path: str,
        methods: List[str],
        view_func: Callable,
        *,
        auth: Optional[Union[Sequence[Callable], Callable, NOT_SET_TYPE]] = NOT_SET,
        throttle: Union[BaseThrottle, List[BaseThrottle], NOT_SET_TYPE] = NOT_SET,
        response: Any = NOT_SET,
        operation_id: Optional[str] = None,
        summary: Optional[str] = None,
        description: Optional[str] = None,
        tags: Optional[List[str]] = None,
        deprecated: Optional[bool] = None,
        by_alias: bool = False,
        exclude_unset: bool = False,
        exclude_defaults: bool = False,
        exclude_none: bool = False,
        url_name: Optional[str] = None,
        include_in_schema: bool = True,
        openapi_extra: Optional[Dict[str, Any]] = None,
    ) -> Operation:
        if url_name:
            self.url_name = url_name

        OperationClass = Operation
        if is_async(view_func):
            self.is_async = True  # <----------------------- Here
            OperationClass = AsyncOperation

        operation = OperationClass(
            path,
            methods,
            view_func,
            auth=auth,
            throttle=throttle,
            response=response,
            operation_id=operation_id,
            summary=summary,
            description=description,
            tags=tags,
            deprecated=deprecated,
            by_alias=by_alias,
            exclude_unset=exclude_unset,
            exclude_defaults=exclude_defaults,
            exclude_none=exclude_none,
            include_in_schema=include_in_schema,
            url_name=url_name,
            openapi_extra=openapi_extra,
        )

        self.operations.append(operation)
        view_func._ninja_operation = operation  # type: ignore
        return operation
```
I'm having trouble because of that.
Is this a bug, or is there a purpose for it?
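The aggregation above can be modeled standalone in a few lines (a toy sketch, independent of django-ninja itself): one async operation flips the flag for every method registered on the path.

```python
import inspect


class PathViewModel:
    """Toy model of PathView's is_async aggregation (names are illustrative)."""

    def __init__(self):
        self.is_async = False
        self.operations = []

    def add_operation(self, view_func):
        if inspect.iscoroutinefunction(view_func):
            self.is_async = True  # one async view marks the whole path async
        self.operations.append(view_func)


async def get_view(request):
    return {"test": "test"}


def post_view(request):
    return {"test": "test"}


pv = PathViewModel()
pv.add_operation(post_view)  # sync POST
pv.add_operation(get_view)   # async GET
print(pv.is_async)  # True: the sync POST is now dispatched through the async path too
```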
**Versions (please complete the following information):**
- Python version: 3.11
- Django version: 4.2.5
- Django-Ninja version: 1.2.2
- Pydantic version: 2.8.2
| open | 2024-11-27T07:07:27Z | 2024-12-06T10:02:13Z | https://github.com/vitalik/django-ninja/issues/1347 | [] | LeeJB-48 | 3 |
PokeAPI/pokeapi | graphql | 275 | wormadam name mismatch from evolution chain | The names for wormadam are inconsistent between the /pokemon and /evolution-chain endpoints. The name is shown as wormadam in the evolution chain, but pokemon/413 shows wormadam-plant. I was building an evolution tree and got a 404 when requesting pokemon/wormadam.
see:
http://pokeapi.co/api/v2/evolution-chain/213/
http://pokeapi.co/api/v2/pokemon/413
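For anyone hitting the same 404: the evolution chain nodes are species, while `/pokemon` expects a variety name, so one approach is to resolve the species' default variety first. A sketch of that lookup (the payload shape is assumed from the v2 `/pokemon-species` endpoint, so verify against the live API):

```python
# Hypothetical excerpt of /pokemon-species/413: the species "wormadam" lists
# its concrete varieties, one of which is flagged as the default.
species = {
    "name": "wormadam",
    "varieties": [
        {"is_default": True, "pokemon": {"name": "wormadam-plant"}},
        {"is_default": False, "pokemon": {"name": "wormadam-sandy"}},
        {"is_default": False, "pokemon": {"name": "wormadam-trash"}},
    ],
}


def default_variety(species_data: dict) -> str:
    # Map an evolution-chain species name to a /pokemon-compatible name.
    return next(
        v["pokemon"]["name"]
        for v in species_data["varieties"]
        if v["is_default"]
    )


print(default_variety(species))  # wormadam-plant
```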
| closed | 2016-10-27T18:38:19Z | 2016-10-28T13:44:50Z | https://github.com/PokeAPI/pokeapi/issues/275 | [] | hshtgbrendo | 1 |
youfou/wxpy | api | 396 | Error when sending an image | File "<console>", line 1, in <module>
File "C:\Users\[UserName]\AppData\Local\Programs\Python\Python37\lib\site-packages\wxpy\api\chats\chat.py", line 54, in wrapped
ret = do_send()
File "C:\Users\[UserName]\AppData\Local\Programs\Python\Python37\lib\site-packages\wxpy\utils\misc.py", line 72, in wrapped
smart_map(check_response_body, ret)
File "C:\Users\[UserName]\AppData\Local\Programs\Python\Python37\lib\site-packages\wxpy\utils\misc.py", line 207, in smart_map
return func(i, *args, **kwargs)
File "C:\Users\[UserName]\AppData\Local\Programs\Python\Python37\lib\site-packages\wxpy\utils\misc.py", line 53, in check_response_body
raise ResponseError(err_code=err_code, err_msg=err_msg)
wxpy.exceptions.ResponseError: err_code: 1; err_msg: | open | 2019-07-02T07:32:39Z | 2019-10-24T06:21:05Z | https://github.com/youfou/wxpy/issues/396 | [] | remiliacn | 4 |
littlecodersh/ItChat | api | 451 | syncCheck returns retcode=0 and selector=3 | I implemented a feature that posts images and text on a schedule. I can see the data is sent successfully, but after calling syncCheck it returns retcode=0 and selector=3. This causes every syncCheck call to return immediately instead of waiting 25 seconds. Is there a list of possible selector values, or does anyone know the cause? Thanks. | closed | 2017-07-17T21:41:55Z | 2017-09-20T02:48:40Z | https://github.com/littlecodersh/ItChat/issues/451 | [
"question"
] | tryuefang | 2 |
sergree/matchering | numpy | 28 | Hardware assisted virtualization and data execution protection must be enabled | Hi, I get this error, but I do have virtualization turned on,
because I'm running VMware and BlueStacks on my own computer.
https://imgur.com/a/DMgrSjn | closed | 2021-02-04T04:56:01Z | 2022-08-14T09:28:05Z | https://github.com/sergree/matchering/issues/28 | [] | johnnygodsa | 1 |
matplotlib/mplfinance | matplotlib | 156 | Tight does not affect the headline | The new layout is certainly more compact, but somehow that doesn't seem to apply to the headline.
```
mpf.plot(stock,
type='candle',
volume=volume,
addplot=add_plots,
title=index_name + ' : ' + ticker + ' (' + datum + ')',
ylabel='Kurs',
ylabel_lower='Volumen',
style='yahoo',
figscale=3.0,
savefig=save,
tight_layout=True,
closefig=True)
```

| closed | 2020-06-08T06:57:35Z | 2020-06-08T18:33:06Z | https://github.com/matplotlib/mplfinance/issues/156 | [
"enhancement",
"question",
"released"
] | fxhuhn | 2 |
FactoryBoy/factory_boy | sqlalchemy | 1,085 | Default kwargs of SubFactory does not work as expected | #### Description
When instantiating a Factory object with a nested SubFactory, in some circumstances the input provided by the user does not override default values as it should.
#### To Reproduce
##### Model / Factory code
```python
from dataclasses import dataclass
from factory import Factory, SubFactory, SelfAttribute
@dataclass
class Company:
name: str
@dataclass
class Department:
name: str
company: Company
@dataclass
class Employee:
name: str
department: Department
company: Company
class CompanyFactory(Factory):
class Meta:
model = Company
name = "company"
class DepartmentFactory(Factory):
class Meta:
model = Department
company = SubFactory(CompanyFactory)
name = "department"
class EmployeeFactory(Factory):
class Meta:
model = Employee
company = SubFactory(CompanyFactory)
department = SubFactory(DepartmentFactory, company=SelfAttribute("..company"))
name = "employee"
```
##### The issue
Overriding the company's name does not work; the resulting object still links to the default company instead.
```pycon
>>> # This does not work
>>> print(EmployeeFactory(department__company__name="company_2"))
Employee(name='employee', department=Department(name='department', company=Company(name='company')), company=Company(name='company'))
>>> # This still works
>>> print(EmployeeFactory(department__company=CompanyFactory.create(name="company-2")))
Employee(name='employee', department=Department(name='department', company=Company(name='company-2')), company=Company(name='company'))
```
If the `company=SelfAttribute("..company")` default attribute is removed, it works as expected.
#### Notes
I think the problem lies in how SubFactory resolves default values: it overrides default values with user-provided values only when there's an exact key match:
```python
def unroll_context(self, instance, step, context):
full_context = dict()
full_context.update(self._defaults)
full_context.update(context)
...
```
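A stand-alone sketch (plain dicts, no factory_boy involved — the key names merely mirror the example above) of why this exact-key merge loses nested overrides:

```python
# The SubFactory default is keyed "company"; the caller's override is keyed
# "company__name". A plain dict merge only displaces exact key matches, so
# both keys survive the merge.
defaults = {"company": "<SelfAttribute('..company') default>"}
context = {"company__name": "company_2"}

full_context = {}
full_context.update(defaults)
full_context.update(context)

print(full_context)
# The surviving "company" default later shadows the nested
# "company__name" override when the SubFactory is evaluated.
```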
We might be able to fix this behavior by removing all default keys that are a prefix of any key existing in context, like this:
```python
def unroll_context(self, instance, step, context):
full_context = dict()
full_context.update(self._defaults)
full_context.update(context)
full_context = {
k: v for k, v in full_context.items()
        if k not in self._defaults or not any(ck.startswith(k) for ck in context)
}
...
``` | closed | 2024-08-15T02:21:25Z | 2024-08-19T01:58:01Z | https://github.com/FactoryBoy/factory_boy/issues/1085 | [] | tu-pm | 2 |
jupyter/nbviewer | jupyter | 431 | Unicode notebook rendering pretty bad | Notebooks with unicode are rendering poorly. This doesn't occur with straight `nbconvert`, so it must be on this side.
Cursory exploration indicates that changing [this line](https://github.com/jupyter/nbviewer/blob/master/nbviewer/handlers.py#L530) to:
``` python
html = self.render_template(
"formats/%s.html" % format,
body=u'{}'.format(nbhtml),
```
will fix the issue.
> Hi Dami\u00e1n Avila ;-)
>
> That is really cool, but I noticed that you used unicode representations to get a properly rendered version of your HTML. Did you do that by hand or do you have an automated way of substituting the accents for unicode code?
>
> Without that the non-English slides look awful:
>
> http://nbviewer.ipython.org/format/slides/gist/ocefpaf/cf023a8db7097bd9fe92
>
> Thanks,
>
> -Filipe
| closed | 2015-03-20T20:59:32Z | 2015-03-22T03:02:43Z | https://github.com/jupyter/nbviewer/issues/431 | [] | bollwyvl | 4 |
elliotgao2/toapi | api | 68 | Minor bug when passing url port data to flask | Bug output
~~~ bash
$ toapi run
2017/12/17 19:07:28 [Serving ] OK http://127.0.0.1:5000
Traceback (most recent call last):
File "/home/user/.local/share/virtualenvs/toapi_test-UdiKVlKi/bin/toapi", line 11, in <module>
sys.exit(cli())
File "/home/user/.local/share/virtualenvs/toapi_test-UdiKVlKi/lib/python3.5/site-packages/click/core.py", line 722, in __call__
return self.main(*args, **kwargs)
File "/home/user/.local/share/virtualenvs/toapi_test-UdiKVlKi/lib/python3.5/site-packages/click/core.py", line 697, in main
rv = self.invoke(ctx)
File "/home/user/.local/share/virtualenvs/toapi_test-UdiKVlKi/lib/python3.5/site-packages/click/core.py", line 1066, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/home/user/.local/share/virtualenvs/toapi_test-UdiKVlKi/lib/python3.5/site-packages/click/core.py", line 895, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/home/user/.local/share/virtualenvs/toapi_test-UdiKVlKi/lib/python3.5/site-packages/click/core.py", line 535, in invoke
return callback(*args, **kwargs)
File "/home/user/.local/share/virtualenvs/toapi_test-UdiKVlKi/lib/python3.5/site-packages/toapi/cli.py", line 61, in run
app.api.serve(ip=ip, port=port)
File "/home/user/.local/share/virtualenvs/toapi_test-UdiKVlKi/lib/python3.5/site-packages/toapi/api.py", line 42, in serve
self.app.run(ip, port, debug=False, **options)
File "/home/user/.local/share/virtualenvs/toapi_test-UdiKVlKi/lib/python3.5/site-packages/flask/app.py", line 841, in run
run_simple(host, port, self, **options)
File "/home/user/.local/share/virtualenvs/toapi_test-UdiKVlKi/lib/python3.5/site-packages/werkzeug/serving.py", line 733, in run_simple
raise TypeError('port must be an integer')
TypeError: port must be an integer
~~~ | closed | 2017-12-17T18:11:50Z | 2017-12-18T01:15:03Z | https://github.com/elliotgao2/toapi/issues/68 | [] | Daniel-at-github | 0 |
dynaconf/dynaconf | django | 303 | [RFC] support for embedding settings in a library | I'm trying this right now but struggling to understand how to configure it
The [list of options here](https://dynaconf.readthedocs.io/en/latest/guides/configuration.html#configuration-options) is quite confusing (as well as: the table extends off the rhs of the page but Chrome does not show a horizontal scrollbar).
in `clitool/conf/__init__.py` I have instantiated:
```python
settings = LazySettings(
ENVVAR_PREFIX_FOR_DYNACONF='CLITOOL',
SETTINGS_FILE_FOR_DYNACONF="clitool.ini",
)
settings.update({
"MAX_LEVEL": 10,
"ALLOW_THINGS": False,
})
```
I can now:
```python
from clitool.conf import settings
settings.MAX_LEVEL
>>> 10
```
I would then like users of `clitool` to be able to create a `clitool.ini` in the root of their own project and have values loaded from there override the defaults I set directly in `clitool.conf.settings` when they run `clitool`.
I tried creating such a file but it does not seem to be loaded.
I'm sure this must be possible with dynaconf but it's not clear how from the docs.
One problem I have is to know what the file paths for `SETTINGS_FILE_FOR_DYNACONF` are relative to. Are these files in my project (which users will install from pypi)? Or relative to the CWD when the code is run?
How to specify the format of the file? By file extension only? Or can I have a `.clitool` file in TOML format for example?
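Independent of dynaconf's specifics (which the maintainers would need to confirm), the behavior being asked for — library-embedded defaults overridden by a `clitool.ini` found in the caller's current working directory — can be sketched with the stdlib alone; all names here are hypothetical:

```python
import configparser
from pathlib import Path

# Defaults embedded in the library; configparser values are strings.
DEFAULTS = {"max_level": "10", "allow_things": "false"}

def load_settings(cwd=None):
    """Return DEFAULTS, overridden by a clitool.ini in the runtime CWD."""
    settings = dict(DEFAULTS)
    ini = Path(cwd or Path.cwd()) / "clitool.ini"  # relative to the *runtime* CWD
    if ini.exists():
        parser = configparser.ConfigParser()
        parser.read(ini)
        if parser.has_section("clitool"):
            settings.update(parser["clitool"])  # user file wins over defaults
    return settings

print(load_settings())  # defaults unless ./clitool.ini overrides them
```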
taken from #155 | closed | 2020-02-27T21:16:46Z | 2020-09-12T04:13:58Z | https://github.com/dynaconf/dynaconf/issues/303 | [
"Not a Bug",
"RFC",
"HIGH",
"redhat"
] | rochacbruno | 1 |
pytest-dev/pytest-cov | pytest | 581 | Debug advice on intermittent FileNotFound error when collecting distributed coverage data in Jenkins | # Summary
We have some random Jenkins build failures. Some pytest runs fail (after all actual tests succeed) with an `INTERNALERROR` when coverage data is collected from the `pytest-xdist` workers. I don't know how to proceed, or what I can do to get more debug output. Thanks in advance!
## Expected vs actual result
Some times the result is as expected (coverage data is collected just fine), some times it is not.
# Reproducer
## Versions
We use Python 3.9.10 on GNU/Linux. Excerpt from our `setup.cfg`:
```cfg
[options.extras_require]
test =
coverage >= 7.1, < 7.2
pytest >= 7.2, < 7.3
pytest-metadata != 2.0.0
pytest-cov >= 4.0, < 4.1
pytest-html >= 3.2, < 3.3
pytest-sugar >= 0.9, < 0.10
pytest-xdist >= 3.1, < 3.2
```
## Config
Excerpt from our (anonymized) `pyproject.toml`
```toml
[tool.pytest.ini_options]
addopts = "-q --self-contained-html --css=tests/fixtures/report.css"
markers = [
"unit: marks tests as unit tests. Will be added automatically if not integration, verification or validation test.",
"integration: marks tests as integration test.",
"verification: marks tests as verification test.",
"validation: marks tests as validation test.",
"slow: marks tests as slow (runtime > 5min).",
"external: marks tests that require external inputs.",
]
testpaths = [
"tests",
]
[tool.coverage.run]
omit = [
"src/proj/__init__.py",
"src/proj/_version.py",
]
source = [
"src/proj/",
]
[tool.coverage.report]
exclude_lines = [
"pragma: no cover",
"def __repr__",
"def __str__",
]
```
## Code
Unfortunately, I can't provide code. But here's the (anonymized) stack trace from the error:
```
INTERNALERROR> config = <_pytest.config.Config object at 0x7fb707e12340>
INTERNALERROR> doit = <function _main at 0x7fb708b32dc0>
INTERNALERROR>
INTERNALERROR> def wrap_session(
INTERNALERROR> config: Config, doit: Callable[[Config, "Session"], Optional[Union[int, ExitCode]]]
INTERNALERROR> ) -> Union[int, ExitCode]:
INTERNALERROR> """Skeleton command line program."""
INTERNALERROR> session = Session.from_config(config)
INTERNALERROR> session.exitstatus = ExitCode.OK
INTERNALERROR> initstate = 0
INTERNALERROR> try:
INTERNALERROR> try:
INTERNALERROR> config._do_configure()
INTERNALERROR> initstate = 1
INTERNALERROR> config.hook.pytest_sessionstart(session=session)
INTERNALERROR> initstate = 2
INTERNALERROR> > session.exitstatus = doit(config, session) or 0
INTERNALERROR>
INTERNALERROR> .env/lib/python3.9/site-packages/_pytest/main.py:270:
INTERNALERROR> _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
INTERNALERROR>
INTERNALERROR> config = <_pytest.config.Config object at 0x7fb707e12340>
INTERNALERROR> session = <Session job exitstatus=<ExitCode.INTERNAL_ERROR: 3> testsfailed=0 testscollected=949>
INTERNALERROR>
INTERNALERROR> def _main(config: Config, session: "Session") -> Optional[Union[int, ExitCode]]:
INTERNALERROR> """Default command line protocol for initialization, session,
INTERNALERROR> running tests and reporting."""
INTERNALERROR> config.hook.pytest_collection(session=session)
INTERNALERROR> > config.hook.pytest_runtestloop(session=session)
INTERNALERROR>
INTERNALERROR> .env/lib/python3.9/site-packages/_pytest/main.py:324:
INTERNALERROR> _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
INTERNALERROR>
INTERNALERROR> self = <_HookCaller 'pytest_runtestloop'>, args = ()
INTERNALERROR> kwargs = {'session': <Session job exitstatus=<ExitCode.INTERNAL_ERROR: 3> testsfailed=0 testscollected=949>}
INTERNALERROR> argname = 'session', firstresult = True
INTERNALERROR>
INTERNALERROR> def __call__(self, *args, **kwargs):
INTERNALERROR> if args:
INTERNALERROR> raise TypeError("hook calling supports only keyword arguments")
INTERNALERROR> assert not self.is_historic()
INTERNALERROR>
INTERNALERROR> # This is written to avoid expensive operations when not needed.
INTERNALERROR> if self.spec:
INTERNALERROR> for argname in self.spec.argnames:
INTERNALERROR> if argname not in kwargs:
INTERNALERROR> notincall = tuple(set(self.spec.argnames) - kwargs.keys())
INTERNALERROR> warnings.warn(
INTERNALERROR> "Argument(s) {} which are declared in the hookspec "
INTERNALERROR> "can not be found in this hook call".format(notincall),
INTERNALERROR> stacklevel=2,
INTERNALERROR> )
INTERNALERROR> break
INTERNALERROR>
INTERNALERROR> firstresult = self.spec.opts.get("firstresult")
INTERNALERROR> else:
INTERNALERROR> firstresult = False
INTERNALERROR>
INTERNALERROR> > return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult)
INTERNALERROR>
INTERNALERROR> .env/lib/python3.9/site-packages/pluggy/_hooks.py:265:
INTERNALERROR> _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
INTERNALERROR>
INTERNALERROR> self = <_pytest.config.PytestPluginManager object at 0x7fb712d80730>
INTERNALERROR> hook_name = 'pytest_runtestloop'
INTERNALERROR> methods = [<HookImpl plugin_name='main', plugin=<module '_pytest.main' from '/opt/data/jenkins/project/workspace/job...b7078ea6d0>>, <HookImpl plugin_name='logging-plugin', plugin=<_pytest.logging.LoggingPlugin object at 0x7fb6a435c700>>]
INTERNALERROR> kwargs = {'session': <Session job exitstatus=<ExitCode.INTERNAL_ERROR: 3> testsfailed=0 testscollected=949>}
INTERNALERROR> firstresult = True
INTERNALERROR>
INTERNALERROR> def _hookexec(self, hook_name, methods, kwargs, firstresult):
INTERNALERROR> # called from all hookcaller instances.
INTERNALERROR> # enable_tracing will set its own wrapping function at self._inner_hookexec
INTERNALERROR> > return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
INTERNALERROR>
INTERNALERROR> .env/lib/python3.9/site-packages/pluggy/_manager.py:80:
INTERNALERROR> _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
INTERNALERROR>
INTERNALERROR> self = <pytest_cov.plugin.CovPlugin object at 0x7fb7078ea6d0>
INTERNALERROR> session = <Session job exitstatus=<ExitCode.INTERNAL_ERROR: 3> testsfailed=0 testscollected=949>
INTERNALERROR>
INTERNALERROR> @pytest.hookimpl(hookwrapper=True)
INTERNALERROR> def pytest_runtestloop(self, session):
INTERNALERROR> yield
INTERNALERROR>
INTERNALERROR> if self._disabled:
INTERNALERROR> return
INTERNALERROR>
INTERNALERROR> compat_session = compat.SessionWrapper(session)
INTERNALERROR>
INTERNALERROR> self.failed = bool(compat_session.testsfailed)
INTERNALERROR> if self.cov_controller is not None:
INTERNALERROR> > self.cov_controller.finish()
INTERNALERROR>
INTERNALERROR> .env/lib/python3.9/site-packages/pytest_cov/plugin.py:297:
INTERNALERROR> _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
INTERNALERROR>
INTERNALERROR> self = <pytest_cov.engine.DistMaster object at 0x7fb7078b8160>, args = ()
INTERNALERROR> kwargs = {}
INTERNALERROR> original_cwd = '/opt/data/jenkins/project/workspace/job'
INTERNALERROR>
INTERNALERROR> @functools.wraps(meth)
INTERNALERROR> def ensure_topdir_wrapper(self, *args, **kwargs):
INTERNALERROR> try:
INTERNALERROR> original_cwd = os.getcwd()
INTERNALERROR> except OSError:
INTERNALERROR> # Looks like it's gone, this is non-ideal because a side-effect will
INTERNALERROR> # be introduced in the tests here but we can't do anything about it.
INTERNALERROR> original_cwd = None
INTERNALERROR> os.chdir(self.topdir)
INTERNALERROR> try:
INTERNALERROR> > return meth(self, *args, **kwargs)
INTERNALERROR>
INTERNALERROR> .env/lib/python3.9/site-packages/pytest_cov/engine.py:44:
INTERNALERROR> _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
INTERNALERROR>
INTERNALERROR> self = <pytest_cov.engine.DistMaster object at 0x7fb7078b8160>
INTERNALERROR>
INTERNALERROR> @_ensure_topdir
INTERNALERROR> def finish(self):
INTERNALERROR> """Combines coverage data and sets the list of coverage objects to report on."""
INTERNALERROR>
INTERNALERROR> # Combine all the suffix files into the data file.
INTERNALERROR> > self.cov.stop()
INTERNALERROR>
INTERNALERROR> .env/lib/python3.9/site-packages/pytest_cov/engine.py:338:
INTERNALERROR> _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
INTERNALERROR>
INTERNALERROR> self = <coverage.control.Coverage object at 0x7fb7078b8220>, data_paths = None
INTERNALERROR> strict = False, keep = False
INTERNALERROR>
INTERNALERROR> def combine(
INTERNALERROR> self,
INTERNALERROR> data_paths: Optional[Iterable[str]] = None,
INTERNALERROR> strict: bool = False,
INTERNALERROR> keep: bool = False
INTERNALERROR> ) -> None:
INTERNALERROR> """Combine together a number of similarly-named coverage data files.
INTERNALERROR>
INTERNALERROR> All coverage data files whose name starts with `data_file` (from the
INTERNALERROR> coverage() constructor) will be read, and combined together into the
INTERNALERROR> current measurements.
INTERNALERROR>
INTERNALERROR> `data_paths` is a list of files or directories from which data should
INTERNALERROR> be combined. If no list is passed, then the data files from the
INTERNALERROR> directory indicated by the current data file (probably the current
INTERNALERROR> directory) will be combined.
INTERNALERROR>
INTERNALERROR> If `strict` is true, then it is an error to attempt to combine when
INTERNALERROR> there are no data files to combine.
INTERNALERROR>
INTERNALERROR> If `keep` is true, then original input data files won't be deleted.
INTERNALERROR>
INTERNALERROR> .. versionadded:: 4.0
INTERNALERROR> The `data_paths` parameter.
INTERNALERROR>
INTERNALERROR> .. versionadded:: 4.3
INTERNALERROR> The `strict` parameter.
INTERNALERROR>
INTERNALERROR> .. versionadded: 5.5
INTERNALERROR> The `keep` parameter.
INTERNALERROR> """
INTERNALERROR> self._init()
INTERNALERROR> self._init_data(suffix=None)
INTERNALERROR> self._post_init()
INTERNALERROR> self.get_data()
INTERNALERROR>
INTERNALERROR> > combine_parallel_data(
INTERNALERROR> self._data,
INTERNALERROR> aliases=self._make_aliases(),
INTERNALERROR> data_paths=data_paths,
INTERNALERROR> strict=strict,
INTERNALERROR> keep=keep,
INTERNALERROR> message=self._message,
INTERNALERROR> )
INTERNALERROR>
INTERNALERROR> .env/lib/python3.9/site-packages/coverage/control.py:790:
INTERNALERROR> _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
INTERNALERROR>
INTERNALERROR> data = <CoverageData @0x7fb6f419b970 _no_disk=False _basename='/opt/data/jenkins/project/workspace/job..._have_used=True _has_lines=False _has_arcs=True _current_context=None _current_context_id=None _query_context_ids=None>
INTERNALERROR> aliases = <coverage.files.PathAliases object at 0x7fb6f419b3d0>
INTERNALERROR> data_paths = None, strict = False, keep = False
INTERNALERROR> message = <bound method Coverage._message of <coverage.control.Coverage object at 0x7fb7078b8220>>
INTERNALERROR>
INTERNALERROR> def combine_parallel_data(
INTERNALERROR> data: CoverageData,
INTERNALERROR> aliases: Optional[PathAliases] = None,
INTERNALERROR> data_paths: Optional[Iterable[str]] = None,
INTERNALERROR> strict: bool = False,
INTERNALERROR> keep: bool = False,
INTERNALERROR> message: Optional[Callable[[str], None]] = None,
INTERNALERROR> ) -> None:
INTERNALERROR> """Combine a number of data files together.
INTERNALERROR>
INTERNALERROR> `data` is a CoverageData.
INTERNALERROR>
INTERNALERROR> Treat `data.filename` as a file prefix, and combine the data from all
INTERNALERROR> of the data files starting with that prefix plus a dot.
INTERNALERROR>
INTERNALERROR> If `aliases` is provided, it's a `PathAliases` object that is used to
INTERNALERROR> re-map paths to match the local machine's.
INTERNALERROR>
INTERNALERROR> If `data_paths` is provided, it is a list of directories or files to
INTERNALERROR> combine. Directories are searched for files that start with
INTERNALERROR> `data.filename` plus dot as a prefix, and those files are combined.
INTERNALERROR>
INTERNALERROR> If `data_paths` is not provided, then the directory portion of
INTERNALERROR> `data.filename` is used as the directory to search for data files.
INTERNALERROR>
INTERNALERROR> Unless `keep` is True every data file found and combined is then deleted
INTERNALERROR> from disk. If a file cannot be read, a warning will be issued, and the
INTERNALERROR> file will not be deleted.
INTERNALERROR>
INTERNALERROR> If `strict` is true, and no files are found to combine, an error is
INTERNALERROR> raised.
INTERNALERROR>
INTERNALERROR> `message` is a function to use for printing messages to the user.
INTERNALERROR>
INTERNALERROR> """
INTERNALERROR> files_to_combine = combinable_files(data.base_filename(), data_paths)
INTERNALERROR>
INTERNALERROR> if strict and not files_to_combine:
INTERNALERROR> raise NoDataError("No data to combine")
INTERNALERROR>
INTERNALERROR> file_hashes = set()
INTERNALERROR> combined_any = False
INTERNALERROR>
INTERNALERROR> for f in files_to_combine:
INTERNALERROR> if f == data.data_filename():
INTERNALERROR> # Sometimes we are combining into a file which is one of the
INTERNALERROR> # parallel files. Skip that file.
INTERNALERROR> if data._debug.should('dataio'):
INTERNALERROR> data._debug.write(f"Skipping combining ourself: {f!r}")
INTERNALERROR> continue
INTERNALERROR>
INTERNALERROR> try:
INTERNALERROR> rel_file_name = os.path.relpath(f)
INTERNALERROR> except ValueError:
INTERNALERROR> # ValueError can be raised under Windows when os.getcwd() returns a
INTERNALERROR> # folder from a different drive than the drive of f, in which case
INTERNALERROR> # we print the original value of f instead of its relative path
INTERNALERROR> rel_file_name = f
INTERNALERROR>
INTERNALERROR> > with open(f, "rb") as fobj:
INTERNALERROR> E FileNotFoundError: [Errno 2] No such file or directory: '/opt/data/jenkins/project/workspace/job/.coverage.myhost.25783.340214'
INTERNALERROR>
INTERNALERROR> .env/lib/python3.9/site-packages/coverage/data.py:148: FileNotFoundError
```
# What has been tried to solve the problem
Adding `--full-trace` option to pytest in our Jenkinsfile.
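A next step worth trying (an assumption based on coverage.py's documented debug facilities, not something verified in this environment): turn on coverage's own data-file tracing, e.g. in `pyproject.toml`:

```toml
# Hypothetical addition for debugging — coverage.py's `debug` run option
# prints a line for every data-file read/write ("dataio") and tags output
# with process ids ("pids"), which helps spot which xdist worker's
# .coverage.* file disappears before combining.
[tool.coverage.run]
debug = ["dataio", "pids"]
```

Per the coverage.py docs, the same topics can also be enabled without editing files via the `COVERAGE_DEBUG=dataio,pids` environment variable.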
| open | 2023-02-13T15:15:55Z | 2025-01-28T12:09:18Z | https://github.com/pytest-dev/pytest-cov/issues/581 | [] | derhintze | 2 |
CorentinJ/Real-Time-Voice-Cloning | tensorflow | 494 | Error: Model Files not found | Might be a silly issue but I'm somewhat new to this. I've managed to get all the repositories running properly but I've run into a roadblock where my toolbox doesn't recognize the pre-trained models I've downloaded and placed into the repo's root folder. I'm not sure if I've placed them in the wrong location or if I'm doing something else incorrectly.
I've downloaded the model .zip and placed the "saved_models" from the zip's encoder folder inside the root's encoder folder. I've done the same with the synthesizer and vocoder. However, the toolbox never seems to recognize that they are there. Should I be organizing the files differently? | closed | 2020-08-14T14:54:25Z | 2020-08-14T15:20:54Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/494 | [] | malewpro | 2 |
jupyterhub/zero-to-jupyterhub-k8s | jupyter | 3,300 | Document how to enable FUSE safely | In https://github.com/pangeo-data/pangeo/issues/190, @yuvipanda has documented how to enable FUSE safely in the z2jh context. That issue has served as a reference for the Pangeo project and @yuvipanda suggested I transfer it here for greater visibility.
The first part of the issue from @yuvipanda is below👇
## Use a daemonset with rshared mounts to mount FUSE
Currently, each user mounts fuse themselves. This has negative security consequences, since they require privileged containers to do this.
Long term, the solution is to implement a [Container Storage Interface driver](http://blog.kubernetes.io/2018/01/introducing-container-storage-interface.html) for GCS FUSE. The CSI standard has wide adoption across multiple projects (mesos can also use it, for example), while FlexVolumes are kubernetes specific. FlexVolumes are also deprecated in Kubernetes now, and will be removed in a (far future) release. CSI is more flexible.
For the near term, it would be great to do something that lets us let go of GCS Fuse.
I'm assuming the following conditions are true for the FUSE usage:
1. Everyone has the same access to the entire FUSE space (read/write)
1. We can upgrade to Kubernetes 1.10 (which should be on GKE in a few weeks)
We can use the new support for [rshared mounts](https://medium.com/kokster/kubernetes-mount-propagation-5306c36a4a2d) in kubernetes 1.10 to do the following:
1. Make a container that has all the software for doing the GCS Mount.
1. Run this container as a privileged daemonset - this makes it run on all nodes.
1. Mount GCSFuse as /data/gcsfuse on the host machine, via rshared mounts.
1. For each user node, mount /data/gcsfuse with hostPath into their user pod. They can use this for accessing GCSFuse without needing privileged access.
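The four steps above can be sketched roughly as a manifest like this (all names and the image are placeholders; `mountPropagation: Bidirectional` is the Kubernetes 1.10+ spelling of an rshared mount and requires a privileged container):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: gcsfuse-mounter            # placeholder name
spec:
  selector:
    matchLabels:
      app: gcsfuse-mounter
  template:
    metadata:
      labels:
        app: gcsfuse-mounter
    spec:
      containers:
        - name: mounter
          image: example.org/gcsfuse-mounter:latest  # placeholder image with gcsfuse installed
          securityContext:
            privileged: true                         # needed for FUSE + Bidirectional propagation
          volumeMounts:
            - name: gcsfuse-dir
              mountPath: /data/gcsfuse
              mountPropagation: Bidirectional        # mounts made here become visible on the host
      volumes:
        - name: gcsfuse-dir
          hostPath:
            path: /data/gcsfuse
```

User pods would then mount the same `hostPath: /data/gcsfuse` volume normally, with no `privileged` flag, since the FUSE mount already exists on the host.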
How does this sound?
------
An alternative if we want to do this earlier is:
1. Switch node type to Ubuntu in GKE
1. Run something like https://github.com/berkeley-dsep-infra/data8xhub/tree/master/images/mounter in a daemonset. In that example, we run this script: https://github.com/berkeley-dsep-infra/data8xhub/blob/master/images/mounter/mounter.py on the host. We can instead run something that mounts GCS FUSE.
This can happen today if needed. | open | 2023-12-16T06:04:23Z | 2023-12-16T17:48:02Z | https://github.com/jupyterhub/zero-to-jupyterhub-k8s/issues/3300 | [
"enhancement"
] | jhamman | 2 |
JaidedAI/EasyOCR | machine-learning | 809 | Persian number can't detect well | Hey guys
I have a weird problem with EasyOCR. I'm trying to extract these Persian numbers from images.
For example, for this image EasyOCR should extract '١٤٠١٢٢٠٣٧٩٢٤۶٩٣':

I found out EasyOCR detects my numbers as 2-3 groups of numbers, adds extra spaces between them, and changes the order of the digits:

I copied and pasted the exact number result here (١٢٢٠٣٧٩٢٤٥٩٣ ١٤٠); if you check this number with a hex editor you'll realize it's not the correct number.
It's ( ١٢٢٠٣٧٩٢٤۶٩٣ + space + ١٤٠ ), but in the output Python showed it differently:

When I paste the result into the VS Code environment, there is this error:
`Invalid character "\u661" in token` (Pylance)

How can I fix this weird error? | open | 2022-08-03T23:02:32Z | 2022-09-04T07:10:12Z | https://github.com/JaidedAI/EasyOCR/issues/809 | [] | arzmaster | 2 |
facebookresearch/fairseq | pytorch | 5,238 | Fine-tune a pretrained fairseq translation model on a new language pair | Hi team,
Is it possible to fine-tune any pretrained fairseq multilingual translation model on a new language pair, with one language already seen (let it be English) and the other not seen in a pretrained model?
If yes, is the whole procedure the same, i.e., creating a dataset and implementing a training/fine-tuning script?
How about a transformer-based subset of fairseq models and, in particular, NLLB? | closed | 2023-07-05T09:11:20Z | 2023-09-13T10:44:06Z | https://github.com/facebookresearch/fairseq/issues/5238 | [
"question",
"needs triage"
] | molokanov50 | 4 |
sinaptik-ai/pandas-ai | data-science | 1,669 | Addition of LLM base models | ### 🚀 The feature
Instead of calling an LLM via API, I want the library to be capable of leveraging base models (Llama, DeepSeek, etc.) installed on the local machine.
### Motivation, pitch
Hi! I was trying out the library but found myself running out of tokens pretty quickly. I believe that adding an option to use local base models can be really effective for users who want to leverage their own computational resources for the task and build their applications.
### Alternatives
_No response_
### Additional context
_No response_ | closed | 2025-03-11T15:47:56Z | 2025-03-14T16:47:12Z | https://github.com/sinaptik-ai/pandas-ai/issues/1669 | [] | SnehalBhartiya | 3 |
freqtrade/freqtrade | python | 11,538 | Telegram bot signal bug |
## Describe your environment
* Operating system: Linux-6.8.0-52-generic-x86_64-with-glibc2.36
* Python Version: 3.12.9 (`python -V`)
* CCXT version: 4.4.62 (`pip freeze | grep ccxt`)
* Freqtrade Version: 2025.2 (`freqtrade -V` or `docker compose run --rm freqtrade -V` for Freqtrade running in docker)
## Describe the problem:
The Telegram bot's signal seems wrong:
Binance: New Trade (#3)
Pair: ALPACA/USDT
Enter Tag: buy
Amount: 1661.91
Direction: Long
Open Rate: 0.06217 USDT
Current Rate: 0.05957 USDT
Total: 99 USDT
It seems that Open Rate and Current Rate are swapped: Open Rate should be 0.05957 and Current Rate 0.06217 in this case.
In freqUI:
 | closed | 2025-03-21T20:12:40Z | 2025-03-22T12:38:14Z | https://github.com/freqtrade/freqtrade/issues/11538 | [
"Question"
] | EKebriaei | 2 |
sanic-org/sanic | asyncio | 2,115 | Allow an alternate configuration class or object to be passed to application objects | It is currently difficult to extend the `Config` class and have a `Sanic` instance actually use that configuration class throughout its entire lifecycle. This is because the `Sanic` class's `__init__` method is hard-coded to use `sanic.config.Config`. Anyone wishing to use a different class must do one of:
- Patch and replace `sanic.config.Config`.
- Re-implement `Sanic.__init__` in a sub-class, duplicating most of the base implementation.
- Assign a different value to `Sanic.config`.
The first solution is inelegant, and hard to do correctly, the second involves redundant code duplication, and the third means that users are only able to introduce custom behavior post-init.
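A generic sketch of the kind of constructor hook being asked for (hypothetical names — this is not Sanic's actual API): the application accepts a configuration class and instantiates it itself, so custom behavior is active from `__init__` onward.

```python
class Config(dict):
    """Default configuration object."""

class App:
    def __init__(self, name, config_class=Config):
        self.name = name
        # The custom class is used for the whole lifecycle, not swapped in later.
        self.config = config_class()

class UpperConfig(Config):
    """Example customization: normalize all keys to upper case."""
    def __setitem__(self, key, value):
        super().__setitem__(key.upper(), value)

app = App("demo", config_class=UpperConfig)
app.config["debug"] = True
print(app.config)  # {'DEBUG': True}
```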
A simple fix for this is to modify `Sanic.__init__` to allow passing in a custom configuration class. This is already done for the class used to represent requests, and users can also pass in custom router and error handler instances. | closed | 2021-04-16T20:06:50Z | 2021-05-31T21:21:32Z | https://github.com/sanic-org/sanic/issues/2115 | [] | Varriount | 1 |
explosion/spaCy | nlp | 12,068 | import spacy - NVRTCError: NVRTC_ERROR_INVALID_OPTION (5) | (base) PS C:\Windows\system32> python
Python 3.9.13 (main, Aug 25 2022, 23:51:50) [MSC v.1916 64 bit (AMD64)] :: Anaconda, Inc. on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import spacy
Traceback (most recent call last):
File "e:\ProgramData\Anaconda3\lib\site-packages\cupy\cuda\compiler.py", line 681, in compile
nvrtc.compileProgram(self.ptr, options)
File "cupy_backends\cuda\libs\nvrtc.pyx", line 139, in cupy_backends.cuda.libs.nvrtc.compileProgram
File "cupy_backends\cuda\libs\nvrtc.pyx", line 151, in cupy_backends.cuda.libs.nvrtc.compileProgram
File "cupy_backends\cuda\libs\nvrtc.pyx", line 67, in cupy_backends.cuda.libs.nvrtc.check_status
cupy_backends.cuda.libs.nvrtc.NVRTCError: NVRTC_ERROR_INVALID_OPTION (5)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\deru\AppData\Roaming\Python\Python39\site-packages\spacy\__init__.py", line 6, in <module>
from .errors import setup_default_warnings
File "C:\Users\deru\AppData\Roaming\Python\Python39\site-packages\spacy\errors.py", line 2, in <module>
from .compat import Literal
File "C:\Users\deru\AppData\Roaming\Python\Python39\site-packages\spacy\compat.py", line 38, in <module>
from thinc.api import Optimizer # noqa: F401
File "C:\Users\deru\AppData\Roaming\Python\Python39\site-packages\thinc\api.py", line 2, in <module>
from .initializers import normal_init, uniform_init, glorot_uniform_init, zero_init
File "C:\Users\deru\AppData\Roaming\Python\Python39\site-packages\thinc\initializers.py", line 4, in <module>
from .backends import Ops
File "C:\Users\deru\AppData\Roaming\Python\Python39\site-packages\thinc\backends\__init__.py", line 8, in <module>
from .cupy_ops import CupyOps
File "C:\Users\deru\AppData\Roaming\Python\Python39\site-packages\thinc\backends\cupy_ops.py", line 5, in <module>
from . import _custom_kernels
File "C:\Users\deru\AppData\Roaming\Python\Python39\site-packages\thinc\backends\_custom_kernels.py", line 83, in <module>
clipped_linear_kernel_float = _get_kernel("clipped_linear<float>")
File "C:\Users\deru\AppData\Roaming\Python\Python39\site-packages\thinc\backends\_custom_kernels.py", line 71, in _get_kernel
return KERNELS.get_function(name)
File "cupy\_core\raw.pyx", line 470, in cupy._core.raw.RawModule.get_function
File "cupy\_core\raw.pyx", line 394, in cupy._core.raw.RawModule.module.__get__
File "cupy\_core\raw.pyx", line 402, in cupy._core.raw.RawModule._module
File "cupy\_util.pyx", line 67, in cupy._util.memoize.decorator.ret
File "cupy\_core\raw.pyx", line 547, in cupy._core.raw._get_raw_module
File "cupy\_core\core.pyx", line 2064, in cupy._core.core.compile_with_cache
File "cupy\_core\core.pyx", line 2124, in cupy._core.core.compile_with_cache
File "e:\ProgramData\Anaconda3\lib\site-packages\cupy\cuda\compiler.py", line 488, in _compile_module_with_cache
return _compile_with_cache_cuda(
File "e:\ProgramData\Anaconda3\lib\site-packages\cupy\cuda\compiler.py", line 531, in _compile_with_cache_cuda
base = _preprocess('', options, arch, backend)
File "e:\ProgramData\Anaconda3\lib\site-packages\cupy\cuda\compiler.py", line 420, in _preprocess
result, _ = prog.compile(options)
File "e:\ProgramData\Anaconda3\lib\site-packages\cupy\cuda\compiler.py", line 698, in compile
raise CompileException(log, self.src, self.name, options,
cupy.cuda.compiler.CompileException: nvrtc: error: invalid value for --gpu-architecture (-arch) | closed | 2023-01-08T13:11:15Z | 2023-01-09T10:37:32Z | https://github.com/explosion/spaCy/issues/12068 | [
"install"
] | videru | 1 |
vitalik/django-ninja | rest-api | 388 | Generate response schema from pydantic model | I have a library with some functions that return pydantic models. I'd like to have some simple way to convert those models to ninja response schema without copying all their attributes. I tried creating the schema by inheriting both the pydantic model and `ninja.Schema`, but that doesn't work at all. Is there an easy way to do this?
Alternatively, I'd also be kind of OK with not validating response schema at all. However, it seems that this is not possible if I want to return more than one status code from the view (e.g. 200 and 404). When I do want to output different status codes, I have to provide a response Schema, which is unfortunate. | closed | 2022-03-11T08:52:35Z | 2022-03-11T10:03:27Z | https://github.com/vitalik/django-ninja/issues/388 | [] | stinovlas | 5 |
inducer/pudb | pytest | 275 | Adding break point like pdb | Hi, how can I add breakpoints like this `b /home/amos/softwares/qutebrowser/qutebrowser/mainwindow/mainwindow.py:81` in pudb? | closed | 2017-09-15T01:24:46Z | 2017-10-04T19:01:34Z | https://github.com/inducer/pudb/issues/275 | [] | amosbird | 7 |
davidsandberg/facenet | tensorflow | 599 | Validation on custom dataset | Hey,
I was wondering how to compute a validation rate after each epoch of training with train_softmax.py?
BTW, I'm training on a custom dataset, so I can't use LFW validation.
Thanks. | closed | 2018-01-02T07:22:54Z | 2023-05-31T08:02:27Z | https://github.com/davidsandberg/facenet/issues/599 | [] | modanesh | 0 |
numpy/numpy | numpy | 28,409 | ENH: Add a `spectral_radius` function to `numpy.linalg` | ### Proposed new feature or change:
Mailing list post: https://mail.python.org/archives/list/numpy-discussion@python.org/thread/2QINJRZKAC3345VITB6KCMQJPHQNEVP4/
Add a function to [`numpy.linalg`](https://numpy.org/doc/stable/reference/routines.linalg.html) called `spectral_radius` that computes the [spectral radius](https://en.wikipedia.org/wiki/Spectral_radius) of a given matrix.
A naive way to do this is `np.max(np.abs(np.linalg.eigvals(a)))`, but there are more efficient methods in the literature:
- https://en.wikipedia.org/wiki/Power_iteration
- https://en.wikipedia.org/wiki/Arnoldi_iteration
- https://en.wikipedia.org/wiki/Lanczos_algorithm
- https://en.wikipedia.org/wiki/Inverse_iteration
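For illustration, a minimal power-iteration sketch (only a sketch, not a proposed API: it assumes a single dominant eigenvalue, as with the symmetric matrix below):

```python
import numpy as np

def spectral_radius_power(a, iters=1000, tol=1e-12):
    """Approximate the spectral radius of `a` by power iteration."""
    rng = np.random.default_rng(0)
    b = rng.standard_normal(a.shape[0])
    lam = 0.0
    for _ in range(iters):
        b_new = a @ b
        lam_new = np.linalg.norm(b_new)
        if lam_new == 0.0:
            return 0.0
        b = b_new / lam_new          # keep the iterate normalized
        if abs(lam_new - lam) < tol:
            break
        lam = lam_new
    return lam

a = np.array([[2.0, 1.0], [1.0, 3.0]])   # eigenvalues (5 ± sqrt(5)) / 2
naive = np.max(np.abs(np.linalg.eigvals(a)))
print(spectral_radius_power(a), naive)   # both ≈ 3.618
```

For non-symmetric matrices whose dominant eigenvalues are complex, the Arnoldi/Lanczos variants listed above are the robust choice.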
| closed | 2025-03-01T22:21:13Z | 2025-03-02T01:06:36Z | https://github.com/numpy/numpy/issues/28409 | [] | carlosgmartin | 1 |
TencentARC/GFPGAN | pytorch | 472 | Sai | open | 2023-12-10T02:56:13Z | 2023-12-10T02:56:13Z | https://github.com/TencentARC/GFPGAN/issues/472 | [] | sai9232 | 0 | |
plotly/dash | data-visualization | 2664 | [Feature Request] Manually clear dynamically registered callback functions | The `_allow_dynamic_callbacks` feature in 2.14 could be even more useful than originally intended for building flexible applications, but with continuous dynamic callback registration, the data served from `/_dash-dependencies` keeps growing (for example, when callbacks are created with a random `uuid` each time). Is there any way to manually clear dynamically created callbacks?


| closed | 2023-10-12T15:14:51Z | 2023-10-20T08:57:14Z | https://github.com/plotly/dash/issues/2664 | [] | CNFeffery | 2 |
mars-project/mars | numpy | 3,225 | [BUG] could not convert string to float | py3.7
pyodps[mars]==0.11
```python
df = md.DataFrame(mt.random.rand(100000000, 4), columns=list('abcd'))
print(df.sum().execute())
```
{ValueError}could not convert string to float: '\r\n\r\n\r\n\n\n\n\n \n \n \n <title>淘宝网 - 淘!我喜欢</title>\n \n \n <meta name="description"\n content="淘宝网 - 亚洲较大的网上交易平台,提供各类服饰、美容、家居、数码、话费/点卡充值… 数亿优质商品,同时提供担保交易(先收货后付款)等安全交易保障服务,并由商家提供退货承诺、破损补寄等消费者保障服务,让你安心享受网上购物乐趣!" />\n <meta name="keyword"\n content="淘宝,掏宝,网上购物,C2C,在线交易,交易市场,网上交易,交易市场,网上买,网上卖,购物网站,团购,网上贸易,安全购物,电子商务,放心买,供应,买卖信息,网店,一口价,拍卖,网上开店,网络购物,打折,免费开店,网购,频道,店铺" />\n \n \n <script\n src="//g.alicdn.com/??kissy/k/1.4.16/seed-min.js,kg/kmd-adapter/0.1.5/index.js,k... | open | 2022-08-16T01:38:00Z | 2023-10-09T13:40:44Z | https://github.com/mars-project/mars/issues/3225 | [] | zhangyuqi-1 | 2 |
iperov/DeepFaceLab | deep-learning | 5630 | DeepFaceLive for docker | I created [DeepFaceLive in Docker](https://github.com/valador/deepfacelab-docker-multi.git) with CUDA; AMD is not supported for now.
| open | 2023-02-27T13:46:46Z | 2023-06-08T20:03:32Z | https://github.com/iperov/DeepFaceLab/issues/5630 | [] | valador | 2 |
WZMIAOMIAO/deep-learning-for-image-processing | pytorch | 28 | Training on single-channel images | I would like to train a ResNet on single-channel grayscale images. How should I modify the ResNet so that it can accept single-channel input?
Thank you! | closed | 2020-06-25T07:06:50Z | 2020-06-26T00:15:53Z | https://github.com/WZMIAOMIAO/deep-learning-for-image-processing/issues/28 | [] | zhongqiu1245 | 1 |
horovod/horovod | pytorch | 3197 | RayElastic scale-up test fails | Followup from #2813, the ray elastic scale-up test is failing in Buildkite as well. We should investigate this as part of #3190.
```
/usr/local/lib/python3.8/dist-packages/horovod/ray/elastic.py:454: RuntimeError
--
| ------------------------------ Captured log call -------------------------------
| ERROR root:registration.py:179 failed to activate new hosts -> stop running
| Traceback (most recent call last):
| File "/usr/local/lib/python3.8/dist-packages/horovod/runner/elastic/registration.py", line 177, in _on_workers_recorded
| self._driver.resume()
| File "/usr/local/lib/python3.8/dist-packages/horovod/runner/elastic/driver.py", line 99, in resume
| self._activate_workers(self._min_np)
| File "/usr/local/lib/python3.8/dist-packages/horovod/runner/elastic/driver.py", line 177, in _activate_workers
| pending_slots = self._update_host_assignments(current_hosts)
| File "/usr/local/lib/python3.8/dist-packages/horovod/runner/elastic/driver.py", line 248, in _update_host_assignments
| raise RuntimeError('No hosts from previous set remaining, unable to broadcast state.')
| RuntimeError: No hosts from previous set remaining, unable to broadcast state.
```
https://buildkite.com/horovod/horovod/builds/6525#f16bee64-0b80-4cf8-9ba7-89b2d6aebde7/6-9500 | closed | 2021-10-05T14:49:16Z | 2021-10-18T18:23:42Z | https://github.com/horovod/horovod/issues/3197 | [
"bug"
] | tgaddair | 0 |
PokeAPI/pokeapi | graphql | 612 | [Feature request] A way to get only Pokémon that aren't duplicates of an original | Hello,
I would like to see a feature that allows filtering the Pokémon results down to only the original Pokémon (and not special duplicate editions of a Pokémon). Currently, if we get all Pokémon from [here](https://pokeapi.co/api/v2/pokemon/?limit=1118), the list also includes cosplays and other special sprites.
For example:
- Pikachu (og)
- Pikachu-rock-star
- Pikachu-belle
- Pikachu-pop-star
- Pikachu-phd
- Pikachu-libre
- Pikachu-cosplau
- Pikachu-original-cap
.... etc
Since they all have the same stats as the original Pikachu, I would like a way to ignore those cosplays (and others with the same stats as the original pokemon) and only get the original Pikachu.
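As a client-side workaround for now, something like the following works for most cases (the payloads here are made-up stand-ins for the two API responses; the idea is that `/pokemon-species` has one entry per base Pokémon, though species whose default form carries a suffix, like deoxys-normal, would need the species' default-variety URL instead):

```python
# Hypothetical, pre-fetched payloads standing in for GET /pokemon and GET /pokemon-species.
pokemon_results = [
    {"name": "pikachu"},
    {"name": "pikachu-rock-star"},
    {"name": "pikachu-belle"},
    {"name": "raichu"},
]
species_results = [
    {"name": "pikachu"},
    {"name": "raichu"},
]

# A species list gives one name per base Pokémon; use it to filter the full list.
species_names = {s["name"] for s in species_results}
base_forms = [p for p in pokemon_results if p["name"] in species_names]
print([p["name"] for p in base_forms])  # ['pikachu', 'raichu']
```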
Thank you!
- Bear
| closed | 2021-04-13T13:56:51Z | 2021-04-13T19:41:21Z | https://github.com/PokeAPI/pokeapi/issues/612 | [] | BlackBearFTW | 5 |
waditu/tushare | pandas | 1287 | daily_basic endpoint: after adding the end_date parameter, only data up to 2019-12-31 can be retrieved; 2020 data is not returned | API documentation: https://tushare.pro/document/2?doc_id=32
Screenshot of the test result:

Account ID: 276150
| closed | 2020-02-16T05:00:24Z | 2020-03-01T12:28:02Z | https://github.com/waditu/tushare/issues/1287 | [] | ustb-pomelo | 1 |
iperov/DeepFaceLab | deep-learning | 476 | run in python command | Hi, could you give some descriptions of how to train and test the code from the **Python command line**? That is to say, how to run the code from a script rather than via the **.bat** files. Thank you. | closed | 2019-11-01T09:26:11Z | 2020-03-28T05:41:44Z | https://github.com/iperov/DeepFaceLab/issues/476 | [] | rookiecm | 1 |
MaartenGr/BERTopic | nlp | 1,976 | ModuleNotFoundError: Can't use LangChain with version 0.16.0 | I am using BERTopic version 0.16.0 due to the issue raised [here](https://github.com/MaartenGr/BERTopic/issues/1946).
I wanted to use LangChain as a representation model as described in the official [documentation](https://maartengr.github.io/BERTopic/getting_started/representation/llm.html#langchain:~:text=modeling%20with%20BERTopic.-,LangChain,-%C2%B6)
But I'm getting the error:
```
ModuleNotFoundError: In order to use langchain you will need to install via;
`pip install langchain`
```
I did `pip install langchain` and can import it:

Do I need to install a specific version of Langchain?
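One thing worth ruling out (just a guess): the notebook kernel running BERTopic may not be the same interpreter that `pip install langchain` installed into. A quick check:

```python
import sys
import importlib.util

print(sys.executable)  # the interpreter this kernel actually runs
spec = importlib.util.find_spec("langchain")
print(spec.origin if spec else "langchain is NOT importable from this interpreter")

# If it is missing here, install into this exact interpreter:
#   %pip install langchain          (inside a notebook)
#   python -m pip install langchain
```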
| closed | 2024-05-06T22:03:30Z | 2024-05-06T22:38:07Z | https://github.com/MaartenGr/BERTopic/issues/1976 | [] | mzhadigerov | 1 |
wkentaro/labelme | computer-vision | 932 | [BUG] Ubuntu 20.04 LXD container, labelme complains when starting.. but starts | **Describe the bug**
**To Reproduce**
I am launching LabelMe within an LXD/LXC container, using bash scripts... here's how I install it and start labelme
```
apt install -y git python3 python3-pip python3-pyqt5
pip3 install labelme
<then I reboot the container>
```
attempt to startup
```
$ labelme
[INFO ] __init__:get_config:70 - Loading config file from: /home/ubuntu/.labelmerc
libGL error: No matching fbConfigs or visuals found
libGL error: failed to load driver: swrast
libGL error: No matching fbConfigs or visuals found
libGL error: failed to load driver: swrast
qt.qpa.xcb: QXcbConnection: XCB error: 148 (Unknown), sequence: 214, resource id: 0, major code: 140 (Unknown), minor code: 20
```
**Expected behavior**
I would expect no errors on startup of the app.. do the instructions for setup need to have something added or did I do something wrong?
**Desktop (please complete the following information):**
- OS: `ubuntu 20.04 LXD host with focal fassa 20.04 LXC image`
- Labelme Version `v4.5.13`
| closed | 2021-10-14T12:00:28Z | 2022-04-23T17:01:23Z | https://github.com/wkentaro/labelme/issues/932 | [
"issue::bug"
] | EMCP | 2 |
davidsandberg/facenet | tensorflow | 1,255 | i-JOiN | open | 2024-10-28T08:53:34Z | 2024-10-28T08:53:34Z | https://github.com/davidsandberg/facenet/issues/1255 | [] | orb-jaydee | 0 | |
huggingface/diffusers | pytorch | 11043 | When will we be getting Quanto support for Wan 2.1? | The diffusers quantizers module currently doesn't contain an entry for Quanto:
https://github.com/huggingface/diffusers/tree/main/src/diffusers/quantizers
Isn't this needed to perform requantization on a quantized Transformer for WAN 2.1?
Currently we can't do this due to missing Quanto quantizer after we've quantized and stored a Transformer:
```python
from diffusers import WanTransformer3DModel
from optimum.quanto import QuantizedDiffusersModel

print('Quantize transformer')

class QuantizedWanTransformer3DModel(QuantizedDiffusersModel):
    base_class = WanTransformer3DModel

transformer = QuantizedWanTransformer3DModel.from_pretrained(
    "./wan quantro T2V 14B Diffusers/basemodel/wantransformer3dmodel_qint8"
).to(dtype=dtype)
```
 | closed | 2025-03-12T12:43:59Z | 2025-03-23T18:17:53Z | https://github.com/huggingface/diffusers/issues/11043 | [] | ukaprch | 2 |
samuelcolvin/watchfiles | asyncio | 120 | Binaries for linux arm64 | Hi,
It would be nice to have pre-built binaries for ARM Linux on PyPI.
The thing is that by default, Docker for Mac on Apple Silicon runs containers and image builds for the linux/arm64 platform, so to install the package one needs to either install a Rust compiler in the container/image or run containers/builds under linux/amd64 emulation, which is significantly slower than the default platform. | closed | 2022-04-08T15:20:58Z | 2022-04-08T22:33:57Z | https://github.com/samuelcolvin/watchfiles/issues/120 | [] | tsimoshka | 5 |
allenai/allennlp | data-science | 4,860 | Provide Way to Expose Spacy Document Backend | **Is your feature request related to a problem? Please describe.**
I use AllenNLP for SRL in conjunction with raw Spacy dependency parsing. However, AllenNLP uses Spacy to compute the SRL output, as far as I can tell. My system would benefit from a speed-up if there were some way to avoid redundantly calling Spacy again to get its dependency parse.
**Describe the solution you'd like**
Provide a flag that allows the user to specify a Spacy context with custom settings/pipeline. Cache the most recent Spacy document after every SRL sentence call to predict, and define a getter to retrieve that document from the predictor.
**Describe alternatives you've considered**
Just use my own Spacy context and call it, but this is a waste of computation time.
**Additional context**
I'd be happy to write a proof-of-concept myself given a little more clarification. If we're worried about depending on Spacy, I'd still like to proceed with this proof-of-concept for my own use, so it would be very helpful to know how to proceed. Specifically, I cannot seem to figure out the full call-trace. Where is the method containing the instantiation of the Spacy `Doc` object actually called? See: [this line](https://github.com/allenai/allennlp-models/blob/ef5d4229f62f3ea8f44345d43b6a7fd1ab2d09fa/allennlp_models/structured_prediction/predictors/srl.py#L66)
I see that the tokenizer is created [here](https://github.com/allenai/allennlp-models/blob/ef5d4229f62f3ea8f44345d43b6a7fd1ab2d09fa/allennlp_models/structured_prediction/predictors/srl.py#L25). Would it be safe to pass my own Spacy context as an optional argument, or does AllenNLP do something tricky/required with it behind the scenes that would require that I do something special with my context?
Thank you for your time. (Please label with models. I can't seem to add a label myself.)
| closed | 2020-12-12T04:27:38Z | 2021-05-13T16:11:25Z | https://github.com/allenai/allennlp/issues/4860 | [
"Feature request"
] | KTRosenberg | 3 |
marshmallow-code/apispec | rest-api | 288 | FlaskPlugin fails if FlaskInjector is injecting into a view | Because FlaskInjector wraps the view function in yet another "decorator" the view passed into `add_path` is not `==` to the view retrieved from `current_app.view_functions`. The failure occurs only for views that need injection (an argument in the function signature that's not a path parameter).
If there is a workaround for this that would be great.
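The mismatch can be reproduced in isolation (illustrative only, not apispec's or FlaskInjector's actual code): a wrapper built with `functools.wraps` looks like the view but is a different object, so identity/equality lookups fail unless the wrapper is unwrapped first.

```python
import functools
import inspect

def view():
    return "ok"

@functools.wraps(view)
def injected_view(*args, **kwargs):  # stands in for FlaskInjector's wrapper
    return view(*args, **kwargs)

print(injected_view is view)                  # False: a naive lookup misses it
print(inspect.unwrap(injected_view) is view)  # True: following __wrapped__ recovers it
```

So besides matching by rule, another option might be for the plugin to call `inspect.unwrap` before comparing against `current_app.view_functions`.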
For a long time solution, maybe allow to add_path using rule instead of view? | closed | 2018-09-18T17:48:30Z | 2018-09-19T07:15:43Z | https://github.com/marshmallow-code/apispec/issues/288 | [] | andho | 2 |
littlecodersh/ItChat | api | 997 | [Must-read for developers] Replacing the UOS web version: anyone building bots, assistants, marketing, customer-service, or monitoring systems should take a look at this API approach |
Enterprise option (for running real business on top): wechaty or E云管家 (E-Cloud Manager). I have used E-Cloud for 4 years; recommended!
Personal option (fine if you want something free, but unstable and not suitable for real business): 可爱猫 (CuteCat) or the 鲲鹏 (Kunpeng) framework (still fine for a small demo to amuse your girlfriend).
Choosing an option: always pick a vendor that signs a contract / issues invoices. Our company wasted a lot of money on this in previous years: a feature would run for a few days after being written and then the code had to be changed again. Don't even consider the plugin and hook types. | open | 2023-09-13T02:53:31Z | 2023-11-04T10:10:29Z | https://github.com/littlecodersh/ItChat/issues/997 | [] | 2905683882 | 5 |
xzkostyan/clickhouse-sqlalchemy | sqlalchemy | 124 | Unable to read metadata of table with Nullable columns. | **Describe the bug**
Driver fails with `AttributeError: 'GenericTypeCompiler' object has no attribute 'visit_nullable'` if you attempt to use MetaData and Table classes to show the column type.
**To Reproduce**
First create the following table in ClickHouse.
```
CREATE TABLE superset_test
(
`basic_string` String,
`nullable_string` Nullable(String),
`lowcard_string` LowCardinality(String),
`basic_datetime` DateTime,
`nullable_datetime` Nullable(DateTime),
`int_with_codec` UInt64 CODEC(Delta, ZSTD(1))
)
ENGINE = TinyLog
```
Now run the following Python script.
```
from sqlalchemy import create_engine, MetaData, Table
url = 'clickhouse+native://localhost/foo'
engine = create_engine(url, echo=False)
meta = MetaData(bind=engine, reflect=False)
table = Table('superset_test', meta, autoload=True, autoload_with=engine)
print(f'Name: {table.name}')
print(f'Schema: {table.schema}')
for col in table.columns:
    print(f'Column Name: {col.name}')
    print(f' Type: {col.type}')
    print(f' Nullable: {col.nullable}')
```
It fails with the following stack trace:
```
python3 sqa-table-meta.py
Name: superset_test
Schema: None
Column Name: basic_string
Type: VARCHAR
Nullable: True
Column Name: nullable_string
Traceback (most recent call last):
File "/home/rhodges/altinity/presentations/python/superset/clickhouse-sqlalchemy/clickhouse-sqlalchemy/lib/python3.7/site-packages/sqlalchemy/sql/visitors.py", line 89, in _compiler_dispatch
meth = getter(visitor)
AttributeError: 'GenericTypeCompiler' object has no attribute 'visit_nullable'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "sqa-table-meta.py", line 11, in <module>
print(f' Type: {col.type}')
File "/home/rhodges/altinity/presentations/python/superset/clickhouse-sqlalchemy/clickhouse-sqlalchemy/lib/python3.7/site-packages/sqlalchemy/sql/type_api.py", line 622, in __str__
return str(self.compile())
File "/home/rhodges/altinity/presentations/python/superset/clickhouse-sqlalchemy/clickhouse-sqlalchemy/lib/python3.7/site-packages/sqlalchemy/sql/type_api.py", line 605, in compile
return dialect.type_compiler.process(self)
File "/home/rhodges/altinity/presentations/python/superset/clickhouse-sqlalchemy/clickhouse-sqlalchemy/lib/python3.7/site-packages/sqlalchemy/sql/compiler.py", line 402, in process
return type_._compiler_dispatch(self, **kw)
File "/home/rhodges/altinity/presentations/python/superset/clickhouse-sqlalchemy/clickhouse-sqlalchemy/lib/python3.7/site-packages/sqlalchemy/sql/visitors.py", line 93, in _compiler_dispatch
replace_context=err,
File "/home/rhodges/altinity/presentations/python/superset/clickhouse-sqlalchemy/clickhouse-sqlalchemy/lib/python3.7/site-packages/sqlalchemy/util/compat.py", line 182, in raise_
raise exception
sqlalchemy.exc.UnsupportedCompilationError: Compiler <sqlalchemy.sql.compiler.GenericTypeCompiler object at 0x7f0ec1d3ae10> can't render element of type <class 'clickhouse_sqlalchemy.types.common.Nullable'> (Background on this error at: http://sqlalche.me/e/13/l7de)
```
**Expected behavior**
The program should print the type VARCHAR and set the nullable property on the Column object. (This does not seem to be set correctly for any column.)
**Versions**
```
$ python --version
Python 3.7.9
$ pip freeze|grep clickhouse
clickhouse-driver==0.2.0
clickhouse-sqlalchemy==0.1.5
``` | closed | 2021-03-04T06:15:18Z | 2021-03-15T15:15:36Z | https://github.com/xzkostyan/clickhouse-sqlalchemy/issues/124 | [] | hodgesrm | 3 |
kizniche/Mycodo | automation | 1,372 | PID widgets display null instead of negative values | Recently upgraded from 8.4 to 8.15.3, via clean install on new SD card.
Seems that PID widgets display NULL instead of negative values: see screenshot:

Been checking for a couple of days - it seems the PID controls shown are both running as expected (as on 8.4, running for years), i.e. at the time of the screenshot the cooling PID is negative and its output is active, so this is purely a display issue. When values are zero or positive they display as expected. I also notice that the PID widgets no longer display a graph of the values, so presumably there has been a rewrite somewhere, but I couldn't spot it in the changelog.
| open | 2024-03-14T12:05:21Z | 2024-04-16T15:43:01Z | https://github.com/kizniche/Mycodo/issues/1372 | [
"bug",
"Fixed and Committed",
"Testing"
] | drgrumpy | 12 |
mwaskom/seaborn | data-visualization | 3,794 | Question about abstraction | https://github.com/mwaskom/seaborn/blob/b4e5f8d261d6d5524a00b7dd35e00a40e4855872/seaborn/distributions.py#L1449
Is there an architectural reason you don't expose the stats data? (i.e. something like `ax.p = p`)
Most academic publications want to see the number behind the plots. | closed | 2024-11-30T23:48:10Z | 2024-12-01T19:42:42Z | https://github.com/mwaskom/seaborn/issues/3794 | [] | refack | 1 |
pyeve/eve | flask | 995 | Using Eve without manual schema specification | I am trying to create a JSON Web API frontend for my MongoDB database.
The API is read-only (only querying data via `GET` is supported).
The data is not normalized, and collections contain many different dynamic schemas.
The problem I've run into is that when I make a request against an Eve endpoint, I get no data back.
Example request to `http://127.0.0.1:5000/Accounts?where={"name":"furion"}`:
```
{
"_items": [
{
"_id": "589617d40fbc46e306720116",
"name": "furion",
"_created": "Thu, 01 Jan 1970 00:00:00 GMT",
"_updated": "Thu, 01 Jan 1970 00:00:00 GMT",
"_etag": "59b6b9c6189c73e8d6b4f3d850c0bbefd0719b86",
"_links": {
"self": {
"title": "Account",
"href": "Accounts/furion"
}
}
}
],
"_links": {
"parent": {
"title": "home",
"href": "/"
},
"self": {
"title": "Accounts",
"href": "Accounts?where={\"name\":\"furion\"}"
}
},
"_meta": {
"page": 1,
"max_results": 50,
"total": 1
}
}
```
As you can see, the only data I get back from the API is metadata. I would however like to get a JSON of the entire `Account` document in `_items`. Basically something like this:
```
"_items": [
{
"_id": "589617d40fbc46e306720116",
"name": "furion",
"vesting_withdraw_rate" : {
"amount" : 6642622.597267,
"asset" : "VESTS"
},
"savings_balance" : {
"amount" : 0.0,
"asset" : "STEEM"
},
"last_account_update" : ISODate("2017-01-29T20:46:12.000Z"),
"to_withdraw" : "86354093764474",
"id" : 16321,
"last_root_post" : ISODate("2017-02-16T20:21:39.000Z"),
"balance" : {
"amount" : 20066.154,
"asset" : "STEEM"
},
"can_vote" : true,
"rep" : 68.32,
"followers_count" : 820,
"witnesses_voted_for" : 17,
"following_count" : 41,
"recovery_account" : "steem",
"created" : ISODate("2016-07-10T17:27:00.000Z"),
"new_average_market_bandwidth" : "5717430242",
"post_bandwidth" : 10000,
"name" : "furion",
"last_bandwidth_update" : ISODate("2017-02-28T09:52:33.000Z"),
"voting_power" : 6651,
"last_owner_update" : ISODate("2016-08-10T11:37:39.000Z"),
"lifetime_vote_count" : 0,
"owner" : {
"key_auths" : [
[
"STM71hJBmmaYLyG9E2DBNkTLrScVudJbAGqfRQBQ5KuGYtEfmexKQ",
1
]
],
"account_auths" : [],
"weight_threshold" : 1
},
"last_vote_time" : ISODate("2017-02-28T09:52:33.000Z"),
"reputation" : "65019018424764",
"vote_history" : [],
"conversion_requests" : [],
"sbd_seconds" : "0",
"withdraw_routes" : 0,
"savings_withdraw_requests" : 0,
"last_market_bandwidth_update" : ISODate("2017-02-27T05:00:15.000Z"),
"posting_rewards" : 38544327,
"sbd_seconds_last_update" : ISODate("2017-02-18T04:50:00.000Z"),
"sbd_balance" : {
"amount" : 6.453,
"asset" : "SBD"
},
"balances" : {
"SAVINGS_SBD" : 100.0,
"VESTS" : 87667699.654752,
"STEEM" : 20066.154,
"SAVINGS_STEEM" : 0.0,
"SBD" : 6.453
},
"active_challenged" : false,
"average_bandwidth" : 146842836,
"profile" : {
"profile_image" : "https://pbs.twimg.com/profile_images/2977697474/013b3a40dad8d407fb4366d83544260f_400x400.jpeg",
"location" : "0x32fc1"
},
"savings_sbd_last_interest_payment" : ISODate("2017-02-01T16:22:00.000Z"),
"lifetime_bandwidth" : "4902073000000",
"savings_sbd_seconds_last_update" : ISODate("2017-02-01T16:22:12.000Z"),
"last_post" : ISODate("2017-02-26T06:54:48.000Z"),
"proxy" : "",
"next_vesting_withdrawal" : ISODate("2017-03-03T01:12:42.000Z"),
"vesting_shares" : {
"amount" : 87667699.654752,
"asset" : "VESTS"
},
"updatedAt" : ISODate("2017-02-28T10:02:37.066Z"),
"post_history" : [],
"market_history" : [],
"witness_votes" : [
"aizensou",
"blocktrades",
"busy.witness",
"chainsquad.com",
"clayop",
"complexring",
"good-karma",
"gtg",
"jesta",
"pfunk",
"roelandp",
"smooth.witness",
"someguy123",
"teamsteem",
"thecryptodrive",
"theprophet0",
"witness.svk"
],
"curation_stats" : {
"24hr" : 11.3652486203076,
"7d" : 79.6967315124897,
"avg" : 11.3852473589271
},
"owner_challenged" : false,
"mined" : false,
"savings_sbd_balance" : {
"amount" : 100.0,
"asset" : "SBD"
},
"json_metadata" : "{\"profile\":{\"profile_image\":\"https://pbs.twimg.com/profile_images/2977697474/013b3a40dad8d407fb4366d83544260f_400x400.jpeg\",\"location\":\"0x32fc1\"}}",
"posting" : {
"key_auths" : [
[
"STM7oYqjixTpzHEG9ALv74uSBdUQuwPKbSM1tJSPwDAPoAD4NfPtp",
1
]
],
"account_auths" : [],
"weight_threshold" : 1
},
"last_active_proved" : ISODate("1970-01-01T00:00:00.000Z"),
"memo_key" : "STM6LX71P2iu7wgHf5GvLf8xMzrEfYqC4b7NmU5y18wZEGCER5NZU",
"transfer_history" : [],
"sbd_last_interest_payment" : ISODate("2017-02-18T04:50:00.000Z"),
"guest_bloggers" : [],
"comment_count" : 0,
"savings_sbd_seconds" : "1200156",
"sp" : 42105.77,
"account" : "furion",
"post_count" : 447,
"blog_category" : {},
"new_average_bandwidth" : "411889806580",
"other_history" : [],
"reset_account" : "null",
"curation_rewards" : 2066270,
"active" : {
"key_auths" : [
[
"STM86oZbjcHVT4cr4dM3WkRUW8UuFoqkBvMC7UYVWnX6Pe6kRvRes",
1
]
],
"account_auths" : [],
"weight_threshold" : 1
},
"last_owner_proved" : ISODate("1970-01-01T00:00:00.000Z"),
"vesting_balance" : {
"amount" : 0.0,
"asset" : "STEEM"
}
"_created": "Thu, 01 Jan 1970 00:00:00 GMT",
"_updated": "Thu, 01 Jan 1970 00:00:00 GMT",
"_etag": "59b6b9c6189c73e8d6b4f3d850c0bbefd0719b86",
"_links": {
"self": {
"title": "Account",
"href": "Accounts/furion"
}
}
}
],
"_links": {
"parent": {
"title": "home",
"href": "/"
},
"self": {
"title": "Accounts",
"href": "Accounts?where={\"name\":\"furion\"}"
}
},
"_meta": {
"page": 1,
"max_results": 50,
"total": 1
}
}
```
My `DOMAIN`
```
DOMAIN = {
'Accounts': {
'id_field': 'name',
'item_lookup': True,
'additional_lookup': {
'url': 'regex("[\w]+")',
'field': 'name',
},
},
'Posts': {
'id_field': 'identifier',
'item_lookup': True,
'additional_lookup': {
'url': 'regex("@[\w]+/[\w]+")',
'field': 'identifier',
},
},
'PriceHistory': {},
'Operations': {},
'AccountOperations': {},
}
```
Is there any way to force eve to just return the whole document, as found in MongoDB, in each API response? | closed | 2017-03-03T11:04:41Z | 2017-03-03T18:09:48Z | https://github.com/pyeve/eve/issues/995 | [] | Netherdrake | 2 |
pyg-team/pytorch_geometric | pytorch | 9,698 | `MoleculeGPT`: Dataset+Model+Unit tests+Example | ### 🚀 The feature, motivation and pitch
Paper: https://ai4d3.github.io/papers/34.pdf
Part of the community sprint https://github.com/pyg-team/pytorch_geometric/issues/9694
The goal of this project is to reproduce the work done in MoleculeGPT while tying it as closely to the existing GNN+LLM frameworks in PyG. We recommend using as many existing features as possible from PyG. Additional features which you feel will be reusable for other workflows should be added to PyG. One-off functions that are specific to this workflow can be left inside the example.
Most of the effort will likely go into building a PyG dataset that matches the one described in the paper. At a high level the dataset is a composition of Q+A pairs for molecular field, with matching molecules as context. These Q+A pairs focus on molecular property prediction.
### Alternatives
_No response_
### Additional context
_No response_ | closed | 2024-10-08T17:19:14Z | 2024-11-25T05:12:41Z | https://github.com/pyg-team/pytorch_geometric/issues/9698 | [
"feature"
] | puririshi98 | 6 |
recommenders-team/recommenders | deep-learning | 1,382 | [Typo?] Code clone tab points to old master and not to new main | ### Description
I cloned the repository as below but it seems to point to an old version of master.

Found this in the setup document and cloning this way got me to main.

Seems like a simple fix to update the clone link.
| closed | 2021-04-22T20:11:21Z | 2021-04-23T06:53:45Z | https://github.com/recommenders-team/recommenders/issues/1382 | [
"bug"
] | prvenk | 2 |
google-deepmind/sonnet | tensorflow | 107 | Best way to use pre-trained nets in Sonnet | Hi,
I'm trying to use a pre-trained ResNet-50 as a module in Sonnet.
I have read the Sonnet documentation stating: `If you create variables yourself, it is crucial to create them with tf.get_variable. Calling the tf.Variable constructor directly will only work the first time the module is connected, but on the second call you will receive an error message “Trainable variable created when calling a template after the first time”.`
What is the best way to use a pre-trained network with Sonnet? I tried implementing it in the following way:
```
import sonnet as snt
from keras.applications.resnet50 import ResNet50, preprocess_input
from keras.layers import Dense, Flatten
from keras.models import Model


class ResNet50Encoder(snt.AbstractModule):
    def __init__(self, name='resnet50encoder'):
        super(ResNet50Encoder, self).__init__(name=name)

    def _build(self, inputs, is_training=True):
        input_rgb = inputs[..., :3]
        base_model = ResNet50(weights='imagenet', pooling=max, include_top=False, input_shape=(120, 160, 3))
        base_model.layers.pop()
        for layer in base_model.layers:
            layer.trainable = False
        last = base_model.layers[-1].output  # pool 5 output
        last = Flatten()(last)
        output = Dense(512, activation='relu')(last)
        finetuned_model = Model(inputs=base_model.input, outputs=output)
        x = preprocess_input(input_rgb)
        features_rgb = finetuned_model(x)
        outputs = snt.Linear(output_size=EncodeProcessDecode.n_neurons)(features_rgb)
        return outputs
```
Running it yields the following error:
> resnet50encoder_nodes/bn_conv1/moving_mean/biased does not exist, or was not created with tf.get_variable(). Did you mean to set reuse=tf.AUTO_REUSE in VarScope?
Surrounding the ResNet creation with `tf.variable_scope(tf.get_variable_scope(), reuse=tf.AUTO_REUSE)` doesn't help either.
There seem to be a few issues with the batch normalization(bn) layer used in many pre-trained Keras nets w.r.t. variable scopes (see: keras-team/keras#9214). Due to this, I tried using VGG16 which does not use bn layers but exchanging ResNet50 with VGG16 in the above code then yields:
> ValueError: Trainable variable created when calling a template after the first time, perhaps you used tf.Variable when you meant tf.get_variable
The question that remains to be answered therefore is: what's the best way to use pre-trained nets in Sonnet and avoid the clash with tf.get_variable usage? The documentation doesn't seem to explain this.
Thank you | closed | 2018-12-02T23:02:45Z | 2018-12-05T06:16:17Z | https://github.com/google-deepmind/sonnet/issues/107 | [] | ferreirafabio | 1 |
PhantomInsights/subreddit-analyzer | matplotlib | 6 | json.decoder.JSONDecodeError: Expecting value | Hi @agentphantom ,
This is me again, sorry. I have another issue with the script.
I'm trying to download the submissions from r/goodanimemes from the last 2 weeks.
But after some time I get this error message :
```
Traceback (most recent call last):
  File "subreddit_submissions_alt.py", line 115, in <module>
    init()
  File "subreddit_submissions_alt.py", line 40, in init
    download_submissions(subreddit=subreddit)
  File "subreddit_submissions_alt.py", line 110, in download_submissions
    download_submissions(subreddit, latest_timestamp)
  File "subreddit_submissions_alt.py", line 110, in download_submissions
    download_submissions(subreddit, latest_timestamp)
  File "subreddit_submissions_alt.py", line 110, in download_submissions
    download_submissions(subreddit, latest_timestamp)
  [Previous line repeated 80 more times]
  File "subreddit_submissions_alt.py", line 72, in download_submissions
    json_data = response.json()
  File "C:\Python38\lib\site-packages\requests\models.py", line 898, in json
    return complexjson.loads(self.text, **kwargs)
  File "C:\Python38\lib\json\__init__.py", line 357, in loads
    return _default_decoder.decode(s)
  File "C:\Python38\lib\json\decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "C:\Python38\lib\json\decoder.py", line 355, in raw_decode
    raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
```
So I tried to launch the script again and I got the same error message, but the line ` [Previous line repeated 80 more times] ` changed to `[Previous line repeated 24 more times]` (it's a random number every time).
_I'm no JS programmer, but my wild guess is that at some point response.json() does not receive the right parameters, the program does not know how to handle it, and that causes it to stop._
Do you have any fix for this?
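One way to guard against this, assuming (as the traceback suggests) that the server occasionally returns an empty or non-JSON body, is to check the text before decoding instead of calling `response.json()` directly. This is only a sketch; the helper name below is made up:

```python
import json

def parse_json_or_none(text):
    """Return the decoded JSON payload, or None if the body is not valid
    JSON (e.g. an empty response or an HTML error page from rate limiting)."""
    try:
        return json.loads(text)
    except json.JSONDecodeError:
        return None

# An empty body is exactly what raises "Expecting value: line 1 column 1 (char 0)":
print(parse_json_or_none(""))          # None
print(parse_json_or_none('{"a": 1}'))  # {'a': 1}
```

The download loop could then sleep and retry whenever the helper returns `None`, instead of crashing deep in the recursion.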
Thank you ! | closed | 2020-09-12T20:08:12Z | 2020-09-13T02:59:37Z | https://github.com/PhantomInsights/subreddit-analyzer/issues/6 | [] | Lyfhael | 11 |
microsoft/nni | pytorch | 4,780 | Do you have to call model speedup every time before inference to see performance gains? | **Describe the issue**:
I am running a training and pruning process where I train one model for 100 epochs and then prune at various percentages with NNI to create models of different sparsity. Each time I create a new pruned model, I use NNI's model speedup before retraining/fine-tuning, and then save the model. However, when I time inference for the models to compare them, I see very similar times between the unpruned model and the pruned model. But when I call prune and compress and then call the model speedup _again_ before timing inference, I see better times. I couldn't find this in the documentation, **but really my question is**: do you have to call model speedup every time you load the model and before inference? Or if you call it, save the model, and then later load it, should the model already be sped up? Any insight would be great, thanks!
**Environment**:
- NNI version: 2.7
- Python version: Python 3.8.10
- PyTorch/TensorFlow version: 1.10.2+cu113
- Is conda/virtualenv/venv used?: Yes
**How to reproduce it?**:
Train a model, prune it, call model speedup, then save the model. Then, load the model and perform inference and compare the time to the unpruned model, after warming up the gpu and averaging inference times over multiple runs.
Then, call the pruner and model speedup again on the already pruned and saved model, and perform inference again to compare. | closed | 2022-04-20T02:07:32Z | 2022-04-23T20:11:31Z | https://github.com/microsoft/nni/issues/4780 | [] | pmmitche | 2 |
dask/dask | pandas | 11,718 | Processes scheduler runs `map_blocks` serially |
**Describe the issue**:
It seems that the _processes_ scheduler doesn't run in parallel when using `map_blocks`.
**Minimal Complete Verifiable Example**:
```python
from time import sleep

import dask.array as da
from dask.diagnostics import ProgressBar


def func(data):
    sleep(10)
    return data * 2


if __name__ == '__main__':
    # 4 chunks
    array = da.ones((100, 100), chunks=(25, -1))
    out = array.map_blocks(func, meta=array)

    print("using threads scheduler")
    with ProgressBar():
        out2 = out.compute(scheduler='threads')

    print("using processes scheduler")
    with ProgressBar():
        out3 = out.compute(scheduler='processes')
```
The _threads_ and _processes_ schedulers take about 10s and 40s, respectively. However, in the following example (using dask delayed), both take similar times: ~10-11s.
```python
from time import sleep

import dask
import dask.array as da
from dask.diagnostics import ProgressBar
import numpy as np


def func(data):
    sleep(10)
    return data * 2


if __name__ == '__main__':
    # 4 task
    delayed_out = [dask.delayed(func, pure=True)(data_) for data_ in [np.ones((100, 100))] * 4]
    arrays = [
        da.from_delayed(delayed_out_, dtype=float, shape=(100, 100))
        for delayed_out_ in delayed_out
    ]
    out = da.stack(arrays, axis=0)

    print("using threads scheduler")
    with ProgressBar():
        out2 = out.compute(scheduler='threads')

    print("using processes scheduler")
    with ProgressBar():
        out3 = out.compute(scheduler='processes')
```
**Anything else we need to know?**:
**Environment**:
- Dask version: 2025.1.0
- Python version: 3.12
- Operating System: windows 10
- Install method (conda, pip, source): conda
| open | 2025-02-05T20:11:06Z | 2025-03-17T02:05:03Z | https://github.com/dask/dask/issues/11718 | [
"scheduler",
"needs attention",
"bug"
] | ericpre | 2 |
comfyanonymous/ComfyUI | pytorch | 7,306 | --use-flash-attention does not speed up A100 | ### Expected Behavior
`--use-flash-attention` speeds up model inference.
### Actual Behavior
xformers and flash attention have the same speed.
Below is my information:

I have enabled flash attention, but the speed is the same as with xformers.
### Steps to Reproduce
python main.py --listen 0.0.0.0 --port 8008 --preview-method auto --cuda-device 0 --gpu-only --use-flash-attention --disable-xformers
vs
python main.py --listen 0.0.0.0 --port 8008 --preview-method auto --cuda-device 0 --gpu-only
### Debug Logs
```powershell
As above
```
### Other
_No response_ | closed | 2025-03-19T01:49:16Z | 2025-03-20T05:35:49Z | https://github.com/comfyanonymous/ComfyUI/issues/7306 | [
"Potential Bug"
] | hanggun | 2 |
lorien/grab | web-scraping | 170 | Bug or ¿feature? with links without http and links with http. | It seems there is some kind of issue when mixing URLs with http and without http.
Example:
> import grab
> g=grab.Grab()
> g.go("google.com").select("*").text()
> 'Google (function(){window.google= ........' # Works fine.
> g.doc.url
> 'https://www.google.es/?gfe_rd=cr&ei=fNG1VsPbMI-t8wex4LXoBA&gws_rd=ssl' # Fine too
> g.go("http://google.com").select("*").text()
> 'Google (function(){window.google= ........' # Still working fine.
> g.doc.url
> 'https://www.google.es/?gfe_rd=cr&ei=FtK1VteYD4-t8wex4LXoBA&gws_rd=ssl' # Fine too
Now here comes when it fails:
> g.go("google.com").select("*").text() # Now again without http
> '404 Not Found'
> g.doc.url
> 'http://google.com/google.com' # Oops, that looks wrong.
Obviously if I do:
> g.go("http://google.com")
> g.go("maps")
> g.doc.url
> 'https://www.google.com:443/maps'
In this case it might even be useful if you have to load the main page anyway; but if the main page was not needed, it would waste resources.
I looked around and, well, I'm not sure if this is a feature... or simply a bug.
It caused some issues for me, but if it's not a bug I can simply fix the URLs so they all have http in front before calling g.go.
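For what it's worth, this matches how standard URL resolution works: a string without a scheme is treated as a relative path against the current document. The behavior above can be reproduced with just the standard library:

```python
from urllib.parse import urljoin

# With a scheme, the new URL replaces the base entirely:
print(urljoin("http://google.com/", "http://example.com/"))  # http://example.com/

# Without a scheme, it is resolved as a relative path against the base,
# which is exactly the 'http://google.com/google.com' result above:
print(urljoin("http://google.com/", "google.com"))  # http://google.com/google.com
print(urljoin("http://google.com/", "maps"))        # http://google.com/maps
```

So if this is intended behavior, prefixing bare hostnames with `http://` before calling g.go is probably the right workaround.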
| closed | 2016-02-06T11:12:25Z | 2016-02-09T08:36:27Z | https://github.com/lorien/grab/issues/170 | [] | mmarquezs | 2 |
AUTOMATIC1111/stable-diffusion-webui | pytorch | 16,891 | [Bug]: | ### Checklist
- [x] The issue exists after disabling all extensions
- [x] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [x] The issue exists in the current version of the webui
- [ ] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
torch don't istalling
### Steps to reproduce the problem
1. Start UI
2. Wait
### What should have happened?
WebUI should start
### What browsers do you use to access the UI ?
_No response_
### Sysinfo
can't generate
### Console logs
```Shell
venv "V:\stable-diffusion-webui\venv\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: v1.10.1
Commit hash: 82a973c04367123ae98bd9abdf80d9eda9b910e2
Installing torch and torchvision
Looking in indexes: https://pypi.org/simple, https://download.pytorch.org/whl/cu121
Collecting torch==2.1.2
Using cached https://download.pytorch.org/whl/cu121/torch-2.1.2%2Bcu121-cp310-cp310-win_amd64.whl (2473.9 MB)
ERROR: THESE PACKAGES DO NOT MATCH THE HASHES FROM THE REQUIREMENTS FILE. If you have updated the package versions, please update the hashes. Otherwise, examine the package contents carefully; someone may have tampered with them.
torch==2.1.2 from https://download.pytorch.org/whl/cu121/torch-2.1.2%2Bcu121-cp310-cp310-win_amd64.whl#sha256=9925143dece0e63c5404a72d59eb668ef78795418e96b576f94d75dcea6030b9:
Expected sha256 9925143dece0e63c5404a72d59eb668ef78795418e96b576f94d75dcea6030b9
Got 3edee9eaa79a7a477e6dbd294393416de5527aac9d81ce5a9b37df6759cda4b8
Traceback (most recent call last):
File "V:\stable-diffusion-webui\launch.py", line 48, in <module>
main()
File "V:\stable-diffusion-webui\launch.py", line 39, in main
prepare_environment()
File "V:\stable-diffusion-webui\modules\launch_utils.py", line 381, in prepare_environment
run(f'"{python}" -m {torch_command}', "Installing torch and torchvision", "Couldn't install torch", live=True)
File "V:\stable-diffusion-webui\modules\launch_utils.py", line 116, in run
raise RuntimeError("\n".join(error_bits))
RuntimeError: Couldn't install torch.
Command: "V:\stable-diffusion-webui\venv\Scripts\python.exe" -m pip install torch==2.1.2 torchvision==0.16.2 --extra-index-url https://download.pytorch.org/whl/cu121
Error code: 1
```
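The error means the wheel that pip downloaded (or found in its cache) does not match the sha256 pinned for it, which usually points to a corrupted or truncated download rather than actual tampering; clearing pip's cache (`pip cache purge`) and re-running often fixes it. If you want to check a downloaded file's hash yourself, a minimal sketch:

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Compute the sha256 hex digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk_size):
            digest.update(block)
    return digest.hexdigest()
```

Comparing that digest against the "Expected sha256" in the log tells you whether the local copy itself is corrupt.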
### Additional information
I added `git pull` and `--autolaunch`.
"bug-report"
] | Sensanko52123 | 2 |
aminalaee/sqladmin | asyncio | 669 | Object identifier values function not parsing correctly path parameters when a field is False | ### Checklist
- [X] The bug is reproducible against the latest release or `master`.
- [X] There are no similar issues or pull requests to fix it yet.
### Describe the bug
When trying to edit, delete or view a record whose primary key contains a `False` value, a 404 error is thrown. This is due to the `def object_identifier_values(id_string: str, model: type) -> tuple:` function in line 28. https://github.com/aminalaee/sqladmin/blob/9e163b48e49bb52433475ebb9ca2f25b04c55ac6/sqladmin/helpers.py#L224-L229
Notice that `get_column_python_type` returns a Python type, which is `bool`, but `part` has the value `"False"`, so calling `bool` on it returns `True`. This leads to the unexpected behavior, since the empty string is the only string for which `bool` returns `False` in Python.
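The core of it can be shown with two lines of plain Python, plus a sketch of a string-aware coercion (the `coerce` helper below is hypothetical, not sqladmin code):

```python
# Every non-empty string is truthy, so casting the path segment "False"
# with the column's Python type flips its meaning:
assert bool("False") is True
assert bool("") is False

# A string-aware coercion would special-case bool:
def coerce(python_type, raw: str):
    if python_type is bool:
        return raw.strip().lower() in ("true", "1")
    return python_type(raw)

assert coerce(bool, "False") is False
assert coerce(bool, "True") is True
assert coerce(int, "42") == 42
```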
### Steps to reproduce the bug
1.Create a model which contains a PK with a Bool value and other PK.
2. Create an admin view for that model with some records having false as part of the PK.
3. Try to access the admin view and the detail/edit view or deleting the record. You will get a 404 error.

### Expected behavior
You should get the correct record when looking for a primary key which contains a falsey value.
### Actual behavior
The record can not be found
### Debugging material

False is being cast to True
### Environment
- Python 3.11, MacOS, sqladmin==0.15.2
### Additional context
_No response_ | closed | 2023-11-13T00:05:08Z | 2023-11-13T09:36:38Z | https://github.com/aminalaee/sqladmin/issues/669 | [] | ncarvajalc | 0 |
pyro-ppl/numpyro | numpy | 1,650 | Question: How to construct the gaussian process model with student-t likelihood? | Hello, I tried to construct a Gaussian process model with a Student-t likelihood based on the following pages:
- https://num.pyro.ai/en/stable/examples/gp.html
- https://num.pyro.ai/en/stable/examples/dais_demo.html
However, `r_hat` will not go below 1.1. Is there anything else I can do to fix this problem? Thank you.
### Code
```python
import argparse
import os
import time

import numpy as np

import jax.numpy as jnp
import jax.random as random

import numpyro
import numpyro.distributions as dist
from numpyro.infer import (
    MCMC,
    NUTS,
    init_to_feasible,
    init_to_median,
    init_to_sample,
    init_to_uniform,
    init_to_value,
)


def kernel(X, Z, var, length, jitter=1.0e-6):
    deltaXsq = jnp.power((X[:, None] - Z) / length, 2.0)
    k = var * jnp.exp(-0.5 * deltaXsq) + jitter * jnp.eye(X.shape[0])
    return k


def model(X, Y):
    var = numpyro.sample("kernel_var", dist.LogNormal(0.0, 10.0))
    length = numpyro.sample("kernel_length", dist.LogNormal(0.0, 10.0))
    noise = numpyro.sample("likelihood_noise", dist.LogNormal(0.0, 10.0))
    df = numpyro.sample("likelihood_df", dist.LogNormal(0.0, 10.0))
    k = kernel(X, X, var, length)
    f = numpyro.sample(
        "f",
        dist.MultivariateNormal(loc=jnp.zeros(X.shape[0]), covariance_matrix=k),
    )
    numpyro.sample("obs", dist.StudentT(df=df, loc=f, scale=noise), obs=Y)


def run_inference(model, args, rng_key, X, Y):
    start = time.time()
    if args.init_strategy == "value":
        init_strategy = init_to_value(
            values={"kernel_var": 1.0, "kernel_noise": 0.05, "kernel_length": 0.5}
        )
    elif args.init_strategy == "median":
        init_strategy = init_to_median(num_samples=10)
    elif args.init_strategy == "feasible":
        init_strategy = init_to_feasible()
    elif args.init_strategy == "sample":
        init_strategy = init_to_sample()
    elif args.init_strategy == "uniform":
        init_strategy = init_to_uniform(radius=1)
    kernel = NUTS(model, init_strategy=init_strategy)
    mcmc = MCMC(
        kernel,
        num_warmup=args.num_warmup,
        num_samples=args.num_samples,
        num_chains=args.num_chains,
        thinning=args.thinning,
        progress_bar=False if "NUMPYRO_SPHINXBUILD" in os.environ else True,
    )
    mcmc.run(rng_key, X, Y)
    mcmc.print_summary()
    print("\nMCMC elapsed time:", time.time() - start)
    return mcmc.get_samples()


def get_data(N=30, sigma_obs=0.15, N_test=400):
    np.random.seed(0)
    X = jnp.linspace(-1, 1, N)
    Y = X + 0.2 * jnp.power(X, 3.0) + 0.5 * jnp.power(0.5 + X, 2.0) * jnp.sin(4.0 * X)
    Y += sigma_obs * np.random.randn(N)
    Y -= jnp.mean(Y)
    Y /= jnp.std(Y)

    assert X.shape == (N,)
    assert Y.shape == (N,)

    X_test = jnp.linspace(-1.3, 1.3, N_test)
    return X, Y, X_test


def main(args):
    X, Y, X_test = get_data(N=args.num_data)
    rng_key, rng_key_predict = random.split(random.PRNGKey(0))
    samples = run_inference(model, args, rng_key, X, Y)


if __name__ == "__main__":
    assert numpyro.__version__.startswith("0.13.2")
    parser = argparse.ArgumentParser(description="Gaussian Process example")
    parser.add_argument("-n", "--num-samples", nargs="?", default=1000, type=int)
    parser.add_argument("--num-warmup", nargs="?", default=1000, type=int)
    parser.add_argument("--num-chains", nargs="?", default=3, type=int)
    parser.add_argument("--thinning", nargs="?", default=2, type=int)
    parser.add_argument("--num-data", nargs="?", default=25, type=int)
    parser.add_argument("--device", default="cpu", type=str, help='use "cpu" or "gpu".')
    parser.add_argument(
        "--init-strategy",
        default="median",
        type=str,
        choices=["median", "feasible", "value", "uniform", "sample"],
    )
    args = parser.parse_args()

    numpyro.set_platform(args.device)
    numpyro.set_host_device_count(args.num_chains)

    main(args)
```
### Output
```
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
I0000 00:00:1695711481.764211 570 tfrt_cpu_pjrt_client.cc:349] TfrtCpuClient created.
Running chain 0: 100%|███████████████████████████████████████████████████████████████████████████| 2000/2000 [00:46<00:00, 43.34it/s]
Running chain 1: 100%|███████████████████████████████████████████████████████████████████████████| 2000/2000 [00:46<00:00, 43.34it/s]
Running chain 2: 100%|███████████████████████████████████████████████████████████████████████████| 2000/2000 [00:46<00:00, 43.34it/s]
mean std median 5.0% 95.0% n_eff r_hat
f[0] -0.01 0.02 0.00 -0.04 0.01 1.50 29.28
f[1] -0.01 0.02 0.00 -0.04 0.00 1.50 34.17
f[2] -0.01 0.02 0.00 -0.04 0.00 1.50 23.70
f[3] -0.01 0.02 0.00 -0.04 0.00 1.50 30.59
f[4] -0.01 0.02 0.00 -0.04 0.00 1.50 28.47
f[5] -0.01 0.02 0.00 -0.04 0.00 1.50 25.04
f[6] -0.01 0.02 0.00 -0.04 0.01 1.50 21.95
f[7] -0.01 0.02 0.00 -0.04 0.01 1.50 32.25
f[8] -0.01 0.02 0.00 -0.04 0.01 1.50 32.69
f[9] -0.01 0.02 0.00 -0.04 0.00 1.50 33.16
f[10] -0.01 0.02 0.00 -0.04 0.01 1.51 25.94
f[11] -0.01 0.02 0.00 -0.04 0.01 1.50 25.57
f[12] -0.01 0.02 0.00 -0.04 0.01 1.50 30.18
f[13] -0.01 0.02 0.00 -0.04 0.00 1.50 21.76
f[14] -0.01 0.02 0.00 -0.04 0.01 1.50 28.31
f[15] -0.01 0.02 -0.00 -0.04 0.01 1.51 22.66
f[16] -0.01 0.02 0.00 -0.04 0.01 1.51 22.58
f[17] -0.01 0.02 0.00 -0.04 0.01 1.51 21.02
f[18] -0.01 0.02 0.00 -0.04 0.01 1.50 23.40
f[19] -0.01 0.02 0.00 -0.04 0.01 1.50 30.69
f[20] -0.01 0.02 0.00 -0.04 0.00 1.50 24.92
f[21] -0.01 0.02 0.00 -0.04 0.00 1.50 26.15
f[22] -0.01 0.02 0.00 -0.04 0.01 1.50 28.08
f[23] -0.01 0.02 0.00 -0.04 0.00 1.50 26.85
f[24] -0.01 0.02 0.00 -0.04 0.00 1.50 29.77
kernel_length 3063.60 3376.91 1322.14 84.83 7787.93 1.50 954.54
kernel_var 0.04 0.06 0.01 0.00 0.12 1.50 730.53
likelihood_df 1206050.88 146733.44 1150100.00 1060936.38 1409329.50 1.50 128.50
likelihood_noise 1.05 0.14 1.12 0.86 1.18 1.50 125.45
Number of divergences: 0
MCMC elapsed time: 46.6217155456543
``` | closed | 2023-09-26T07:16:24Z | 2023-09-26T21:40:59Z | https://github.com/pyro-ppl/numpyro/issues/1650 | [] | Display-ST | 2 |
ultralytics/yolov5 | pytorch | 12,997 | Guide on how to utilize 'Weighted Loss' method in yolov5 custom training | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
I am trying to detect football players, referees and ball on videos of mounted camera in football matches. From the nature of football analysis, I have an imbalanced dataset, overrepresented for player class and under-representation for class referee and ball, specifically for ball class, which is highly underrepresented. I am sure that my labels are accurate.
Below is the method I tried but did not help:
- manipulated ball and referee class instances in big part of dataset by excluding player class (because I have enough player class instances). However, this decreased model performance (usually not recommended by experts. I found it out later).
Now, I am going to try the **`Weighted Loss`** method but I am not sure how to do it. I also found [this](https://github.com/ultralytics/ultralytics/issues/2703) for yolov8 but it was not clear. So, it would be great, if you give me a detailed guide on how to modify code. I am new.
FIY: My dataset class instance distribution `459:80:14` in data.yaml file order
Thank you so much !!!
### Additional
_No response_ | closed | 2024-05-10T08:40:59Z | 2024-11-07T22:29:08Z | https://github.com/ultralytics/yolov5/issues/12997 | [
"question",
"Stale"
] | dilwolf | 5 |
Asabeneh/30-Days-Of-Python | python | 641 | Muito Bom | Excelente repositório sobre python para quem está começando!!! | open | 2025-01-17T00:16:43Z | 2025-01-17T00:16:43Z | https://github.com/Asabeneh/30-Days-Of-Python/issues/641 | [] | lucasmpeg | 0 |
pandas-dev/pandas | data-science | 60,786 | ENH: generic `save` and `read` methods for DataFrame | ### Feature Type
- [x] Adding new functionality to pandas
- [ ] Changing existing functionality in pandas
- [ ] Removing existing functionality in pandas
### Problem Description
Currently, pandas has separate IO methods for each file format (to_csv, read_parquet, etc.). This requires users to:
- Remember multiple method names
- Change code when switching formats
### Feature Description
A unified `save`/`read` API would simplify common IO operations while maintaining explicit control when needed:
- File type is inferred from the filepath extension, but a `format` arg can be passed to be explicit, raising an error in some cases where the inferred file type disagrees with passed file type.
- Both methods accept `**kwargs` and pass them along to the underlying file-type-specific pandas IO methods.
- Optionally, support some basic translation across discrepancies in arg names in existing IO methods (i.e. "usecols" in `read_csv` vs "columns" in `read_parquet`).
```
# Simplest happy path:
df.save('data.csv') # Uses to_csv
df = pd.read('data.parquet') # Uses read_parquet
# Optionally, be explicit about expected file type
df.save('data.csv', format="csv") # Uses to_csv
df = pd.read('data.parquet', format="parquet") # Uses read_parquet
# Raises ValueError for conflicting format info:
df.save('data.csv', format='parquet') # Conflicting types
df.save('data.txt', format='csv') # .txt implies text format
# Reading allows overrides for misnamed files (or should we require users to rename their files properly first?)
df = pd.read('mislabeled.txt', format='parquet')
# Not sure if we should allow save when inferred file type is not a standard type:
df.save('data', format='csv') # No extension, needs type
df.save('mydata.unknown', format='csv') # Unclear extension
```
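To make the intended semantics concrete, here is a rough sketch of the extension-inference rule (names are illustrative only, not a proposed implementation; this version allows format-only calls with no extension, which is one of the open questions above):

```python
from pathlib import Path

_EXT_TO_FORMAT = {".csv": "csv", ".parquet": "parquet", ".json": "json"}

def infer_format(path, format=None):
    """Resolve the file format from the extension and/or an explicit argument."""
    inferred = _EXT_TO_FORMAT.get(Path(path).suffix.lower())
    if format is None:
        if inferred is None:
            raise ValueError(f"cannot infer format from {path!r}; pass format=...")
        return inferred
    if inferred is not None and inferred != format:
        raise ValueError(f"{path!r} implies {inferred!r}, but format={format!r} was given")
    return format

print(infer_format("data.csv"))                # csv
print(infer_format("data", format="parquet"))  # parquet
```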
### Alternative Solutions
Existing functionality is OK, just not the simplest to use.
### Additional Context
_No response_ | open | 2025-01-25T01:18:47Z | 2025-02-01T23:28:04Z | https://github.com/pandas-dev/pandas/issues/60786 | [
"Enhancement",
"Needs Triage"
] | zkurtz | 2 |
FlareSolverr/FlareSolverr | api | 575 | [hdtime] (updating) The cookies provided by FlareSolverr are not valid | **Please use the search bar** at the top of the page and make sure you are not creating an already submitted issue.
Check closed issues as well, because your issue may have already been fixed.
### How to enable debug and html traces
[Follow the instructions from this wiki page](https://github.com/FlareSolverr/FlareSolverr/wiki/How-to-enable-debug-and-html-trace)
### Environment
* **FlareSolverr version**:
* **Last working FlareSolverr version**:
* **Operating system**:
* **Are you using Docker**: [yes/no]
* **FlareSolverr User-Agent (see log traces or / endpoint)**:
* **Are you using a proxy or VPN?** [yes/no]
* **Are you using Captcha Solver:** [yes/no]
* **If using captcha solver, which one:**
* **URL to test this issue:**
### Description
[List steps to reproduce the error and details on what happens and what you expected to happen]
### Logged Error Messages
[Place any relevant error messages you noticed from the logs here.]
[Make sure you attach the full logs with your personal information removed in case we need more information]
### Screenshots
[Place any screenshots of the issue here if needed]
| closed | 2022-11-07T04:48:46Z | 2022-11-08T02:36:27Z | https://github.com/FlareSolverr/FlareSolverr/issues/575 | [
"duplicate",
"invalid"
] | sunshineMr | 1 |
pywinauto/pywinauto | automation | 1,282 | Find Element by Name in Pywinauto | Is there a way to find an element by Name in pywinauto, just like in Selenium?
I found the following method, but it needs a control type.
```python
OK_btn_spec = app.window(title_re=".*Main Title").child_window(control_type="Button", title="OK")
```
But I am trying to write generic code which can take the Name of any element in the app and tell me whether it exists or not.
Is there a way to do that? | open | 2023-02-07T06:12:50Z | 2023-02-07T06:12:50Z | https://github.com/pywinauto/pywinauto/issues/1282 | [] | sarthakmahapatra9 | 0 |
jupyter-book/jupyter-book | jupyter | 1,761 | Right navigation bar highlights preceding section | ### Describe the bug
**context**
On pages with right navigation bars (e.g., [this page](https://jupyterbook.org/en/stable/content/content-blocks.html#indexes) of the jupyter-book documentation) the browser highlights the section that precedes the selected section.
**expectation**
When clicking on _Indexes_, I expected the browser to go to the _Indexes_ section of the active page and highlight it in the right navigation bar.
**bug**
While the browser goes to the correct section, it highlights the preceding section instead (_Glossaries_).
### Reproduce the bug
1. Go to https://jupyterbook.org/en/stable/content/content-blocks.html#indexes
2. On the right navigation bar, click on _Indexes_
3. The browser highlights the _Glossaries_ section
### List your environment
Tested on Chrome for Windows and Firefox for Windows. | open | 2022-06-21T13:54:03Z | 2022-06-21T14:06:18Z | https://github.com/jupyter-book/jupyter-book/issues/1761 | [
"bug"
] | paulremo | 1 |
noirbizarre/flask-restplus | api | 102 | app.test_client() results is not JSON serializable in 0.8.1 | Upgraded to 0.8.1 and my unit tests are now producing this error:
TypeError: <MagicMock name='mock.validate_payload()()' id='139794117372168'> is not JSON serializable.
When I downgrade to 0.8.0, the problem goes away.
Example erring test:
```
from unittest import TestCase
import json

from vhfs.vm.api.app import app, db


class VmViewPhase2Tests(TestCase):
    def setUp(self):
        db.create_all()
        self.app = app.test_client()

    def tearDown(self):
        db.session.remove()
        db.drop_all()

    def test_get_vms(self):
        resp = self.app.get('/api/v1/vms/')
        self.assertEqual(resp.status_code, 200)
        data = json.loads(resp.data.decode('utf-8'))
        self.assertEqual(data, {'results': []})
```
Here's the stack trace:
```
======================================================================
ERROR: test_get_vms (test_vm_phase2.VmViewPhase2Tests)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/vagrant/source/hfs/vhfs/vm/api/web/test/test_vm_phase2.py", line 19, in test_get_vms
resp = self.app.get('/api/v1/vms/')
File "/home/vagrant/source/hfs/vhfs/vm/api/.venv-vhfs-vm-api/lib/python3.5/site-packages/werkzeug/test.py", line 778, in get
return self.open(*args, **kw)
File "/home/vagrant/source/hfs/vhfs/vm/api/.venv-vhfs-vm-api/lib/python3.5/site-packages/flask/testing.py", line 108, in open
follow_redirects=follow_redirects)
File "/home/vagrant/source/hfs/vhfs/vm/api/.venv-vhfs-vm-api/lib/python3.5/site-packages/werkzeug/test.py", line 751, in open
response = self.run_wsgi_app(environ, buffered=buffered)
File "/home/vagrant/source/hfs/vhfs/vm/api/.venv-vhfs-vm-api/lib/python3.5/site-packages/werkzeug/test.py", line 668, in run_wsgi_app
rv = run_wsgi_app(self.application, environ, buffered=buffered)
File "/home/vagrant/source/hfs/vhfs/vm/api/.venv-vhfs-vm-api/lib/python3.5/site-packages/werkzeug/test.py", line 871, in run_wsgi_app
app_rv = app(environ, start_response)
File "/home/vagrant/source/hfs/vhfs/vm/api/.venv-vhfs-vm-api/lib/python3.5/site-packages/flask/app.py", line 1836, in __call__
return self.wsgi_app(environ, start_response)
File "/home/vagrant/source/hfs/vhfs/vm/api/.venv-vhfs-vm-api/lib/python3.5/site-packages/flask/app.py", line 1820, in wsgi_app
response = self.make_response(self.handle_exception(e))
File "/home/vagrant/source/hfs/vhfs/vm/api/.venv-vhfs-vm-api/lib/python3.5/site-packages/flask_restful/__init__.py", line 270, in error_router
return original_handler(e)
File "/home/vagrant/source/hfs/vhfs/vm/api/.venv-vhfs-vm-api/lib/python3.5/site-packages/flask/app.py", line 1403, in handle_exception
reraise(exc_type, exc_value, tb)
File "/home/vagrant/source/hfs/vhfs/vm/api/.venv-vhfs-vm-api/lib/python3.5/site-packages/flask/_compat.py", line 32, in reraise
raise value.with_traceback(tb)
File "/home/vagrant/source/hfs/vhfs/vm/api/.venv-vhfs-vm-api/lib/python3.5/site-packages/flask/app.py", line 1817, in wsgi_app
response = self.full_dispatch_request()
File "/home/vagrant/source/hfs/vhfs/vm/api/.venv-vhfs-vm-api/lib/python3.5/site-packages/flask/app.py", line 1477, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/home/vagrant/source/hfs/vhfs/vm/api/.venv-vhfs-vm-api/lib/python3.5/site-packages/flask_restful/__init__.py", line 270, in error_router
return original_handler(e)
File "/home/vagrant/source/hfs/vhfs/vm/api/.venv-vhfs-vm-api/lib/python3.5/site-packages/flask/app.py", line 1381, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/home/vagrant/source/hfs/vhfs/vm/api/.venv-vhfs-vm-api/lib/python3.5/site-packages/flask/_compat.py", line 32, in reraise
raise value.with_traceback(tb)
File "/home/vagrant/source/hfs/vhfs/vm/api/.venv-vhfs-vm-api/lib/python3.5/site-packages/flask/app.py", line 1475, in full_dispatch_request
rv = self.dispatch_request()
File "/home/vagrant/source/hfs/vhfs/vm/api/.venv-vhfs-vm-api/lib/python3.5/site-packages/flask/app.py", line 1461, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/home/vagrant/source/hfs/vhfs/vm/api/.venv-vhfs-vm-api/lib/python3.5/site-packages/flask_restful/__init__.py", line 475, in wrapper
return self.make_response(data, code, headers=headers)
File "/home/vagrant/source/hfs/vhfs/vm/api/.venv-vhfs-vm-api/lib/python3.5/site-packages/flask_restful/__init__.py", line 504, in make_response
resp = self.representations[mediatype](data, *args, **kwargs)
File "/home/vagrant/source/hfs/vhfs/vm/api/.venv-vhfs-vm-api/lib/python3.5/site-packages/flask_restful/representations/json.py", line 20, in output_json
dumped = dumps(data, **settings) + "\n"
File "/opt/python35/lib/python3.5/json/__init__.py", line 237, in dumps
**kw).encode(obj)
File "/opt/python35/lib/python3.5/json/encoder.py", line 201, in encode
chunks = list(chunks)
File "/opt/python35/lib/python3.5/json/encoder.py", line 436, in _iterencode
o = _default(o)
File "/opt/python35/lib/python3.5/json/encoder.py", line 180, in default
raise TypeError(repr(o) + " is not JSON serializable")
TypeError: <MagicMock name='mock.validate_payload()()' id='139794117372168'> is not JSON serializable
```
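The failure is reproducible without Flask at all: `json.dumps` cannot serialize a `MagicMock`, and the mock's name in the message (`mock.validate_payload()()`) suggests some patched `validate_payload` is leaking its return value into the response payload. A minimal stdlib repro:

```python
import json
from unittest.mock import MagicMock

payload = {"results": MagicMock()}  # anything a patched call returned

try:
    json.dumps(payload)
    error_message = None
except TypeError as exc:
    error_message = str(exc)

print(error_message)  # mentions that MagicMock is not JSON serializable
```

So the 0.8.0 to 0.8.1 difference is probably that 0.8.1 started putting that mocked value into the response, rather than a JSON bug itself.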
| closed | 2015-11-30T19:37:24Z | 2015-12-11T09:21:41Z | https://github.com/noirbizarre/flask-restplus/issues/102 | [] | gddk | 5 |
Yorko/mlcourse.ai | scikit-learn | 394 | Topic 2: wrong direction on chart in section 4.3 t-SNE |

Wrong direction on the chart in the article on GitHub (https://mlcourse.ai/notebooks/blob/master/jupyter_english/topic02_visual_data_analysis/topic2_visual_data_analysis.ipynb): "__south-west__".
But it's correct in the article on Medium (https://medium.com/open-machine-learning-course/open-machine-learning-course-topic-2-visual-data-analysis-in-python-846b989675cd), as there is another version of the chart there:

It's not a critical inaccuracy, but a little embarrassing. | closed | 2018-10-19T17:02:45Z | 2018-10-26T09:50:41Z | https://github.com/Yorko/mlcourse.ai/issues/394 | [] | ptaiga | 1 |
pandas-dev/pandas | python | 61,043 | BUG: `.str.replace()` with capture groups does not play nice with string methods | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
Code
```python
import pandas as pd
c = pd.Series("THE QUICK BROWN FOX JUMPS OVER THE LAZY DOG")
x, y, z = "\\b(FOX|THE)\\b", "_ABC_", "\\1_ABC_"
print(c.str.replace(x, y.lower(), regex=True))
print(c.str.replace(x, z.lower(), regex=True))
```
Output
```
0 _abc_ QUICK BROWN _abc_ JUMPS OVER _abc_ LAZY DOG
dtype: object
0 THE_abc_ QUICK BROWN FOX_abc_ JUMPS OVER THE_a...
dtype: object
```
### Issue Description
The `.lower()` string method inconsistently modifies the `repl` argument when the latter includes a regex capture group.
### Expected Behavior
I would expect `.lower()` to modify all characters in `repl`, including those in the capture group (or a warning stating otherwise).
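For object-dtype columns pandas delegates `.str.replace(..., regex=True)` to the standard `re` module, so both the reported behavior and a workaround that lowercases the captured text too can be sketched with `re.sub` alone:

```python
import re

s = "THE QUICK BROWN FOX JUMPS OVER THE LAZY DOG"
pattern = r"\b(FOX|THE)\b"

# .lower() changes only the literal characters of the template string;
# the "\1" backreference token is untouched, so the captured text keeps its case.
kept_case = re.sub(pattern, r"\1_ABC_".lower(), s)

# Passing a callable transforms the captured group itself
# (Series.str.replace accepts the same callable when regex=True).
lowered = re.sub(pattern, lambda m: m.group(1).lower() + "_abc_", s)
```

Here `kept_case` reproduces the second (surprising) output from the example above, while `lowered` matches the expected behavior.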
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 0691c5cf90477d3503834d983f69350f250a6ff7
python : 3.10.16
python-bits : 64
OS : Darwin
OS-release : 24.3.0
Version : Darwin Kernel Version 24.3.0: Thu Jan 2 20:24:16 PST 2025; root:xnu-11215.81.4~3/RELEASE_ARM64_T6000
machine : arm64
processor : arm
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 2.2.3
numpy : 2.2.2
pytz : 2025.1
dateutil : 2.9.0.post0
pip : 25.0
Cython : None
sphinx : None
IPython : 8.33.0
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.13.3
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : 2025.2.0
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : 3.1.5
lxml.etree : None
matplotlib : None
numba : None
numexpr : None
odfpy : None
openpyxl : 3.1.5
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : 19.0.1
pyreadstat : None
pytest : None
python-calamine : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : 2.0.1
xlsxwriter : None
zstandard : None
tzdata : 2025.1
qtpy : None
pyqt5 : None
</details> | closed | 2025-03-03T22:00:55Z | 2025-03-03T22:57:07Z | https://github.com/pandas-dev/pandas/issues/61043 | [
"Bug",
"Strings"
] | noahblakesmith | 1 |
sammchardy/python-binance | api | 1,341 | The interface 'GET /fapi/v1/balance' will no longer be supported from 2023-07-15. | **Describe the bug**
The interface 'GET /fapi/v1/balance' will no longer be supported from 2023-07-15.

This will affect some interfaces.
I will modify and submit the PR later
**To Reproduce**
Code snippet to reproduce the behavior:
**Expected behavior**
A clear and concise description of what you expected to happen.
**Environment (please complete the following information):**
- Python version: [e.g. 3.5]
- Virtual Env: [e.g. virtualenv, conda]
- OS: [e.g. Mac, Ubuntu]
- python-binance version
**Logs or Additional context**
Add any other context about the problem here.
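A stdlib-only sketch of querying the replacement endpoint directly — the path `/fapi/v2/balance` comes from Binance's USDⓈ-M futures API docs, and the HMAC-SHA256 signing of the query string is simplified to the minimum here:

```python
import hashlib
import hmac
import json
import time
import urllib.request
from urllib.parse import urlencode

BASE = "https://fapi.binance.com"

def sign(secret: str, query: str) -> str:
    # Binance signs the raw query string with HMAC-SHA256 of the API secret.
    return hmac.new(secret.encode(), query.encode(), hashlib.sha256).hexdigest()

def futures_balance_v2(api_key: str, secret: str):
    query = urlencode({"timestamp": int(time.time() * 1000)})
    url = f"{BASE}/fapi/v2/balance?{query}&signature={sign(secret, query)}"
    req = urllib.request.Request(url, headers={"X-MBX-APIKEY": api_key})
    with urllib.request.urlopen(req) as resp:  # network call; needs valid keys
        return json.loads(resp.read())
```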
| open | 2023-07-24T02:35:06Z | 2023-08-09T03:58:55Z | https://github.com/sammchardy/python-binance/issues/1341 | [] | zweix123 | 4 |
google-research/bert | nlp | 1,121 | Create and Load my own Pre-Training data from Scratch | I'm a student and I'm currently doing research on a "Q&A System using BERT". I'm also new to NLP. From some of the sources I've read (papers, GitHub forums, etc.) I learned that BERT has its own pre-training data, obtained from a large corpus such as Wikipedia.
What I want to know is **how can I create my own pre-training data**? In my Q&A research the data I'm using comes from Quora and is in **Indonesian**. If I need to run any file, which file should I run?
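The repo itself ships the two scripts for this. A hedged recipe follows — the corpus/output file paths are placeholders, and `$BERT_BASE_DIR` is assumed to point at a downloaded checkpoint; for Indonesian text the multilingual cased checkpoint's `vocab.txt` is the natural choice, since Indonesian is among its languages:

```shell
# Step 1: turn a plain-text corpus (one sentence per line, blank line between
# documents) into masked-LM/NSP training examples.
python create_pretraining_data.py \
  --input_file=./quora_id_corpus.txt \
  --output_file=/tmp/tf_examples.tfrecord \
  --vocab_file=$BERT_BASE_DIR/vocab.txt \
  --do_lower_case=False \
  --max_seq_length=128 \
  --max_predictions_per_seq=20 \
  --masked_lm_prob=0.15 \
  --random_seed=12345 \
  --dupe_factor=5

# Step 2: continue pre-training from the multilingual checkpoint rather than
# from scratch (drop --init_checkpoint to truly start from scratch).
python run_pretraining.py \
  --input_file=/tmp/tf_examples.tfrecord \
  --output_dir=/tmp/pretraining_output \
  --do_train=True \
  --bert_config_file=$BERT_BASE_DIR/bert_config.json \
  --init_checkpoint=$BERT_BASE_DIR/bert_model.ckpt \
  --train_batch_size=32 \
  --max_seq_length=128 \
  --max_predictions_per_seq=20 \
  --num_train_steps=100000 \
  --num_warmup_steps=10000 \
  --learning_rate=2e-5
```

`--do_lower_case=False` matches the cased multilingual checkpoint; use `True` only with an uncased vocab.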
Thank you so much in advance. | open | 2020-07-13T04:12:39Z | 2020-07-19T18:06:33Z | https://github.com/google-research/bert/issues/1121 | [] | dhimasyoga16 | 2 |
geex-arts/django-jet | django | 415 | .ttf and .woff icons not displaying when using S3 | When I use AWS S3, the admin requests
`static/jet/css/icons/fonts/jet-icons.ttf?415d6s` & `static/jet/css/icons/fonts/jet-icons.woff?415d6`, but S3 encodes these file URLs as `static/jet/css/icons/fonts/jet-icons.ttf%3F415d6s` and `static/jet/css/icons/fonts/jet-icons.woff%3F415d6` respectively. Any suggestions?
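One common workaround is to strip the `?v=`-style cache-busting suffix before the storage builds the S3 key. A minimal sketch — the custom-storage wiring assumes django-storages and is shown only as a comment:

```python
from urllib.parse import urlsplit

def strip_cache_buster(name: str) -> str:
    # "static/jet/.../jet-icons.woff?415d6" -> "static/jet/.../jet-icons.woff"
    return urlsplit(name).path

# Hypothetical wiring into a custom storage (requires django-storages):
# class StaticStorage(S3Boto3Storage):
#     def url(self, name, parameters=None, expire=None):
#         return super().url(strip_cache_buster(name), parameters, expire)
```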
`static/jet/css/icons/fonts/jet-icons.woff%3415d6` accordingly. Any suggestions? | open | 2019-11-14T23:26:19Z | 2024-02-11T08:21:53Z | https://github.com/geex-arts/django-jet/issues/415 | [] | drewburnett | 1 |
ARM-DOE/pyart | data-visualization | 816 | read cinrad error | I have installed the pyart module on my Windows machine and run the example code successfully. However, there are some errors when I read CINRAD data using the 'plot_ppi_map' function. Could anybody help me out with this? Thank you so much.
...
display.plot_ppi_map
in scans [0, 2, 4, 5, 6, 7, 8, 9, 10] for moment REF.
UserWarning)
altitude: <ndarray of type: float64 and shape: (1,)>
altitude_agl: None
antenna_transition: None
azimuth: <ndarray of type: float64 and shape: (4002,)>
elevation: <ndarray of type: float32 and shape: (4002,)>
fields:
REF: <ndarray of type: float32 and shape: (4002, 1840)>
VEL: <ndarray of type: float32 and shape: (4002, 1840)>
SW: <ndarray of type: float32 and shape: (4002, 1840)>
fixed_angle: <ndarray of type: float32 and shape: (11,)>
instrument_parameters:
unambiguous_range: <ndarray of type: float32 and shape: (4002,)>
nyquist_velocity: <ndarray of type: float32 and shape: (4002,)>
latitude: <ndarray of type: float64 and shape: (1,)>
longitude: <ndarray of type: float64 and shape: (1,)>
nsweeps: 11
ngates: 1840
nrays: 4002
radar_calibration: None
range: <ndarray of type: float32 and shape: (1840,)>
scan_rate: None
scan_type: ppi
sweep_end_ray_index: <ndarray of type: int32 and shape: (11,)>
sweep_mode: <ndarray of type: |S20 and shape: (11,)>
sweep_number: <ndarray of type: int32 and shape: (11,)>
sweep_start_ray_index: <ndarray of type: int32 and shape: (11,)>
target_scan_rate: None
time: <ndarray of type: float64 and shape: (4002,)>
metadata:
Conventions: CF/Radial instrument_parameters
version: 1.3
title:
institution:
references:
source:
history:
comment:
instrument_name:
original_container: CINRAD SA
Traceback (most recent call last):
File "<ipython-input-70-09694f763b5c>", line 1, in <module>
runfile('D:/works/pyart/pyart_tutorial/test.py', wdir='D:/works/pyart/pyart_tutorial')
File "d:\Anaconda3\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 704, in runfile
execfile(filename, namespace)
File "d:\Anaconda3\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 108, in execfile
exec(compile(f.read(), filename, 'exec'), namespace)
File "D:/works/pyart/pyart_tutorial/test.py", line 119, in <module>
mask_outside = False, cmap = 'pyart_NWSRef') #pyart.graph.cm.NWSRef)
File "C:\Users\lsj\AppData\Roaming\Python\Python37\site-packages\pyart\graph\radarmapdisplay.py", line 270, in plot_ppi_map
resolution=resolution, ax=ax, **kwargs)
File "d:\Anaconda3\lib\site-packages\mpl_toolkits\basemap\__init__.py", line 753, in __init__
llcrnrlon,llcrnrlat,urcrnrlon,urcrnrlat = _choosecorners(width,height,**projparams)
File "d:\Anaconda3\lib\site-packages\mpl_toolkits\basemap\__init__.py", line 5118, in _choosecorners
p = pyproj.Proj(kwargs)
File "d:\Anaconda3\lib\site-packages\pyproj\__init__.py", line 362, in __new__
return _proj.Proj.__new__(self, projstring)
File "_proj.pyx", line 129, in _proj.Proj.__cinit__
RuntimeError: b'conic lat_1 = -lat_2'
... | closed | 2019-02-28T06:23:09Z | 2020-03-26T20:24:21Z | https://github.com/ARM-DOE/pyart/issues/816 | [
"component: pyart.io",
"Further details needed"
] | cycle13 | 5 |
WZMIAOMIAO/deep-learning-for-image-processing | deep-learning | 703 | hi | When computing the loss with
for name, x in inputs.items:
it raises the error:
AttributeError: 'Tensor' object has no attribute 'items'
What could be the cause of this?
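Two things stand out (a sketch, not specific to this repo's code): `inputs.items` is missing its call parentheses, and the error message says `inputs` is a plain `Tensor`, so it is not a dict at all. Torchvision-style segmentation models, for example, return a dict of outputs only when the auxiliary head is enabled; a defensive dispatch handles both cases:

```python
def compute_loss(inputs, target, criterion):
    # Models that return a dict of outputs (e.g. {"out": ..., "aux": ...})
    # are summed over; a plain tensor output is passed straight through.
    # A plain tensor has no .items() -- hence the AttributeError above.
    if isinstance(inputs, dict):
        return sum(criterion(x, target) for x in inputs.values())
    return criterion(inputs, target)
```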
| closed | 2022-12-01T05:43:33Z | 2022-12-03T05:20:49Z | https://github.com/WZMIAOMIAO/deep-learning-for-image-processing/issues/703 | [] | 7788fine | 1 |
django-import-export/django-import-export | django | 1,763 | skip_row doesn't work correctly | Even with skip_row=True and report_skipped=False set, and the other options left untouched, rows were not skipped. Until I changed the source, I received errors saying that my fields were None.
TypeError: __str__ returned non-string (type NoneType)
I sat for hours correcting and trying everything, thinking the problem was in my code. When I applied the correction below, everything worked: I stopped getting errors and rows were skipped according to my logic.
[file [resources.py] line 819.](https://github.com/django-import-export/django-import-export/blob/493ff3c65131814ecd5ef5d12ee547c60bc253fd/import_export/resources.py#L819C27-L819C55) Adding an indent moved the `else` inward.
Please look into this case. I'm on Linux, importing an .xlsx file through the admin panel. I did everything as in the documentation. If something is unclear, ask and I will show you whatever you need. | closed | 2024-02-29T14:13:54Z | 2024-03-02T08:04:50Z | https://github.com/django-import-export/django-import-export/issues/1763 | [
"bug"
] | ikhasanmusaev | 15 |
ultralytics/yolov5 | machine-learning | 13,199 | How to reduce the size of label and fontsize | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
Hello, I am trying to reduce the labeling size since my images are only 240*240 and I could not see clearly without doing this. However, I have read that some others have posted this question before and I could not find neither 'plot_one_box' nor 'box_label'. Could anyone help me with it?

### Additional
_No response_ | open | 2024-07-18T02:20:46Z | 2024-10-20T19:50:20Z | https://github.com/ultralytics/yolov5/issues/13199 | [
"question"
] | Kelly02140 | 4 |
jupyter/nbviewer | jupyter | 731 | Branches can be removed | There are currently two non-master branches on this repo that could be removed.
* rgbkrk-swapper: has no commits that aren't on master
* add-base-url-support: has a commit that is the same as the change proposed in #729 | closed | 2017-10-13T04:50:27Z | 2017-10-14T03:21:35Z | https://github.com/jupyter/nbviewer/issues/731 | [] | pelson | 1 |
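The cleanup itself is one command per branch, with the branch names taken from the list above (push access required):

```shell
git push origin --delete rgbkrk-swapper
git push origin --delete add-base-url-support
```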
yunjey/pytorch-tutorial | deep-learning | 162 | ImportError: cannot import name 'get_backend' matplotlib | got this error while running this line:
python build_vocab.py
If anyone knows how to solve it, then please help.
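Without more of the traceback this is only a guess, but the two usual causes of a `cannot import name ...` error from matplotlib are a stale/half-upgraded install and a local file shadowing the package — both checkable from the shell:

```shell
# Reinstall in case the package is stale or half-upgraded
pip install --upgrade --force-reinstall matplotlib

# Make sure no local matplotlib.py shadows the real package
ls *.py | grep -i '^matplotlib'
```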
| closed | 2019-03-04T17:06:06Z | 2019-03-04T21:06:28Z | https://github.com/yunjey/pytorch-tutorial/issues/162 | [] | anjalinagel12 | 1 |