repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
tqdm/tqdm | jupyter | 1,342 | Make tqdm(disable=None) default, instead of tqdm(disable=False) | - [x] I have marked all applicable categories:
+ [x] documentation request (i.e. "X is missing from the documentation." If instead I want to ask "how to use X?" I understand [StackOverflow#tqdm] is more appropriate)
+ [ ] new feature request
- [x] I have visited the [source website], and in particular
read the [known issues]
- [x] I have searched through the [issue tracker] for duplicates
- [x] I have mentioned version numbers, operating system and
environment, where applicable:
```python
import tqdm, sys
print(tqdm.__version__, sys.version, sys.platform)
```
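For reference, a minimal stdlib sketch of the auto-disable check this issue asks to make the default (tqdm's real implementation may differ in details):

```python
import io
import sys

def bar_disabled(stream) -> bool:
    # Sketch of disable=None semantics: suppress the bar whenever the
    # output stream is not a TTY (cron jobs, redirected logs, CI).
    isatty = getattr(stream, "isatty", None)
    return not (callable(isatty) and isatty())

print(bar_disabled(io.StringIO()))  # True: a log file is not a TTY
print(bar_disabled(sys.stderr))     # False in an interactive terminal
```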
From my personal experience (and that of many others), a progress bar is a great tool during development but a nightmare for cron jobs and logs. It's hard (if not impossible) to find a case in which writing a progress bar to a log file can be justified as useful, or even necessary. I think it would be a wise choice to turn tqdm's output off for all non-TTY outputs BY DEFAULT. | open | 2022-07-11T13:30:35Z | 2024-08-05T11:40:11Z | https://github.com/tqdm/tqdm/issues/1342 | [] | sunyj | 3 |
yzhao062/pyod | data-science | 272 | Some PyOD models will fail when used with SUOD | The reason is that sklearn.clone will lead to issues if the hyperparameters are not handled properly.
The problem can be reproduced by cloning the models:
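The repro snippet itself isn't shown above, but the contract that `sklearn.clone` enforces can be sketched with plain Python (no sklearn or pyod needed; the class and parameter below are hypothetical): clone re-invokes `__init__` with `get_params()` and expects hyperparameters to round-trip unchanged.

```python
# Minimal sketch of sklearn's clone contract: clone() reads the
# estimator's params and calls Estimator(**params), then expects each
# stored hyperparameter to come back unchanged.
class BadDetector:  # hypothetical model that breaks the contract
    def __init__(self, contamination=0.1):
        self.contamination = contamination * 2  # mutated in __init__

def survives_clone(cls, **params):
    rebuilt = cls(**params)
    return all(getattr(rebuilt, k) == v for k, v in params.items())

print(survives_clone(BadDetector, contamination=0.1))  # False
```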
This includes:
* COD | open | 2021-01-14T22:20:13Z | 2022-06-17T12:51:48Z | https://github.com/yzhao062/pyod/issues/272 | [] | yzhao062 | 1 |
pytorch/pytorch | deep-learning | 148,970 | ONNX export drops namespace qualifier for custom operation | ### 🐛 Describe the bug
Here is a repro, modified from the example used on the PyTorch doc page for custom ONNX ops.
I expect the saved ONNX file to have a com.microsoft::Gelu node; the OnnxProgram seems to have the qualifier, but it's lost when the file is saved:
```python
import torch
import onnxscript
import onnx
class GeluModel(torch.nn.Module):
def forward(self, input_x):
return torch.ops.aten.gelu(input_x)
microsoft_op = onnxscript.values.Opset(domain="com.microsoft", version=1)
from onnxscript import FLOAT
@onnxscript.script(microsoft_op)
def custom_aten_gelu(self: FLOAT, approximate: str = "none") -> FLOAT:
return microsoft_op.Gelu(self)
x = torch.tensor([1.0])
onnx_program = torch.onnx.export(
GeluModel().eval(),
(x,),
dynamo=True,
custom_translation_table={
torch.ops.aten.gelu.default: custom_aten_gelu,
},
)
onnx_program.optimize()
print(onnx_program.model)
onnx_file_path="ms.onnx"
print("==============")
onnx_program.save(onnx_file_path)
onnx_model = onnx.load(onnx_file_path)
print(onnx.helper.printable_graph(onnx_model.graph))
```
The output; note the missing qualifier in the second printout:
```
python ms.py
'Gelu' is not a known op in 'com.microsoft'
/git/onnxscript/onnxscript/converter.py:823: FutureWarning: 'onnxscript.values.Op.param_schemas' is deprecated in version 0.1 and will be removed in the future. Please use '.op_signature' instead.
param_schemas = callee.param_schemas()
/git/onnxscript/onnxscript/converter.py:823: FutureWarning: 'onnxscript.values.OnnxFunction.param_schemas' is deprecated in version 0.1 and will be removed in the future. Please use '.op_signature' instead.
param_schemas = callee.param_schemas()
[torch.onnx] Obtain model graph for `GeluModel()` with `torch.export.export(..., strict=False)`...
/usr/local/lib/python3.12/dist-packages/torch/backends/mkldnn/__init__.py:78: UserWarning: TF32 acceleration on top of oneDNN is available for Intel GPUs. The current Torch version does not have Intel GPU Support. (Triggered internally at /pytorch/aten/src/ATen/Context.cpp:148.)
torch._C._set_onednn_allow_tf32(_allow_tf32)
[torch.onnx] Obtain model graph for `GeluModel()` with `torch.export.export(..., strict=False)`... ✅
[torch.onnx] Run decomposition...
/usr/local/lib/python3.12/dist-packages/torch/backends/mkldnn/__init__.py:78: UserWarning: TF32 acceleration on top of oneDNN is available for Intel GPUs. The current Torch version does not have Intel GPU Support. (Triggered internally at /pytorch/aten/src/ATen/Context.cpp:148.)
torch._C._set_onednn_allow_tf32(_allow_tf32)
[torch.onnx] Run decomposition... ✅
[torch.onnx] Translate the graph into ONNX...
[torch.onnx] Translate the graph into ONNX... ✅
<
ir_version=10,
opset_imports={'pkg.onnxscript.torch_lib.common': 1, 'com.microsoft': 1, '': 18},
producer_name='pytorch',
producer_version='2.7.0.dev20250310+cu128',
domain=None,
model_version=None,
>
graph(
name=main_graph,
inputs=(
%"input_x"<FLOAT,[1]>
),
outputs=(
%"gelu"<FLOAT,[1]>
),
) {
0 | # n0
%"gelu"<FLOAT,[1]> ⬅️ com.microsoft::Gelu(%"input_x")
return %"gelu"<FLOAT,[1]>
}
==============
graph main_graph (
%input_x[FLOAT, 1]
) {
%gelu = Gelu(%input_x)
return %gelu
}
```
@justinchuby @xadupre @titaiwangms
### Versions
Pytorch nightly | closed | 2025-03-11T16:05:21Z | 2025-03-11T18:20:48Z | https://github.com/pytorch/pytorch/issues/148970 | [
"module: onnx",
"triaged",
"onnx-triaged",
"onnx-needs-info"
] | borisfom | 5 |
akfamily/akshare | data-science | 5,573 | Error when fetching Jisilu convertible bond real-time data |
Traceback (most recent call last):
File "/opt/anaconda3/lib/python3.9/site-packages/pandas/core/indexes/base.py", line 3790, in get_loc
return self._engine.get_loc(casted_key)
File "index.pyx", line 152, in pandas._libs.index.IndexEngine.get_loc
File "index.pyx", line 181, in pandas._libs.index.IndexEngine.get_loc
File "pandas/_libs/hashtable_class_helper.pxi", line 7080, in pandas._libs.hashtable.PyObjectHashTable.get_item
File "pandas/_libs/hashtable_class_helper.pxi", line 7088, in pandas._libs.hashtable.PyObjectHashTable.get_item
KeyError: '强赎状态'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/aaa.py", line 16, in <module>
df = df[~df["强赎状态"].str.contains(" 已公告")]
File "/opt/anaconda3/lib/python3.9/site-packages/pandas/core/frame.py", line 3896, in __getitem__
indexer = self.columns.get_loc(key)
File "/opt/anaconda3/lib/python3.9/site-packages/pandas/core/indexes/base.py", line 3797, in get_loc
raise KeyError(key) from err
KeyError: '强赎状态' | closed | 2025-02-09T08:13:45Z | 2025-02-09T09:02:20Z | https://github.com/akfamily/akshare/issues/5573 | [
"bug"
] | paladin-dalao | 0 |
miguelgrinberg/microblog | flask | 183 | Ch 15 - Blueprints refactoring and Unit Testing reorganization issues. | Hi, I have followed along with the eBook to Chapter 15, where I learned how to refactor my Microblog code in order to better organize the files. Everything works OK after I refactored the code using Blueprints.
### My question is this:
If I want to move my Unit Testing file **tests.py** to a separate sub-folder called **testing** where I can organize several Unit Testing files in the future. How can I accomplish that?
I'm getting this error after moving the tests.py file to the **testing** folder. The file works when it's located directly under the microblog folder.
```
/microblog$ ls -l
microblog
|-- appz
|-- config.py
|-- logs
|-- microblog.py
|-- migrations
|-- run_Flask_server.sh
|-- testing
|-- tests.py
```
```
/microblog/testing$ python3 tests.py
Traceback (most recent call last):
File "tests.py", line 1, in <module>
from appz import db
ModuleNotFoundError: No module named 'appz'
```
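A self-contained sketch of what goes wrong and a common fix (directory and attribute names mirror the question; everything is built in a temp dir): running `tests.py` from `testing/` puts `testing/` on `sys.path`, not the project root, so `appz` can't be found.

```python
import os
import sys
import tempfile

# Build a throwaway project root containing an `appz` package, mirroring
# the layout described in the question.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "appz"))
with open(os.path.join(root, "appz", "__init__.py"), "w") as f:
    f.write("db = 'db'\n")

# Running tests.py from testing/ puts testing/ on sys.path, not the
# project root, so `import appz` fails. Adding the root to sys.path
# (or running `python -m unittest testing.tests` from the root) fixes it.
sys.path.insert(0, root)
import appz
print(appz.db)  # db
```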
The **tests.py** file is basically the same as in the book's example code archive. Here's a snippet.
The only difference is that I have renamed my folder to **appz** instead of **app**.
```
from appz import db
from appz import create_app
from config import Config
from datetime import datetime
from datetime import timedelta
import unittest
# Database Models
from appz.models import Post
from appz.models import User
class TestConfig(Config):
TESTING = True
SQLALCHEMY_DATABASE_URI = 'sqlite://'
class UserModelCase(unittest.TestCase):
def setUp(self):
self.app = create_app(TestConfig)
self.app_context = self.app.app_context()
self.app_context.push()
db.create_all()
def tearDown(self):
db.session.remove()
db.drop_all()
self.app_context.pop()
...
...
if __name__ == '__main__':
unittest.main(verbosity=2)
``` | closed | 2019-09-25T19:18:33Z | 2019-09-26T09:00:01Z | https://github.com/miguelgrinberg/microblog/issues/183 | [
"question"
] | mrbiggleswirth | 2 |
amidaware/tacticalrmm | django | 1,585 | Define and display URL Actions grouped as Client, Agent or Globally targeted. | Currently we define and deploy a pretty good handful of URL Actions that target either the client {{client.id}} or the agent {{agent.agent_id}}. All URL Actions are bunched together, so we end up scrolling past a lot of "agent" actions to get to a "client" action and vice versa.
So our problem is that all URL Actions show when selecting a client -> Run URL Action and when selecting an agent -> Run URL Action.
I propose that a flag be added to the URL Action manager to define whether it is a global, client, or agent level action, and that only actions flagged for a particular endpoint (client, agent, or global) be sorted and displayed.
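A rough sketch of the proposed flag (class, field, and function names below are hypothetical, not Tactical RMM's actual schema):

```python
from dataclasses import dataclass

@dataclass
class URLAction:
    name: str
    url: str
    scope: str = "global"  # proposed flag: "global", "client", or "agent"

def actions_for(target_scope, actions):
    # Global actions show everywhere; scoped ones only for their target.
    return [a for a in actions if a.scope in ("global", target_scope)]

actions = [
    URLAction("Reboot agent", "https://example.com/reboot", "agent"),
    URLAction("Client portal", "https://example.com/portal", "client"),
    URLAction("Docs", "https://example.com/docs", "global"),
]
print([a.name for a in actions_for("client", actions)])
# ['Client portal', 'Docs']
```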
This way only the intended URL Actions display in the URL Actions list for a client or agent. | open | 2023-08-04T14:32:48Z | 2024-08-10T22:05:10Z | https://github.com/amidaware/tacticalrmm/issues/1585 | [
"enhancement"
] | CubertTheDweller | 0 |
xmu-xiaoma666/External-Attention-pytorch | pytorch | 54 | Linear layer issue in the WeightedPermuteMLP code? | WeightedPermuteMLP uses several fully connected (Linear) layers; the code is at lines 21-23 of ViP.py:
```python
self.mlp_c=nn.Linear(dim,dim,bias=qkv_bias)
self.mlp_h=nn.Linear(dim,dim,bias=qkv_bias)
self.mlp_w=nn.Linear(dim,dim,bias=qkv_bias)
```
These linear layers all have input and output channel count dim, i.e. the number of channels stays unchanged.
In the forward pass, only mlp_c, which is fed x directly, is unproblematic:
```python
def forward(self,x) :
B,H,W,C=x.shape
c_embed=self.mlp_c(x)
S=C//self.seg_dim
h_embed=x.reshape(B,H,W,self.seg_dim,S).permute(0,3,2,1,4).reshape(B,self.seg_dim,W,H*S)
h_embed=self.mlp_h(h_embed).reshape(B,self.seg_dim,W,H,S).permute(0,3,2,1,4).reshape(B,H,W,C)
w_embed=x.reshape(B,H,W,self.seg_dim,S).permute(0,3,1,2,4).reshape(B,self.seg_dim,H,W*S)
w_embed=self.mlp_w(w_embed).reshape(B,self.seg_dim,H,W,S).permute(0,2,3,1,4).reshape(B,H,W,C)
weight=(c_embed+h_embed+w_embed).permute(0,3,1,2).flatten(2).mean(2)
weight=self.reweighting(weight).reshape(B,C,3).permute(2,0,1).softmax(0).unsqueeze(2).unsqueeze(2)
x=c_embed*weight[0]+w_embed*weight[1]+h_embed*weight[2]
x=self.proj_drop(self.proj(x))
```
The other two linear layers both run into problems when used.
Look at this step:
```python
h_embed=x.reshape(B,H,W,self.seg_dim,S).permute(0,3,2,1,4).reshape(B,self.seg_dim,W,H*S)
```
This last reshape changes the channel count to `H*S`; at runtime, if `H*S` is not equal to `C`, the following linear layer will fail. In practice this step is bound to go wrong.
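This can be checked with plain Python (no torch; the numbers below are illustrative): since `S = C // seg_dim`, the last dimension `H*S` that reaches `mlp_h` equals `C` exactly when `H == seg_dim`, which appears to be the implicit assumption in the official code.

```python
# Last dimension fed to mlp_h after the reshape above; mlp_h is
# nn.Linear(C, C), so anything other than C raises a shape error.
def h_branch_last_dim(H, C, seg_dim):
    S = C // seg_dim
    return H * S

C, seg_dim = 32, 8
print(h_branch_last_dim(8, C, seg_dim))   # 32 == C, works (H == seg_dim)
print(h_branch_last_dim(14, C, seg_dim))  # 56 != C, mlp_h would fail
```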
The code released with the paper handles this in a similar way; how can it be resolved? | open | 2022-06-02T08:51:55Z | 2022-06-02T08:52:10Z | https://github.com/xmu-xiaoma666/External-Attention-pytorch/issues/54 | [] | ZVChen | 0 |
cvat-ai/cvat | computer-vision | 9,097 | Incorrect data returned in frames meta request for a ground truth job | ### Actions before raising this issue
- [x] I searched the existing issues and did not find anything similar.
- [x] I read/searched [the docs](https://docs.cvat.ai/docs/)
### Steps to Reproduce
1. Create a task with a Ground Truth job, using the attached archive (Frame selection method: random).
(The content of each image corresponds to its image number in this archive.) E.g. this is image_10.jpg. Each image's resolution is random; this specific image has a resolution of 800x900.

2. Open the GT job and check the meta response. It reports a resolution of 1500x900, and the image is distorted.
<img width="1893" alt="Image" src="https://github.com/user-attachments/assets/acae3d62-2083-4f45-a01d-920c287ccb12" />
The image name may also be incorrect:
<img width="1162" alt="Image" src="https://github.com/user-attachments/assets/6664e99c-f328-4879-86f1-5dc51cd94e44" />
[images.zip](https://github.com/user-attachments/files/18766802/images.zip)
### Expected Behavior
_No response_
### Possible Solution
_No response_
### Context
_No response_
### Environment
```Markdown
commit db293b7024ff9252cfb8bb5c648364539d4a6f09
``` | closed | 2025-02-12T11:16:31Z | 2025-03-03T12:32:35Z | https://github.com/cvat-ai/cvat/issues/9097 | [
"bug"
] | bsekachev | 2 |
encode/databases | asyncio | 176 | query_lock() in iterate() prohibits any other database operations within `async for` loop | #108 introduced query locking to prohibit situation when multiple queries are executed at same time, however logic within `iterate()` is also is also wrapped with such logic, making code like such impossible due to deadlock:
```
async for row in database.iterate("SELECT * FROM table"):
await database.execute("UPDATE table SET ... WHERE ...")
``` | open | 2020-03-14T22:28:20Z | 2023-01-30T23:29:41Z | https://github.com/encode/databases/issues/176 | [] | rafalp | 16 |
microsoft/nlp-recipes | nlp | 624 | [ASK] Error while running extractive_summarization_cnndm_transformer.ipynb | When I run the code below:
`summarizer.fit(
ext_sum_train,
num_gpus=NUM_GPUS,
batch_size=BATCH_SIZE,
gradient_accumulation_steps=2,
max_steps=MAX_STEPS,
learning_rate=LEARNING_RATE,
warmup_steps=WARMUP_STEPS,
verbose=True,
report_every=REPORT_EVERY,
clip_grad_norm=False,
use_preprocessed_data=USE_PREPROCSSED_DATA
)`
It gives me an error like this.
```
Iteration: 0%| | 0/199 [00:00<?, ?it/s]
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-40-343cf59f0aa4> in <module>()
12 report_every=REPORT_EVERY,
13 clip_grad_norm=False,
---> 14 use_preprocessed_data=USE_PREPROCSSED_DATA
15 )
16
11 frames
/content/drive/My Drive/nlp-recipes/utils_nlp/models/transformers/extractive_summarization.py in fit(self, train_dataset, num_gpus, gpu_ids, batch_size, local_rank, max_steps, warmup_steps, learning_rate, optimization_method, max_grad_norm, beta1, beta2, decay_method, gradient_accumulation_steps, report_every, verbose, seed, save_every, world_size, rank, use_preprocessed_data, **kwargs)
775 report_every=report_every,
776 clip_grad_norm=False,
--> 777 save_every=save_every,
778 )
779
/content/drive/My Drive/nlp-recipes/utils_nlp/models/transformers/common.py in fine_tune(self, train_dataloader, get_inputs, device, num_gpus, max_steps, global_step, max_grad_norm, gradient_accumulation_steps, optimizer, scheduler, fp16, amp, local_rank, verbose, seed, report_every, save_every, clip_grad_norm, validation_function)
191 disable=local_rank not in [-1, 0] or not verbose,
192 )
--> 193 for step, batch in enumerate(epoch_iterator):
194 inputs = get_inputs(batch, device, self.model_name)
195 outputs = self.model(**inputs)
/usr/local/lib/python3.7/dist-packages/tqdm/std.py in __iter__(self)
1102 fp_write=getattr(self.fp, 'write', sys.stderr.write))
1103
-> 1104 for obj in iterable:
1105 yield obj
1106 # Update and possibly print the progressbar.
/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py in __next__(self)
519 if self._sampler_iter is None:
520 self._reset()
--> 521 data = self._next_data()
522 self._num_yielded += 1
523 if self._dataset_kind == _DatasetKind.Iterable and \
/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py in _next_data(self)
559 def _next_data(self):
560 index = self._next_index() # may raise StopIteration
--> 561 data = self._dataset_fetcher.fetch(index) # may raise StopIteration
562 if self._pin_memory:
563 data = _utils.pin_memory.pin_memory(data)
/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/fetch.py in fetch(self, possibly_batched_index)
45 else:
46 data = self.dataset[possibly_batched_index]
---> 47 return self.collate_fn(data)
/content/drive/My Drive/nlp-recipes/utils_nlp/models/transformers/extractive_summarization.py in collate_fn(data)
744 def collate_fn(data):
745 return self.processor.collate(
--> 746 data, block_size=self.max_pos_length, device=device
747 )
748
/content/drive/My Drive/nlp-recipes/utils_nlp/models/transformers/extractive_summarization.py in collate(self, data, block_size, device, train_mode)
470 else:
471 if train_mode is True and "tgt" in data[0] and "oracle_ids" in data[0]:
--> 472 encoded_text = [self.encode_single(d, block_size) for d in data]
473 batch = Batch(list(filter(None, encoded_text)), True)
474 else:
/content/drive/My Drive/nlp-recipes/utils_nlp/models/transformers/extractive_summarization.py in <listcomp>(.0)
470 else:
471 if train_mode is True and "tgt" in data[0] and "oracle_ids" in data[0]:
--> 472 encoded_text = [self.encode_single(d, block_size) for d in data]
473 batch = Batch(list(filter(None, encoded_text)), True)
474 else:
/content/drive/My Drive/nlp-recipes/utils_nlp/models/transformers/extractive_summarization.py in encode_single(self, d, block_size, train_mode)
539 + ["[SEP]"]
540 )
--> 541 src_subtoken_idxs = self.tokenizer.convert_tokens_to_ids(src_subtokens)
542 _segs = [-1] + [i for i, t in enumerate(src_subtoken_idxs) if t == self.sep_vid]
543 segs = [_segs[i] - _segs[i - 1] for i in range(1, len(_segs))]
/usr/local/lib/python3.7/dist-packages/transformers/tokenization_utils_fast.py in convert_tokens_to_ids(self, tokens)
/usr/local/lib/python3.7/dist-packages/transformers/tokenization_utils_fast.py in _convert_token_to_id_with_added_voc(self, token)
TypeError: Can't convert 0 to PyString
```
P.S. I am trying to run this code on a free Google Colab GPU.
Any help is welcome :)
| open | 2021-07-24T16:24:13Z | 2022-01-04T12:13:26Z | https://github.com/microsoft/nlp-recipes/issues/624 | [] | ToonicTie | 2 |
plotly/dash | dash | 2,517 | [BUG] Dash Design Kit's ddk.Notification does not render correctly on React 18.2.0 | **Describe your context**
Please provide us your environment, so we can easily reproduce the issue.
- replace the result of `pip list | grep dash` below
```
dash 2.9.3
dash-core-components 2.0.0
dash-html-components 2.0.0
dash-table 5.0.0
dash_cytoscape 0.2.0
```
- if frontend related, tell us your Browser, Version and OS
- OS: MacOS
  - Browser: Firefox, Chrome
**Describe the bug**
ddk.Notification rendering is inconsistent: it does not render until the next UI event. This is incredibly buggy on React 18.
Reproduction Steps on this Example:
1. Click on "Click Me"
2. Observe that "was inserted!" is added to the DOM, but the ddk.Notification does not show up.
```python
import dash
import dash_design_kit as ddk
from dash import Dash, dcc, html, Input, Output
app = Dash(__name__)
# Enable react 18
# See https://github.com/plotly/dash/pull/2260/files
dash._dash_renderer._set_react_version("18.2.0")
app.layout = ddk.App(
children=[
ddk.Header(ddk.Title("Hi")),
html.H1(children="Hello Dash"),
html.Button(id="click", children="Click Me!"),
html.Div(id="stuff"),
]
)
@app.callback(
Output("stuff", "children"), Input("click", "n_clicks"), prevent_initial_call=True
)
def insert_notification(n_clicks):
return html.Div(
children=[
html.Div("was inserted!"),
ddk.Notification(
type="danger",
title=f"n_clicks: {n_clicks}",
timeout=-1,
),
]
)
if __name__ == "__main__":
app.run_server(debug=True)
```
**Expected behavior**
It renders immediately on each key press.
**Screenshots**
I've included a screencapture of this behavior comparing React 16 and React 18.
React 16: https://user-images.githubusercontent.com/1694040/235269740-57d35c94-530e-432f-b052-0b7bf7de4302.mov
React 18: https://user-images.githubusercontent.com/1694040/235269470-159ee33b-994a-4ba3-a5a2-ae42eff829a5.mov
| closed | 2023-04-28T23:34:28Z | 2024-05-06T14:16:28Z | https://github.com/plotly/dash/issues/2517 | [] | rymndhng | 6 |
supabase/supabase-py | fastapi | 119 | bug: no module named `realtime.connection`; `realtime` is not a package | I get an error like this when using this package:
ModuleNotFoundError: No module named 'realtime.connection'; 'realtime' is not a package
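A self-contained diagnostic sketch (not a fix): "'realtime' is not a package" usually means Python resolved `realtime` to a plain module, e.g. a stray `realtime.py` on `sys.path` shadowing the installed package. This shows what actually got resolved:

```python
import importlib.util

spec = importlib.util.find_spec("realtime")
if spec is None:
    print("realtime is not installed")
elif spec.submodule_search_locations is None:
    # A plain module shadows the package: realtime.connection can't exist.
    print("plain module at", spec.origin)
else:
    print("package at", list(spec.submodule_search_locations))
```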
Can anyone help me? | closed | 2022-01-11T07:57:02Z | 2022-05-14T17:36:42Z | https://github.com/supabase/supabase-py/issues/119 | [
"bug"
] | alif-arrizqy | 3 |
onnx/onnx | machine-learning | 6,364 | Sonarcloud for static code analysis? | ### System information
_No response_
### What is the problem that this feature solves?
Introduction of SonarCloud.
### Alternatives considered
Focus on codeql ?
### Describe the feature
Thanks to the improvements made by @cyyever, I wonder if we want to officially set up a tool like SonarCloud (I could do that).
For a fork of mine, for example, it looks like this:
https://sonarcloud.io/project/issues?rules=python%3AS6711&issueStatuses=OPEN%2CCONFIRMED&id=andife_onnx&open=AZHq5D8n6JXh0XXyfRwb&tab=code
(My general experience with sonarcloud/sonarqube has been very positive)
Is the codeql integrated in github systematically used so far?
I know different static linkers produce different results and blindly following the suggestions does not necessarily lead to better code quality.
A comparison can be found at https://medium.com/@suthakarparamathma/sonarqube-vs-codeql-code-quality-tool-comparison-32395f2a77b3
### Will this influence the current api (Y/N)?
no
### Feature Area
best practices, code quality
### Are you willing to contribute it (Y/N)
Yes
### Notes
I could set it up for our regular onnx/onnx repository. It is free for open source projects:
https://www.sonarsource.com/plans-and-pricing/ | open | 2024-09-14T16:01:12Z | 2024-09-25T04:41:40Z | https://github.com/onnx/onnx/issues/6364 | [
"topic: enhancement"
] | andife | 4 |
amdegroot/ssd.pytorch | computer-vision | 85 | A bug in box_utils.py, log_sum_exp | I changed the batch_size to 2; are there any solutions?
File "train.py", line 232, in <module>
train()
File "train.py", line 184, in train
loss_l, loss_c = criterion(out, targets)
File "/home/junhao.li/anaconda2/envs/py35/lib/python3.5/site-packages/torch/nn/modules/module.py", line 325, in __call__
result = self.forward(*input, **kwargs)
File "/data/lijunhao_dataset/pytorch_proj/SSD/ssd.pytorch/layers/modules/multibox_loss.py", line 95, in forward
loss_c = log_sum_exp(batch_conf) - batch_conf.gather(1, conf_t.view(-1, 1))
File "/data/lijunhao_dataset/pytorch_proj/SSD/ssd.pytorch/layers/box_utils.py", line 168, in log_sum_exp
return torch.log(torch.sum(torch.exp(x-x_max), 1, keepdim=True)) + x_max
**RuntimeError: value cannot be converted to type float without overflow: inf**
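For reference, a stdlib sketch of the log-sum-exp trick this function implements: subtracting the max keeps `exp()` finite, so an inf here usually means `x` itself already contains inf/NaN (e.g. a diverging loss or too-high learning rate) rather than a flaw in the trick. This is an inference from the error message, not a confirmed diagnosis.

```python
import math

def log_sum_exp(xs):
    # Numerically stable: exp(v - m) <= 1, so no overflow for finite xs.
    m = max(xs)
    return m + math.log(sum(math.exp(v - m) for v in xs))

print(log_sum_exp([1000.0, 1000.5]))  # ~1000.97; naive exp(1000) overflows
```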
| closed | 2017-12-12T08:52:15Z | 2020-05-30T13:49:11Z | https://github.com/amdegroot/ssd.pytorch/issues/85 | [] | jxlijunhao | 5 |
holoviz/panel | jupyter | 6,956 | value_throttled isn't throttled for FloatSlider when using keyboard arrows | #### ALL software version info
bokeh~=3.4.2
panel~=1.4.4
param~=2.1.1
Python 3.12.4
Firefox 127.0.2
OS: Linux
#### Description of expected behavior and the observed behavior
Expected:
The value_throttled event is triggered after the arrow key is released, plus some delay, to make sure the user has finished changing the value.
Observed:
FloatSlider triggers the value_throttled event many times when you press and hold a keyboard arrow key.
IntInput triggers the value_throttled event each time you press an arrow key, even if you do so many times per second.
#### Complete, minimal, self-contained example code that reproduces the issue
```
import param
import panel as pn
import datetime
slider = pn.widgets.FloatSlider(end=100.0, start=0.0, step=0.1)
int_input = pn.widgets.IntInput(start=0, end=1000)
log = pn.widgets.TextAreaInput(name='timestamps:', auto_grow=True)
def callback(target, event):
t = datetime.datetime.now().isoformat()
target.value += t + ' value: ' + str(event.new) + '\n'
slider.link(log, callbacks={'value_throttled': callback})
int_input.link(log, callbacks={'value_throttled': callback})
pn.Column(slider, int_input, log).servable()
```
#### Stack traceback and/or browser JavaScript console output
#### Screenshots or screencasts of the bug in action

- [X] I may be interested in making a pull request to address this
| open | 2024-07-06T17:13:22Z | 2024-07-13T11:53:38Z | https://github.com/holoviz/panel/issues/6956 | [] | pmvd | 4 |
apachecn/ailearning | python | 585 | AI | closed | 2020-05-13T11:04:46Z | 2020-11-23T02:05:17Z | https://github.com/apachecn/ailearning/issues/585 | [] | LiangJiaxin115 | 0 | |
xonsh/xonsh | data-science | 5,029 | Parse single commands with dash as subprocess instead of Python | ## Expected Behavior
When doing this...
```console
$ fc-list
```
...`fc-list` should run.
## Current Behavior
```console
$ fc-list
TypeError: unsupported operand type(s) for -: 'function' and 'type'
$
```
## Steps to Reproduce
```console
$ which fc-list
/opt/homebrew/bin/fc-list
$ fc-list
TypeError: unsupported operand type(s) for -: 'function' and 'type'
$
```
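A plain-Python illustration of what the traceback suggests (an inference from the error message, not confirmed xonsh internals): `fc` resolved to a callable in scope and `list` to the builtin type, so `fc-list` was evaluated as the Python expression `fc - list` instead of being run as a subprocess.

```python
def fc():  # stand-in for whatever callable `fc` resolved to in xonsh
    pass

try:
    fc - list  # what `fc-list` becomes when parsed as Python
    err = ""
except TypeError as e:
    err = str(e)

print(err)  # unsupported operand type(s) for -: 'function' and 'type'
```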
## For community
⬇️ **Please click the 👍 reaction instead of leaving a `+1` or 👍 comment**
| closed | 2023-01-13T17:54:23Z | 2023-01-18T08:45:17Z | https://github.com/xonsh/xonsh/issues/5029 | [
"parser"
] | rpdelaney | 1 |
widgetti/solara | fastapi | 334 | Autoreload KeyError: <package_name> | I'm having an issue with autoreload on solara v1.22.0, and I think the same issue exists in v1.21.
I have a solara MWE script in `filename.py` and then run it with:
```
solara run package_name.module.filename
```
Traceback:
```
Traceback (most recent call last):
File "C:\Users\jhsmi\pp\do-fret\.venv\lib\site-packages\solara\server\app.py", line 317, in load_app_widget
widget, render_context = _run_app(
File "C:\Users\jhsmi\pp\do-fret\.venv\lib\site-packages\solara\server\app.py", line 265, in _run_app
main_object = app_script.run()
File "C:\Users\jhsmi\pp\do-fret\.venv\lib\site-packages\solara\server\app.py", line 198, in run
self._first_execute_app = self._execute()
File "C:\Users\jhsmi\pp\do-fret\.venv\lib\site-packages\solara\server\app.py", line 131, in _execute
spec = importlib.util.find_spec(self.name)
File "C:\Users\jhsmi\pp\do-fret\.venv\lib\importlib\util.py", line 103, in find_spec
return _find_spec(fullname, parent_path)
File "<frozen importlib._bootstrap>", line 925, in _find_spec
File "<frozen importlib._bootstrap_external>", line 1423, in find_spec
File "<frozen importlib._bootstrap_external>", line 1389, in _get_spec
File "<frozen importlib._bootstrap_external>", line 1252, in __iter__
File "<frozen importlib._bootstrap_external>", line 1239, in _recalculate
File "<frozen importlib._bootstrap_external>", line 1235, in _get_parent_path
KeyError: 'dont_fret'
```
Instead, if I move the file up out of the module, directly under the package root, and run with:
```
solara run package_name.filename
```
I have no issues with autoreload, and it also correctly detects changes in the dependencies imported from the module.
| closed | 2023-10-23T09:35:24Z | 2023-10-30T14:01:58Z | https://github.com/widgetti/solara/issues/334 | [] | Jhsmit | 2 |
assafelovic/gpt-researcher | automation | 493 | TypeError: unsupported operand type(s) for -: 'int' and 'simsimd.DistancesTensor' | ERROR: Exception in ASGI application
.......
research_result = await researcher.conduct_research()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/philipp/Library/Caches/pypoetry/virtualenvs/backend-xXYcI_nD-py3.11/lib/python3.11/site-packages/gpt_researcher/master/agent.py", line 85, in conduct_research
self.context = await self.get_context_by_search(self.query)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/philipp/Library/Caches/pypoetry/virtualenvs/backend-xXYcI_nD-py3.11/lib/python3.11/site-packages/gpt_researcher/master/agent.py", line 158, in get_context_by_search
context = await asyncio.gather(*[self.process_sub_query(sub_query) for sub_query in sub_queries])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/philipp/Library/Caches/pypoetry/virtualenvs/backend-xXYcI_nD-py3.11/lib/python3.11/site-packages/gpt_researcher/master/agent.py", line 174, in process_sub_query
content = await self.get_similar_content_by_query(sub_query, scraped_sites)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/philipp/Library/Caches/pypoetry/virtualenvs/backend-xXYcI_nD-py3.11/lib/python3.11/site-packages/gpt_researcher/master/agent.py", line 226, in get_similar_content_by_query
return context_compressor.get_context(query, max_results=8)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/philipp/Library/Caches/pypoetry/virtualenvs/backend-xXYcI_nD-py3.11/lib/python3.11/site-packages/gpt_researcher/context/compression.py", line 43, in get_context
relevant_docs = compressed_docs.invoke(query)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/philipp/Library/Caches/pypoetry/virtualenvs/backend-xXYcI_nD-py3.11/lib/python3.11/site-packages/langchain_core/retrievers.py", line 194, in invoke
return self.get_relevant_documents(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/philipp/Library/Caches/pypoetry/virtualenvs/backend-xXYcI_nD-py3.11/lib/python3.11/site-packages/langchain_core/_api/deprecation.py", line 148, in warning_emitting_wrapper
return wrapped(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/philipp/Library/Caches/pypoetry/virtualenvs/backend-xXYcI_nD-py3.11/lib/python3.11/site-packages/langchain_core/retrievers.py", line 323, in get_relevant_documents
raise e
File "/Users/philipp/Library/Caches/pypoetry/virtualenvs/backend-xXYcI_nD-py3.11/lib/python3.11/site-packages/langchain_core/retrievers.py", line 316, in get_relevant_documents
result = self._get_relevant_documents(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/philipp/Library/Caches/pypoetry/virtualenvs/backend-xXYcI_nD-py3.11/lib/python3.11/site-packages/langchain/retrievers/contextual_compression.py", line 48, in _get_relevant_documents
compressed_docs = self.base_compressor.compress_documents(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/philipp/Library/Caches/pypoetry/virtualenvs/backend-xXYcI_nD-py3.11/lib/python3.11/site-packages/langchain/retrievers/document_compressors/base.py", line 39, in compress_documents
documents = _transformer.compress_documents(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/philipp/Library/Caches/pypoetry/virtualenvs/backend-xXYcI_nD-py3.11/lib/python3.11/site-packages/langchain/retrievers/document_compressors/embeddings_filter.py", line 61, in compress_documents
similarity = self.similarity_fn([embedded_query], embedded_documents)[0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/philipp/Library/Caches/pypoetry/virtualenvs/backend-xXYcI_nD-py3.11/lib/python3.11/site-packages/langchain_community/utils/math.py", line 29, in cosine_similarity
Z = 1 - simd.cdist(X, Y, metric="cosine")
~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
TypeError: unsupported operand type(s) for -: 'int' and 'simsimd.DistancesTensor'
I am getting the error above when running the `conduct_research()` function.
I am using the pip package, version 0.4.0.
I am using the FastAPI example.
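A stdlib illustration of the failing operation (the class names below are stand-ins, not simsimd's real API): `1 - obj` needs the right operand to implement `__rsub__`; the traceback suggests `simsimd.DistancesTensor` doesn't, while NumPy arrays do, which is why wrapping the distance matrix in `np.asarray` (or pinning the simsimd version) is a plausible workaround.

```python
class DistancesStub:  # stand-in for simsimd.DistancesTensor
    pass

class ArrayLike:  # stand-in for a numpy-style array supporting 1 - x
    def __rsub__(self, other):
        return "ok"

try:
    1 - DistancesStub()
    err = ""
except TypeError as e:
    err = type(e).__name__

print(err)              # TypeError, as in the traceback above
print(1 - ArrayLike())  # ok
```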
| closed | 2024-05-11T16:07:46Z | 2025-02-01T15:31:38Z | https://github.com/assafelovic/gpt-researcher/issues/493 | [] | ockiphertweck | 6 |
keras-team/autokeras | tensorflow | 1,341 | Add how to limit model size to the FAQ. | closed | 2020-09-16T16:54:12Z | 2020-11-02T06:41:21Z | https://github.com/keras-team/autokeras/issues/1341 | [
"documentation",
"pinned"
] | haifeng-jin | 0 | |
httpie/cli | api | 728 | Get SSL and TCP time | Can I get the SSL time and TCP time of an HTTP connection? | closed | 2018-11-09T07:59:40Z | 2020-09-20T07:34:22Z | https://github.com/httpie/cli/issues/728 | [] | robyn-he | 2 |
django-import-export/django-import-export | django | 1120 | Django Import Export fails for MongoDB | import-export works for MySQL but fails for MongoDB.
Does this package support MongoDB?
Or is there any additional requirement?
The error is same as in issue:
https://github.com/django-import-export/django-import-export/issues/811 | closed | 2020-04-29T13:34:11Z | 2020-04-29T14:31:30Z | https://github.com/django-import-export/django-import-export/issues/1120 | [] | sv8083 | 2 |
plotly/dash | data-visualization | 2,992 | dcc.Graph rendering goes into infinite error loop when None is returned for Figure | **Describe your context**
Please provide us your environment, so we can easily reproduce the issue.
```
dash 2.18.0
dash-core-components 2.0.0
dash-html-components 2.0.0
dash-table 5.0.0
```
- if frontend related, tell us your Browser, Version and OS
- OS: MacOS/Linux/Windows
- Browser Chrome
- Version 128
**Describe the bug**
When running the below script (which has a bug: `graph_selections` returns `None` instead of a `Figure`) with python3, and displaying in the Chrome browser, the browser tab seems to lock up. If the developer tools are open, one can see the error count rapidly rising with the errors in the screenshot below being repeated over and over again in a tight loop.
```python
from dash import Dash, html, dcc, callback, Output, Input

app = Dash()

app.layout = html.Div([
    html.Button('RUN', id='run-btn', n_clicks=0),
    dcc.Graph(id='graph-container')
])

@callback(
    Output('graph-container', 'figure'),
    Input('run-btn', 'n_clicks'),
)
def graph_selections(n_clicks):
    print(n_clicks)

if __name__ == "__main__":
    app.run(port=8050, host='0.0.0.0', debug=True)
```
**Expected behavior**
An error message in the browser describing the invalid return value from the callback.
**Screenshots**
<img width="1230" alt="Screenshot 2024-09-09 at 12 28 28" src="https://github.com/user-attachments/assets/0352285a-b7c2-4139-89eb-ddf8eddeb2be">
| open | 2024-09-09T19:44:45Z | 2024-09-11T19:16:40Z | https://github.com/plotly/dash/issues/2992 | [
"bug",
"P3"
] | reggied | 0 |
miguelgrinberg/microblog | flask | 51 | translate.py TypeError: the JSON object must be str, not 'bytes' | Hello,
`return json.loads(r.content)` raises an error: `TypeError: the JSON object must be str, not 'bytes'`.
Changing it to `return json.loads(r.content.decode('utf-8-sig'))` fixes it.
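A minimal, self-contained sketch of the failure and the fix (the BOM-prefixed payload below is a stand-in for the real Microsoft Translator response):

```python
import json

# The translation API returns UTF-8 bytes with a leading BOM; encoding with
# "utf-8-sig" reproduces that kind of payload.
payload = '{"text": "hola"}'.encode("utf-8-sig")

# Decoding with "utf-8-sig" strips the BOM and yields a clean str, which is
# what json.loads() required on Python 3.5.
data = json.loads(payload.decode("utf-8-sig"))
print(data["text"])  # hola
```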
Regards,
immontilla | closed | 2017-12-20T08:34:24Z | 2018-01-04T18:46:37Z | https://github.com/miguelgrinberg/microblog/issues/51 | [
"bug"
] | immontilla | 3 |
allure-framework/allure-python | pytest | 66 | Add support for Nose Framework | closed | 2017-06-27T14:40:43Z | 2020-11-27T14:22:21Z | https://github.com/allure-framework/allure-python/issues/66 | [
"type:enhancement"
] | sseliverstov | 1 | |
widgetti/solara | fastapi | 521 | Please add meta information with license for ipyvue in PYPI | The license metadata is missing for ipyvue on PyPI, which is causing problems installing solara in my org. | closed | 2024-02-24T13:42:13Z | 2024-02-27T13:10:54Z | https://github.com/widgetti/solara/issues/521 | [] | pratyush581 | 1 |
numpy/numpy | numpy | 28,076 | Overview issue: Typing regressions in NumPy 2.2 | NumPy 2.2 had a lot of typing improvements, but that also means some regressions (at least and maybe especially for mypy users).
So maybe this exercise is mainly useful to me to make sense of the mega-issue in gh-27957.
My own take-away is that we need the user documentation (gh-28077), not just for users, but also to understand better who and why people have to change their typing. That is to understand the two points:
1. How many users and what kind of users are affected:
* Early "shaping users" of unsupported shapes may be few?
* `mypy` users of unannotated code are maybe quite many.
2. And what do they need to do:
* Removing shape types seems easy (if unfortunate).
* Adding `--allow-redefinition` is easy, avoiding `mypy` may be more work (maybe unavoidable).
* Are there other work-around? Maybe `scipy-lectures` is "special" or could hide generic types outside the code users see...
One other thing that I would really like to see is also the "alternatives". Maybe there are none, but I would at least like to spell it out, as in:
Due to ... the only way we might be able to avoid these regressions is to hide them away as `from numpy.typing_future import ndarray`, and that is impractical/impossible because...
CC @jorenham although it is probably boring to you, also please feel free to amend or expand.
## Issues that require user action
### User issues due to (necessarily) incomplete typing
There are two things that came up where NumPy used to have less precise or wrong typing, but correcting it making it more precise (while also [necessarily incomplete](https://github.com/numpy/numpy/issues/27957#issuecomment-2551643556) as it may [require a new PEP](https://github.com/numpy/numpy/issues/27957#issuecomment-2552091173)) means that type checking can fail:
* **`floating`** is now used as a supertype of `float64` (rather than identity) meaning it (correctly) matches `float32`, `float`, etc.
* Incomplete typing means functions may return `floating` rather than `float64` even when they clearly return `float64`.
* (N.B.: NumPy runtime is slightly fuzzy about this, since `np.dtype(np.floating)` gives float64, but with a warning because it is not a good meaning.)
* There is now some support for **shape typing**
* Previously, users could add shapes, but these were ignored.
* E.g. https://github.com/search?q=ndarray%5Btuple&type=code although 1800 files doesn't seem _that_ much.
* Shape typing *should not* be used currently, because most functions will return shape-generic results, meaning that even correct shape types will typically just fail type checking.
(Users could choose to use this, but probably would need to cast explicitly often.)
There is a **mypy**-specific angle in gh-27957 to both of these, because `mypy` defaults to inferring the type at the first assignment (and so always runs into this). The first assignment (e.g. creation) is likely to include the correct shape and float64 type, but later re-assignments will fail.
* `mypy` has `--allow-redefinition` although it doesn't fix it fully [at least for nested scopes in for-loops](https://github.com/numpy/numpy/issues/27957#issuecomment-2547042651), `mypy` may [improve this]().
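As a minimal sketch (my own example, not taken from the linked issues) of the pattern that trips first-assignment inference, here is runtime-correct code where the creation call pins a precise type and the re-assignment returns a broader one:

```python
import numpy as np

# mypy infers a precise type from the first assignment, roughly
# ndarray[tuple[int], dtype[float64]] under the 2.2 stubs.
x = np.zeros(3)

# concatenate is typed to return a shape-generic array, so mypy (without
# --allow-redefinition) reports an incompatible re-assignment here, even
# though the code is perfectly fine at runtime.
x = np.concatenate([x, x])
print(x.shape)  # (6,)
```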
The **user impact** is that:
* At least `mypy` fails even for **unannotated** code.
* Users have to avoid even correct `float64` and shape types due to imprecise NumPy type stubs. These previously passed, whether intentional or not.
* For `float64` passing previously was arguably a bug, but is still a regression.
* For shapes, this means [explicitly broaden correct shapes](https://github.com/numpy/numpy/issues/27957#issuecomment-2552022426) (not necessary previously)
(I, @seberg, cannot tell how problematic these are, or what options we have to try to make this easier on downstream, short of reverting or including reverting.)
## Simple regressions fixed or fixable in NumPy
* gh-27964
* The `floating` change has at least that seems very much fixable with follow-ups, see gh-28071 (e.g. `numpy.zeros(2, dtype=numpy.float64) + numpy.float64(1.0)` is clearly `float64`).
* https://github.com/numpy/numpy/issues/27977
* https://github.com/numpy/numpy/issues/27944
* https://github.com/numpy/numpy/issues/27945
## Type-checkers issues that may impact NumPy
* MyPy already has a few new fixes related to issues found in NumPy (not sure all are 2.2 related): https://github.com/python/mypy/issues/18343
| open | 2024-12-30T13:54:26Z | 2025-03-19T19:25:03Z | https://github.com/numpy/numpy/issues/28076 | [
"41 - Static typing"
] | seberg | 21 |
pydantic/FastUI | fastapi | 275 | 422 Error in demo: POST /api/forms/select | I'm running a local copy of the demo and there's an issue with the Select form. Pressing "Submit" throws a server-side error, and the `post` router method is never run.
I think the problem comes from the multiple select fields. Commenting these out, or converting them to single fields, fixes the problem, and the Submit button triggers the goto event leading back to the root URI. I read in other issues on here that array types in forms are not yet supported. For clarity, perhaps this should be removed from the demo until they are?
Also, I tried adding a handler like this in `demo/__init__.py`:
```Python
from fastapi.exceptions import RequestValidationError
from fastapi.responses import JSONResponse
from fastapi import status

@app.exception_handler(RequestValidationError)
async def validation_exception_handler(request, exc):
    print(f"Caught 422 exception on request:\n{request}\n\n")
    return JSONResponse(
        status_code=status.HTTP_422_UNPROCESSABLE_ENTITY,
        content={"detail": exc.errors(), "body": exc.body},
    )
```
The 422 event is printed to the console, but the handler never gets fired. Why is this? | open | 2024-04-17T18:08:54Z | 2024-05-02T00:03:23Z | https://github.com/pydantic/FastUI/issues/275 | [
"bug",
"documentation"
] | charlie-corus | 1 |
streamlit/streamlit | python | 10,107 | Inconsistent item assignment exception for `st.secrets` | ### Checklist
- [X] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar issues.
- [X] I added a very descriptive title to this issue.
- [X] I have provided sufficient information below to help reproduce this issue.
### Summary
`st.secrets` is read-only. When assigning items via key/dict notation (`st.secrets["foo"] = "bar"`), it properly shows an exception:

But when assigning an item via dot notation (`st.secrets.foo = "bar"`), it simply fails silently, i.e. it doesn't show an exception but it also doesn't set the item. I think in this situation it should also show an exception.
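For illustration, a minimal read-only mapping (a toy sketch, not Streamlit's actual implementation) that raises the same `TypeError` for both assignment styles:

```python
class ReadOnlySecrets:
    """Toy stand-in for st.secrets: readable via keys and attributes,
    but raising TypeError on either style of assignment."""

    def __init__(self, data):
        object.__setattr__(self, "_data", dict(data))

    def __getitem__(self, key):
        return self._data[key]

    def __getattr__(self, name):
        try:
            return self._data[name]
        except KeyError:
            raise AttributeError(name) from None

    def __setitem__(self, key, value):
        raise TypeError("Secrets does not support item assignment.")

    def __setattr__(self, name, value):
        # Mirrors __setitem__ so `secrets.foo = ...` fails loudly too,
        # instead of silently doing nothing.
        raise TypeError("Secrets does not support attribute assignment.")
```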
### Reproducible Code Example
[](https://issues.streamlitapp.com/?issue=gh-10107)
```Python
import streamlit as st
st.secrets.foo = "bar"
```
### Steps To Reproduce
_No response_
### Expected Behavior
Show same exception message as for `st.secrets["foo"] = "bar"`.
### Current Behavior
Nothing.
### Is this a regression?
- [ ] Yes, this used to work in a previous version.
### Debug info
- Streamlit version: 1.41.0
- Python version:
- Operating System:
- Browser:
### Additional Information
_No response_ | closed | 2025-01-03T18:42:25Z | 2025-03-12T10:29:20Z | https://github.com/streamlit/streamlit/issues/10107 | [
"type:bug",
"good first issue",
"feature:st.secrets",
"status:confirmed",
"priority:P3"
] | jrieke | 3 |
huggingface/transformers | python | 36,926 | `Mllama` not supported by `AutoModelForCausalLM` after updating `transformers` to `4.50.0` | ### System Info
- `transformers` version: 4.50.0
- Platform: Linux-5.15.0-100-generic-x86_64-with-glibc2.35
- Python version: 3.12.2
- Huggingface_hub version: 0.29.3
- Safetensors version: 0.5.3
- DeepSpeed version: not installed
- PyTorch version (GPU?): 2.6.0+cu124 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA A40
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
Steps to reproduce the behavior:
1. Install latest version of `transformers` (4.50.0)
2. Run the following:
```
import torch
from transformers import AutoModelForCausalLM
model_name = "meta-llama/Llama-3.2-11B-Vision"
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", torch_dtype=torch.float16)
```
**Got the error:**
```
ValueError: Unrecognized configuration class <class 'transformers.models.mllama.configuration_mllama.MllamaTextConfig'> for this kind of AutoModel: AutoModelForCausalLM.
Model type should be one of AriaTextConfig, BambaConfig, BartConfig, BertConfig, BertGenerationConfig, BigBirdConfig, BigBirdPegasusConfig, BioGptConfig, BlenderbotConfig, BlenderbotSmallConfig, BloomConfig, CamembertConfig, LlamaConfig, CodeGenConfig, CohereConfig, Cohere2Config, CpmAntConfig, CTRLConfig, Data2VecTextConfig, DbrxConfig, DiffLlamaConfig, ElectraConfig, Emu3Config, ErnieConfig, FalconConfig, FalconMambaConfig, FuyuConfig, GemmaConfig, Gemma2Config, Gemma3Config, Gemma3TextConfig, GitConfig, GlmConfig, GotOcr2Config, GPT2Config, GPT2Config, GPTBigCodeConfig, GPTNeoConfig, GPTNeoXConfig, GPTNeoXJapaneseConfig, GPTJConfig, GraniteConfig, GraniteMoeConfig, GraniteMoeSharedConfig, HeliumConfig, JambaConfig, JetMoeConfig, LlamaConfig, MambaConfig, Mamba2Config, MarianConfig, MBartConfig, MegaConfig, MegatronBertConfig, MistralConfig, MixtralConfig, MllamaConfig, MoshiConfig, MptConfig, MusicgenConfig, MusicgenMelodyConfig, MvpConfig, NemotronConfig, OlmoConfig, Olmo2Config, OlmoeConfig, OpenLlamaConfig, OpenAIGPTConfig, OPTConfig, PegasusConfig, PersimmonConfig, PhiConfig, Phi3Config, PhimoeConfig, PLBartConfig, ProphetNetConfig, QDQBertConfig, Qwen2Config, Qwen2MoeConfig, RecurrentGemmaConfig, ReformerConfig, RemBertConfig, RobertaConfig, RobertaPreLayerNormConfig, RoCBertConfig, RoFormerConfig, RwkvConfig, Speech2Text2Config, StableLmConfig, Starcoder2Config, TransfoXLConfig, TrOCRConfig, WhisperConfig, XGLMConfig, XLMConfig, XLMProphetNetConfig, XLMRobertaConfig, XLMRobertaXLConfig, XLNetConfig, XmodConfig, ZambaConfig, Zamba2Config.
```
However, the latest documentation mentions that the `mllama` model is supported:
https://huggingface.co/docs/transformers/model_doc/auto#transformers.AutoModelForCausalLM.from_pretrained
I tested this in an environment with `transformers==4.49.0` and the model loads without issue.
### Expected behavior
The multimodal mllama model (Llama-3.2-11B-Vision) is loaded successfully | open | 2025-03-24T12:07:09Z | 2025-03-24T12:28:00Z | https://github.com/huggingface/transformers/issues/36926 | [
"bug"
] | WuHaohui1231 | 2 |
jupyter-incubator/sparkmagic | jupyter | 833 | [BUG] SparkMagic pyspark kernel magic(%%sql) hangs when running with Papermill. | I initially reported this as a papermill issue(not quite sure about this). I am copying that issue to SparkMagic community to see if there happen to be any expert who can provide advice for unblocking. Please feel free to close if this is not SparkMagic issue. Thanks in advance.
**Describe the bug**
Our use case is to use SparkMagic wrapper kernels with PaperMill notebook execution.
Most of the functions are working as expected except the %%sql magic, which will get stuck during execution. The SparkMagic works properly when executed in interactive mode in JupyterLab and issue only happens for %%sql magic when running with PaperMill.
From the debugging log(attached), I can see the %%sql logic had been executed and response was retrieved back. The execution state was back to idle at the end. But the output of %%sql cell was not updated properly and the following cells were not executed.
Following content was printed by PaperMill, which shows the %%sql has been executed properly. This content was not rendered into cell output.
> msg_type: display_data
content: {'data': {'text/plain': '<IPython.core.display.HTML object>', 'text/html': '<div>\n<style scoped>\n .dataframe tbody tr th:only-of-type {\n vertical-align: middle;\n }\n\n .dataframe tbody tr th {\n vertical-align: top;\n }\n\n .dataframe thead th {\n text-align: right;\n }\n</style>\n<table border="1" class="dataframe hideme">\n <thead>\n <tr style="text-align: right;">\n <th></th>\n <th>database</th>\n <th>tableName</th>\n <th>isTemporary</th>\n </tr>\n </thead>\n <tbody>\n <tr>\n <th>0</th>\n <td>default</td>\n <td>movie_reviews</td>\n <td>False</td>\n </tr>\n </tbody>\n</table>\n</div>'}, 'metadata': {}, 'transient': {}}
**To Reproduce**
```
conda create --name py310 python=3.10
conda activate pyenv310
pip install sparkmagic
pip install papermill
# install kernelspecs
SITE_PACKAGES_LOC=$(pip show sparkmagic | grep Location | awk '{print $2}')
cd $SITE_PACKAGES_LOC
jupyter-kernelspec install sparkmagic/kernels/sparkkernel --user
jupyter-kernelspec install sparkmagic/kernels/pysparkkernel --user
jupyter-kernelspec install sparkmagic/kernels/sparkrkernel --user
jupyter nbextension enable --py --sys-prefix widgetsnbextension
pip install notebook==6.5.1  # downgrade from 7.0.3 to 6.5.1 due to ModuleNotFoundError: No module named 'notebook.utils'
# Run papermill job(notebook is also uploaded)
# Before run this, an EMR cluster is needed and the sparkmagic configure is also needed.
# If it's not possible/easy to create it, please comment for any testing/verification needed, I can help. Also, you can check the uploaded the papermill debugging log.
papermill pm_sparkmagic_test.ipynb output1.ipynb --kernel pysparkkernel --log-level DEBUG
```
The following is the package list which might be most relevant. I also attached a text file containing all the packages.
```
pip list | grep 'papermill\|sparkmagic\|autovizwidget\|hdijupyterutils\|ipykernel\|ipython\|ipywidgets\|mock\|nest-asyncio\|nose\|notebook\|numpy\|pandas\|requests\|requests-kerberos\|tornado\|ansiwrap\|click\|entrypoints\|nbclient\|nbformat\|pyyaml\|requests\|tenacity\|tqdm\|jupyter\|ipython'|sort
ansiwrap 0.8.4
autovizwidget 0.20.5
click 8.1.7
entrypoints 0.4
hdijupyterutils 0.20.5
ipykernel 6.25.2
ipython 8.15.0
ipython-genutils 0.2.0
ipywidgets 8.1.0
jupyter 1.0.0
jupyter_client 8.3.1
jupyter-console 6.6.3
jupyter_core 5.3.1
jupyter-events 0.7.0
jupyterlab 4.0.5
jupyterlab-pygments 0.2.2
jupyterlab_server 2.24.0
jupyterlab-widgets 3.0.8
jupyter-lsp 2.2.0
jupyter_server 2.7.3
jupyter_server_terminals 0.4.4
nbclient 0.8.0
nbformat 5.9.2
nest-asyncio 1.5.5
notebook 6.5.1
notebook_shim 0.2.3
numpy 1.25.2
pandas 1.5.3
papermill 2.4.0
requests 2.31.0
requests-kerberos 0.14.0
sparkmagic 0.20.5
tenacity 8.2.3
tornado 6.3.3
tqdm 4.66.1
```
**Expected behavior**
The %%sql magic should not hang, and the following cells should proceed to execute.
**Screenshots**
**Output notebook of papermill:**
<img width="959" alt="image" src="https://github.com/nteract/papermill/assets/83920185/a2bc253b-ec4d-4190-ad02-f8dbef3fdca8">
**Expected output(from JupyterLab)**
<img width="759" alt="image" src="https://github.com/nteract/papermill/assets/83920185/e43539ac-35b3-4cb7-bdf9-fac22e30e3a2">
**Versions:**
- SparkMagic (0.20.5)
- Livy (N/A)
- Spark (N/A)
**Additional context**
[log and other files.zip](https://github.com/nteract/papermill/files/12541921/log.and.other.files.zip) contains:
1. log - papermill debugging log
2. my_test_env_requirements.txt - full list of packages in the conda env
3. pm_sparkmagic_test.ipynb - the notebook executed in jupyterlab and it's also the input of papermill job
4. output1.ipynb - output notebook from the papermill job
| open | 2023-09-06T20:04:07Z | 2024-08-09T02:48:55Z | https://github.com/jupyter-incubator/sparkmagic/issues/833 | [
"kind:bug"
] | edwardps | 18 |
Colin-b/pytest_httpx | pytest | 87 | If the url query parameter contains Chinese characters, it will cause an encoding error | ```
httpx_mock.add_response(
    url='test_url?query_type=数据',
    method='GET',
    json={'result': 'ok'}
)
```
Executing the above code causes an encoding error:
> obj = '数据', encoding = 'ascii', errors = 'strict'
>
> def _encode_result(obj, encoding=_implicit_encoding,
> errors=_implicit_errors):
> return obj.encode(encoding, errors)
> E UnicodeEncodeError: 'ascii' codec can't encode characters in position 0-1: ordinal not in range(128)
>
> /usr/local/Cellar/python@3.9/3.9.13_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/urllib/parse.py:108: UnicodeEncodeError | closed | 2022-11-02T07:52:49Z | 2022-11-03T21:05:49Z | https://github.com/Colin-b/pytest_httpx/issues/87 | [
"bug"
] | uncle-shu | 2 |
taverntesting/tavern | pytest | 574 | Unable to set custom user agent through headers | I'm trying to set custom user agents as part of my requests, and I think Tavern might have a bug there.
Example:
```
stages:
- name: request
request:
url: "http://foo.bar/endpoint"
method: POST
headers:
user-agent: "my/useragent"
json: {}
```
The resulting user-agent received by my endpoint is
`python-requests/2.23.0,my/useragent`
while I'd really expect it to be just `my/useragent`.
Am I doing something wrong (the docs don't really say anything about user agents), or is this a bug?
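For comparison, plain `requests` replaces its default User-Agent when the header is set on the session (header keys are case-insensitive), so the comma-joined value above suggests the merging happens on Tavern's side. A quick sketch:

```python
import requests

s = requests.Session()
# The session starts with a default User-Agent like "python-requests/x.y.z".
# Because the headers mapping is case-insensitive, setting "user-agent"
# replaces the default instead of appending to it.
s.headers.update({"user-agent": "my/useragent"})
print(s.headers["User-Agent"])  # my/useragent
```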
| closed | 2020-07-28T10:42:02Z | 2020-11-05T17:37:36Z | https://github.com/taverntesting/tavern/issues/574 | [] | nicoinn | 1 |
Avaiga/taipy | automation | 1,942 | [🐛 BUG] No delete chats button in the chatbot | ### What went wrong? 🤔

### Expected Behavior
_No response_
### Steps to Reproduce Issue
1. A code fragment
2. And/or configuration files or code
3. And/or Taipy GUI Markdown or HTML files
### Solution Proposed
A delete-chat button can be added to the chatbot for better communication.
### Screenshots

### Runtime Environment
_No response_
### Browsers
_No response_
### OS
_No response_
### Version of Taipy
_No response_
### Additional Context
_No response_
### Acceptance Criteria
- [ ] Ensure new code is unit tested, and check code coverage is at least 90%.
- [ ] Create related issue in taipy-doc for documentation and Release Notes.
### Code of Conduct
- [X] I have checked the [existing issues](https://github.com/Avaiga/taipy/issues?q=is%3Aissue+).
- [X] I am willing to work on this issue (optional) | closed | 2024-10-06T17:11:36Z | 2024-10-07T20:30:28Z | https://github.com/Avaiga/taipy/issues/1942 | [
"💥Malfunction"
] | NishantRana07 | 2 |
kaarthik108/snowChat | streamlit | 3 | Table not showing | Hi!
Are you supposed to first select a table from the sidebar, from the database you specify in the secrets.toml file? For me, the options are still the default ones:

And even if I query the default tables I don't get a table, and the code generated is not formatted like in the demo app:

What could have gone wrong? Thanks so much! | closed | 2023-05-19T09:12:19Z | 2023-06-25T04:21:14Z | https://github.com/kaarthik108/snowChat/issues/3 | [] | ewosl | 1 |
python-gino/gino | asyncio | 224 | Error creating table with ForeignKey referencing table wo `__tablename__` attribute | * GINO version: 0.7.2
* Python version: 3.6.5
Trying to create the following declarative schema:
```
class Parent(db.Model):
    id = db.Column(db.Integer, primary_key=True)


class Child(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    parent_id = db.Column(db.Integer, db.ForeignKey('parent.id'))
```
Got this exception:
```
Traceback (most recent call last):
File "xxx/lib/python3.6/site-packages/gino/declarative.py", line 34, in __getattr__
raise AttributeError
AttributeError
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "script.py", line 150, in <module>
loop.run_until_complete(Parent.create())
File "/home/xxx/lib/python3.6/asyncio/base_events.py", line 468, in run_until_complete
return future.result()
File "/home/xxx/lib/python3.6/site-packages/gino/crud.py", line 418, in _create_without_instance
return await cls(**values)._create(bind=bind, timeout=timeout)
File "/home/xxx/lib/python3.6/site-packages/gino/crud.py", line 398, in __init__
self.update(**values)
File "/home/xxx/lib/python3.6/site-packages/gino/crud.py", line 518, in _update
return self._update_request_cls(self).update(**values)
File "/home/xxx/lib/python3.6/site-packages/gino/crud.py", line 81, in __init__
type(self._instance).update)
File "/home/xxx/lib/python3.6/site-packages/gino/declarative.py", line 38, in __getattr__
self.__name__, item))
AttributeError: type object 'Parent' has no attribute 'update'
```
Not reproducible with `__tablename__` fields:
```
class Parent(db.Model):
    __tablename__ = 'parents'
    id = db.Column(db.Integer, primary_key=True)


class Child(db.Model):
    __tablename__ = 'children'
    id = db.Column(db.Integer, primary_key=True)
    parent_id = db.Column(db.Integer, db.ForeignKey('parents.id'))
```
Not reproducible creating only one table without `__tablename__` (and no fk to this table):
```
class Parent(db.Model):
id = db.Column(db.Integer, primary_key=True)
``` | closed | 2018-05-17T13:06:35Z | 2018-06-23T12:33:03Z | https://github.com/python-gino/gino/issues/224 | [
"wontfix"
] | gyermolenko | 4 |
vitalik/django-ninja | pydantic | 609 | How do I change the title on the document? | I want to change these two headings shown in the picture below:

| closed | 2022-11-13T08:50:27Z | 2022-11-13T16:41:29Z | https://github.com/vitalik/django-ninja/issues/609 | [] | Zzc79 | 1 |
deepfakes/faceswap | machine-learning | 777 | AttributeError: 'NoneType' object has no attribute 'split' | **Describe the bug**
Hi, I'm trying to install the repo following the [General-Install-Guide](https://github.com/deepfakes/faceswap/blob/master/INSTALL.md#General-Install-Guide).
But when I run `python setup.py`, it throws the error `AttributeError: 'NoneType' object has no attribute 'split'`. How should I fix it?
```sh
$ pip install -r requirements.txt
$ pip install tensorflow-gpu
$ python ./setup.py
INFO Running as Root/Admin
INFO The tool provides tips for installation
and installs required python packages
INFO Setup in Linux 4.14.79+
INFO Installed Python: 3.6.8 64bit
INFO Encoding: UTF-8
INFO Upgrading pip...
INFO Installed pip: 19.1.1
INFO AMD Support: AMD GPU support is currently limited.
Nvidia Users MUST answer 'no' to this option.
Enable AMD Support? [y/N]
INFO AMD Support Disabled
Enable Docker? [y/N]
INFO Docker Disabled
Enable CUDA? [Y/n]
INFO CUDA Enabled
INFO CUDA version: 10.0
INFO cuDNN version: 7.4.2
Please ensure your System Dependencies are met. Continue? [y/N] y
Traceback (most recent call last):
File "./setup.py", line 753, in <module>
Install(ENV)
File "./setup.py", line 524, in __init__
self.check_missing_dep()
File "./setup.py", line 544, in check_missing_dep
key = pkg.split("==")[0]
AttributeError: 'NoneType' object has no attribute 'split'
```
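The last frame shows that `pkg` is `None` when `check_missing_dep` calls `pkg.split("==")`. A minimal reproduction of the pattern plus a defensive guard (my sketch, not the project's actual patch; the package names are illustrative):

```python
# An entry that resolved to None reproduces the reported AttributeError
# if it reaches pkg.split("==") unguarded.
packages = ["numpy==1.16.2", None, "opencv-python==4.1.0.25"]

keys = []
for pkg in packages:
    if pkg is None:
        continue  # guard: skip entries with no requirement string
    keys.append(pkg.split("==")[0])

print(keys)  # ['numpy', 'opencv-python']
```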
| closed | 2019-06-27T17:20:57Z | 2019-06-28T17:35:44Z | https://github.com/deepfakes/faceswap/issues/777 | [] | s97712 | 7 |
gradio-app/gradio | deep-learning | 10,350 | Always jump to the first selection when selecting in dropdown, if there are many choices and bar in the dropdown list. | ### Describe the bug
If there are a lot of choices in a dropdown, a scrollbar will appear. In this case, when I select a new item, the list scrolls back to the first item I chose. This is very inconvenient.
### Have you searched existing issues? 🔎
- [X] I have searched and found no existing issues
### Reproduction
```python
import gradio as gr

def sentence_builder(quantity, animal, countries, place, activity_list, morning):
    return f"""The {quantity} {animal}s from {" and ".join(countries)} went to the {place} where they {" and ".join(activity_list)} until the {"morning" if morning else "night"}"""

demo = gr.Interface(
    sentence_builder,
    [
        gr.Slider(2, 20, value=4, label="Count", info="Choose between 2 and 20"),
        gr.Dropdown(
            ["cat", "dog", "bird"], label="Animal", info="Will add more animals later!"
        ),
        gr.CheckboxGroup(["USA", "Japan", "Pakistan"], label="Countries", info="Where are they from?"),
        gr.Radio(["park", "zoo", "road"], label="Location", info="Where did they go?"),
        gr.Dropdown(
            ["ran", "swam", "ate", "slept", "ran1", "swam1", "ate1", "slept1", "ran2", "swam2", "ate2", "slept2", "ran3", "swam3", "ate3", "slept3", "ran4", "swam4", "ate4", "slept4", "ran5", "swam5", "ate5", "slept5", "ran6", "swam6", "ate6", "slept6", "ran7", "swam7", "ate7", "slept7"], value=["swam", "slept"], multiselect=True, label="Activity", info="Lorem ipsum dolor sit amet, consectetur adipiscing elit. Sed auctor, nisl eget ultricies aliquam, nunc nisl aliquet nunc, eget aliquam nisl nunc vel nisl."
        ),
        gr.Checkbox(label="Morning", info="Did they do it in the morning?"),
    ],
    "text",
    examples=[
        [2, "cat", ["Japan", "Pakistan"], "park", ["ate", "swam"], True],
        [4, "dog", ["Japan"], "zoo", ["ate", "swam"], False],
        [10, "bird", ["USA", "Pakistan"], "road", ["ran"], False],
        [8, "cat", ["Pakistan"], "zoo", ["ate"], True],
    ]
)

if __name__ == "__main__":
    demo.launch()
```
### Screenshot

### Logs
_No response_
### System Info
```shell
gradio 5.12
```
### Severity
I can work around it | closed | 2025-01-14T03:09:06Z | 2025-02-27T00:03:34Z | https://github.com/gradio-app/gradio/issues/10350 | [
"bug"
] | tyc333 | 0 |
jmcnamara/XlsxWriter | pandas | 1,112 | Bug: <Write_String can not write string like URL to a normal String but Hyperlink> | ### Current behavior
When I use pandas with the xlsxwriter engine to write data to Excel, xlsxwriter writes URL-like strings as hyperlinks instead of plain text.
Even when I use a custom writer that calls write_string with a text format, xlsxwriter still writes the values as URLs (hyperlinks) and hits the 65,536-URL limit in Excel. I just want them written as normal text, not hyperlinks.
### Expected behavior
I want a way to detect URLs (hyperlinks), or some way to write them as plain text for Excel without hitting the 65,536-URL (hyperlink) limit.
When I use openpyxl with pandas to write, it works fine. I hope xlsxwriter can do the same thing; openpyxl is kind of slow, so I prefer xlsxwriter for writing. Thanks in advance!
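For what it's worth, XlsxWriter has a `strings_to_urls` workbook option intended to disable exactly this conversion, and pandas can pass it through `engine_kwargs`. A sketch, under the assumption that this option covers your case:

```python
import pandas as pd

df = pd.DataFrame({"link": ["http://example.com/a", "http://example.com/b"]})

# strings_to_urls=False tells XlsxWriter to write URL-like strings as plain
# text instead of converting them to hyperlinks, so nothing counts toward
# the 65,536-hyperlink limit.
with pd.ExcelWriter(
    "no_urls.xlsx",
    engine="xlsxwriter",
    engine_kwargs={"options": {"strings_to_urls": False}},
) as writer:
    df.to_excel(writer, index=False, sheet_name="Sheet1")
```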
### Sample code to reproduce
```python
def to_excel_as_string_by_xlsxwriter(df: pd.DataFrame, output_path: str):
    # Ensure all data is treated as strings to avoid URL interpretation
    df = df.astype(str)
    # Write to Excel with xlsxwriter engine
    with pd.ExcelWriter(output_path, engine='xlsxwriter') as writer:
        df.to_excel(writer, index=False, sheet_name='Sheet1')
        # Access the xlsxwriter workbook and worksheet
        workbook = writer.book
        worksheet = writer.sheets['Sheet1']
        # Set a format that avoids hyperlink interpretation
        text_format = workbook.add_format({'text_wrap': False, 'align': 'left', 'valign': 'vcenter'})
        # Apply the text format to all columns to avoid hyperlinks
        worksheet.set_column(0, len(df.columns) - 1, None, text_format)
        # Write the data explicitly as strings
        for row_num, row in enumerate(df.values, start=1):
            for col_num, cell in enumerate(row):
                worksheet.write_string(row_num, col_num, str(cell), text_format)
```
### Environment
```markdown
- XlsxWriter version: 3.2.0
- Python version: 3.10.11
- Excel version: 2016 pro
- OS: window 10 pro 22H2 19045.5247
```
### Any other information
_No response_
### OpenOffice and LibreOffice users
- [X] I have tested the output file with Excel. | closed | 2025-01-06T04:37:50Z | 2025-01-06T10:04:13Z | https://github.com/jmcnamara/XlsxWriter/issues/1112 | [
"bug"
] | xzpater | 1 |
ultralytics/yolov5 | deep-learning | 12,931 | polygon annotation to object detection | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
I want to run object detection with segmentation-labeled data, but I got an error.
As far as I know, object detection is possible with segmentation-labeled data, so is this a labeling issue?
`python tools/train.py --batch 32 --conf configs/yolov6s_finetune.py --epoch 50 --data ./FST1/data.yaml --fuse_ab --device 0`
`img record infomation path is:./FST1/train/.images_cache.json
Traceback (most recent call last):
File "tools/train.py", line 143, in <module>
main(args)
File "tools/train.py", line 128, in main
trainer = Trainer(args, cfg, device)
File "/media/HDD/조홍석/YOLOv6/yolov6/core/engine.py", line 91, in __init__
self.train_loader, self.val_loader = self.get_data_loader(self.args, self.cfg, self.data_dict)
File "/media/HDD/조홍석/YOLOv6/yolov6/core/engine.py", line 387, in get_data_loader
train_loader = create_dataloader(train_path, args.img_size, args.batch_size // args.world_size, grid_size,
File "/media/HDD/조홍석/YOLOv6/yolov6/data/data_load.py", line 46, in create_dataloader
dataset = TrainValDataset(
File "/media/HDD/조홍석/YOLOv6/yolov6/data/datasets.py", line 82, in __init__
self.img_paths, self.labels = self.get_imgs_labels(self.img_dir)
File "/media/HDD/조홍석/YOLOv6/yolov6/data/datasets.py", line 435, in get_imgs_labels
*[
File "/media/HDD/조홍석/YOLOv6/yolov6/data/datasets.py", line 438, in <listcomp>
np.array(info["labels"], dtype=np.float32)
ValueError: setting an array element with a sequence. The requested array has an inhomogeneous shape after 1 dimensions. The detected shape was (2,) + inhomogeneous part.`
### Additional
_No response_ | closed | 2024-04-17T07:31:16Z | 2024-05-28T00:21:51Z | https://github.com/ultralytics/yolov5/issues/12931 | [
"question",
"Stale"
] | Cho-Hong-Seok | 2 |
pydantic/pydantic-ai | pydantic | 950 | Agent making mutiple, sequential requests with tool calls | Hi, I'm new to Pydantic-ai and trying to understand `Agent`'s behavior.
My question is why sometimes the Agent make multiple, sequential tool calls? Most of the time it only make one, where one or several tools are called at the same time, like in the examples from pydantic-ai docs.
But I found that sometimes the `Agent` makes multiple requests, typically when given a complex task. For example, given a simple three-door maze problem, it makes multiple tool calls to solve the maze level by level, like trial and error.
```
"""
Pydantic-ai maze solver
Pydantic-ai agent can execute tools iteratively until solving the maze
Here we provide the agent with options, states, signal of exit, and some encouragement
"""
from pydantic_ai.usage import UsageLimits
from pydantic_ai.exceptions import UsageLimitExceeded
from pydantic_ai import Agent, RunContext
from loguru import logger
from dataclasses import dataclass
from pydantic_ai.models.openai import OpenAIModel
model = OpenAIModel('openai:gpt-4o')
my_agent = Agent(model)
SYSTEM_PROMPT = "I am trapped in a maze. Ahead of me are three layers of doors, each layer has three doors. I cannot access next layers unless I solve the current layer. Need to find the exit by solving all layers"
@dataclass
class MyDeps:
current_layer: int
exitted: bool
tried_doors: list[str]
@my_agent.system_prompt
async def get_system_prompt(ctx: RunContext[MyDeps]) -> str:
return SYSTEM_PROMPT
@my_agent.tool
async def tool_door_left(ctx: RunContext[MyDeps]) -> str:
"""
you select the door on the left
"""
if ctx.deps.current_layer == 2:
message = ("cleared layer 2")
logger.info(message + f"\nDoor tried: {ctx.deps.tried_doors}")
ctx.deps.current_layer += 1
ctx.deps.tried_doors = []
return message
else:
ctx.deps.tried_doors.append('left')
@my_agent.tool
async def tool_door_right(ctx: RunContext[MyDeps]) -> str:
"""
you select the door on the right
"""
if ctx.deps.current_layer == 1:
message = ("cleared layer 1")
logger.info(message + f"\nDoor tried: {ctx.deps.tried_doors}")
ctx.deps.current_layer += 1
ctx.deps.tried_doors = []
return message
else:
ctx.deps.tried_doors.append('right')
@my_agent.tool
async def tool_door_middle(ctx: RunContext[MyDeps]) -> str:
"""
you select the door in the middle
"""
if ctx.deps.current_layer == 3:
message = ("cleared layer 3. Found the exit!")
logger.info(message + f"\nDoor tried: {ctx.deps.tried_doors}")
ctx.deps.exitted = True
ctx.deps.tried_doors = []
return message
else:
ctx.deps.tried_doors.append('middle')
usage_limits = UsageLimits(request_limit=5)
deps = MyDeps(current_layer=1, exitted=False, tried_doors=[])
try:
res = await my_agent.run("Good luck", deps=deps, usage_limits=usage_limits)
print(res.data)
except UsageLimitExceeded as e:
print("I'm trapped forever", e)
```
I cannot find this behavior documented anywhere in the doc/github issues. Although it seems like helpful, I want to confirm if
| closed | 2025-02-20T02:03:29Z | 2025-02-20T02:08:45Z | https://github.com/pydantic/pydantic-ai/issues/950 | [] | xtfocus | 1 |
StackStorm/st2 | automation | 6,137 | Renew test SSL CA + Cert | Our test SSL CA+cert just expired. We need to renew it and document how to do so.
https://github.com/StackStorm/st2/tree/master/st2tests/st2tests/fixtures/ssl_certs
Since this is for testing, I think we could do something like a 15 year duration. | closed | 2024-02-13T18:50:11Z | 2024-02-16T17:07:01Z | https://github.com/StackStorm/st2/issues/6137 | [
"tests",
"infrastructure: ci/cd"
] | cognifloyd | 2 |
tensorflow/tensor2tensor | deep-learning | 1,847 | Out of Memory while training | I am getting an OoM error while training with 8 GPUs but not with 1 GPU.
I use the following command to train.
t2t-trainer \
--data_dir=$DATA_DIR \
--problem=$PROBLEM \
--model=$MODEL \
--hparams='max_length=100,batch_size=1024,eval_drop_long_sequences=true'\
--worker_gpu=8 \
--train_steps=350000 \
--hparams_set=$HPARAMS \
--eval_steps=5000 \
--output_dir=$TRAIN_DIR \
--schedule=continuous_train_and_eval
Any suggestions? I also tried to reduce the batch_size as well as the max_length but no luck. | open | 2020-09-08T13:58:43Z | 2022-10-20T14:00:33Z | https://github.com/tensorflow/tensor2tensor/issues/1847 | [] | dinosaxon | 1 |
xuebinqin/U-2-Net | computer-vision | 75 | Results without fringe | Hi @NathanUA,
I have a library that makes use of your model.
@alfonsmartinez opened an issue about the model result, please, take a look at here:
https://github.com/danielgatis/rembg/issues/14
Can you figure out how I can achieve this result without the black fringe?
thanks. | closed | 2020-09-29T21:29:17Z | 2020-10-10T16:43:20Z | https://github.com/xuebinqin/U-2-Net/issues/75 | [] | danielgatis | 10 |
junyanz/pytorch-CycleGAN-and-pix2pix | deep-learning | 1,018 | Error when testing pix2pix with a single image | Hi,
I trained pix2pix with my own dataset which ran fine for 200 epochs and the visom results through training seem promising. I now want to test the model with a single test image (without the image pair format, just the A style image to convert to B style)
I placed that single image in its own folder and gave the following attributes, as is suggested to ``test.py``:
`--dataroot ./datasets/edge2face/single_test/ --name egde2face_pix2pix --model test --dataset_mode single`
But I get the following error:
> AttributeError: 'Sequential' object has no attribute 'model'
Here is the full output if that helps:
```
----------------- Options ---------------
aspect_ratio: 1.0
batch_size: 1
checkpoints_dir: ./checkpoints
crop_size: 256
dataroot: ./datasets/edge2face/single_test/ [default: None]
dataset_mode: single
direction: AtoB
display_winsize: 256
epoch: latest
eval: False
gpu_ids: 0
init_gain: 0.02
init_type: normal
input_nc: 3
isTrain: False [default: None]
load_iter: 0 [default: 0]
load_size: 256
max_dataset_size: inf
model: test
model_suffix:
n_layers_D: 3
name: egde2face_pix2pix [default: experiment_name]
ndf: 64
netD: basic
netG: resnet_9blocks
ngf: 64
no_dropout: False
no_flip: False
norm: instance
ntest: inf
num_test: 50
num_threads: 4
output_nc: 3
phase: test
preprocess: resize_and_crop
results_dir: ./results/
serial_batches: False
suffix:
verbose: False
----------------- End -------------------
dataset [SingleDataset] was created
initialize network with normal
model [TestModel] was created
loading the model from ./checkpoints\egde2face_pix2pix\latest_net_G.pth
Traceback (most recent call last):
File "C:/Users/PycharmProjects/pix2pix-cyclegan/test.py", line 47, in <module>
model.setup(opt) # regular setup: load and print networks; create schedulers
File "C:\Users\PycharmProjects\pix2pix-cyclegan\models\base_model.py", line 88, in setup
self.load_networks(load_suffix)
File "C:\Users\PycharmProjects\pix2pix-cyclegan\models\base_model.py", line 197, in load_networks
self.__patch_instance_norm_state_dict(state_dict, net, key.split('.'))
File "C:\Users\PycharmProjects\pix2pix-cyclegan\models\base_model.py", line 173, in __patch_instance_norm_state_dict
self.__patch_instance_norm_state_dict(state_dict, getattr(module, key), keys, i + 1)
File "C:\Users\PycharmProjects\pix2pix-cyclegan\models\base_model.py", line 173, in __patch_instance_norm_state_dict
self.__patch_instance_norm_state_dict(state_dict, getattr(module, key), keys, i + 1)
File "C:\Users\AppData\Local\Programs\Python\Python36\lib\site-packages\torch\nn\modules\module.py", line 535, in __getattr__
type(self).__name__, name))
AttributeError: 'Sequential' object has no attribute 'model'
```
Thanks | open | 2020-05-06T11:41:15Z | 2020-05-07T01:53:26Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1018 | [] | StuckinPhD | 3 |
nolar/kopf | asyncio | 401 | [archival placeholder] | This is a placeholder for later issues/prs archival.
It is needed now to reserve the initial issue numbers before going with actual development (PRs), so that later these placeholders could be populated with actual archived issues & prs with proper intra-repo cross-linking preserved. | closed | 2020-08-18T20:05:39Z | 2020-08-18T20:05:41Z | https://github.com/nolar/kopf/issues/401 | [
"archive"
] | kopf-archiver[bot] | 0 |
roboflow/supervision | computer-vision | 957 | Segmentation problem | ### Search before asking
- [X] I have searched the Supervision [issues](https://github.com/roboflow/supervision/issues) and found no similar feature requests.
### Question
Dear @SkalskiP
I am trying to adapt your code for velocity estimation on cars, so that besides detection can display segmentation also.
I chaned the model into **model = YOLO('yolov8s-seg.pt')**, but there is no change in the result.
Bellow is the code, that I just want to add segmentation on it so that I can use it for my purposes:
```python
import argparse
from collections import defaultdict, deque
import cv2
import numpy as np
from ultralytics import YOLO
from tqdm.notebook import tqdm
import supervision as sv
SOURCE = np.array([[1252, 787], [2298, 803], [5039, 2159], [-550, 2159]]) # coordinates of the tracking region
TARGET_WIDTH = 25 # physical dimensions of the targeted region
TARGET_HEIGHT = 250
TARGET = np.array(
[
[0, 0],
[TARGET_WIDTH - 1, 0],
[TARGET_WIDTH - 1, TARGET_HEIGHT - 1], #targeted region in coordinates
[0, TARGET_HEIGHT - 1],
]
)
LINE_START = sv.Point(50, 1500)
LINE_END = sv.Point(3840-50, 1500)
class ViewTransformer:
def __init__(self, source: np.ndarray, target: np.ndarray) -> None:
source = source.astype(np.float32)
target = target.astype(np.float32)
self.m = cv2.getPerspectiveTransform(source, target)
def transform_points(self, points: np.ndarray) -> np.ndarray: #transform points from source to the target for the tracking objects
if points.size == 0:
return points
reshaped_points = points.reshape(-1, 1, 2).astype(np.float32)
transformed_points = cv2.perspectiveTransform(reshaped_points, self.m)
return transformed_points.reshape(-1, 2)
def parse_arguments() -> argparse.Namespace:
parser = argparse.ArgumentParser(
description="Vehicle Speed Estimation using Ultralytics and Supervision"
)
parser.add_argument(
"--source_video_path",
required=True,
help="Path to the source video file",
type=str,
)
parser.add_argument(
"--target_video_path",
required=True,
help="Path to the target video file (output)", #function to upload the source video and the putput !!!! Done from command window!!!!
type=str,
)
parser.add_argument(
"--confidence_threshold",
default=0.3,
help="Confidence threshold for the model",
type=float,
)
parser.add_argument(
"--iou_threshold", default=0.7, help="IOU threshold for the model", type=float
)
return parser.parse_args()
if __name__ == "__main__":
args = parse_arguments()
video_info = sv.VideoInfo.from_video_path(video_path=args.source_video_path) #Uploads the video and uses YOLO
model = YOLO('yolov8s-seg.pt')
#model = YOLO("yolov8n-seg.pt")
byte_track = sv.ByteTrack(
frame_rate=video_info.fps, track_thresh=args.confidence_threshold #tracking objects in the video frames
)
thickness = sv.calculate_dynamic_line_thickness( #annotating bounding boxes and traces on the frames
resolution_wh=video_info.resolution_wh
)
text_scale = sv.calculate_dynamic_text_scale(resolution_wh=video_info.resolution_wh)
bounding_box_annotator = sv.BoundingBoxAnnotator(thickness=thickness)
label_annotator = sv.LabelAnnotator( #labeling with number in the bottom center
text_scale=text_scale,
text_thickness=thickness,
text_position=sv.Position.BOTTOM_CENTER, #label colour not specified and selects a colour based on the label id
)
trace_annotator = sv.TraceAnnotator(
thickness=thickness,
trace_length=video_info.fps * 2, #tracer display bottom center
        position=sv.Position.BOTTOM_CENTER,color_lookup=sv.ColorLookup.TRACK #tracer colour changes based on the track number
)
    frame_generator = sv.get_video_frames_generator(source_path=args.source_video_path) # no clue
polygon_zone = sv.PolygonZone(
polygon=SOURCE, frame_resolution_wh=video_info.resolution_wh #tracer display bottom center
)
    view_transformer = ViewTransformer(source=SOURCE, target=TARGET) # no clue
coordinates = defaultdict(lambda: deque(maxlen=video_info.fps)) # dictionary corresponds to a tracker ID, and the associated value is a deque (double-ended queue)
# with a maximum length of video_info.fps, which is likely the frames per second of the video.
line_counter = sv.LineZone(start=LINE_START, end=LINE_END)
line_annotator = sv.LineZoneAnnotator(thickness=thickness)
box_annotator = sv.BoxAnnotator(
thickness=thickness,
text_thickness=thickness,
text_scale=text_scale
)
with sv.VideoSink(args.target_video_path, video_info) as sink:
for frame in frame_generator:
result = model(frame)[0]
detections = sv.Detections.from_ultralytics(result)
detections = detections[detections.confidence > args.confidence_threshold]
detections = detections[polygon_zone.trigger(detections)] #detections chack up
detections = detections.with_nms(threshold=args.iou_threshold)
detections = byte_track.update_with_detections(detections=detections)
points = detections.get_anchors_coordinates(
anchor=sv.Position.BOTTOM_CENTER
)
points = view_transformer.transform_points(points=points).astype(int)
for tracker_id, [_, y] in zip(detections.tracker_id, points): #storing y coordinates in dictenary
coordinates[tracker_id].append(y)
labels = []
for tracker_id in detections.tracker_id:
if len(coordinates[tracker_id]) < video_info.fps / 2:
labels.append(f"#{tracker_id}")
else:
coordinate_start = coordinates[tracker_id][-1] #speed estimation
coordinate_end = coordinates[tracker_id][0]
distance = abs(coordinate_start - coordinate_end)
time = len(coordinates[tracker_id]) / video_info.fps
speed = distance / time * 3.6
labels.append(f"#{tracker_id} {int(speed)} km/h")
annotated_frame = frame.copy()
annotated_frame=sv.draw_polygon(annotated_frame,polygon=SOURCE,color=sv.Color.red()) #draw the poligone red for the detection zone
annotated_frame = trace_annotator.annotate(
scene=annotated_frame, detections=detections
)
annotated_frame = bounding_box_annotator.annotate(
scene=annotated_frame, detections=detections
)
annotated_frame = label_annotator.annotate(
scene=annotated_frame, detections=detections, labels=labels
)
line_counter.trigger(detections=detections)
line_annotator.annotate(frame=annotated_frame, line_counter=line_counter)
sink.write_frame(annotated_frame)
cv2.imshow("frame", annotated_frame)
if cv2.waitKey(1) & 0xFF == ord("q"): # displays images, and q is to terminate the loop CHEERS
break
cv2.destroyAllWindows()
```
### Additional
I will be so grateful for your help.
Also, is there a possibility to estimate the area of the segmentation, in real dimensions?
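On the area question: this is not part of supervision's API as far as I know, but once a mask's contour points are mapped through the same `ViewTransformer` into the metric TARGET plane, the shoelace formula gives the area in real-world units (assuming the target plane really is planar and measured in meters). A stdlib sketch with an illustrative `polygon_area` helper:

```python
def polygon_area(points):
    """Shoelace formula: area of a simple polygon given (x, y) vertices.

    If the vertices are mask-contour points already transformed into the
    metric TARGET plane (e.g. meters), the result is in square meters.
    """
    n = len(points)
    s = 0.0
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]  # wrap around to close the polygon
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0
```

For example, `polygon_area(view_transformer.transform_points(mask_contour))` would be the idea, with `mask_contour` taken from the segmentation result.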
Thank you in advance | closed | 2024-02-29T02:55:39Z | 2024-02-29T08:41:02Z | https://github.com/roboflow/supervision/issues/957 | [
"question"
] | ana111todorova | 1 |
recommenders-team/recommenders | data-science | 2,147 | [BUG] Test failing Service invocation timed out | ### Description
<!--- Describe your issue/bug/request in detail -->
The VMs for the tests are not even starting:
```
Class AutoDeleteSettingSchema: This is an experimental class, and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.
Class AutoDeleteConditionSchema: This is an experimental class, and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.
Class BaseAutoDeleteSettingSchema: This is an experimental class, and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.
Class IntellectualPropertySchema: This is an experimental class, and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.
Class ProtectionLevelSchema: This is an experimental class, and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.
Class BaseIntellectualPropertySchema: This is an experimental class, and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.
Uploading recommenders (10.26 MBs): 0%| | 0/10263396 [00:00<?, ?it/s]
Uploading recommenders (10.26 MBs): 1%| | 107280/10263396 [00:00<00:09, 1055618.87it/s]
Uploading recommenders (10.26 MBs): 31%|███ | 3174427/10263396 [00:00<00:00, 17968925.70it/s]
Uploading recommenders (10.26 MBs): 52%|█████▏ | 5311634/10263396 [00:00<00:00, 15079703.44it/s]
Uploading recommenders (10.26 MBs): 86%|████████▌ | 8792146/10263396 [00:00<00:00, 21108647.33it/s]
Uploading recommenders (10.26 MBs): 100%|██████████| 10263396/10263396 [00:01<00:00, 9462800.48it/s]
Traceback (most recent call last):
File "/home/runner/work/recommenders/recommenders/tests/ci/azureml_tests/submit_groupwise_azureml_pytest.py", line 175, in <module>
run_tests(
File "/home/runner/work/recommenders/recommenders/tests/ci/azureml_tests/aml_utils.py", line 170, in run_tests
job = client.jobs.create_or_update(
File "/opt/hostedtoolcache/Python/3.10.14/x64/lib/python3.10/site-packages/azure/core/tracing/decorator.py", line 94, in wrapper_use_tracer
return func(*args, **kwargs)
File "/opt/hostedtoolcache/Python/3.10.14/x64/lib/python3.10/site-packages/azure/ai/ml/_telemetry/activity.py", line 372, in wrapper
return_value = f(*args, **kwargs)
File "/opt/hostedtoolcache/Python/3.10.14/x64/lib/python3.10/site-packages/azure/ai/ml/operations/_job_operations.py", line 663, in create_or_update
self._resolve_arm_id_or_upload_dependencies(job)
File "/opt/hostedtoolcache/Python/3.10.14/x64/lib/python3.10/site-packages/azure/ai/ml/operations/_job_operations.py", line 1070, in _resolve_arm_id_or_upload_dependencies
self._resolve_arm_id_or_azureml_id(job, self._orchestrators.get_asset_arm_id)
File "/opt/hostedtoolcache/Python/3.10.14/x64/lib/python3.10/site-packages/azure/ai/ml/operations/_job_operations.py", line 1335, in _resolve_arm_id_or_azureml_id
job = self._resolve_arm_id_for_command_job(job, resolver)
File "/opt/hostedtoolcache/Python/3.10.14/x64/lib/python3.10/site-packages/azure/ai/ml/operations/_job_operations.py", line 1387, in _resolve_arm_id_for_command_job
job.environment = resolver(job.environment, azureml_type=AzureMLResourceType.ENVIRONMENT)
File "/opt/hostedtoolcache/Python/3.10.14/x64/lib/python3.10/site-packages/azure/ai/ml/operations/_operation_orchestrator.py", line 183, in get_asset_arm_id
name, version = self._resolve_name_version_from_name_label(asset, azureml_type)
File "/opt/hostedtoolcache/Python/3.10.14/x64/lib/python3.10/site-packages/azure/ai/ml/operations/_operation_orchestrator.py", line 443, in _resolve_name_version_from_name_label
_resolve_label_to_asset(
File "/opt/hostedtoolcache/Python/3.10.14/x64/lib/python3.10/site-packages/azure/ai/ml/_utils/_asset_utils.py", line 1022, in _resolve_label_to_asset
return resolver(name)
File "/opt/hostedtoolcache/Python/3.10.14/x64/lib/python3.10/site-packages/azure/ai/ml/operations/_environment_operations.py", line 448, in _get_latest_version
result = _get_latest(
File "/opt/hostedtoolcache/Python/3.10.14/x64/lib/python3.10/site-packages/azure/ai/ml/_utils/_asset_utils.py", line 853, in _get_latest
latest = result.next()
File "/opt/hostedtoolcache/Python/3.10.14/x64/lib/python3.10/site-packages/azure/core/paging.py", line 123, in __next__
return next(self._page_iterator)
File "/opt/hostedtoolcache/Python/3.10.14/x64/lib/python3.10/site-packages/azure/core/paging.py", line 75, in __next__
self._response = self._get_next(self.continuation_token)
File "/opt/hostedtoolcache/Python/3.10.14/x64/lib/python3.10/site-packages/azure/ai/ml/_restclient/v2023_04_01_preview/operations/_environment_versions_operations.py", line 335, in get_next
raise HttpResponseError(response=response, model=error, error_format=ARMErrorFormat)
azure.core.exceptions.HttpResponseError: (TransientError) Service invocation timed out.
Request: GET environment-management.vienna-eastus.svc/environment/v1.0/subscriptions/***/resourceGroups/recommenders_project_resources/providers/Microsoft.MachineLearningServices/workspaces/azureml-test-workspace/MFE/versions/environments/recommenders-61568e68746eceae2de1111461886594ca9a5e14-python3_8-spark
Message: Operation canceled Time waited: 00:00:09.9995201
Code: TransientError
Message: Service invocation timed out.
Request: GET environment-management.vienna-eastus.svc/environment/v1.0/subscriptions/***/resourceGroups/recommenders_project_resources/providers/Microsoft.MachineLearningServices/workspaces/azureml-test-workspace/MFE/versions/environments/recommenders-61568e68746eceae2de1111461886594ca9a5e14-python3_8-spark
Message: Operation canceled Time waited: 00:00:09.9995201
Target: GET https://environment-management.vienna-eastus.svc/environment/v1.0/subscriptions/***/resourceGroups/recommenders_project_resources/providers/Microsoft.MachineLearningServices/workspaces/azureml-test-workspace/MFE/versions/environments/recommenders-61568e68746eceae2de1111461886594ca9a5e14-python3_8-spark?$orderby=createdtime desc&$top=1&listViewType=ActiveOnly
Additional Information:Type: ComponentName
Info: ***
"value": "managementfrontend"
***Type: Correlation
Info: ***
"value": ***
"operation": "5b909b2c3dd76b888a4d120f149cb431",
"request": "cbb3455b1e94a291"
***
***Type: Environment
Info: ***
"value": "eastus"
***Type: Location
Info: ***
"value": "eastus"
***Type: Time
Info: ***
"value": "2024-08-15T16:26:53.8249469+00:00"
***
Error: Process completed with exit code 1.
```
### In which platform does it happen?
<!--- Describe the platform where the issue is happening (use a list if needed) -->
<!--- For example: -->
<!--- * Azure Data Science Virtual Machine. -->
<!--- * Azure Databricks. -->
<!--- * Other platforms. -->
### How do we replicate the issue?
<!--- Please be specific as possible (use a list if needed). -->
<!--- For example: -->
<!--- * Create a conda environment for pyspark -->
<!--- * Run unit test `test_sar_pyspark.py` with `pytest -m 'spark'` -->
<!--- * ... -->
See example: https://github.com/recommenders-team/recommenders/actions/runs/10406895552/job/28821110978
### Expected behavior (i.e. solution)
<!--- For example: -->
<!--- * The tests for SAR PySpark should pass successfully. -->
### Willingness to contribute
<!--- Go over all the following points, and put an `x` in the box that apply. -->
- [ ] Yes, I can contribute for this issue independently.
- [ ] Yes, I can contribute for this issue with guidance from Recommenders community.
- [ ] No, I cannot contribute at this time.
### Other Comments
FYI @SimonYansenZhao | closed | 2024-08-16T15:50:07Z | 2024-08-26T10:16:02Z | https://github.com/recommenders-team/recommenders/issues/2147 | [
"bug"
] | miguelgfierro | 15 |
mouredev/Hello-Python | fastapi | 85 | Tenglong Boyuan account registration, WeChat: zhkk6969 | Boyuan online account opening, mobile site: boy9999.cc, desktop PC: boy8888.cc, invite code: 0q821
WeChat 《zhkk6969》, inquiries via QQ: 1923630145
Registration can be completed through Tenglong's customer-service phone line or online support; the staff will assist with the whole registration process
Account-opening process: for investors, it involves preparing documents (e.g. original ID card, bank-card copy, personal résumé, etc.) and submitting the application through Tenglong's
official website or mobile app | closed | 2024-10-08T06:24:10Z | 2024-10-16T05:27:12Z | https://github.com/mouredev/Hello-Python/issues/85 | [] | xiao6901 | 0 |
dynaconf/dynaconf | flask | 314 | [RFC] Move to f"string" | Python 3.5 has been dropped.
Now some uses of `format` can be replaced with fstrings | closed | 2020-03-09T03:47:56Z | 2020-03-31T13:26:42Z | https://github.com/dynaconf/dynaconf/issues/314 | [
"help wanted",
"Not a Bug",
"RFC",
"good first issue"
] | rochacbruno | 2 |
assafelovic/gpt-researcher | automation | 949 | Is it possible to get an arxiv formatted paper , totally by gpt-researcher | closed | 2024-10-25T04:02:13Z | 2024-11-03T09:56:56Z | https://github.com/assafelovic/gpt-researcher/issues/949 | [] | CoderYiFei | 1 | |
fastapi-users/fastapi-users | asyncio | 1,170 | GET users/me returns different ObjectId on each call | also on the `/register` route. See:
https://github.com/fastapi-users/fastapi-users/discussions/1142 | closed | 2023-03-10T13:54:50Z | 2024-07-14T13:24:43Z | https://github.com/fastapi-users/fastapi-users/issues/1170 | [
"bug"
] | gegnew | 1 |
lukas-blecher/LaTeX-OCR | pytorch | 151 | Error while installing pix2tex[gui] | > ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
> spyder 5.1.5 requires pyqt5<5.13, but you have pyqt5 5.15.6 which is incompatible.
| closed | 2022-05-19T14:29:01Z | 2022-05-19T14:32:16Z | https://github.com/lukas-blecher/LaTeX-OCR/issues/151 | [] | islambek243 | 1 |
giotto-ai/giotto-tda | scikit-learn | 589 | [BUG]Validation of argument 'metric_params' initialized to a dictionary fails when used with a callable metric | **Describe the bug**
The validate_params function from utils fails to validate the 'metric_params' argument when initialized to a dictionary with custom parameters to be used with a custom metric. I think I have tracked it down to be an issue related to the following lines in the documentation for the validate_params function
`If reference['type'] == dict – meaning that parameter should be a dictionary – ref_of should have a similar structure as references, and validate_params is called recursively on (parameter, ref_of).`
When I pass a dictionary with custom parameters, ref_of ends up being a NoneType object causing the recursive call to fail.
**To reproduce**
```
from data.generate_datasets import make_point_clouds
point_clouds_basic, labels_basic = make_point_clouds(n_samples_per_shape=1, n_points=20, noise=0.1)
from gtda.homology import VietorisRipsPersistence
homology_dimensions = [0, 1]
def customDist(arr1,arr2,**kwargs):
# Ideally, I want to make use of the value of p in custom metrics here and return a value
# currently function returns something else without using any custom parameters
return abs(arr1[0] - arr2[0])
customMetrics = {}
customMetrics['p'] = 3
persistence = VietorisRipsPersistence(
metric=customDist,
metric_params = customMetrics,
homology_dimensions=homology_dimensions,
n_jobs=6,
collapse_edges=True,
)
diagrams_basic = persistence.fit_transform(point_clouds_basic)
```
**Expected behavior**
No error should be thrown. The custom metric p should be accessible for computation of custom distance
**Actual behaviour**
> **Traceback (most recent call last):**
File "/media/lab/Shared/useCustomDistanceFunction.py", line 136, in <module>
diagrams_basic = persistence.fit_transform(point_clouds_basic)
File "/home/lab/anaconda3/envs/582/lib/python3.6/site-packages/gtda/utils/_docs.py", line 106, in fit_transform_wrapper
return original_fit_transform(*args, **kwargs)
File "/home/lab/anaconda3/envs/582/lib/python3.6/site-packages/sklearn/base.py", line 690, in fit_transform
return self.fit(X, **fit_params).transform(X)
File "/home/lab/anaconda3/envs/582/lib/python3.6/site-packages/gtda/homology/simplicial.py", line 232, in fit
self.get_params(), self._hyperparameters, exclude=["n_jobs"])
File "/home/lab/anaconda3/envs/582/lib/python3.6/site-packages/gtda/utils/validation.py", line 199, in validate_params
return _validate_params(parameters_, references)
File "/home/lab/anaconda3/envs/582/lib/python3.6/site-packages/gtda/utils/validation.py", line 142, in _validate_params
_validate_params(parameter, ref_of, rec_name=name)
File "/home/lab/anaconda3/envs/582/lib/python3.6/site-packages/gtda/utils/validation.py", line 131, in _validate_params
if name not in references.keys():
**AttributeError: 'NoneType' object has no attribute 'keys'**
**Versions**
Linux-5.4.0-74-generic-x86_64-with-debian-buster-sid
Python 3.6.12 |Anaconda, Inc.| (default, Sep 8 2020, 23:10:56)
[GCC 7.3.0]
NumPy 1.19.2
SciPy 1.5.2
Joblib 1.0.1
Scikit-learn 0.23.2
Giotto-tda 0.4.0
**Additional context**
I am not entirely sure of the right way to code up the use of custom metrics and a custom distance function for use with VietorisRipsPersistence. Particularly, assuming the validation of metric_params does not throw any error, I am not sure how to access metric_params within my custom distance function. It would be great if you provide any suggestions or a template. I think that the functionality is based on the pairwise_distances function from scikit-learn (and the function's handling of custom parameters). I went over the documentation and looked for examples but couldn't find a working example for it.
<!-- Thanks for contributing! -->
| closed | 2021-07-03T21:44:27Z | 2021-07-08T15:56:46Z | https://github.com/giotto-ai/giotto-tda/issues/589 | [
"bug"
] | ektas0330 | 5 |
d2l-ai/d2l-en | tensorflow | 1,679 | Adding a sub-topic in Convolutions for images | The current material under topic '6.2 Convolution for images', does not cover 'Dilated Convolutions'.
Proposed Content:
(To be added after '6.2.6. Feature Map and Receptive Field')
- Define dilated convolution
- Add visualizations depicting the larger receptive field compared to standard convolution
- Add code snippets
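As a reference while drafting, here is a minimal pure-Python sketch of 2-D dilated cross-correlation — not the proposed d2l code, just to pin down the semantics (the name `dilated_corr2d` is illustrative):

```python
def dilated_corr2d(X, K, dilation=1):
    """2-D cross-correlation with a dilated kernel (pure-Python sketch).

    Kernel element K[a][b] is applied at offset (a*dilation, b*dilation),
    so the effective receptive field grows to (k - 1) * dilation + 1 per
    axis without adding any parameters.
    """
    h, w = len(X), len(X[0])
    kh, kw = len(K), len(K[0])
    eh = (kh - 1) * dilation + 1  # effective kernel height
    ew = (kw - 1) * dilation + 1  # effective kernel width
    out = []
    for i in range(h - eh + 1):
        row = []
        for j in range(w - ew + 1):
            s = 0.0
            for a in range(kh):
                for b in range(kw):
                    s += X[i + a * dilation][j + b * dilation] * K[a][b]
            row.append(s)
        out.append(row)
    return out
```

With `dilation=1` this reduces to the standard cross-correlation already covered in 6.2.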
I am working on the above-mentioned material and will create a PR for 6.2 (Convolution for images), most likely by end of the day. | open | 2021-03-17T13:13:12Z | 2021-03-17T13:13:12Z | https://github.com/d2l-ai/d2l-en/issues/1679 | [] | Swetha5 | 0 |
zihangdai/xlnet | nlp | 29 | What's the output structure for XLNET? [ A, SEP, B, SEP, CLS] | Hi, is the output embedding structure like this: [ A, SEP, B, SEP, CLS]?
Because for BERT it's like this right: [CLS, A, SEP, B, SEP]?
And for GPT2 is it just like this: [A, B]?
Thanks.
| open | 2019-06-23T04:41:14Z | 2019-09-19T12:07:54Z | https://github.com/zihangdai/xlnet/issues/29 | [] | BoPengGit | 2 |
Asabeneh/30-Days-Of-Python | python | 265 | Day 4: Strings | In the find() example
if find() returns the position of the first occurrence of 'y', shouldn't it return 5 instead of 16? | closed | 2022-07-26T00:08:36Z | 2023-07-08T22:16:54Z | https://github.com/Asabeneh/30-Days-Of-Python/issues/265 | [] | AdityaDanturthi | 1 |
marimo-team/marimo | data-visualization | 4,184 | "Object of type Decimal is not JSON serializable" when processing results of DuckDB query | ### Describe the bug
Whenever I do a `sum()` of integer values in a DuckDB query, I get a return value which is translated to a Decimal object in Python. This produces error/warning messages in marimo like
`Failed to send message to frontend: Object of type Decimal is not JSON serializable`
I believe the reason is DuckDB always uses a `HUGEINT` type (`INT128`) as the value of a sum of integers. The workaround is to cast it to a regular integer in SQL. Assuming of course you don't expect results that would overflow 64 bit int.
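The behaviour is easy to reproduce without marimo, since the underlying issue is just `decimal.Decimal` reaching `json.dumps`:

```python
import json
from decimal import Decimal

# DuckDB surfaces HUGEINT sums as decimal.Decimal in Python.
total = Decimal(42)
try:
    json.dumps({"total": total})
except TypeError as err:
    print(err)  # Object of type Decimal is not JSON serializable

# Workaround: CAST(sum(x) AS BIGINT) in SQL, or convert before serializing.
print(json.dumps({"total": int(total)}))  # {"total": 42}
```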
### Environment
<details>
```
```
</details>
### Code to reproduce
_No response_ | closed | 2025-03-21T10:07:03Z | 2025-03-23T03:47:52Z | https://github.com/marimo-team/marimo/issues/4184 | [
"bug",
"cannot reproduce"
] | rjbudzynski | 3 |
scikit-learn/scikit-learn | python | 30,036 | OneVsRestClassifier cannot be used with TunedThresholdClassifierCV | https://github.com/scikit-learn/scikit-learn/blob/d5082d32de2797f9594c9477f2810c743560a1f1/sklearn/model_selection/_classification_threshold.py#L386
When predict is called on `OneVsRestClassifier`, it calls `predict_proba` on the underlying classifier.
If the underlying classifier is a `TunedThresholdClassifierCV`, that call redirects to its inner estimator instead.
On the line referenced, I think that `OneVsRestClassifier` should check if the estimator is `TunedThresholdClassifierCV`, and if so use the `best_threshold_` instead of 0.5 | open | 2024-10-09T07:31:21Z | 2024-10-15T09:24:45Z | https://github.com/scikit-learn/scikit-learn/issues/30036 | [
"Bug",
"Needs Decision"
] | worthy7 | 10 |
Yorko/mlcourse.ai | seaborn | 776 | Issue on page /book/topic04/topic4_linear_models_part5_valid_learning_curves.html | The first validation curve is missing

| closed | 2024-08-30T12:07:28Z | 2025-01-06T15:49:43Z | https://github.com/Yorko/mlcourse.ai/issues/776 | [] | ssukhgit | 1 |
tox-dev/tox | automation | 2,575 | Tox shouldn't set COLUMNS if it's already set | ## Issue
Coverage.py's doc build fails under tox4 when it didn't under tox3. This is due to setting the COLUMNS environment variable. I can fix it, but ideally tox would honor an existing COLUMNS value instead of always setting its own.
My .rst files run through cog to get the `--help` output of my commands. I use optparse, which reads the COLUMNS value to decide on the wrapping width, defaulting to 80. My "doc" environment checks that the files are correct with `cog --check`. Tox3 didn't set the value, so it was always 80 and the files passed the check. Now tox4 [uses the actual width of my terminal](https://github.com/tox-dev/tox/blob/main/src/tox/execute/local_sub_process/__init__.py#L192-L196), and the help output comes out too wide, and importantly, different from the current file, so the check fails.
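The dependence on `COLUMNS` is visible in optparse itself, which reads the variable at parser-creation time (standard library behaviour, shown here in isolation):

```python
import optparse
import os

# optparse resolves its help width from COLUMNS when the formatter is created
# (falling back to 80, then subtracting 2 for a margin).
os.environ["COLUMNS"] = "120"
parser = optparse.OptionParser()
print(parser.formatter.width)  # 118

os.environ["COLUMNS"] = "80"
print(optparse.OptionParser().formatter.width)  # 78
```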
## To reproduce
<details>
<summary>git clone https://github.com/nedbat/coveragepy</summary>
```
Cloning into 'coveragepy'...
remote: Enumerating objects: 33906, done.
remote: Counting objects: 100% (350/350), done.
remote: Compressing objects: 100% (120/120), done.
remote: Total 33906 (delta 258), reused 303 (delta 230), pack-reused 33556
Receiving objects: 100% (33906/33906), 17.06 MiB | 6.87 MiB/s, done.
```
</details>
<details>
<summary>cd coveragepy</summary>
```
```
</details>
<details>
<summary>python3.7 -m venv .venv</summary>
```
```
</details>
<details>
<summary>. ./.venv/bin/activate</summary>
```
```
</details>
<details>
<summary>pip install tox==4.0.0rc1</summary>
```
Collecting tox==4.0.0rc1
Using cached tox-4.0.0rc1-py3-none-any.whl (140 kB)
Collecting chardet>=5
Using cached chardet-5.0.0-py3-none-any.whl (193 kB)
Collecting virtualenv>=20.16.7
Using cached virtualenv-20.17.0-py3-none-any.whl (8.8 MB)
Collecting pyproject-api>=1.1.2
Using cached pyproject_api-1.1.2-py3-none-any.whl (11 kB)
Collecting colorama>=0.4.6
Using cached colorama-0.4.6-py2.py3-none-any.whl (25 kB)
Collecting importlib-metadata>=5.1
Using cached importlib_metadata-5.1.0-py3-none-any.whl (21 kB)
Collecting tomli>=2.0.1
Using cached tomli-2.0.1-py3-none-any.whl (12 kB)
Collecting packaging>=21.3
Using cached packaging-21.3-py3-none-any.whl (40 kB)
Collecting pluggy>=1
Using cached pluggy-1.0.0-py2.py3-none-any.whl (13 kB)
Collecting typing-extensions>=4.4
Using cached typing_extensions-4.4.0-py3-none-any.whl (26 kB)
Collecting cachetools>=5.2
Using cached cachetools-5.2.0-py3-none-any.whl (9.3 kB)
Collecting platformdirs>=2.5.4
Using cached platformdirs-2.5.4-py3-none-any.whl (14 kB)
Collecting zipp>=0.5
Using cached zipp-3.11.0-py3-none-any.whl (6.6 kB)
Collecting pyparsing!=3.0.5,>=2.0.2
Using cached pyparsing-3.0.9-py3-none-any.whl (98 kB)
Collecting distlib<1,>=0.3.6
Using cached distlib-0.3.6-py2.py3-none-any.whl (468 kB)
Collecting filelock<4,>=3.4.1
Using cached filelock-3.8.0-py3-none-any.whl (10 kB)
Installing collected packages: distlib, zipp, typing-extensions, tomli, pyparsing, platformdirs, filelock, colorama, chardet, cachetools, packaging, importlib-metadata, virtualenv, pyproject-api, pluggy, tox
```
</details>
<details>
<summary>tox -e doc</summary>
```
doc: install_deps> python -m pip install -U -r doc/requirements.pip
.pkg: install_requires> python -I -m pip install setuptools
.pkg: get_requires_for_build_editable> python /private/tmp/coveragepy/.venv/lib/python3.7/site-packages/pyproject_api/_backend.py True setuptools.build_meta
.pkg: install_requires_for_build_editable> python -I -m pip install wheel
.pkg: build_editable> python /private/tmp/coveragepy/.venv/lib/python3.7/site-packages/pyproject_api/_backend.py True setuptools.build_meta
doc: install_package_deps> python -m pip install -U 'tomli; python_full_version <= "3.11.0a6"'
doc: install_package> python -m pip install -U --force-reinstall --no-deps .tox/.pkg/dist/coverage-7.0.0a0-0.editable-cp37-cp37m-macosx_10_15_x86_64.whl
doc: commands[0]> python -m cogapp -cP --check --verbosity=1 'doc/*.rst'
Check failed
Checking doc/cmd.rst (changed)
doc: exit 5 (0.32 seconds) /private/tmp/coveragepy> python -m cogapp -cP --check --verbosity=1 'doc/*.rst' pid=23679
.pkg: _exit> python /private/tmp/coveragepy/.venv/lib/python3.7/site-packages/pyproject_api/_backend.py True setuptools.build_meta
```
</details>
## Fixes
I can fix it like this, but it's awkward because I have to set the value after tox invokes me, but before optparse creates the parsers in coverage.cmdline:
```diff
--- a/doc/cmd.rst
+++ b/doc/cmd.rst
@@ -1,18 +1,20 @@
.. Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
.. For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt
.. This file is meant to be processed with cog to insert the latest command
help into the docs. If it's out of date, the quality checks will fail.
Running "make prebuild" will bring it up to date.
.. [[[cog
+ import os
+ os.environ["COLUMNS"] = "80"
import contextlib
import io
import re
import textwrap
from coverage.cmdline import CoverageScript
def show_help(cmd):
with contextlib.redirect_stdout(io.StringIO()) as stdout:
CoverageScript().command_line([cmd, "--help"])
help = stdout.getvalue()
```
Ideally, I could set the value in the tox.ini file, but right now, that is too early and tox sets its own value, so this doesn't work:
```diff
--- a/tox.ini
+++ b/tox.ini
@@ -24,20 +24,21 @@ deps =
install_command = python -m pip install -U {opts} {packages}
passenv = *
setenv =
pypy{3,37,38,39}: COVERAGE_NO_CTRACER=no C extension under PyPy
jython: COVERAGE_NO_CTRACER=no C extension under Jython
jython: PYTEST_ADDOPTS=-n 0
# For some tests, we need .pyc files written in the current directory,
# so override any local setting.
PYTHONPYCACHEPREFIX=
+ COLUMNS=80
commands =
# Create tests/zipmods.zip
python igor.py zip_mods
# Build the C extension and test with the CTracer
python setup.py --quiet build_ext --inplace
python -m pip install -q -e .
python igor.py test_with_tracer c {posargs}
```
| closed | 2022-12-01T11:33:22Z | 2022-12-03T01:56:42Z | https://github.com/tox-dev/tox/issues/2575 | [] | nedbat | 2 |
apache/airflow | automation | 47,413 | Scheduler HA mode, DagFileProcessor Race Condition | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### If "Other Airflow 2 version" selected, which one?
2.10.1
### What happened?
We use dynamic dag generation to generate dags in our Airflow environment. We have one base dag definition file, which we will call `big_dag.py`, generating >1500 dags. Recently, after the introduction of a handful more dags generated from `big_dag.py`, all the `big_dag.py`-generated dags disappeared from the UI and reappear randomly in a loop.
We noticed that if we restart our env a couple times, we could randomly achieve stability. We started to believe some timing issue was at play.
### What you think should happen instead?
Goal State: Dags that generate >1500 dags should not cause any disruptions to environment, given appropriate timeouts.
After checking the dag_process_manager log stream we noticed a prevalence of this error:
`psycopg2.errors.UniqueViolation) duplicate key value violates unique constraint "serialized_dag_pkey" DETAIL: Key (dag_id)=(<dag_name>)`
I believe the issue is on this line of the `write_dag` function of the `SerializedDagModel`:
**This code is from the main branch, I believe the issue is still present in main**
https://github.com/apache/airflow/blob/7bfe283cf4fa28453c857e659f4c1d5917f9e11c/airflow/models/serialized_dag.py#L197
The check for if a serialized dag should be updated or not is NOT ATOMIC, which leads to the issue where more than 1 scheduler runs into a race condition while trying to update serialization.
I believe a "check-then-update" atomic action should be used here through a mechanism like the row level `SELECT ... FOR UPDATE`.
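An illustrative sketch of the suggested atomic check-then-update, using SQLite's upsert syntax rather than Airflow's actual SQLAlchemy/Postgres code (on Postgres the same effect comes from `INSERT ... ON CONFLICT` or `SELECT ... FOR UPDATE`; table and column names below are simplified assumptions):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE serialized_dag (dag_id TEXT PRIMARY KEY, data TEXT)")

def write_dag(dag_id: str, data: str) -> None:
    # One atomic statement instead of a non-atomic SELECT-then-INSERT,
    # so two schedulers racing on the same dag_id cannot both INSERT
    # and trigger a duplicate-key violation.
    conn.execute(
        "INSERT INTO serialized_dag (dag_id, data) VALUES (?, ?) "
        "ON CONFLICT(dag_id) DO UPDATE SET data = excluded.data",
        (dag_id, data),
    )

write_dag("big_dag", "v1")
write_dag("big_dag", "v2")  # no duplicate-key error on the second write
```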
### How to reproduce
You can reproduce this by having an environment with multiple schedulers/standalone_dag_file_processors and dag files that dynamically generate > 1500 dags. Time for a full processing of a >1500 dag file should be ~200 seconds (make sure timeout accommodates this).
To increase the likelihood the duplicate serialized pkey issue happens, reduce min_file_process_interval to like 30 seconds.
### Operating System
Amazon Linux 2023
### Versions of Apache Airflow Providers
_No response_
### Deployment
Amazon (AWS) MWAA
### Deployment details
2.10.1
2 Schedulers
xL Environment Size:

min_file_process_interval= 600
standalone_dag_processor = True (we believe MWAA creates one per scheduler)
dag_file_processor_timeout = 900
dagbag_import_timeout = 900
### Anything else?
I am not sure why the timing works out when dag definitio files are generating <<1500 dags, but could just be the speed of the environment is finishing all work before a race condition can occur.
### Are you willing to submit PR?
- [x] Yes I am willing to submit a PR!
### Code of Conduct
- [x] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| open | 2025-03-05T19:43:20Z | 2025-03-11T16:09:10Z | https://github.com/apache/airflow/issues/47413 | [
"kind:bug",
"area:Scheduler",
"area:MetaDB",
"area:core",
"needs-triage"
] | robertchinezon | 4 |
tfranzel/drf-spectacular | rest-api | 1,380 | __empty__ choice raise AssertionError: Invalid nullable case | **Describe the bug**
I'd like to add __empty__ as a choice on a nullable field, see: https://docs.djangoproject.com/en/5.1/ref/models/fields/#enumeration-types (at the bottom of the paragraph). However, `AssertionError: Invalid nullable case` is then raised on schema generation. I noticed this error is also raised when overriding the choices and adding a (None, 'unknown') tuple to the choices.
**To Reproduce**
Add
```
__empty__ = 'unknown'
```
OR
```
extra_kwargs = {'default': {'choices': list(SomeTextChoices.choices) + [(None, 'unknown')]}}
```
to the choices
**Expected behavior**
I would expect the field to have null as a choice.
I have tried all other methods to make the (read_only) field nullable, but this seems impossible.
What have I tried:
Add allow_null=True, allow_blank=True, required=False, amongst others.
I do have
` "ENUM_ADD_EXPLICIT_BLANK_NULL_CHOICE": False,`
because otherwise I loose a lot of Enums (they become simply "string", not enum, in the scheme generation).
I have also noticed that upgrading to 0.28.0 also made me lose a lot of read-only Enums, so I'm still on 0.27.2.
| open | 2025-02-13T16:22:27Z | 2025-02-13T19:13:18Z | https://github.com/tfranzel/drf-spectacular/issues/1380 | [
"bug",
"OpenAPI 3.1"
] | gabn88 | 1 |
globaleaks/globaleaks-whistleblowing-software | sqlalchemy | 3,302 | Website down? | 502 bad gateway error, cannot visit site, Slack channel, community forum etc. | closed | 2022-10-24T09:47:28Z | 2022-10-25T06:51:13Z | https://github.com/globaleaks/globaleaks-whistleblowing-software/issues/3302 | [] | goferit | 4 |
indico/indico | sqlalchemy | 6,330 | Prevent 'mark as paid' for pending registrations | When a registration is moderated and there is a fee or paid items, if you first mark a registration as paid and only then approve it it gets into a strange state where at the top it says not paid but at the same time the invoice shows up as paid.
(More context in a SNOW ticket: INC3861152)

Marking as paid should probably be disabled when the registration is still `pending`. | closed | 2024-05-08T13:08:34Z | 2024-10-14T08:54:48Z | https://github.com/indico/indico/issues/6330 | [
"bug"
] | tomasr8 | 0 |
blacklanternsecurity/bbot | automation | 1,374 | Enhancement: Notifications Cache | **Description**
It would be nice for BBOT 2.0 if a notifications cache feature was available. The current notifications modules `Discord` / `Slack` / `Teams` will ping as soon as the event_type is discovered which is a great feature! However for named scans that run on a regular basis the pinging of these services can get overwhelming and interesting new finds can get buried.
This could be implemented with the notifications output modules performing some checks on when this event was last seen before emitting it. The cache should have an expiry time also.
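A minimal sketch of such a cache (class name, key format, and TTL default are assumptions, not BBOT API):

```python
import time

class NotificationCache:
    """Remember recently emitted events so repeat scans don't re-ping."""

    def __init__(self, ttl_seconds=86400):
        self.ttl = ttl_seconds
        self._seen = {}  # event key -> timestamp of last notification

    def should_emit(self, event_key, now=None):
        now = time.time() if now is None else now
        last = self._seen.get(event_key)
        if last is not None and now - last < self.ttl:
            return False  # seen within the TTL: suppress the duplicate ping
        self._seen[event_key] = now  # (re)start the expiry window
        return True

cache = NotificationCache(ttl_seconds=3600)
print(cache.should_emit("DNS_NAME:example.com"))  # True  (first sighting)
print(cache.should_emit("DNS_NAME:example.com"))  # False (cached)
```

An output module could call `should_emit()` on a hash of the event before pinging Discord/Slack/Teams, so only genuinely new finds get through on recurring named scans.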
| open | 2024-05-13T16:28:32Z | 2025-02-06T00:34:24Z | https://github.com/blacklanternsecurity/bbot/issues/1374 | [
"enhancement"
] | domwhewell-sage | 0 |
FlareSolverr/FlareSolverr | api | 947 | [yggtorrent] (testing) Exception (yggtorrent): The cookies provided by FlareSolverr are not valid: The cookies provided by FlareSolverr are not valid | ### Have you checked our README?
- [X] I have checked the README
### Have you followed our Troubleshooting?
- [X] I have followed your Troubleshooting
### Is there already an issue for your problem?
- [X] I have checked older issues, open and closed
### Have you checked the discussions?
- [X] I have read the Discussions
### Environment
```markdown
- FlareSolverr version: 3.3.7
- Last working FlareSolverr version: IDK
- Operating system: debian
- Are you using Docker: [yes/no] yes
- FlareSolverr User-Agent (see log traces or / endpoint): Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/119.0.0.0 Safari/537.36
- Are you using a VPN: [yes/no] no
- Are you using a Proxy: [yes/no] no
- Are you using Captcha Solver: [yes/no] no
- If using captcha solver, which one:
- URL to test this issue:
```
### Description
Hello,
My Jackett and Prowlarr instances tell me the cookies provided are not valid. In the logs, FlareSolverr correctly completes the challenge of https://yggtorrent.wtf.
If you need more info, please ask.
Any help would be very much appreciated.
### Logged Error Messages
```text
2023-11-06 08:59:37 INFO 192.168.0.2 POST http://192.168.0.2:8191/v1 200 OK
2023-11-06 08:59:38 INFO Incoming request => POST /v1 body: {'maxTimeout': 55000, 'cmd': 'request.get', 'url': 'https://www3.yggtorrent.wtf/engine/search?do=search&order=desc&sort=seed&category=all'}
2023-11-06 08:59:38 INFO Challenge detected. Title found: Just a moment...
2023-11-06 08:59:46 INFO Challenge solved!
2023-11-06 08:59:46 INFO Response in 8.928 s
2023-11-06 08:59:46 INFO 192.168.0.2 POST http://192.168.0.2:8191/v1 200 OK
```
### Screenshots


| closed | 2023-11-06T09:06:29Z | 2023-11-13T22:16:58Z | https://github.com/FlareSolverr/FlareSolverr/issues/947 | [
"more information needed"
] | paindespik | 8 |
ExpDev07/coronavirus-tracker-api | rest-api | 110 | Using your API! | Made a windows forms app in c# using your API!
https://github.com/rohandoesjava/corona-info | closed | 2020-03-20T13:06:42Z | 2020-04-19T18:01:50Z | https://github.com/ExpDev07/coronavirus-tracker-api/issues/110 | [
"user-created"
] | ghost | 1 |
JaidedAI/EasyOCR | deep-learning | 1,263 | Angle of the text | How can we get the angle of the text using EasyOCR? | open | 2024-06-04T10:01:35Z | 2024-06-04T10:02:02Z | https://github.com/JaidedAI/EasyOCR/issues/1263 | [] | Rohinivv96 | 0 |
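EasyOCR does not return an angle directly, but `readtext()` returns each detection's bounding quadrilateral as four corner points, so one hedged approach is deriving the angle of the top edge with stdlib math (the sample box below is made up):

```python
import math

# readtext() yields (box, text, confidence) tuples; box is assumed to be
# the four corners [top-left, top-right, bottom-right, bottom-left].
def text_angle(box):
    (x1, y1), (x2, y2) = box[0], box[1]  # top-left -> top-right edge
    return math.degrees(math.atan2(y2 - y1, x2 - x1))

box = [[10, 10], [110, 20], [108, 40], [8, 30]]  # hypothetical detection
print(round(text_angle(box), 2))  # ~5.71 degrees
```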
seleniumbase/SeleniumBase | web-scraping | 2,463 | Could not use "click_and_hold()" and "Release()" as Action_Chain to bypass "Press And Hold" Captcha | I trired to bypass the captcha of walmart but i coudn't find the method to use it!
I really appreciate it if this problem is solved!
Thank you Mdmintz for building this Seleniumbase!

| closed | 2024-02-01T16:23:17Z | 2024-02-01T16:42:15Z | https://github.com/seleniumbase/SeleniumBase/issues/2463 | [
"invalid usage",
"UC Mode / CDP Mode"
] | mynguyen95dn | 1 |
ipython/ipython | jupyter | 14,810 | Assertion failure on theme colour | The iPython could not run on my PyCharm with the following error prompt:
```
Traceback (most recent call last):
File "/Applications/PyCharm.app/Contents/plugins/python-ce/helpers/pydev/pydevconsole.py", line 570, in <module>
pydevconsole.start_client(host, port)
File "/Applications/PyCharm.app/Contents/plugins/python-ce/helpers/pydev/pydevconsole.py", line 484, in start_client
interpreter = InterpreterInterface(threading.current_thread(), rpc_client=client)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/PyCharm.app/Contents/plugins/python-ce/helpers/pydev/_pydev_bundle/pydev_ipython_console.py", line 19, in __init__
self.interpreter = get_pydev_ipython_frontend(rpc_client)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/PyCharm.app/Contents/plugins/python-ce/helpers/pydev/_pydev_bundle/pydev_ipython_console_011.py", line 472, in get_pydev_ipython_frontend
_PyDevFrontEndContainer._instance = _PyDevIPythonFrontEnd(is_jupyter_debugger)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/PyCharm.app/Contents/plugins/python-ce/helpers/pydev/_pydev_bundle/pydev_ipython_console_011.py", line 293, in __init__
self.ipython = self._init_ipy_app(PyDevTerminalInteractiveShell).shell
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/PyCharm.app/Contents/plugins/python-ce/helpers/pydev/_pydev_bundle/pydev_ipython_console_011.py", line 300, in _init_ipy_app
application.initialize(shell_cls)
File "/Applications/PyCharm.app/Contents/plugins/python-ce/helpers/pydev/_pydev_bundle/pydev_ipython_console_011.py", line 258, in initialize
self.init_shell(shell_cls)
File "/Applications/PyCharm.app/Contents/plugins/python-ce/helpers/pydev/_pydev_bundle/pydev_ipython_console_011.py", line 263, in init_shell
self.shell = shell_cls.instance()
^^^^^^^^^^^^^^^^^^^^
File "***/.venv/lib/python3.12/site-packages/traitlets/config/configurable.py", line 583, in instance
inst = cls(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^
File "/Applications/PyCharm.app/Contents/plugins/python-ce/helpers/pydev/_pydev_bundle/pydev_ipython_console_011.py", line 122, in __init__
super(PyDevTerminalInteractiveShell, self).__init__(*args, **kwargs)
File "***/.venv/lib/python3.12/site-packages/IPython/terminal/interactiveshell.py", line 977, in __init__
super(TerminalInteractiveShell, self).__init__(*args, **kwargs)
File "***/.venv/lib/python3.12/site-packages/IPython/core/interactiveshell.py", line 627, in __init__
self.init_syntax_highlighting()
File "***/.venv/lib/python3.12/site-packages/IPython/core/interactiveshell.py", line 774, in init_syntax_highlighting
pyformat = PyColorize.Parser(theme_name=self.colors).format
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "***/.venv/lib/python3.12/site-packages/IPython/utils/PyColorize.py", line 364, in __init__
assert theme_name == theme_name.lower()
AssertionError
Couldn't connect to console process.
Process finished with exit code 1
```
This only happens with version 9.0.0. It works fine with previous versions like 8.31.0.
The reason it failed was the `theme_name` being `NoColor`, which is not all lowercase. This assertion appears in multiple locations inside the package. | closed | 2025-03-01T22:19:27Z | 2025-03-08T13:12:06Z | https://github.com/ipython/ipython/issues/14810 | [] | JinZida | 9 |
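The failing check can be reproduced in isolation (a sketch of the assertion, not IPython's actual code path); lower-casing the legacy colour name before it reaches `PyColorize.Parser` satisfies it:

```python
theme_name = "NoColor"  # legacy value PyCharm passes for self.colors
try:
    assert theme_name == theme_name.lower()
except AssertionError:
    print("assertion fails for", theme_name)

# Normalizing on the caller's side passes the new lowercase-only check:
print(theme_name.lower())  # nocolor
```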
openapi-generators/openapi-python-client | rest-api | 928 | Nullable array models generate failing code | **Describe the bug**
When an array is marked as nullable (in OpenAPI 3.0 or 3.1) the generated code fails type checking with the message:
```
error: Incompatible types in assignment (expression has type "tuple[None, bytes, str]", variable has type "list[float] | Unset | None") [assignment]
```
From the end-to-end test suite, making `some_array` nullable (part of `Body_upload_file_tests_upload_post`) results in this change:
```diff
@@ -165,10 +172,17 @@ class BodyUploadFileTestsUploadPost:
else (None, str(self.some_number).encode(), "text/plain")
)
- some_array: Union[Unset, Tuple[None, bytes, str]] = UNSET
- if not isinstance(self.some_array, Unset):
- _temp_some_array = self.some_array
- some_array = (None, json.dumps(_temp_some_array).encode(), "application/json")
+ some_array: Union[List[float], None, Unset]
+ if isinstance(self.some_array, Unset):
+ some_array = UNSET
+ elif isinstance(self.some_array, list):
+ some_array = UNSET
+ if not isinstance(self.some_array, Unset):
+ _temp_some_array = self.some_array
+ some_array = (None, json.dumps(_temp_some_array).encode(), "application/json")
+
+ else:
+ some_array = self.some_array
some_optional_object: Union[Unset, Tuple[None, bytes, str]] = UNSET
```
**OpenAPI Spec File**
The following patch applied the end-to-end test suite reproduces the problem:
```diff
diff --git a/end_to_end_tests/baseline_openapi_3.0.json b/end_to_end_tests/baseline_openapi_3.0.json
index d21d1d5..25adeaa 100644
--- a/end_to_end_tests/baseline_openapi_3.0.json
+++ b/end_to_end_tests/baseline_openapi_3.0.json
@@ -1778,6 +1778,7 @@
},
"some_array": {
"title": "Some Array",
+ "nullable": true,
"type": "array",
"items": {
"type": "number"
diff --git a/end_to_end_tests/baseline_openapi_3.1.yaml b/end_to_end_tests/baseline_openapi_3.1.yaml
index 03270af..4e33e68 100644
--- a/end_to_end_tests/baseline_openapi_3.1.yaml
+++ b/end_to_end_tests/baseline_openapi_3.1.yaml
@@ -1794,7 +1794,7 @@ info:
},
"some_array": {
"title": "Some Array",
- "type": "array",
+ "type": [ "array", "null" ],
"items": {
"type": "number"
}
```
**Desktop (please complete the following information):**
- openapi-python-client version 0.17.0
| closed | 2024-01-03T15:15:57Z | 2024-01-04T00:29:42Z | https://github.com/openapi-generators/openapi-python-client/issues/928 | [] | kgutwin | 1 |
3b1b/manim | python | 1,265 | Bezier interpolation ruining graph: Disable feature? | When plotting the function seen in the (attached) image, a ringing occurs on the transition shoulders. I assume this is from the Bezier interpolation the function goes through when `get_graph()` is called? `get_graph()` calls `interpolate(x_min, x_max, alpha)` from `manimlib.utils.bezier`.
Is there a feature to disable this? Correct way to handle this?
```
class FirstScene(GraphScene):
CONFIG={
"camera_config":{"background_color":WHITE},
"x_min":0,
"x_max":4,
"y_min":0,
"function_color":BLUE,
"function2_color":RED,
"x_tick_frequency": 0.25,
"y_tick_frequency": 0.5,
"y_max":1.2,
"y_axis_label": "$y$",
"x_axis_label": "$x$",
"label_nums_color":BLACK,
"x_labeled_nums":range(0,5,1),
"y_labeled_nums":range(0,1,1),
}
def construct(self):
self.setup_axes(animate=True)
func_graph=self.get_graph(self.func_to_graph,self.function_color)
func_graph2=self.get_graph(self.func_to_graph2,self.function2_color)
self.play(ShowCreation(func_graph))
self.play(ShowCreation(func_graph2))
def func_to_graph(self,x):
kB = 8e-5
T = 20
return 1 / ( 1 + np.exp( (x - 1) / (kB*T) ) )
def func_to_graph2(self,x):
kB = 8e-5
T = 800
return 1 / ( 1 + np.exp( (x - 1) / (kB*T) ) )
```
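For context on why the shoulders ring: at `T = 20` the Fermi function's transition width is roughly `kB*T = 0.0016`, far narrower than the default sample spacing, so any smooth (Bezier) interpolation of sparse samples overshoots near `x = 1`. A stdlib check of the steepness (illustrative only):

```python
import math

def fermi(x, T, kB=8e-5):
    return 1 / (1 + math.exp((x - 1) / (kB * T)))

# T=20: essentially a step; T=800: a gentle slope the sampler resolves fine.
print(fermi(0.99, 20), fermi(1.01, 20))    # ~0.998, ~0.002
print(fermi(0.99, 800), fermi(1.01, 800))  # ~0.54, ~0.46
```

Increasing the sampling density near the transition (or disabling smoothing, if the manim version in use exposes such an option — an assumption worth checking) should remove the ringing.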

| open | 2020-11-07T03:44:16Z | 2020-11-07T03:46:13Z | https://github.com/3b1b/manim/issues/1265 | [] | jdlake | 0 |
flairNLP/flair | nlp | 3,279 | [Bug]: pip install flair==0.12.2" did not complete successfully | ### Describe the bug
Trying to build a Dockerfile with flair 0.12.2 fails.
### To Reproduce
```python
FROM public.ecr.aws/lambda/python:3.10
COPY requirements.txt .
RUN pip3 install --pre torch --index-url https://download.pytorch.org/whl/nightly/cpu
RUN pip install flair==0.12.2
```
### Expected behavior
installed
### Logs and Stack traces
```stacktrace
#0 27.62 Collecting tabulate (from flair==0.12.2)
#0 27.67 Downloading tabulate-0.9.0-py3-none-any.whl (35 kB)
#0 27.83 Collecting langdetect (from flair==0.12.2)
#0 27.87 Downloading langdetect-1.0.9.tar.gz (981 kB)
#0 28.12 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 981.5/981.5 kB 4.0 MB/s eta 0:00:00
#0 28.17 Installing build dependencies: started
#0 29.91 Installing build dependencies: finished with status 'done'
#0 29.92 Getting requirements to build wheel: started
#0 30.23 Getting requirements to build wheel: finished with status 'done'
#0 30.23 Preparing metadata (pyproject.toml): started
#0 30.56 Preparing metadata (pyproject.toml): finished with status 'done'
#0 30.88 Collecting lxml (from flair==0.12.2)
#0 30.93 Downloading lxml-4.9.3.tar.gz (3.6 MB)
#0 31.77 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 3.6/3.6 MB 4.3 MB/s eta 0:00:00
#0 32.05 Installing build dependencies: started
#0 33.67 Installing build dependencies: finished with status 'done'
#0 33.67 Getting requirements to build wheel: started
#0 33.98 Getting requirements to build wheel: finished with status 'error'
#0 33.99 error: subprocess-exited-with-error
#0 33.99
#0 33.99 × Getting requirements to build wheel did not run successfully.
#0 33.99 │ exit code: 1
#0 33.99 ╰─> [4 lines of output]
#0 33.99 <string>:67: DeprecationWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html
#0 33.99 Building lxml version 4.9.3.
#0 33.99 Building without Cython.
#0 33.99 Error: Please make sure the libxml2 and libxslt development packages are installed.
#0 33.99 [end of output]
#0 33.99
#0 33.99 note: This error originates from a subprocess, and is likely not a problem with pip.
#0 33.99 error: subprocess-exited-with-error
#0 33.99
#0 33.99 × Getting requirements to build wheel did not run successfully.
#0 33.99 │ exit code: 1
#0 33.99 ╰─> See above for output.
#0 33.99
#0 33.99 note: This error originates from a subprocess, and is likely not a problem with pip.
```
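The decisive line in the log is `Error: Please make sure the libxml2 and libxslt development packages are installed.` A hedged fix is installing those headers (plus a C compiler) in the image before `pip install`; the yum package names below are assumptions for the Amazon Linux base image:

```dockerfile
FROM public.ecr.aws/lambda/python:3.10
# Headers lxml needs to compile from source (package names assumed for yum)
RUN yum install -y gcc libxml2-devel libxslt-devel
COPY requirements.txt .
RUN pip3 install --pre torch --index-url https://download.pytorch.org/whl/nightly/cpu
RUN pip install flair==0.12.2
```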
### Screenshots
_No response_
### Additional Context
_No response_
### Environment
public.ecr.aws/lambda/python:3.10 | closed | 2023-07-05T13:07:05Z | 2023-07-05T13:50:41Z | https://github.com/flairNLP/flair/issues/3279 | [
"bug"
] | sub2zero | 1 |
FlareSolverr/FlareSolverr | api | 529 | [hdarea] (updating) The cookies provided by FlareSolverr are not valid | **Please use the search bar** at the top of the page and make sure you are not creating an already submitted issue.
Check closed issues as well, because your issue may have already been fixed.
### How to enable debug and html traces
[Follow the instructions from this wiki page](https://github.com/FlareSolverr/FlareSolverr/wiki/How-to-enable-debug-and-html-trace)
### Environment
* **FlareSolverr version**:
* **Last working FlareSolverr version**:
* **Operating system**:
* **Are you using Docker**: [yes/no]
* **FlareSolverr User-Agent (see log traces or / endpoint)**:
* **Are you using a proxy or VPN?** [yes/no]
* **Are you using Captcha Solver:** [yes/no]
* **If using captcha solver, which one:**
* **URL to test this issue:**
### Description
[List steps to reproduce the error and details on what happens and what you expected to happen]
### Logged Error Messages
[Place any relevant error messages you noticed from the logs here.]
[Make sure you attach the full logs with your personal information removed in case we need more information]
### Screenshots
[Place any screenshots of the issue here if needed]
| closed | 2022-09-27T04:59:35Z | 2022-09-27T15:26:31Z | https://github.com/FlareSolverr/FlareSolverr/issues/529 | [
"invalid"
] | taoxiaomeng0723 | 1 |
huggingface/datasets | pytorch | 6,640 | Sign Language Support | ### Feature request
Currently, there are only several sign language labels. I would like to propose adding all the signed languages described in this ISO standard as new labels: https://www.evertype.com/standards/iso639/sign-language.html
### Motivation
Datasets currently only have labels for several signed languages. There are more signed languages in the world. Furthermore, some signed languages that have a lot of online data cannot be found because of this reason (for instance, German Sign Language: there is no German Sign Language label on huggingface datasets even though a lot of readily available sign language datasets exist for German Sign Language, which are used very frequently in Sign Language Processing papers, and models.)
### Your contribution
I can submit a PR for this as well, adding the ISO codes and languages to the labels in datasets. | open | 2024-02-02T21:54:51Z | 2024-02-02T21:54:51Z | https://github.com/huggingface/datasets/issues/6640 | [
"enhancement"
] | Merterm | 0 |
Nike-Inc/koheesio | pydantic | 38 | [DOC] Broken link in documentation | https://engineering.nike.com/koheesio/latest/ links to
https://engineering.nike.com/koheesio/latest/reference/concepts/tasks.md
which does not exist | closed | 2024-06-04T07:53:54Z | 2024-06-21T19:15:42Z | https://github.com/Nike-Inc/koheesio/issues/38 | [
"bug"
] | diekhans | 1 |
robinhood/faust | asyncio | 293 | Agent isn't getting new messages from Table's changelog topic |
## Steps to reproduce
Define an agent consuming messages from a table's changelog topic.
## Expected behavior
As the table gets updated and messages are written to the changelog topic, the agent should receive them.
## Actual behavior
The agent is not receiving new changelog messages.
## Full traceback
# Versions
* Python version: 3.6.5
* Faust version: 1.4.6
* Operating system: macOS Majave
* Kafka version: confluent 5.1.0
* RocksDB version (if applicable) python-rocksdb==0.6.9
| open | 2019-02-14T06:19:27Z | 2020-02-27T23:18:42Z | https://github.com/robinhood/faust/issues/293 | [
"Status: Confirmed"
] | xqzhou | 1 |
amdegroot/ssd.pytorch | computer-vision | 486 | Too many detections in a image | I tried to evaluate the network _weights/ssd300_mAP_77.43_v2.pth_
`python eval.py`
And here is what I got:

What puzzles me is that there are too many predicted boxes, aren't there?
I think there should be only two boxes:
1. Box for predicting the person.
1. Box for predicting the dog.
And I got these predicted boxes by the following modifications:
At `def test_net` of _eval.py_,
```
...
for j in range(1, detections.size(1)):
...
for k in range(boxes.shape[0]):
point_left_up = (int(boxes[k, 0]), int(boxes[k, 1]))
point_right_down = (int(boxes[k, 2]), int(boxes[k, 3]))
cv2.rectangle(img_original, point_left_up, point_right_down, (0, 0, 255), 1)
cv2.imwrite('test/test.png', img_original, [int(cv2.IMWRITE_PNG_COMPRESSION), 0])
```
Is there a mistake in my modification?
Or is this the performance of the network? | closed | 2020-06-05T13:39:42Z | 2020-06-08T08:32:03Z | https://github.com/amdegroot/ssd.pytorch/issues/486 | [] | decoli | 2 |
ets-labs/python-dependency-injector | asyncio | 102 | Add `Callable.injections` read-only property for getting a list of injections | closed | 2015-10-21T07:45:08Z | 2015-10-22T14:48:57Z | https://github.com/ets-labs/python-dependency-injector/issues/102 | [
"feature"
] | rmk135 | 0 | |
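A minimal sketch of what the requested read-only property could look like (hypothetical, not the library's actual implementation):

```python
class Callable:
    def __init__(self, provides, *injections):
        self._provides = provides
        self._injections = tuple(injections)

    @property
    def injections(self):
        # read-only view: no setter is defined, so assignment raises
        # AttributeError while reads return the configured injections
        return self._injections

c = Callable(len, "a", "b")
print(c.injections)  # ('a', 'b')
```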
ibis-project/ibis | pandas | 10,764 | bug: [Athena] error when trying to force create a database, that already exists | ### What happened?
I already had a database named `mydatabase` in my aws athena instance.
I experimented with using `force=True`, expecting it to drop the existing database and create a new one. I got an error instead.
My database does contain a table.
### What version of ibis are you using?
`main` branch commit 3d10def68cb7c2236ac65e8ebffee9007a3b4e93
### What backend(s) are you using, if any?
Athena
### Relevant log output
```sh
In [4]: con.create_database('mydatabase', force=True)
Failed to execute query.
Traceback (most recent call last):
File "/home/anja/anaconda3/envs/ibis-dev/lib/python3.11/site-packages/pyathena/common.py", line 586, in _execute
query_id = retry_api_call(
^^^^^^^^^^^^^^^
File "/home/anja/anaconda3/envs/ibis-dev/lib/python3.11/site-packages/pyathena/util.py", line 84, in retry_api_call
return retry(func, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/anja/anaconda3/envs/ibis-dev/lib/python3.11/site-packages/tenacity/__init__.py", line 475, in __call__
do = self.iter(retry_state=retry_state)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/anja/anaconda3/envs/ibis-dev/lib/python3.11/site-packages/tenacity/__init__.py", line 376, in iter
result = action(retry_state)
^^^^^^^^^^^^^^^^^^^
File "/home/anja/anaconda3/envs/ibis-dev/lib/python3.11/site-packages/tenacity/__init__.py", line 398, in <lambda>
self._add_action_func(lambda rs: rs.outcome.result())
^^^^^^^^^^^^^^^^^^^
File "/home/anja/anaconda3/envs/ibis-dev/lib/python3.11/concurrent/futures/_base.py", line 449, in result
return self.__get_result()
^^^^^^^^^^^^^^^^^^^
File "/home/anja/anaconda3/envs/ibis-dev/lib/python3.11/concurrent/futures/_base.py", line 401, in __get_result
raise self._exception
File "/home/anja/anaconda3/envs/ibis-dev/lib/python3.11/site-packages/tenacity/__init__.py", line 478, in __call__
result = fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/anja/anaconda3/envs/ibis-dev/lib/python3.11/site-packages/botocore/client.py", line 569, in _api_call
return self._make_api_call(operation_name, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/anja/anaconda3/envs/ibis-dev/lib/python3.11/site-packages/botocore/client.py", line 1023, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.errorfactory.InvalidRequestException: An error occurred (InvalidRequestException) when calling the StartQueryExecution operation: line 1:19: mismatched input 'SCHEMA'. Expecting: 'MATERIALIZED', 'MULTI', 'PROTECTED', 'VIEW'
---------------------------------------------------------------------------
InvalidRequestException Traceback (most recent call last)
File ~/anaconda3/envs/ibis-dev/lib/python3.11/site-packages/pyathena/common.py:586, in BaseCursor._execute(self, operation, parameters, work_group, s3_staging_dir, cache_size, cache_expiration_time, result_reuse_enable, result_reuse_minutes, paramstyle)
585 try:
--> 586 query_id = retry_api_call(
587 self._connection.client.start_query_execution,
588 config=self._retry_config,
589 logger=_logger,
590 **request,
591 ).get("QueryExecutionId")
592 except Exception as e:
File ~/anaconda3/envs/ibis-dev/lib/python3.11/site-packages/pyathena/util.py:84, in retry_api_call(func, config, logger, *args, **kwargs)
69 retry = tenacity.Retrying(
70 retry=retry_if_exception(
71 lambda e: getattr(e, "response", {}).get("Error", {}).get("Code") in config.exceptions
(...)
82 reraise=True,
83 )
---> 84 return retry(func, *args, **kwargs)
File ~/anaconda3/envs/ibis-dev/lib/python3.11/site-packages/tenacity/__init__.py:475, in Retrying.__call__(self, fn, *args, **kwargs)
474 while True:
--> 475 do = self.iter(retry_state=retry_state)
476 if isinstance(do, DoAttempt):
File ~/anaconda3/envs/ibis-dev/lib/python3.11/site-packages/tenacity/__init__.py:376, in BaseRetrying.iter(self, retry_state)
375 for action in self.iter_state.actions:
--> 376 result = action(retry_state)
377 return result
File ~/anaconda3/envs/ibis-dev/lib/python3.11/site-packages/tenacity/__init__.py:398, in BaseRetrying._post_retry_check_actions.<locals>.<lambda>(rs)
397 if not (self.iter_state.is_explicit_retry or self.iter_state.retry_run_result):
--> 398 self._add_action_func(lambda rs: rs.outcome.result())
399 return
File ~/anaconda3/envs/ibis-dev/lib/python3.11/concurrent/futures/_base.py:449, in Future.result(self, timeout)
448 elif self._state == FINISHED:
--> 449 return self.__get_result()
451 self._condition.wait(timeout)
File ~/anaconda3/envs/ibis-dev/lib/python3.11/concurrent/futures/_base.py:401, in Future.__get_result(self)
400 try:
--> 401 raise self._exception
402 finally:
403 # Break a reference cycle with the exception in self._exception
File ~/anaconda3/envs/ibis-dev/lib/python3.11/site-packages/tenacity/__init__.py:478, in Retrying.__call__(self, fn, *args, **kwargs)
477 try:
--> 478 result = fn(*args, **kwargs)
479 except BaseException: # noqa: B902
File ~/anaconda3/envs/ibis-dev/lib/python3.11/site-packages/botocore/client.py:569, in ClientCreator._create_api_method.<locals>._api_call(self, *args, **kwargs)
568 # The "self" in this scope is referring to the BaseClient.
--> 569 return self._make_api_call(operation_name, kwargs)
File ~/anaconda3/envs/ibis-dev/lib/python3.11/site-packages/botocore/client.py:1023, in BaseClient._make_api_call(self, operation_name, api_params)
1022 error_class = self.exceptions.from_code(error_code)
-> 1023 raise error_class(parsed_response, operation_name)
1024 else:
InvalidRequestException: An error occurred (InvalidRequestException) when calling the StartQueryExecution operation: line 1:19: mismatched input 'SCHEMA'. Expecting: 'MATERIALIZED', 'MULTI', 'PROTECTED', 'VIEW'
The above exception was the direct cause of the following exception:
DatabaseError Traceback (most recent call last)
Cell In[4], line 1
----> 1 con.create_database('mydatabase', force=True)
File ~/git/ibis/ibis/backends/athena/__init__.py:453, in Backend.create_database(self, name, catalog, force)
451 name = sg.table(name, catalog=catalog, quoted=self.compiler.quoted)
452 sql = sge.Create(this=name, kind="SCHEMA", replace=force)
--> 453 with self._safe_raw_sql(sql, unload=False):
454 pass
File ~/anaconda3/envs/ibis-dev/lib/python3.11/contextlib.py:137, in _GeneratorContextManager.__enter__(self)
135 del self.args, self.kwds, self.func
136 try:
--> 137 return next(self.gen)
138 except StopIteration:
139 raise RuntimeError("generator didn't yield") from None
File ~/git/ibis/ibis/backends/athena/__init__.py:291, in Backend._safe_raw_sql(self, query, unload, *args, **kwargs)
289 query = query.sql(self.dialect)
290 with self.con.cursor(unload=unload) as cur:
--> 291 yield cur.execute(query, *args, **kwargs)
File ~/anaconda3/envs/ibis-dev/lib/python3.11/site-packages/pyathena/arrow/cursor.py:124, in ArrowCursor.execute(self, operation, parameters, work_group, s3_staging_dir, cache_size, cache_expiration_time, result_reuse_enable, result_reuse_minutes, paramstyle, **kwargs)
122 else:
123 unload_location = None
--> 124 self.query_id = self._execute(
125 operation,
126 parameters=parameters,
127 work_group=work_group,
128 s3_staging_dir=s3_staging_dir,
129 cache_size=cache_size,
130 cache_expiration_time=cache_expiration_time,
131 result_reuse_enable=result_reuse_enable,
132 result_reuse_minutes=result_reuse_minutes,
133 paramstyle=paramstyle,
134 )
135 query_execution = cast(AthenaQueryExecution, self._poll(self.query_id))
136 if query_execution.state == AthenaQueryExecution.STATE_SUCCEEDED:
File ~/anaconda3/envs/ibis-dev/lib/python3.11/site-packages/pyathena/common.py:594, in BaseCursor._execute(self, operation, parameters, work_group, s3_staging_dir, cache_size, cache_expiration_time, result_reuse_enable, result_reuse_minutes, paramstyle)
592 except Exception as e:
593 _logger.exception("Failed to execute query.")
--> 594 raise DatabaseError(*e.args) from e
595 return query_id
DatabaseError: An error occurred (InvalidRequestException) when calling the StartQueryExecution operation: line 1:19: mismatched input 'SCHEMA'. Expecting: 'MATERIALIZED', 'MULTI', 'PROTECTED', 'VIEW'
```
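The failing SQL is evidently `CREATE OR REPLACE SCHEMA`, which Athena's grammar rejects (`OR REPLACE` is only accepted for views and similar objects there). A hedged sketch of how `force` could be emulated with two plain statements instead; these SQL strings are illustrative, not the actual ibis fix, and a non-empty database would additionally need `CASCADE`:

```python
def create_database_statements(name: str, force: bool) -> list:
    # Athena has no CREATE OR REPLACE SCHEMA; emulate `force` with an
    # explicit DROP ... IF EXISTS followed by a plain CREATE
    if force:
        return [f"DROP DATABASE IF EXISTS `{name}`", f"CREATE DATABASE `{name}`"]
    return [f"CREATE DATABASE IF NOT EXISTS `{name}`"]

print(create_database_statements("mydatabase", force=True))
```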
### Code of Conduct
- [x] I agree to follow this project's Code of Conduct | closed | 2025-02-01T05:09:52Z | 2025-02-02T06:30:04Z | https://github.com/ibis-project/ibis/issues/10764 | [
"bug"
] | anjakefala | 2 |
fastapi-users/fastapi-users | fastapi | 883 | How could I remove/Hide is_active, is_superuser and is_verified from register route? | ### Discussed in https://github.com/fastapi-users/fastapi-users/discussions/882
<div type='discussions-op-text'>
<sup>Originally posted by **DinaTaklit** January 19, 2022</sup>
Hello the `auth/register` endpoint offer all those fields to register the new user I want to remove/hide ` is_active`,
`is_superuser` and `is_verified`
```python
{
"email": "user@example.com",
"password": "string",
"is_active": true,
"is_superuser": false,
"is_verified": false,
"firstName": "string",
"lastName": "string",
"phoneNumber": "string"
}
```
How is it possible to do this?</div> | closed | 2022-01-19T20:36:35Z | 2022-01-20T06:57:24Z | https://github.com/fastapi-users/fastapi-users/issues/883 | [] | DinaTaklit | 0 |
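Stripped of the fastapi-users specifics (where, as I understand it, the supported route is defining a narrower user-create schema), the underlying idea is plain field whitelisting; the field names below are copied from the payload above:

```python
ALLOWED_REGISTER_FIELDS = {"email", "password", "firstName", "lastName", "phoneNumber"}

def sanitize_register_payload(payload: dict) -> dict:
    # drop privileged flags (is_active / is_superuser / is_verified) no matter
    # what the client sends; server-side defaults apply instead
    return {k: v for k, v in payload.items() if k in ALLOWED_REGISTER_FIELDS}

payload = {"email": "user@example.com", "password": "string", "is_superuser": True}
print(sanitize_register_payload(payload))  # privileged flag removed
```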
TencentARC/GFPGAN | pytorch | 60 | How can I reduce the beautification effect? | I personally feel the enhanced faces are a bit over-beautified: too smooth, not enough detail. If I retrain the model myself, can I reduce the beautification effect, and which part would be best to modify? | closed | 2021-09-08T00:38:33Z | 2021-09-24T07:57:10Z | https://github.com/TencentARC/GFPGAN/issues/60 | [] | jorjiang | 3 |
pytest-dev/pytest-cov | pytest | 93 | Incompatible with coverage 4.0? | I just ran a `pip upgrade` on my project which upgraded the coverage package from 3.7.1 to 4.0.0. When I ran `py.test --cov`, the output indicated that my test coverage had plummeted from 70% to 30%. A warning was also printed out: `Coverage.py warning: Trace function changed, measurement is likely wrong: None`. Downgrading the coverage package back to 3.7.1 fixes the problem. Has anyone else run into this?
| closed | 2015-09-28T23:01:59Z | 2015-09-29T07:38:12Z | https://github.com/pytest-dev/pytest-cov/issues/93 | [] | reywood | 3 |
pytorch/vision | machine-learning | 8,669 | performance degradation in to_pil_image after v0.17 | ### 🐛 Describe the bug
`torchvision.transforms.functional.to_pil_image` is much slower when converting torch.float16 image tensors to PIL Images, based on my benchmarks (serializing 360 images):
Dependencies:
```
Python 3.11
Pillow 10.4.0
```
Before (torch 2.0.1, torchvision v0.15.2, [Code here](https://github.com/pytorch/vision/blob/fa99a5360fbcd1683311d57a76fcc0e7323a4c1e/torchvision/transforms/functional.py#L244)): 23 seconds
After (torch 2.2.0, torchvision v0.17, [Code here](https://github.com/pytorch/vision/blob/b2383d44751bf85e58cfb9223bbf4e5961c09fa1/torchvision/transforms/functional.py#L245)): 53 seconds
How to reproduce:
```python
import time
import torch
from torchvision.transforms.functional import to_pil_image
rand_img_tensor = torch.rand(3, 512, 512, dtype=torch.float16)
start_time = time.time()
for _ in range(50):
pil_img = to_pil_image(rand_img_tensor)
end_time = time.time()
print(end_time - start_time) # seconds
```
Run the above script with both versions of dependencies listed, and the time difference is apparent.
The cause seems to be [this PR](https://github.com/pytorch/vision/commit/15c166ac127db5c8d1541b3485ef5730d34bb68a) | open | 2024-10-02T08:25:01Z | 2024-10-25T13:06:15Z | https://github.com/pytorch/vision/issues/8669 | [] | seymurkafkas | 5 |
localstack/localstack | python | 11,555 | bug: fromDockerBuild makes error "spawnSync docker ENOENT" | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
When I use `cdk.aws_lambda.Code.fromDockerBuild` to create code for a Lambda, it fails with the error `Error: spawnSync docker ENOENT`.
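`spawnSync docker ENOENT` means the CDK toolkit tried to launch the `docker` executable and it was not found on `PATH` inside the container. A quick stdlib check for that precondition (a hypothetical helper, not part of LocalStack):

```python
import shutil

def check_cli(cmd: str):
    # mirrors Node's ENOENT: return a hint when the executable is missing
    if shutil.which(cmd) is None:
        return f"spawnSync {cmd} ENOENT: '{cmd}' not found on PATH"
    return None

print(check_cli("docker-binary-that-does-not-exist"))
```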
### Expected Behavior
build without error
### How are you starting LocalStack?
With a docker-compose file
### Steps To Reproduce
docker compose up, run cdk in `/etc/localstack/init/ready.d/init-aws.sh`
### Environment
```markdown
LocalStack version: 3.7.3.dev38
LocalStack build date: 2024-09-20
LocalStack build git hash: 5271fc02
```
### Anything else?
_No response_ | closed | 2024-09-21T16:20:43Z | 2024-11-08T18:03:30Z | https://github.com/localstack/localstack/issues/11555 | [
"type: bug",
"status: response required",
"area: integration/cdk",
"aws:lambda",
"status: resolved/stale"
] | namse | 3 |
Yorko/mlcourse.ai | seaborn | 371 | Validation form is out of date for the demo assignment 3 | Questions 3.6 and 3.7 in the [validation form ](https://docs.google.com/forms/d/1wfWYYoqXTkZNOPy1wpewACXaj2MZjBdLOL58htGWYBA/edit) for demo assignment 3 are incorrect. The questions are valid for the previous version of the assignment that is accessible by commit 152a534428d59648ebce250fd876dea45ad00429.
| closed | 2018-10-10T13:58:54Z | 2018-10-16T11:32:43Z | https://github.com/Yorko/mlcourse.ai/issues/371 | [
"enhancement"
] | fralik | 3 |
chatopera/Synonyms | nlp | 14 | two sentences are partly equal | # description
## current
```python
>>> print(synonyms.compare('目前你用什么方法来保护自己', '目前你用什么方法'))
1.0
```
## expected
The two sentences are partly equal but not fully equal, so the comparison should not return 1.0 here.
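One cheap post-processing idea is to discount the score by the length ratio of the two inputs, so that a strict prefix can no longer score a perfect 1.0. This is only an illustrative adjustment, not part of the Synonyms API:

```python
def length_penalized(score: float, a: str, b: str) -> float:
    # scale similarity by min/max length ratio: equal-length inputs are
    # unaffected, while a sentence compared to its prefix is discounted
    ratio = min(len(a), len(b)) / max(len(a), len(b))
    return score * ratio

print(length_penalized(1.0, '目前你用什么方法来保护自己', '目前你用什么方法'))
```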
# solution
# environment
* version:
The commit hash (`git rev-parse HEAD`)
| closed | 2017-11-14T09:55:09Z | 2018-01-01T11:36:29Z | https://github.com/chatopera/Synonyms/issues/14 | [
"bug"
] | bobbercheng | 0 |
flavors/django-graphql-jwt | graphql | 299 | modulenotfounderror: no module named 'graphql_jwt' | when I'm trying to use this package this error appears:
modulenotfounderror: no module named 'graphql_jwt'
/usr/local/lib/python3.9/site-packages/graphene_django/settings.py, line 89, in import_from_string
I put "graphql_jwt.refresh_token.apps.RefreshTokenConfig", in the INSTALLED_APPS
and i did everything in the docs
and this is my requirements.txt
pytz==2021.1 # https://github.com/stub42/pytz
Pillow==8.3.2 # https://github.com/python-pillow/Pillow
argon2-cffi==21.1.0 # https://github.com/hynek/argon2_cffi
redis==3.5.3 # https://github.com/andymccurdy/redis-py
hiredis==2.0.0 # https://github.com/redis/hiredis-py
celery==5.1.2 # pyup: < 6.0 # https://github.com/celery/celery
django-celery-beat==2.2.1 # https://github.com/celery/django-celery-beat
flower==1.0.0 # https://github.com/mher/flower
uvicorn[standard]==0.15.0 # https://github.com/encode/uvicorn
django==3.1.13 # pyup: < 3.2 # https://www.djangoproject.com/
django-environ==0.7.0 # https://github.com/joke2k/django-environ
django-model-utils==4.1.1 # https://github.com/jazzband/django-model-utils
django-allauth==0.45.0 # https://github.com/pennersr/django-allauth
django-crispy-forms==1.12.0 # https://github.com/django-crispy-forms/django-crispy-forms
django-redis==5.0.0 # https://github.com/jazzband/django-redis
djangorestframework==3.12.4 # https://github.com/encode/django-rest-framework
django-cors-headers==3.8.0 # https://github.com/adamchainz/django-cors-headers
graphene-django==2.15.0
django-graphql-jwt==0.3.4
django-modeltranslation==0.17.3
drf-yasg2==1.19.4
django-filter==21.1
django-smart-selects==1.5.9
django-nested-inline==0.4.4
django-phonenumber-field==5.2.0
phonenumbers==8.12.33
djoser==2.1.0
dj-rest-auth==2.1.11
django-shortuuidfield==0.1.3
awesome-slugify==1.6.5
django-ckeditor==6.1.0
xlrd==2.0.1
pandas==1.3.5
django-cleanup==5.2.0
django-extensions==3.1.3 # https://github.com/django-extensions/django-extensions | open | 2022-04-03T16:13:02Z | 2022-04-03T16:16:19Z | https://github.com/flavors/django-graphql-jwt/issues/299 | [] | MuhammadAbdulqader | 0 |
apachecn/ailearning | scikit-learn | 590 | What does the third step mean? Does it have to be NLP? | I work on images; computer vision should work the same way, right? | closed | 2020-05-15T02:36:38Z | 2020-05-15T02:40:44Z | https://github.com/apachecn/ailearning/issues/590 | [] | muyangmuzi | 1 |
babysor/MockingBird | pytorch | 223 | Sometimes an error appears when I click Synthesize | Error output:
Loaded encoder "pretrained.pt" trained to step 1594501
Synthesizer using device: cuda
Trainable Parameters: 32.869M
Traceback (most recent call last):
File "C:\德丽莎\toolbox\__init__.py", line 123, in <lambda>
func = lambda: self.synthesize() or self.vocode()
File "C:\德丽莎\toolbox\__init__.py", line 238, in synthesize
specs = self.synthesizer.synthesize_spectrograms(texts, embeds, style_idx=int(self.ui.style_slider.value()), min_stop_token=min_token, steps=int(self.ui.length_slider.value())*200)
File "C:\德丽莎\synthesizer\inference.py", line 87, in synthesize_spectrograms
self.load()
File "C:\德丽莎\synthesizer\inference.py", line 65, in load
self._model.load(self.model_fpath)
File "C:\德丽莎\synthesizer\models\tacotron.py", line 547, in load
self.load_state_dict(checkpoint["model_state"], strict=False)
File "D:\anaconda3\envs\Theresa\lib\site-packages\torch\nn\modules\module.py", line 1482, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for Tacotron:
size mismatch for gst.stl.attention.W_query.weight: copying a param with shape torch.Size([512, 256]) from checkpoint, the shape in current model is torch.Size([512, 512]). | closed | 2021-11-20T10:26:33Z | 2023-01-26T02:39:09Z | https://github.com/babysor/MockingBird/issues/223 | [] | huankong233 | 6 |
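The `size mismatch` above means the checkpoint was trained with a different GST attention width than the current model config, and `strict=False` cannot help: it only tolerates missing or unexpected keys, not shape conflicts. A framework-free sketch of detecting such conflicts, with the shapes taken from the log (the usual real fix is loading the checkpoint with the matching model hyperparameters):

```python
def shape_conflicts(ckpt_shapes, model_shapes):
    # both arguments map parameter name -> shape tuple; return the names whose
    # shapes disagree, which load_state_dict rejects even with strict=False
    return [name for name, shape in ckpt_shapes.items()
            if name in model_shapes and model_shapes[name] != shape]

ckpt = {"gst.stl.attention.W_query.weight": (512, 256)}
model = {"gst.stl.attention.W_query.weight": (512, 512)}
print(shape_conflicts(ckpt, model))
```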
cvat-ai/cvat | computer-vision | 8,656 | Attribute Annotation is zooming too much when changing the frame | ### Actions before raising this issue
- [X] I searched the existing issues and did not find anything similar.
- [X] I read/searched [the docs](https://docs.cvat.ai/docs/)
### Steps to Reproduce
## With firefox
1. Create a task in CVAT with 2 images, like
```json
[
{
"name": "Test",
"id": 3693910,
"color": "#fb117d",
"type": "mask",
"attributes": []
}
]
```
2. Create one big mask for each image -> Save the job
3. Re-open the task
4. Go in 'Attribute annotation' mode
<img width="1274" alt="Capture d’écran 2024-11-07 à 10 59 02" src="https://github.com/user-attachments/assets/be6bb661-5e4c-4fa7-b9bb-ea84cb04d632">
5. Type "F" for next frame
<img width="1268" alt="Capture d’écran 2024-11-07 à 10 58 41" src="https://github.com/user-attachments/assets/c6d438b2-210b-44a3-befd-76b39dbf5d89">
### Expected Behavior
The next frame is visible on the screen
### Possible Solution
I feel the problem comes from [SHAPE_FOCUSED event](https://github.com/cvat-ai/cvat/blob/a56e94b00dfbd583a7e01cec19332a2b92f27067/cvat-canvas/src/typescript/canvasView.ts#L1921-L1929)
This problem makes Attribute Annotation very hard to use. I'm wondering if a quick fix would be to just fit to the image, or to add the ability to fit to the image in Attribute Annotation mode.
### Context
I'm trying to annotate attributes on Mask
### Environment
```Markdown
This issue is reproductible both in cloud-hosted CVAT and self-hosted CVAT
```
## Note about behavior on Google Chrome
The same kind of problem appears on Google Chrome, but the behavior is a little different: it is affected by the size of the image and by the AAM parameter (while on Firefox it is not)
Should I open another issue?
## About the AAM parameter
Changing the AAM parameter on Firefox does not fix the issue
<img width="531" alt="Capture d’écran 2024-11-06 à 17 42 13" src="https://github.com/user-attachments/assets/4e7cbcaa-4555-4939-be2f-09c34a2550bb">
| closed | 2024-11-07T10:10:43Z | 2024-11-07T12:11:27Z | https://github.com/cvat-ai/cvat/issues/8656 | [
"bug"
] | piercus | 1 |
strawberry-graphql/strawberry | asyncio | 3,802 | "ModuleNotFoundError: No module named 'ddtrace'" when trying to use DatadogTracingExtension | <!-- Provide a general summary of the bug in the title above. -->
<!--- This template is entirely optional and can be removed, but is here to help both you and us. -->
<!--- Anything on lines wrapped in comments like these will not show up in the final text. -->
## Describe the Bug
When trying to use DatadogTracingExtension, the following error is raised while importing it:
```python
>>> from strawberry.extensions.tracing.datadog import DatadogTracingExtension
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.12/site-packages/strawberry/extensions/tracing/datadog.py", line 9, in <module>
from packaging import version
ModuleNotFoundError: No module named 'packaging'
```
This is a regression bug introduced in https://github.com/strawberry-graphql/strawberry/pull/3794.
We didn't initially detect this in unit tests because `packaging` is a pretty common dependency that's already required by dev packages such as `black` or `pytest`. However, our production build was missing it and the container wasn't able to start after the deployment.
<!-- A clear and concise description of what the bug is. -->
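A generic way to keep an optional dependency from crashing module import (not the actual strawberry fix, just the fail-soft pattern for cases like the missing `packaging` here):

```python
import importlib

def optional_import(name):
    # return the module if importable, else None, so an optional dependency
    # degrades gracefully instead of raising at import time
    try:
        return importlib.import_module(name)
    except ImportError:
        return None

print(optional_import("definitely_not_installed_xyz"))  # None when absent
```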
## System Information
- Strawberry version: `>= 0.260.4` | closed | 2025-03-10T19:55:49Z | 2025-03-12T14:43:55Z | https://github.com/strawberry-graphql/strawberry/issues/3802 | [
"bug"
] | jakub-bacic | 0 |
iperov/DeepFaceLab | machine-learning | 5,492 | SAEHD training on GPU run the pause command after start in Terminal | Hello,
My PC: Acer aspire 7, Core i 7 9th generation, nvidia geforce GTX 1050, Windows 10 home
When I run SAEHD training on the GPU, it runs the pause command after starting and says something like "Press any key to continue...". On the CPU everything works fine!
My batch size is 4!
My CMD is in German, but here is what it looks like:

"Drücken sie eine belibige Taste..." means "Press any key to continue..."
Thanks for your help😀! | open | 2022-03-13T09:31:14Z | 2023-06-08T23:18:48Z | https://github.com/iperov/DeepFaceLab/issues/5492 | [] | Pips01 | 6 |
FactoryBoy/factory_boy | sqlalchemy | 530 | Include repr of model_class when instantiation fails | #### Description
When instantiation of a `model_class` with a dataclass fails, the exception (`TypeError: __init__() missing 1 required positional argument: 'phone_number'`) does not include the class name, which it makes it very difficult to find which fixture failed.
```
Traceback (most recent call last):
File "/.local/share/virtualenvs/truc-uUs12mY4/lib/python3.6/site-packages/pytest_factoryboy/fixture.py", line 212, in model_fixture
instance = Factory(**kwargs)
File "/.local/share/virtualenvs/truc-uUs12mY4/lib/python3.6/site-packages/factory/base.py", line 46, in __call__
return cls.create(**kwargs)
File "/.local/share/virtualenvs/truc-uUs12mY4/lib/python3.6/site-packages/factory/base.py", line 592, in create
return cls._generate(enums.CREATE_STRATEGY, kwargs)
File "/.local/share/virtualenvs/truc-uUs12mY4/lib/python3.6/site-packages/factory/base.py", line 526, in _generate
return step.build()
File "/.local/share/virtualenvs/truc-uUs12mY4/lib/python3.6/site-packages/factory/builder.py", line 279, in build
kwargs=kwargs,
File "/.local/share/virtualenvs/truc-uUs12mY4/lib/python3.6/site-packages/factory/base.py", line 330, in instantiate
return self.factory._create(model, *args, **kwargs)
File "/.local/share/virtualenvs/truc-uUs12mY4/lib/python3.6/site-packages/factory/base.py", line 570, in _create
return model_class(*args, **kwargs)
TypeError: __init__() missing 1 required positional argument: 'phone_number'
```
#### To Reproduce
```python
from dataclasses import dataclass
import factory
@dataclass
class Person:
phone_number: str
class PersonFactory(factory.Factory):
class Meta:
model = Person
PersonFactory()
```
Will raise: `TypeError: __init__() missing 1 required positional argument: 'phone_number'`
### Potential solution:
Modify `_create` with something like this:
```python
def full_classname(o):
return o.__module__ + "." + o.__qualname__
...
@classmethod
def _create(cls, model_class, *args, **kwargs):
"""Actually create an instance of the model_class.
Customization point, will be called once the full set of args and kwargs
has been computed.
Args:
model_class (type): the class for which an instance should be
created
args (tuple): arguments to use when creating the class
kwargs (dict): keyword arguments to use when creating the class
"""
try:
return model_class(*args, **kwargs)
except Exception as e:
raise ValueError(
"Could not instantiate %s: %s" % (full_classname(model_class), e)
)
```
Which gives a much more readable exception:
```
Traceback (most recent call last):
File "/.local/share/virtualenvs/truc-uUs12mY4/lib/python3.6/site-packages/factory/base.py", line 572, in _create
return model_class(*args, **kwargs)
TypeError: __init__() missing 1 required positional argument: 'phone_number'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/.local/share/virtualenvs/truc-uUs12mY4/lib/python3.6/site-packages/pytest_factoryboy/fixture.py", line 212, in model_fixture
instance = Factory(**kwargs)
File "/.local/share/virtualenvs/truc-uUs12mY4/lib/python3.6/site-packages/factory/base.py", line 50, in __call__
return cls.create(**kwargs)
File "/.local/share/virtualenvs/truc-uUs12mY4/lib/python3.6/site-packages/factory/base.py", line 598, in create
return cls._generate(enums.CREATE_STRATEGY, kwargs)
File "/.local/share/virtualenvs/truc-uUs12mY4/lib/python3.6/site-packages/factory/base.py", line 530, in _generate
return step.build()
File "/.local/share/virtualenvs/truc-uUs12mY4/lib/python3.6/site-packages/factory/builder.py", line 279, in build
kwargs=kwargs,
File "/.local/share/virtualenvs/truc-uUs12mY4/lib/python3.6/site-packages/factory/base.py", line 334, in instantiate
return self.factory._create(model, *args, **kwargs)
File "/.local/share/virtualenvs/truc-uUs12mY4/lib/python3.6/site-packages/factory/base.py", line 575, in _create
"Could not instantiate %s: %s" % (full_classname(model_class), e)
ValueError: Could not instantiate person.Person: __init__() missing 1 required positional argument: 'phone_number'
``` | closed | 2018-10-22T07:38:56Z | 2020-05-23T10:52:51Z | https://github.com/FactoryBoy/factory_boy/issues/530 | [] | charlax | 4 |
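The proposed wrapper can be exercised without factory_boy at all; the pattern is just re-raising with the class name attached (adapted from the snippet above, with `from e` added to preserve the exception chain):

```python
def full_classname(o):
    return o.__module__ + "." + o.__qualname__

def instantiate(model_class, *args, **kwargs):
    # wrap instantiation failures so the offending class is named
    try:
        return model_class(*args, **kwargs)
    except Exception as e:
        raise ValueError(
            "Could not instantiate %s: %s" % (full_classname(model_class), e)
        ) from e

class Person:
    def __init__(self, phone_number):
        self.phone_number = phone_number

try:
    instantiate(Person)  # missing phone_number
except ValueError as exc:
    print(exc)  # message now names Person, so the failing factory is findable
```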
pytest-dev/pytest-django | pytest | 528 | adding a user to a group | I'm testing a basic class-based view with a permission system based on the Django Group model. But before I can begin to test these permissions, I need to first create a user object (easy), then create a Group object, add the User object to the new Group object and save the results.
```
import pytest
from django.contrib.auth.models import User, Group
from django.test import RequestFactory
from ..views import TeacherView
@pytest.mark.django_db
def test_authenticated_user(self, rf):
request = rf.get('/myproj/myapp/teacher/')
user = User.objects.create_user('person', 'person@example.com', 'password')
parents = Group.objects.create(name='parents')
user.groups.add(parents)
user.save()
parents.save()
request.user = user
response = TeacherView.as_view()(request)
assert response.status_code != 200
```
It seems that creating the group works and after adding a member I can look at the group's members with parents.user_set.all(). But then the user object shows an empty list of group memberships:
- parents.user_set.all() # works
- user.groups # auth.Group.None
- parents.user_set.add(user) # auth.Group.None
I've done this sort of thing before with custom manage.py commands. Am I doing anything wrong here? | closed | 2017-10-18T00:06:35Z | 2017-10-20T04:55:18Z | https://github.com/pytest-dev/pytest-django/issues/528 | [] | highpost | 1 |
man-group/notebooker | jupyter | 80 | Grouped front page should be case-sensitive | e.g. if you run for Cowsay and cowsay, the capitalised version will take precendence. | open | 2022-02-25T12:30:36Z | 2022-03-08T22:57:26Z | https://github.com/man-group/notebooker/issues/80 | [
"bug"
] | jonbannister | 0 |
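A toy model of the difference between case-insensitive grouping (where one spelling shadows the other) and the requested case-sensitive grouping; the keying scheme here is an assumption about the current behaviour:

```python
from collections import defaultdict

runs = ["Cowsay", "cowsay", "Cowsay"]

insensitive = defaultdict(list)  # one spelling absorbs the other
sensitive = defaultdict(list)    # requested: group on the exact name
for name in runs:
    insensitive[name.lower()].append(name)
    sensitive[name].append(name)

print(dict(sensitive))  # {'Cowsay': ['Cowsay', 'Cowsay'], 'cowsay': ['cowsay']}
```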
aiogram/aiogram | asyncio | 767 | Add support for Bot API 5.5 | • Bots can now contact users who sent a join request to a chat where the bot is an admin – even if the user never interacted with the bot before.
• Added support for protected content in groups and channels.
• Added support for users posting as a channel in public groups and channel comments.
• Added support for mentioning users by their ID in inline keyboards.
• And more, see the full changelog for details:
https://core.telegram.org/bots/api#december-7-2021
| closed | 2021-12-07T13:35:53Z | 2021-12-07T18:03:14Z | https://github.com/aiogram/aiogram/issues/767 | [
"api"
] | Olegt0rr | 0 |
dask/dask | scikit-learn | 11,768 | querying df.compute(concatenate=True) | https://github.com/dask/dask-expr/pull/1138 introduced the `concatenate` kwargs to dask-dataframe compute operations, and defaulted to True (a change in behaviour). This is now the default in core dask following the merger of expr into the main repo.
I am concerned that the linked PR did not provide any rationale for the change, nor document under what circumstances it should *not* be used.
> Concatenating enables more powerful optimizations but it also incurs additional
> data transfer cost. Generally, it should be enabled.
I suggest the following contraindications:
- worker memory limits are generally much more strict than in the client, so concatenating in-cluster can crash the specific worker and make the workflow unrunnable
- the concatenation task cannot begin until all of its inputs are ready, whereas the client can download each partition as it completes, so in the straggler case, concatenate=True will tend to be slower
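The straggler point can be made concrete with a toy cost model (all numbers hypothetical):

```python
partition_times = [1.0, 1.0, 1.0, 8.0]  # one straggling partition
transfer = 0.5                          # per-partition transfer cost

# concatenate=True: the concat task waits for every partition, and only then
# is the whole result transferred
concat_total = max(partition_times) + transfer * len(partition_times)

# concatenate=False: finished partitions stream to the client while the
# straggler is still computing, leaving only its own transfer at the end
stream_total = max(partition_times) + transfer

print(concat_total, stream_total)  # 10.0 8.5
```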
I can see the option being useful in the case that:
- there are a large number of small partitions in the output, and we expect the inter-worker latency to be much more favourable than the client-worker latency
I can see the option making no difference in the case that:
- the number of partitions is small compared to the total volume of data in the output, but there is no worker memory issue
cf https://github.com/dask/community/issues/411 | open | 2025-02-20T19:50:49Z | 2025-02-26T18:05:03Z | https://github.com/dask/dask/issues/11768 | [
"needs triage"
] | martindurant | 6 |