repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count
|---|---|---|---|---|---|---|---|---|---|---|---|
agronholm/anyio | asyncio | 565 | `TaskStatus.started` advertises type checking but it is not type-safe | ### Things to check first
- [X] I have searched the existing issues and didn't find my bug already reported there
- [X] I have checked that my bug is still present in the latest release
### AnyIO version
0414b4e705f21359c16240fc29006f3845cdaf06
### Python version
CPython 3.11.3
### What happened?
Since c44c12fceeaa35bcd93009f66fb95cb32f0bffb5 (as well as its backport 75047d5594a9f3d5e122ed46d8ec9d819ca066e9 on 3.x), the following passes type checking but is not type-safe:
```python
from anyio import TASK_STATUS_IGNORED
from anyio.abc import TaskStatus


async def foo(*, task_status: TaskStatus[int]) -> None:
    task_status.started()
    # or
    # task_status.started(None)
```
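To make the unsoundness concrete, here is a standalone sketch (a stand-in class, not AnyIO's actual implementation) of how a `None` default on `started()` lets a non-`int` escape a `TaskStatus[int]`:

```python
from typing import Generic, Optional, TypeVar

T = TypeVar("T")

class Status(Generic[T]):
    # Stand-in for anyio.abc.TaskStatus (an assumption, not the real class):
    # the default argument on started() defeats the declared type parameter.
    def __init__(self) -> None:
        self.value: Optional[T] = None

    def started(self, value: T = None) -> None:  # type: ignore[assignment]
        self.value = value

status: Status[int] = Status()
status.started()       # accepted by the type checker...
print(status.value)    # ...but no int was ever provided: prints None
```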
### How can we reproduce the bug?
see above. | closed | 2023-05-09T11:21:00Z | 2023-05-10T19:27:34Z | https://github.com/agronholm/anyio/issues/565 | [
"bug"
] | gschaffner | 0 |
davidsandberg/facenet | computer-vision | 1,053 | how to use .pkl file to make a tensorflow lite model? | open | 2019-07-09T20:19:03Z | 2019-09-20T17:51:43Z | https://github.com/davidsandberg/facenet/issues/1053 | [] | MONIKA0307 | 5 | |
jmcnamara/XlsxWriter | pandas | 990 | Bug: Issue with worksheet's conditional_format method with type: Formula. Excel does not apply. | ### Current behavior
xlsxwriter creates a sheet where conditional formatting with type: formula doesn't apply until the conditional formatting rule is edited and saved (without any modification), after which it works. Upon opening, cell A1 does contain "2", but no formatting is applied.
### Expected behavior
It would be expected that the spreadsheet would open with cell A1 containing "2" with formatting applied.
### Sample code to reproduce
```python
import xlsxwriter

workbook = xlsxwriter.Workbook('test.xlsx')
worksheet = workbook.add_worksheet()
worksheet.write(0, 0, "2")

editedCellsFormat = workbook.add_format(
    {
        'bg_color': '#FFFFB4',
        'bold': True
    })

worksheet.conditional_format('A1:A1',
                             {'type': 'formula',
                              'criteria': '=NOT(ISFORMULA($A$1))',
                              'format': editedCellsFormat})

workbook.set_properties({'calculation': 'auto'})  # Force recalculation
workbook.close()
```
### Environment
```markdown
- XlsxWriter version: 3.1.2
- Python version: 3.9.6
- Excel version: 16.71 (for Mac, but same issue occurs with Windows)
- OS: macOS 12.6.3
```
### Any other information
_No response_
### OpenOffice and LibreOffice users
- [X] I have tested the output file with Excel. | closed | 2023-06-01T18:52:50Z | 2023-06-01T19:19:15Z | https://github.com/jmcnamara/XlsxWriter/issues/990 | [
"bug"
] | louis-potvin | 2 |
Zeyi-Lin/HivisionIDPhotos | machine-learning | 224 | How to work around the API only accepting image uploads under 1024KB |
Uploading an image larger than 1MB returns "Part exceeded maximum size of 1024KB." | open | 2024-12-31T02:59:10Z | 2025-02-28T03:35:02Z | https://github.com/Zeyi-Lin/HivisionIDPhotos/issues/224 | [] | huotu | 2 |
marcomusy/vedo | numpy | 479 | How to get the coordinate of area points with a closed line? | How to get the coordinate of area points with a closed line?

https://github.com/marcomusy/vedo/blob/master/examples/advanced/spline_draw.py
After loading a picture, I draw a closed line, and I want to get all the coordinates of the enclosed area, not just the spline.
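While waiting for an answer, the underlying idea can be sketched with a plain ray-casting point-in-polygon test (pure Python, independent of vedo, which may well expose a dedicated call for this):

```python
def points_inside(polygon, candidates):
    # Ray-casting point-in-polygon test: count how many polygon edges a
    # horizontal ray from the point crosses; an odd count means "inside".
    def inside(pt):
        x, y = pt
        hit = False
        n = len(polygon)
        for i in range(n):
            x1, y1 = polygon[i]
            x2, y2 = polygon[(i + 1) % n]
            if (y1 > y) != (y2 > y):  # edge straddles the ray's height
                if x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
                    hit = not hit
        return hit
    return [p for p in candidates if inside(p)]

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
print(points_inside(square, [(2, 2), (5, 5)]))  # [(2, 2)]
```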
Thanks | closed | 2021-10-09T23:26:08Z | 2021-10-15T09:14:47Z | https://github.com/marcomusy/vedo/issues/479 | [] | timeanddoctor | 2 |
facebookresearch/fairseq | pytorch | 4,975 | CUDA error when evaluating fairseq with `torch.compile`. | ## 🐛 Bug
Hi, I'm training `roberta_large` with DDP, wrapping the model definition in the trainer with the `torch.compile` API introduced in PyTorch 2.0. This error doesn't happen without the `torch.compile` wrapper (so most likely this is a bug in the Triton codegen; but given that it only happens with Fairseq and not other models like Hugging Face GPT2 or BERT-large, it's worth auditing whether fairseq is doing something extraordinary).
This is the one-line change I've made to Fairseq, in the trainer's `model` property: https://github.com/facebookresearch/fairseq/blob/main/fairseq/trainer.py#L253
I've also created a ticket in PyTorch issues: https://github.com/pytorch/pytorch/issues/93378
```
@property
def model(self):
    if self._wrapped_model is None:
        if self.use_distributed_wrapper:
            self._wrapped_model = models.DistributedFairseqModel(
                self.cfg.distributed_training,
                self._model,
                process_group=self.data_parallel_process_group,
                device=self.device,
            )
            self._wrapped_model = torch.compile(self._wrapped_model)  # <- added line
        else:
            self._wrapped_model = self._model.to("cuda")
    return self._wrapped_model
```
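For context, the property above wraps the model lazily; a torch-free sketch of that caching pattern (names simplified, and the tuple standing in for `DistributedFairseqModel` plus `torch.compile`) looks like this:

```python
class LazyTrainer:
    # Sketch of the caching pattern the patched property relies on: the model
    # is wrapped once, on first access, and every later access returns the
    # same wrapped object, so torch.compile would run only a single time.
    def __init__(self, model):
        self._model = model
        self._wrapped_model = None
        self.wrap_calls = 0

    @property
    def model(self):
        if self._wrapped_model is None:       # wrap only on first access
            self.wrap_calls += 1
            self._wrapped_model = ("wrapped", self._model)
        return self._wrapped_model

t = LazyTrainer("roberta_large")
first, second = t.model, t.model
print(t.wrap_calls, first is second)  # 1 True
```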
### To Reproduce
Steps to reproduce the behavior (**always include the command you ran**):
1. Run cmd
I'm working with the wikitext dataset: https://huggingface.co/datasets/wikitext/tree/main
```bash
mpirun -np 8 \
fairseq-train wikitext-103 \
--adam-eps 1e-06 \
--arch roberta_large \
--attention-dropout 0.1 \
--clip-norm 0.0 \
--criterion masked_lm \
--distributed-backend nccl \
--distributed-no-spawn \
--dropout 0.1 \
--encoder-embed-dim 2048 \
--encoder-ffn-embed-dim 8192 \
--encoder-layers 24 \
--log-format simple \
--log-interval 10 \
--lr 0.0001 \
--lr-scheduler polynomial_decay \
--max-sentences 8 \
--max-update 500 \
--optimizer adam \
--sample-break-mode complete \
--skip-invalid-size-inputs-valid-test \
--task masked_lm \
--tokens-per-sample 512 \
--total-num-update 100 \
--update-freq 1 \
--weight-decay 0.01 \
--no-save \
--memory-efficient-fp16 \
--skip-invalid-size-inputs-valid-test \
--no-last-checkpoints
```
3. See error
Here is the stacktrace:
```
File "/opt/conda/bin/fairseq-train", line 8, in <module>
sys.exit(cli_main())
File "/fsx/roberta/fairseq_master/fairseq/fairseq_cli/train.py", line 574, in cli_main
distributed_utils.call_main(cfg, main)
File "/fsx/roberta/fairseq_master/fairseq/fairseq/distributed/utils.py", line 389, in call_main
distributed_main(cfg.distributed_training.device_id, main, cfg, kwargs)
File "/fsx/roberta/fairseq_master/fairseq/fairseq/distributed/utils.py", line 362, in distributed_main
main(cfg, **kwargs)
File "/fsx/roberta/fairseq_master/fairseq/fairseq_cli/train.py", line 205, in main
valid_losses, should_stop = train(cfg, trainer, task, epoch_itr)
File "/opt/conda/lib/python3.9/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/fsx/roberta/fairseq_master/fairseq/fairseq_cli/train.py", line 331, in train
log_output = trainer.train_step(samples)
File "/opt/conda/lib/python3.9/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/fsx/roberta/fairseq_master/fairseq/fairseq/trainer.py", line 869, in train_step
raise e
File "/fsx/roberta/fairseq_master/fairseq/fairseq/trainer.py", line 844, in train_step
loss, sample_size_i, logging_output = self.task.train_step(
File "/fsx/roberta/fairseq_master/fairseq/fairseq/tasks/fairseq_task.py", line 531, in train_step
loss, sample_size, logging_output = criterion(model, sample)
File "/opt/conda/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1488, in _call_impl
return forward_call(*args, **kwargs)
File "/fsx/roberta/fairseq_master/fairseq/fairseq/criterions/masked_lm.py", line 58, in forward
logits = model(**sample["net_input"], masked_tokens=masked_tokens)[0]
File "/opt/conda/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1488, in _call_impl
return forward_call(*args, **kwargs)
File "/opt/conda/lib/python3.9/site-packages/torch/_dynamo/eval_frame.py", line 82, in forward
return self.dynamo_ctx(self._orig_mod.forward)(*args, **kwargs)
File "/opt/conda/lib/python3.9/site-packages/torch/_dynamo/eval_frame.py", line 211, in _fn
return fn(*args, **kwargs)
File "/fsx/roberta/fairseq_master/fairseq/fairseq/distributed/module_proxy_wrapper.py", line 56, in forward
return self.module(*args, **kwargs)
File "/opt/conda/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1488, in _call_impl
return forward_call(*args, **kwargs)
File "/opt/conda/lib/python3.9/site-packages/torch/nn/parallel/distributed.py", line 1157, in forward
output = self._run_ddp_forward(*inputs, **kwargs)
File "/opt/conda/lib/python3.9/site-packages/torch/nn/parallel/distributed.py", line 1111, in _run_ddp_forward
return module_to_run(*inputs[0], **kwargs[0]) # type: ignore[index]
File "/opt/conda/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1488, in _call_impl
return forward_call(*args, **kwargs)
File "/fsx/roberta/fairseq_master/fairseq/fairseq/models/roberta/model.py", line 255, in forward
x, extra = self.encoder(src_tokens, features_only, return_all_hiddens, **kwargs)
File "/opt/conda/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1488, in _call_impl
return forward_call(*args, **kwargs)
File "/fsx/roberta/fairseq_master/fairseq/fairseq/models/roberta/model.py", line 601, in forward
x, extra = self.extract_features(
File "/fsx/roberta/fairseq_master/fairseq/fairseq/models/roberta/model.py", line 609, in extract_features
encoder_out = self.sentence_encoder(
File "/opt/conda/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1488, in _call_impl
return forward_call(*args, **kwargs)
File "/fsx/roberta/fairseq_master/fairseq/fairseq/models/transformer/transformer_encoder.py", line 165, in forward
return self.forward_scriptable(
File "/fsx/roberta/fairseq_master/fairseq/fairseq/models/transformer/transformer_encoder.py", line 173, in forward_scriptable
def forward_scriptable(
File "/fsx/roberta/fairseq_master/fairseq/fairseq/models/transformer/transformer_encoder.py", line 212, in <graph break in forward_scriptable>
x, encoder_embedding = self.forward_embedding(src_tokens, token_embeddings)
File "/fsx/roberta/fairseq_master/fairseq/fairseq/models/transformer/transformer_encoder.py", line 230, in <graph break in forward_scriptable>
lr = layer(
File "/opt/conda/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1488, in _call_impl
return forward_call(*args, **kwargs)
File "/fsx/roberta/fairseq_master/fairseq/fairseq/modules/transformer_layer.py", line 197, in forward
x, _ = self.self_attn(
File "/opt/conda/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1488, in _call_impl
return forward_call(*args, **kwargs)
File "/fsx/roberta/fairseq_master/fairseq/fairseq/modules/multihead_attention.py", line 469, in forward
def forward(
File "/opt/conda/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1488, in _call_impl
return forward_call(*args, **kwargs)
File "/opt/conda/lib/python3.9/site-packages/torch/_dynamo/eval_frame.py", line 82, in forward
return self.dynamo_ctx(self._orig_mod.forward)(*args, **kwargs)
File "/opt/conda/lib/python3.9/site-packages/torch/_dynamo/eval_frame.py", line 211, in _fn
return fn(*args, **kwargs)
File "<eval_with_key>.4171", line 16, in forward
File "/opt/conda/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1488, in _call_impl
return forward_call(*args, **kwargs)
File "/opt/conda/lib/python3.9/site-packages/torch/_dynamo/optimizations/distributed.py", line 239, in forward
x = self.submod(*args)
File "/opt/conda/lib/python3.9/site-packages/torch/_dynamo/eval_frame.py", line 211, in _fn
return fn(*args, **kwargs)
File "/opt/conda/lib/python3.9/site-packages/torch/_functorch/aot_autograd.py", line 2489, in forward
return compiled_fn(full_args)
File "/opt/conda/lib/python3.9/site-packages/torch/_functorch/aot_autograd.py", line 996, in g
return f(*args)
File "/opt/conda/lib/python3.9/site-packages/torch/_functorch/aot_autograd.py", line 2058, in debug_compiled_function
return compiled_function(*args)
File "/opt/conda/lib/python3.9/site-packages/torch/_functorch/aot_autograd.py", line 1922, in compiled_function
all_outs = CompiledFunction.apply(*args_with_synthetic_bases)
File "/opt/conda/lib/python3.9/site-packages/torch/autograd/function.py", line 508, in apply
return super().apply(*args, **kwargs)
File "/opt/conda/lib/python3.9/site-packages/torch/_functorch/aot_autograd.py", line 1706, in forward
fw_outs = call_func_with_args(
File "/opt/conda/lib/python3.9/site-packages/torch/_functorch/aot_autograd.py", line 1021, in call_func_with_args
out = normalize_as_list(f(args))
File "/opt/conda/lib/python3.9/site-packages/torch/_inductor/compile_fx.py", line 220, in run
return model(new_inputs)
File "/tmp/torchinductor_ec2-user/qc/cqc4z4g2oi6k5dtsfp7vzop4z7ucxgcvqy64pfz464n5yjjemmhf.py", line 309, in call
triton__2.run(buf3, primals_5, buf7, 16384, 499, grid=grid(16384, 499), stream=stream4)
File "/opt/conda/lib/python3.9/site-packages/torch/_inductor/triton_ops/autotune.py", line 180, in run
self.autotune_to_one_config(*args, grid=grid)
File "/opt/conda/lib/python3.9/site-packages/torch/_dynamo/utils.py", line 160, in time_wrapper
r = func(*args, **kwargs)
File "/opt/conda/lib/python3.9/site-packages/torch/_inductor/triton_ops/autotune.py", line 167, in autotune_to_one_config
timings = {
File "/opt/conda/lib/python3.9/site-packages/torch/_inductor/triton_ops/autotune.py", line 168, in <dictcomp>
launcher: self.bench(launcher, *cloned_args, **kwargs)
File "/opt/conda/lib/python3.9/site-packages/torch/_inductor/triton_ops/autotune.py", line 149, in bench
return do_bench(kernel_call, rep=40, fast_flush=True)
File "/opt/conda/lib/python3.9/site-packages/triton/testing.py", line 141, in do_bench
torch.cuda.synchronize()
File "/opt/conda/lib/python3.9/site-packages/torch/cuda/__init__.py", line 597, in synchronize
return torch._C._cuda_synchronize()
RuntimeError: CUDA error: an illegal memory access was encountered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
```
#### Code sample
One line code change to fairseq attached above.
### Expected behavior
Expect training to start.
### Environment
- fairseq Version (e.g., 1.0 or main): 0.12.2; built by checking out master
- PyTorch Version (e.g., 1.0): 2.0.0a0+git5876d91 <- working with nightly build from jan 28
- OS (e.g., Linux):
- How you installed fairseq (`pip`, source): self installed
- Build command you used (if compiling from source): `pip install --editable ./`
- Python version: `3.9.13`
- CUDA/cuDNN version: `CUDA Version: 11.7`
- GPU models and configuration: `VERSION="20.04.5 LTS (Focal Fossa)"`
- Any other relevant information: Running on `NVIDIA A100-SXM4-40GB` GPU.
### Additional context
<!-- Add any other context about the problem here. -->
| open | 2023-02-04T15:13:38Z | 2023-03-17T04:51:55Z | https://github.com/facebookresearch/fairseq/issues/4975 | [
"bug",
"needs triage"
] | 0x6b64 | 1 |
aio-libs/aiopg | sqlalchemy | 249 | Unable to detect disconnect when using NOTIFY/LISTEN | I want to ensure that if my database connection goes down I can recover, but when listening on a notification channel no error is raised.
I'm using this for testing; it sends a notification every couple of seconds. If you start this and then shut down the database, the NOTIFY commands reconnect, but the listen task is unaware that it has lost its connection.
```python
import asyncio

import aiopg

# dsn is assumed to be defined elsewhere

async def notify(pool):
    while True:
        try:
            async with pool.acquire() as conn:
                async with conn.cursor() as cur:
                    await cur.execute("NOTIFY channel")
                    print('NOTIFY channel')
                    await asyncio.sleep(2)
        except Exception as exc:
            print('NOTIFY error %s' % exc)
            await asyncio.sleep(2)

async def listen(pool):
    try:
        async with pool.acquire() as conn:
            async with conn.cursor() as cur:
                await cur.execute("LISTEN channel")
                print('LISTEN channel')
                while True:
                    await conn.notifies.get()
                    print('Notified')
    except Exception as exc:
        print('LISTEN error %s' % exc)

async def main():
    async with aiopg.create_pool(dsn) as pool:
        listener = listen(pool)
        notifier = notify(pool)
        await asyncio.gather(listener, notifier)
        print("ALL DONE")

loop = asyncio.get_event_loop()
loop.run_until_complete(main())
``` | closed | 2017-01-04T23:51:37Z | 2021-06-28T04:16:29Z | https://github.com/aio-libs/aiopg/issues/249 | [] | danielnelson | 2 |
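One way to notice a dead connection is to bound every wait on the notification queue; here is a minimal sketch of that pattern, with a plain `asyncio.Queue` standing in for aiopg's `conn.notifies` (an assumption, not aiopg's documented behavior):

```python
import asyncio

async def drain_with_timeout(notifies, timeout):
    # Instead of blocking forever on notifies.get(), bound each wait; on
    # timeout the caller can ping the server or re-acquire a connection and
    # re-issue LISTEN, which is how a silent disconnect gets noticed.
    received = []
    while True:
        try:
            received.append(await asyncio.wait_for(notifies.get(), timeout=timeout))
        except asyncio.TimeoutError:
            return received  # no traffic within `timeout`: time to reconnect

async def demo():
    q = asyncio.Queue()
    q.put_nowait("channel payload")
    return await drain_with_timeout(q, timeout=0.05)

print(asyncio.run(demo()))  # ['channel payload']
```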
keras-team/keras | pytorch | 20,420 | keras.src vs keras.api design question | This is more of a question for me to better understand the codebase.
Working on #20399, I realised that the distinction between `keras.src` and `keras.api` (which is exposed as `keras` in the end) makes certain things impossible.
For instance, if you want to type-hint an input as `keras.Model`, then you'd need to do a `from keras import Model` kind of thing. But that results in a circular import issue along these lines:
```py
Cell In[1], line 1
----> 1 from keras.wrappers import KerasClassifier
File ~/Projects/gh/me/keras/keras/__init__.py:4
1 import os
3 # DO NOT EDIT. Generated by api_gen.sh
----> 4 from keras.api import DTypePolicy
5 from keras.api import FloatDTypePolicy
6 from keras.api import Function
File ~/Projects/gh/me/keras/keras/api/__init__.py:7
1 """DO NOT EDIT.
2
3 This file was autogenerated. Do not edit it by hand,
4 since your modifications would be overwritten.
5 """
----> 7 from keras.api import _tf_keras
8 from keras.api import activations
9 from keras.api import applications
File ~/Projects/gh/me/keras/keras/api/_tf_keras/__init__.py:1
----> 1 from keras.api._tf_keras import keras
File ~/Projects/gh/me/keras/keras/api/_tf_keras/keras/__init__.py:28
26 from keras.api import utils
27 from keras.api import visualization
---> 28 from keras.api import wrappers
29 from keras.api._tf_keras.keras import backend
30 from keras.api._tf_keras.keras import layers
File ~/Projects/gh/me/keras/keras/api/wrappers/__init__.py:7
1 """DO NOT EDIT.
2
3 This file was autogenerated. Do not edit it by hand,
4 since your modifications would be overwritten.
5 """
----> 7 from keras.src.wrappers._sklearn import KerasClassifier
8 from keras.src.wrappers._sklearn import KerasRegressor
File ~/Projects/gh/me/keras/keras/src/wrappers/_sklearn.py:37
35 import keras
36 from keras.src import losses as losses_module
---> 37 from keras import Model
38 from keras.src.api_export import keras_export
39 from keras.src.wrappers._utils import accepts_kwargs
ImportError: cannot import name 'Model' from partially initialized module 'keras' (most likely due to a circular import) (/home/adrin/Projects/gh/me/keras/keras/__init__.py)
```
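For what it's worth, the usual escape hatch for hint-only imports (an assumption about what would work here, not current Keras policy) is the `typing.TYPE_CHECKING` guard, which keeps the name visible to the type checker without executing the import at runtime:

```python
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    from keras import Model  # resolved only by the type checker, so no runtime cycle

def describe(model: "Model") -> str:
    # The annotation stays a string at runtime, so keras is never imported here.
    return type(model).__name__

class FakeModel:
    pass

print(describe(FakeModel()))  # FakeModel
```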
Checking the codebase, I realise type hints are not a thing we do here, so I'll remove them. But it still raises the question: what do we gain from the separation of the two folders, which adds quite a bit of complexity? In other projects, we tend to use a leading `_` on file names, with `__init__.py` exposing what needs to be _public_ at the user API level.
"type:support"
] | adrinjalali | 7 |
Crinibus/scraper | web-scraping | 63 | Add ability to scrape Amazon.com | closed | 2020-09-01T20:50:12Z | 2020-09-30T21:36:07Z | https://github.com/Crinibus/scraper/issues/63 | [
"enhancement"
] | Crinibus | 0 | |
deezer/spleeter | deep-learning | 173 | [Discussion] I tried this mp3, and it always fails |
[flower road.zip](https://github.com/deezer/spleeter/files/3940275/flower.road.zip)
Please unzip it to get an mp3. I do not know why this song cannot be processed. Please help me. | closed | 2019-12-09T15:05:32Z | 2019-12-23T16:54:25Z | https://github.com/deezer/spleeter/issues/173 | [
"question"
] | eminfan | 3 |
alpacahq/alpaca-trade-api-python | rest-api | 222 | RuntimeError (no event loop.) | When the StreamConn run method is invoked from a thread that is _not the main thread_ a RuntimeError is thrown; asyncio loop not present.
**The cause:**
Exception in thread Thread-1:
Traceback (most recent call last):
File "/Users/dans-acc/miniconda3/envs/bot-env/lib/python3.7/threading.py", line 926, in _bootstrap_inner
self.run()
File "/Users/dans-acc/Workbench/Bot/bot/stream.py", line 171, in run
self._init_apca_conn()
File "/Users/dans-acc/Workbench/Bot/bot/stream.py", line 61, in _init_apca_conn
base_url=self.apca_url)
File "/Users/dans-acc/miniconda3/envs/bot-env/lib/python3.7/site-packages/alpaca_trade_api/stream2.py", line 27, in __init__
self.loop = asyncio.get_event_loop()
File "/Users/dans-acc/miniconda3/envs/bot-env/lib/python3.7/asyncio/events.py", line 644, in get_event_loop
% threading.current_thread().name)
RuntimeError: There is no current event loop in thread 'Thread-1'.
**Thrown by:**
```python
if self._local._loop is None:
raise RuntimeError('There is no current event loop in thread %r.'
% threading.current_thread().name)
```
**The fix needs to catch the RuntimeError, analogous to how websockets.WebSocketException is caught:**
```python
try:
    self.loop = asyncio.get_event_loop()
except RuntimeError as e:
    logging.warning(e)
    self.loop = asyncio.new_event_loop()
    asyncio.set_event_loop(self.loop)
```
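A self-contained sketch of that fallback, runnable without alpaca-trade-api (note that asyncio raises `RuntimeError` here, not a websockets exception, which is exactly what a worker thread hits):

```python
import asyncio
import threading

def get_or_create_loop():
    # In a thread with no event loop, asyncio.get_event_loop() raises
    # RuntimeError; fall back to creating and installing a fresh loop.
    try:
        return asyncio.get_event_loop()
    except RuntimeError:
        loop = asyncio.new_event_loop()
        asyncio.set_event_loop(loop)
        return loop

results = []

def worker():
    loop = get_or_create_loop()  # a plain get_event_loop() would raise here
    results.append(loop.run_until_complete(asyncio.sleep(0, result="ok")))
    loop.close()

t = threading.Thread(target=worker)
t.start()
t.join()
print(results)  # ['ok']
```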
_I will shortly make a pull request that catches this issue._
Best,
Dans
| closed | 2020-06-17T13:03:59Z | 2020-06-19T12:07:41Z | https://github.com/alpacahq/alpaca-trade-api-python/issues/222 | [] | dans-acc | 3 |
Evil0ctal/Douyin_TikTok_Download_API | web-scraping | 32 | Why does the page open normally on PC but show "Powered by pywebio" when opened in WeChat? | Which setting is wrong? | closed | 2022-05-28T16:20:09Z | 2022-06-23T23:01:38Z | https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/32 | [] | xbzu | 1 |
zappa/Zappa | django | 686 | [Migrated] AttributeError: 'ZappaCLI' object has no attribute 'apigateway_policy' | Originally from: https://github.com/Miserlou/Zappa/issues/1747 by [rajdotnet](https://github.com/rajdotnet)
The zappa template dev --l your-lambda-arn -r your-role-arn CLI command gives an AttributeError: 'ZappaCLI' object has no attribute 'apigateway_policy' error in version 0.47.1
## Context
<!--- Provide a more detailed introduction to the issue itself, and why you consider it to be a bug -->
<!--- Also, please make sure that you are running Zappa _from a virtual environment_ and are using Python 2.7/3.6 -->
This command works fine in version 0.47.0
## Expected Behavior
The zappa template command should work and generate a CloudFormation template in version 0.47.1 instead of erroring out
## Actual Behavior
Calling template for stage dev..
Oh no! An error occurred! :(
==============
Traceback (most recent call last):
File "c:\program files (x86)\python36-32\lib\site-packages\zappa\cli.py", line 2712, in handle
sys.exit(cli.handle())
File "c:\program files (x86)\python36-32\lib\site-packages\zappa\cli.py", line 509, in handle
self.dispatch_command(self.command, stage)
File "c:\program files (x86)\python36-32\lib\site-packages\zappa\cli.py", line 553, in dispatch_command
json=self.vargs['json']
File "c:\program files (x86)\python36-32\lib\site-packages\zappa\cli.py", line 666, in template
policy=self.apigateway_policy
AttributeError: 'ZappaCLI' object has no attribute 'apigateway_policy'
## Possible Fix
<!--- Not obligatory, but suggest a fix or reason for the bug -->
## Steps to Reproduce
<!--- Provide a link to a live example, or an unambiguous set of steps to -->
<!--- reproduce this bug include code to reproduce, if relevant -->
1. run zappa init
2. run zappa template dev --l your-lambda-arn -r your-role-arn
3.
## Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
* Zappa version used: 0.47.1
* Operating System and Python version: Windows 10 python version 3.6.4
* The output of `pip freeze`:
argcomplete==1.9.3
arrow==0.12.1
astroid==2.0.4
aws-lambda-builders==0.0.4
aws-sam-cli==0.9.0
aws-sam-translator==1.9.0
binaryornot==0.4.4
boto3==1.9.67
botocore==1.12.67
certifi==2018.10.15
cfn-flip==1.0.3
chardet==3.0.4
chevron==0.13.1
click==6.7
colorama==0.3.9
cookiecutter==1.6.0
dateparser==0.7.0
docker==3.6.0
docker-pycreds==0.4.0
docutils==0.14
durationpy==0.5
Flask==1.0.2
future==0.16.0
hjson==3.0.1
idna==2.7
isort==4.3.4
itsdangerous==1.1.0
Jinja2==2.10
jinja2-time==0.2.0
jmespath==0.9.3
jsonschema==2.6.0
kappa==0.6.0
lambda-packages==0.20.0
lazy-object-proxy==1.3.1
MarkupSafe==1.1.0
mccabe==0.6.1
placebo==0.8.2
poyo==0.4.2
pylint==2.1.1
pypiwin32==223
python-dateutil==2.6.1
python-slugify==1.2.4
pytz==2018.7
pywin32==224
PyYAML==3.13
regex==2018.11.22
requests==2.20.0
s3transfer==0.1.13
six==1.11.0
toml==0.10.0
tqdm==4.19.1
troposphere==2.3.3
typed-ast==1.1.0
tzlocal==1.5.1
Unidecode==1.0.22
urllib3==1.24
virtualenv==16.0.0
websocket-client==0.54.0
Werkzeug==0.14.1
whichcraft==0.5.2
wrapt==1.10.11
wsgi-request-logger==0.4.6
zappa==0.47.1
* Link to your project (optional):
* Your `zappa_settings.py`:
{
    "dev": {
        "app_function": "app.app",
        "aws_region": "us-east-1",
        "profile_name": "dev",
        "project_name": "ex-zappa",
        "runtime": "python3.6",
        "s3_bucket": "zappa-sample-flask-lambda"
    }
}
| closed | 2021-02-20T12:32:56Z | 2024-04-13T18:14:02Z | https://github.com/zappa/Zappa/issues/686 | [
"no-activity",
"auto-closed"
] | jneves | 2 |
koxudaxi/fastapi-code-generator | fastapi | 64 | Schema with enums optional fields are not generated correctly | When the schema includes optional string properties with enum format, the generated model crashes with the following exception:
`TypeError: Optional[t] requires a single type. Got FieldInfo(description='', extra={}).`
The reason is the generated Enum class has the same name as the Field:
```python
from enum import Enum
from typing import Optional

from pydantic import BaseModel, Field


class Prop(Enum):
    A = 'A'
    B = 'B'
    C = 'C'


class Request(BaseModel):
    Prop: Optional[Prop] = Field(None, description='')  # <-- the problem is here
```
When run, `Optional.__getitem__()` gets `Prop` - the `FieldInfo`, not the `Enum` class. Changing `class Prop(Enum)` to `class PropEnum(Enum)` solved the problem.
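The failure mode can be reproduced without the generator: `Optional[...]` rejects anything that is not a type, which is exactly what happens once the field name shadows the Enum (here `object()` stands in for pydantic's `FieldInfo`, an assumption for illustration):

```python
from enum import Enum
from typing import Optional

class PropEnum(Enum):  # distinct name: the field can keep the name "Prop"
    A = "A"

# Once the class attribute rebinds "Prop" to a FieldInfo-like object,
# Optional[...] is handed a value instead of a type and raises TypeError.
not_a_type = object()
try:
    Optional[not_a_type]
    outcome = "accepted"
except TypeError:
    outcome = "TypeError"

print(outcome)             # TypeError
print(Optional[PropEnum])  # fine once the Enum keeps its own name
```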
To reproduce, run with the following schema:
```yaml
openapi: 3.0.0
servers:
  - description: SwaggerHub API Auto Mocking
    url: 'https://virtserver.swaggerhub.com/cropmodel/1.0.0'
info:
  description: Demo
  version: 1.0.0
  title: Demo
paths:
  /:
    post:
      requestBody:
        required: true
        content:
          application/json:
            schema:
              $ref: '#/components/schemas/Request'
      responses:
        '200':
          description: OK
        '400':
          description: bad input parameter
components:
  schemas:
    Request:
      type: object
      properties:
        Prop:
          type: string
          enum: [A, B, C]
          description: ""
```
| closed | 2020-11-24T15:48:17Z | 2020-12-24T10:12:11Z | https://github.com/koxudaxi/fastapi-code-generator/issues/64 | [
"bug",
"released"
] | boaza | 3 |
betodealmeida/shillelagh | sqlalchemy | 65 | Allow multi-line queries in REPL | We should allow users to write multi-line queries in the REPL, running them only after a semicolon. | closed | 2021-07-04T15:52:08Z | 2022-07-25T17:31:21Z | https://github.com/betodealmeida/shillelagh/issues/65 | [
"enhancement",
"help wanted",
"good first issue"
] | betodealmeida | 1 |
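A minimal sketch of the buffering loop such a REPL needs (names are illustrative, not shillelagh's actual API):

```python
def read_statement(lines):
    # Keep collecting input lines until one ends with ";", then hand the
    # joined statement to the executor; return None if still incomplete.
    buf = []
    for line in lines:
        buf.append(line.rstrip())
        if buf[-1].endswith(";"):
            return " ".join(buf)
    return None

print(read_statement(["SELECT *", "FROM t", "WHERE x = 1;"]))
# SELECT * FROM t WHERE x = 1;
```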
microsoft/MMdnn | tensorflow | 821 | tf2caffe: slice layer cannot convert |

Has anyone else encountered this error? | open | 2020-04-13T12:16:26Z | 2020-04-16T05:07:22Z | https://github.com/microsoft/MMdnn/issues/821 | [
"bug"
] | bitwangdan | 7 |
graphql-python/gql | graphql | 54 | No documentation | Tried replicating the two tests against a publicly available endpoint, but they fail. Also tried lifting implementation ideas from the Apollo documentation into Python, but the differences were not intuitively discoverable. Unable to proceed with any implementation. | closed | 2020-01-24T17:30:25Z | 2020-03-11T11:07:45Z | https://github.com/graphql-python/gql/issues/54 | [] | codepoet80 | 5 |
kiwicom/pytest-recording | pytest | 38 | Use default mode `once` in README | I'm new to `VCR.py` and `pytest-recording`. Just setup for my scraper project today.
Following the `README`, I used `--record-mode=all` to run `pytest`. However, the tests failed in CI (I use CircleCI) and the YAMLs under cassettes were changed.
After reading the [VCR.py doc](https://vcrpy.readthedocs.io/en/latest/usage.html#once), I changed to `--record-mode=once`. Now the cassettes are no longer updated and all tests pass in CI.
IMO, I suggest using `--record-mode=once` in the README, which may be easier for beginners.
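The difference between the two modes can be sketched as a toy decision function (simplified; VCR.py's full matrix has more modes):

```python
def vcr_action(mode, cassette_exists):
    # Toy model of the two record modes compared above:
    # "all" always hits the network and rewrites the cassette;
    # "once" records only when no cassette exists, otherwise replays.
    if mode == "all":
        return "record"
    if mode == "once":
        return "record" if not cassette_exists else "replay"
    raise ValueError(mode)

print(vcr_action("all", cassette_exists=True))   # record -> YAMLs change, CI breaks
print(vcr_action("once", cassette_exists=True))  # replay -> deterministic in CI
```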
| closed | 2020-02-10T03:53:10Z | 2020-02-10T10:29:42Z | https://github.com/kiwicom/pytest-recording/issues/38 | [] | northtree | 1 |
Evil0ctal/Douyin_TikTok_Download_API | web-scraping | 312 | [BUG] Briefly describe the problem | ***Platform where the error occurred?***
e.g., Douyin/TikTok
***Endpoint where the error occurred?***
e.g., API-V1/API-V2/Web APP
***Input value submitted?***
e.g., a short video link
***Did you try again?***
e.g., Yes; the error still persisted X amount of time later.
***Have you checked this project's README or API documentation?***
e.g., Yes, and I am certain the problem is caused by the program.
| closed | 2023-11-02T02:11:16Z | 2024-02-07T03:44:32Z | https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/312 | [
"BUG"
] | RobinsChens | 0 |
ultralytics/ultralytics | python | 19,090 | Mask RTDETR | ### Search before asking
- [x] I have searched the Ultralytics [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar feature requests.
### Description
Will Mask RTDETR be integrated into the project in the future?
### Use case
_No response_
### Additional
_No response_
### Are you willing to submit a PR?
- [ ] Yes I'd like to help by submitting a PR! | open | 2025-02-06T02:16:28Z | 2025-02-06T21:02:50Z | https://github.com/ultralytics/ultralytics/issues/19090 | [
"enhancement",
"segment"
] | wyk-study | 2 |
PeterL1n/RobustVideoMatting | computer-vision | 133 | A question about training | In training stage 2, `T` will be increased to 50 to force the network see longer sequences and learn long-term dependencies.
However, I think we will never set `T` as 50 in practical applications, It's a conflict for `RVM` which is a real-time matting method.
And the training only lasts 2 epoch.
So is the stage 2 necessary?Is there any experiments to prove the role of stage 2?
Looking forward to your reply! | closed | 2022-01-14T11:45:12Z | 2022-01-19T04:40:32Z | https://github.com/PeterL1n/RobustVideoMatting/issues/133 | [] | Asthestarsfalll | 1 |
ageitgey/face_recognition | machine-learning | 992 | Weird error: "RuntimeError: Error while calling cudnnConvolutionForward ... code: 7, reason: A call to cuDNN failed" | I tried another face detection method, which returns bounding box values of float type. I then convert them with int() and feed them to face_recognition.face_encodings() (which I alias as get_face_encodings()). Then I get the error detailed below:
Traceback (most recent call last):
File "predict_tensorrt_video.py", line 665, in <module>
main()
File "predict_tensorrt_video.py", line 88, in inner
retval = fnc(*args, **kwargs)
File "predict_tensorrt_video.py", line 659, in main
run_inference(args.video_in, args.video_out, candidate_id, current_time)
File "predict_tensorrt_video.py", line 549, in run_inference
face_encoding = get_face_encodings(frame, css_type_face_location, 0)[0]
File "/home/gate/.virtualenvs/lffd/lib/python3.6/site-packages/face_recognition/api.py", line 210, in face_encodings
return [np.array(face_encoder.compute_face_descriptor(face_image, raw_landmark_set, num_jitters)) for raw_landmark_set in raw_landmarks]
File "/home/gate/.virtualenvs/lffd/lib/python3.6/site-packages/face_recognition/api.py", line 210, in <listcomp>
return [np.array(face_encoder.compute_face_descriptor(face_image, raw_landmark_set, num_jitters)) for raw_landmark_set in raw_landmarks]
RuntimeError: Error while calling cudnnConvolutionForward( context(), &alpha, descriptor(data), data.device(), (const cudnnFilterDescriptor_t)filter_handle, filters.device(), (const cudnnConvolutionDescriptor_t)conv_handle, (cudnnConvolutionFwdAlgo_t)forward_algo, forward_workspace, forward_workspace_size_in_bytes, &beta, descriptor(output), output.device()) in file /home/gate/dlib-19.17/dlib/cuda/cudnn_dlibapi.cpp:1007. code: 7, reason: A call to cuDNN failed
cudaStreamDestroy() failed. Reason: invalid device ordinal
cudaFree() failed. Reason: invalid device pointer
cudaFreeHost() failed. Reason: invalid argument
cudaStreamDestroy() failed. Reason: unknown error
cudaFree() failed. Reason: invalid device pointer
cudaFreeHost() failed. Reason: invalid argument
cudaFree() failed. Reason: invalid device pointer
Segmentation fault (core dumped)
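For reference, the conversion step I use looks roughly like this (a minimal sketch; the float box values and the helper name `css_box_from_xyxy` are made up for illustration, and I assume the other detector returns `(x1, y1, x2, y2)` order):

```python
def css_box_from_xyxy(box):
    # convert a float (x1, y1, x2, y2) box to the int
    # (top, right, bottom, left) css order that face_recognition expects
    x1, y1, x2, y2 = box
    return (int(y1), int(x2), int(y2), int(x1))

# hypothetical output from the other face detector
float_box = (12.7, 34.2, 118.9, 140.5)
css_type_face_location = css_box_from_xyxy(float_box)
print(css_type_face_location)  # (34, 118, 140, 12)
```

The encoding call is then `face_recognition.face_encodings(frame, [css_type_face_location])`.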
Does anyone know how to solve it? I've struggled with it all day. Thank you a lot! | open | 2019-12-03T14:09:36Z | 2022-08-19T17:29:03Z | https://github.com/ageitgey/face_recognition/issues/992 | [] | congphase | 8 |
blacklanternsecurity/bbot | automation | 1,408 | Badsecrets taking a long time | ```
[DBUG] badsecrets.finished: False
[DBUG] running: True
[DBUG] tasks:
[DBUG] - badsecrets.handle_event(HTTP_RESPONSE("{'url': 'http://www.engage.tesla.com/', 'timestamp': '2024-05-28T05:36:15.965878...", module=httpx, tags={'in-scope', 'cloud-amazon', 'http-title-301-moved-permanently', 'status-301', 'ip-13-35-93-129', 'dir'})) running for 3 minutes, 52 seconds:
[DBUG] incoming_queue_size: 111
[DBUG] outgoing_queue_size: 0
``` | closed | 2024-05-28T05:44:41Z | 2024-06-12T13:29:55Z | https://github.com/blacklanternsecurity/bbot/issues/1408 | [
"bug"
] | TheTechromancer | 3 |
opengeos/leafmap | streamlit | 311 | Measure distances and area tool bugging? | The measure distances and area tool appears to have a bug when used in a Jupyter notebook. Has anyone else experienced this issue?
When I click on the map to begin a measurement, the screen shifts on each click. This video demonstrates the issue:
https://user-images.githubusercontent.com/93473831/202700247-cc343b03-d62a-4233-a035-2b9b76ae4346.mp4
This error occurs on the basemap, without having added additional layers, ruling out a projection issue.
I also created a new environment containing only Leafmap, to confirm it was not a package conflict, and the error still occurs.
The use of the measure tool is integral to my use of Leafmap, I thank you for any time, support and guidance you can offer. | closed | 2022-11-18T12:47:06Z | 2022-11-23T04:27:49Z | https://github.com/opengeos/leafmap/issues/311 | [] | PennyJClarke | 3 |
pytest-dev/pytest-selenium | pytest | 139 | "'geckodriver' executable needs to be in PATH" error in 1.11.2 | After update of pytest-selenium to 1.11.2 (https://github.com/pytest-dev/pytest-selenium/commit/0773c93a54f3218d960f4b11b026f2b58a918922 to be more precise) my selenium tests stopped working (using Firefox).
I started to receive `WebDriverException: Message: 'geckodriver' executable needs to be in PATH.` error.
Debugging has shown that `capabilities` are now getting set to `{'acceptInsecureCerts': True, 'browserName': 'firefox', 'marionette': True}` instead of `{'browserName': 'firefox'}`, which was the case before the update (they are now taken from selenium's `DesiredCapabilities`). | closed | 2017-11-17T13:51:37Z | 2017-11-28T11:10:26Z | https://github.com/pytest-dev/pytest-selenium/issues/139 | [] | bavaria95 | 19 |
plotly/plotly.py | plotly | 4,306 | Long labels are getting cut off when using write_image in plotly when creating polar plots. | I created spider plots using plotly. The labels are shown properly when visualizing the image but during write_image the labels are cut off.
```
import plotly.express as px
import pandas as pd
import plotly.graph_objects as go
if 1:
radii=[0.5, 0.8, 0.3 , 0.7, 0.9]
labels=['left -A long label 1', 'left -A long label 2','left -A long label 3','left -A long label 4', 'left -A long label 5']
percentiles=[0.25,0.5,0.75]
colors = ['rgba(255, 0, 0, 0.8)', 'rgba(0, 255, 0, 0.8)', 'rgba(0, 0,255, 0.8)']
#create the background circles
barpolar_plots = [go.Barpolar(r=[percentiles[t]], width=360, marker_color=[colors[t]], opacity=0.3,showlegend=False) for t in range(0, len(colors))]
layout = go.Figure()
layout.add_traces(barpolar_plots)
layout.update_polars(radialaxis_autorange=False, radialaxis_range=[0,1])
    # this removes angular and radial ticks on the plot
layout.update_layout(polar = dict(radialaxis = dict(showgrid = False, showticklabels=True), angularaxis = dict(showticklabels = False, showgrid=False)))
layout.update_layout(polar=dict(radialaxis=dict(tick0=0,dtick=0.25)))
#bg layout ends
df = pd.DataFrame(dict(r=radii, theta=labels))
#add line plot on top of spider plot and replace degree angles from numbers ot labels
fig=px.line_polar(df, r='r', theta='theta', line_close=True)
fig.update_traces(subplot="polar2")
    # this line is supposed to remove tick labels but it does not
fig.update_layout(polar = dict(radialaxis = dict(showticklabels = False)))
layout = layout.add_traces(fig.data).update_layout({ax:{"domain":{"x":[0,1]}} for ax in ["polar","polar2"]})
layout.update_layout(polar2={"bgcolor":"rgba(0,0,0,0)"})
layout.update_layout(polar2 = dict(radialaxis = dict(showticklabels = False)))
layout.update_layout(font=dict(family="Arial Black",size=20, color="black"))
layout.write_image( 'test_spider.png', scale=2.5,width=800, height=800)
```
| closed | 2023-08-01T22:44:04Z | 2024-07-11T23:34:20Z | https://github.com/plotly/plotly.py/issues/4306 | [] | saurabhpre | 3 |
microsoft/MMdnn | tensorflow | 231 | failed to convert MXnet model to TF | ### Platform (like ubuntu 16.04/win10):
Manjaro 17.1
### Python version:
Python 3.6.5
### Source framework with version (like Tensorflow 1.4.1 with GPU):
mxnet-latest with GPU
### Destination framework with version (like CNTK 2.3 with GPU):
tensorflow with GPU
### Pre-trained model path (webpath or webdisk path)
https://github.com/deepinsight/insightface
https://pan.baidu.com/s/1If28BkHde4fiuweJrbicVA
### Running scripts:
``` bash
python -m mmdnn.conversion._script.convertToIR -f mxnet -n model-symbol.json -w model-0000.params -d my_net --inputShape 3 112 112
```
### Output:
``` bash
$ python -m mmdnn.conversion._script.convertToIR -f mxnet -n model-symbol.json -w model-0000.params -d resnet50 --inputShape 3 112 112
[14:37:14] src/nnvm/legacy_json_util.cc:209: Loading symbol saved by previous version v1.0.0. Attempting to upgrade...
[14:37:14] src/nnvm/legacy_json_util.cc:217: Symbol successfully upgraded!
/usr/lib/python3.6/site-packages/mxnet/module/base_module.py:54: UserWarning: You created Module with Module(..., label_names=['softmax_label']) but input with name 'softmax_label' is not found in symbol.list_arguments(). Did you mean one of:
data
warnings.warn(msg)
Warning: MXNet Parser has not supported operator null with name data.
Warning: convert the null operator with name [data] into input layer.
Traceback (most recent call last):
File "/usr/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/usr/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/usr/lib/python3.6/site-packages/mmdnn/conversion/_script/convertToIR.py", line 186, in <module>
_main()
File "/usr/lib/python3.6/site-packages/mmdnn/conversion/_script/convertToIR.py", line 181, in _main
ret = _convert(args)
File "/usr/lib/python3.6/site-packages/mmdnn/conversion/_script/convertToIR.py", line 99, in _convert
parser.run(args.dstPath)
File "/usr/lib/python3.6/site-packages/mmdnn/conversion/common/DataStructure/parser.py", line 22, in run
self.gen_IR()
File "/usr/lib/python3.6/site-packages/mmdnn/conversion/mxnet/mxnet_parser.py", line 262, in gen_IR
func(current_node)
File "/usr/lib/python3.6/site-packages/mmdnn/conversion/mxnet/mxnet_parser.py", line 398, in rename_FullyConnected
weight = self.weight_data.get(source_node.name + "_weight").asnumpy().transpose((1, 0))
AttributeError: 'NoneType' object has no attribute 'asnumpy'
``` | closed | 2018-06-05T11:48:44Z | 2021-02-24T10:34:54Z | https://github.com/microsoft/MMdnn/issues/231 | [] | IdeoG | 3 |
dmlc/gluon-nlp | numpy | 1,089 | [website] SageMaker/Colab companion for tutorials | d2l.ai currently supports viewing notebooks in colab (see http://d2l.ai/chapter_multilayer-perceptrons/mlp.html as an example). It also contains instructions to run notebooks on Sagemaker http://d2l.ai/chapter_appendix-tools-for-deep-learning/sagemaker.html and colab http://d2l.ai/chapter_appendix-tools-for-deep-learning/aws.html
This makes running tutorials much easier, and it would be a good thing to add to the GluonNLP website. The only thing is that we need to configure the Colab/SageMaker environment the same as the one used on CI to build our website. | open | 2020-01-03T23:49:09Z | 2020-01-05T05:10:51Z | https://github.com/dmlc/gluon-nlp/issues/1089 | [
"enhancement",
"documentation"
] | eric-haibin-lin | 1 |
sunscrapers/djoser | rest-api | 494 | SIMPLE_JWT default settings recomendation in your docs. | Hi
Please check this setting recommendation in your docs.
```
SIMPLE_JWT = {
'AUTH_HEADER_TYPES': ('JWT',),
}
```
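For context, my understanding (which may be wrong) is that `AUTH_HEADER_TYPES` only changes the prefix the server accepts in the `Authorization` header. A sketch of what the client must send in each case (the token value is made up):

```python
token = "aaa.bbb.ccc"  # hypothetical JWT

# with AUTH_HEADER_TYPES = ('JWT',) the client must send:
jwt_header = f"Authorization: JWT {token}"

# with the simplejwt default ('Bearer',) it would be:
bearer_header = f"Authorization: Bearer {token}"
print(jwt_header)
```

If the frontend keeps sending one prefix while the backend expects the other, the credentials are simply not recognized.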
With this setting, JWT authorization doesn't work for some reason. Without it, it works fine. | open | 2020-05-07T06:55:29Z | 2021-04-07T14:44:12Z | https://github.com/sunscrapers/djoser/issues/494 | [] | AlekseiKhatkevich | 3 |
microsoft/MMdnn | tensorflow | 498 | could I convert self-made caffe models to pytorch models ? | Platform (like ubuntu 16.04/win10):
> archlinux
>
Python version:
>python3.5
>
Source framework with version (like Tensorflow 1.4.1 with GPU):
> caffe
>
Destination framework with version (like CNTK 2.3 with GPU):
> pytorch
>
Pre-trained model path (webpath or webdisk path):
>prototxt: http://www.cs.jhu.edu/~alanlab/ccvl/DeepLab-COCO-LargeFOV/train.prototxt
model: http://www.cs.jhu.edu/~alanlab/ccvl/init_models/vgg16_20M.caffemodel
>
Running scripts:
>mmconvert -sf caffe -in train.prototxt -iw vgg16_20M.caffemodel -df pytorch -om vgg16.dnn
>
| open | 2018-11-13T04:20:24Z | 2018-12-19T13:42:53Z | https://github.com/microsoft/MMdnn/issues/498 | [] | CoinCheung | 3 |
davidteather/TikTok-Api | api | 943 | create a live broadcast room | Hello,
How should I create a live broadcast room after the user logs in, and then start the broadcast? | closed | 2022-09-06T06:15:45Z | 2023-08-08T22:05:03Z | https://github.com/davidteather/TikTok-Api/issues/943 | [
"bug"
] | MrsZ | 1 |
CorentinJ/Real-Time-Voice-Cloning | tensorflow | 1,141 | The solution is probably just installing the latest supported pip version. | The solution is probably just installing the latest supported pip version.
```py
python.exe -m pip install pip==21.3.0
```
__Originally posted by @ApaxPhoenix in https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1113#issuecomment-1328110070__ | closed | 2022-11-28T14:11:44Z | 2022-11-28T14:12:03Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1141 | [] | ImanuillKant1 | 0 |
igorbenav/FastAPI-boilerplate | sqlalchemy | 41 | Enhance import system and folder structure | **Enhance imports**
condensing import statements for brevity:
`from app.core import async_get_db, TokenData, is_rate_limited, logging`
This makes imports more concise and groups related items together.
**Folder structure**
Moving schemas inside the version folder could enhance the overall project organization. It might make the versioning more intuitive.
I believe middlewares should live in a separate folder whose name tells what it contains; right now, for example, the cache middleware sits inside `app/core/cache.py`.
Also, the exceptions file should be a folder with one module per "source of error" (not sure about the right word for it): errors from other services/third parties could go in a `service.py` file (as an example) inside the exceptions folder, and likewise for users, e.g. `UserNotFound`.
**Logging system**
Logger configuration lives in one file, but a logger object is declared separately in every module that uses it.
This code is from the `rate_limit.py` file:
```
from app.core.logger import logging
from app.schemas.rate_limit import sanitize_path
logger = logging.getLogger(__name__)
pool: ConnectionPool | None = None
client: Redis | None = None
logger = logging.getLogger(__name__)  # it's being declared twice
```
My suggestion will make this easier: you will only need to import the logger (object):
`from app.core import ( logger, ...)`
* This is my first time opening a github issue
* sorry for my bad english
| closed | 2023-11-13T16:46:47Z | 2023-11-14T05:24:52Z | https://github.com/igorbenav/FastAPI-boilerplate/issues/41 | [] | YousefAldabbas | 2 |
exaloop/codon | numpy | 240 | Recursively calling a function in "try" branch leads to segfault | The following code recursively calls the function foo() in the try branch, which crashes Codon in release mode.
test.py
```
def foo():
try:
foo()
except Exception as e:
pass
foo()
```
Reproduce:
codon/codon-linux-x86_64/codon-deploy/bin/codon run -release test.py
Crash message: Segmentation Fault
Behavior on Python 3.10.8: works well
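For comparison, a sketch of why the same pattern terminates cleanly on CPython: the interpreter raises `RecursionError` at the depth limit, and `except Exception` swallows it (the depth counter is added here just for visibility; it is not in the original repro):

```python
import sys

def foo(depth=0):
    try:
        return foo(depth + 1)
    except Exception:  # RecursionError on CPython
        return depth

reached = foo()
print(reached, "frames before RecursionError")
```

Codon presumably has no such guard, so the unbounded recursion overflows the native stack and segfaults.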
Environment:
codon: v0.15.5 on Feb 6
Ubuntu 18.04 | closed | 2023-03-15T08:18:18Z | 2024-11-09T19:31:19Z | https://github.com/exaloop/codon/issues/240 | [] | xiaxinmeng | 1 |
qubvel-org/segmentation_models.pytorch | computer-vision | 385 | PSPNet blank prediction result | Sorry for my broken English.
I was trying to train my model using Unet and PSPNet. Both models' IoU and loss scores are pretty good, but when I tried to run prediction with my trained model, I found out that the PSPNet model's prediction result is blank/empty. I made sure that all the parameters are correct. Since I was training the model on multiple classes, both softmax2d and softmax activations were tried; the result is still blank/empty. As mentioned in #176, it looks like the torch implementation of the activation is the main reason. When I used only one class with sigmoid activation, the PSPNet and Unet models work well, but when using softmax as the activation, PSPNet can't predict correctly. If you have any suggestion, please comment. Thank you.
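For reference, this is the post-processing I apply to the multi-class (softmax) output; maybe the problem is in this step rather than in the model itself (the shapes and values below are made up, and `pr_mask` stands for the model output after softmax):

```python
import numpy as np

# hypothetical softmax output with shape (classes, H, W)
pr_mask = np.random.rand(3, 8, 8)
pr_mask = pr_mask / pr_mask.sum(axis=0, keepdims=True)

# per-pixel class index; with argmax the mask can never be all-empty
label_mask = pr_mask.argmax(axis=0)
print(label_mask.shape)  # (8, 8)
```

With sigmoid and one class I threshold at 0.5 instead, which is why that case may behave differently.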
my ground truth and prediction result (PSPNet) :

the multiclass implementation similar with the code below :
[https://github.com/shirokawakita/multiclass-segmentation/blob/main/example_camvid_multiclassB_quita.ipynb](https://github.com/shirokawakita/multiclass-segmentation/blob/main/example_camvid_multiclassB_quita.ipynb)
| closed | 2021-04-20T16:21:29Z | 2021-04-28T03:26:05Z | https://github.com/qubvel-org/segmentation_models.pytorch/issues/385 | [] | mesakh123 | 1 |
nolar/kopf | asyncio | 261 | 01-minimal example does not work as expected. | > <a href="https://github.com/zakkg3"><img align="left" height="50" src="https://avatars0.githubusercontent.com/u/25042095?v=4"></a> An issue by [zakkg3](https://github.com/zakkg3) at _2019-12-03 14:41:42+00:00_
> Original URL: https://github.com/zalando-incubator/kopf/issues/261
>
## Expected Behavior
As the https://github.com/nolar/kopf/blob/master/examples/01-minimal/README.md says:
```
kubectl get kopfexamples
NAME DURATION CHILDREN MESSAGE
kopf-example-1 1m hello world
```
and
```
$ kubectl get KopfExample kopf-example-1 -o yaml
apiVersion: zalando.org/v1
kind: KopfExample
metadata:
...
spec:
duration: 1m
field: value
items:
- item1
- item2
status:
message: hello world
```
## Actual Behavior
as documented : `Whatever is returned from any handler, is stored in the object’s status under that handler id (which is the function name by default).`
so I get:
```
# kubectl get KopfExample kopf-example-1 -o yaml
...
status:
create_fn:
message: hello world
```
and then the message is not picked up:
```
kubectl get kopfexamples
NAME DURATION CHILDREN MESSAGE
kopf-example-1 1m
```
## Steps to Reproduce the Problem
1. run the minimal example.
## Specifications
- Platform: metal K8s
- Kubernetes version: *(use `kubectl version`)* 1.15.5
- Python version: *(use `python --version`)* 3.8
- Python packages installed: *(use `pip freeze --all`)* k8s and kopf
```
```
Am I doing something wrong? Am I missing something?
Thanks!
---
> <a href="https://github.com/nolar"><img align="left" height="30" src="https://avatars0.githubusercontent.com/u/544296?v=4"></a> Commented by [nolar](https://github.com/nolar) at _2019-12-03 14:54:53+00:00_
>
Thanks for reporting. Indeed, that is the definition of `crd.yaml`. It should be:
```yaml
additionalPrinterColumns:
- name: Duration
type: string
priority: 0
JSONPath: .spec.duration
description: For how long the pod should sleep.
- name: Children
type: string
priority: 0
JSONPath: .status.create_fn.children
description: The children pods created.
- name: Message
type: string
priority: 0
JSONPath: .status.create_fn.message
description: As returned from the handler (sometimes).
```
(With `.create_fn.` in two fields).
---
> <a href="https://github.com/zakkg3"><img align="left" height="30" src="https://avatars0.githubusercontent.com/u/25042095?v=4"></a> Commented by [zakkg3](https://github.com/zakkg3) at _2019-12-03 15:09:36+00:00_
>
Yes, that will fix all the examples :). Thanks for the super-fast answer. We are thinking about using this framework, and with this response, I think we will go down this path. Thanks!
---
> <a href="https://github.com/nolar"><img align="left" height="30" src="https://avatars0.githubusercontent.com/u/544296?v=4"></a> Commented by [nolar](https://github.com/nolar) at _2019-12-23 13:24:09+00:00_
>
Released as [`kopf==0.24`](https://github.com/nolar/kopf/releases/tag/0.24). | closed | 2020-08-18T20:02:08Z | 2020-08-23T20:53:10Z | https://github.com/nolar/kopf/issues/261 | [
"documentation",
"archive"
] | kopf-archiver[bot] | 0 |
agronholm/anyio | asyncio | 805 | pytest-anyio and crashed background task in taskgroup fixture | ### Things to check first
- [X] I have searched the existing issues and didn't find my bug already reported there
- [X] I have checked that my bug is still present in the latest release
### AnyIO version
4.6.0
### Python version
3.12.4
### What happened?
I'm encountering several weird things, where it will either hang in weird places or crash.
This is from trying to rewrite pytest-trio and encountering the test that was added after https://github.com/python-trio/pytest-trio/pull/77 in https://github.com/python-trio/pytest-trio/pull/83
### How can we reproduce the bug?
```python
import anyio
import pytest
from contextlib import asynccontextmanager
my_event = anyio.Event()
async def die_soon(task_status):
task_status.started()
await my_event.wait()
raise RuntimeError('OOPS')
@asynccontextmanager
async def my_simple_fixture():
async with anyio.create_task_group() as tg:
await tg.start(die_soon)
yield
@pytest.mark.anyio
async def test_try():
async with my_simple_fixture():
my_event.set()
```
Running this with trio as the backend gives:
```
[...]
| File "/tmp/anyio_pytest/bar.py", line 14, in my_simple_fixture
| async with anyio.create_task_group() as tg:
| File "/tmp/anyio_pytest/.venv/lib/python3.12/site-packages/anyio/_backends/_trio.py", line 187, in __aexit__
| return await self._nursery_manager.__aexit__(exc_type, exc_val, exc_tb)
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| File "/tmp/anyio_pytest/.venv/lib/python3.12/site-packages/trio/_core/_run.py", line 959, in __aexit__
| raise combined_error_from_nursery
| ExceptionGroup: Exceptions from Trio nursery (1 sub-exception)
+-+---------------- 1 ----------------
| Traceback (most recent call last):
| File "/tmp/anyio_pytest/bar.py", line 8, in die_soon
| await my_event.wait()
| File "/tmp/anyio_pytest/.venv/lib/python3.12/site-packages/anyio/_core/_synchronization.py", line 130, in wait
| await self._event.wait()
| File "/tmp/anyio_pytest/.venv/lib/python3.12/site-packages/anyio/_backends/_asyncio.py", line 1716, in wait
| await AsyncIOBackend.checkpoint()
| File "/tmp/anyio_pytest/.venv/lib/python3.12/site-packages/anyio/_backends/_asyncio.py", line 2264, in checkpoint
| await sleep(0)
| File "/usr/lib/python3.12/asyncio/tasks.py", line 656, in sleep
| await __sleep0()
| File "/usr/lib/python3.12/asyncio/tasks.py", line 650, in __sleep0
| yield
| TypeError: trio.run received unrecognized yield message None. Are you trying to use a library written for some other framework like asyncio? That won't work without some kind of compatibility shim.
```
If I remove the decorator and directly run `anyio.run(test_try, backend="trio")`, it correctly gives a group with our "OOPS" `RuntimeError`; the same if running the anyio pytest plugin with asyncio as the backend.
### 2
This gives a teardown error and a messy traceback
```python
import anyio
import pytest
@pytest.fixture
def anyio_backend():
return 'asyncio'
async def die_soon():
raise RuntimeError('OOPS')
@pytest.fixture
async def my_simple_fixture():
async with anyio.create_task_group() as tg:
tg.start_soon(die_soon)
yield
async def test_try(my_simple_fixture, anyio_backend):
...
```
Error:
<details>
```
$ pytest bar.py -sv
===================================== test session starts =====================================
platform linux -- Python 3.12.4, pytest-8.3.3, pluggy-1.5.0 -- /tmp/anyio_pytest/.venv/bin/python
cachedir: .pytest_cache
rootdir: /tmp/anyio_pytest
plugins: anyio-4.6.0, trio-0.8.0
collected 1 item
bar.py::test_try FAILED
bar.py::test_try ERROR
=========================================== ERRORS ============================================
________________________________ ERROR at teardown of test_try ________________________________
anyio_backend = 'asyncio', args = (), kwargs = {}, backend_name = 'asyncio'
backend_options = {}, runner = <anyio._backends._asyncio.TestRunner object at 0x71c2350cf260>
def wrapper(*args, anyio_backend, **kwargs): # type: ignore[no-untyped-def]
backend_name, backend_options = extract_backend_and_options(anyio_backend)
if has_backend_arg:
kwargs["anyio_backend"] = anyio_backend
with get_runner(backend_name, backend_options) as runner:
if isasyncgenfunction(func):
> yield from runner.run_asyncgen_fixture(func, kwargs)
.venv/lib/python3.12/site-packages/anyio/pytest_plugin.py:81:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
.venv/lib/python3.12/site-packages/anyio/_backends/_asyncio.py:2187: in run_asyncgen_fixture
self.get_loop().run_until_complete(
/usr/lib/python3.12/asyncio/base_events.py:687: in run_until_complete
return future.result()
.venv/lib/python3.12/site-packages/anyio/_backends/_asyncio.py:2170: in _call_in_runner_task
self._send_stream.send_nowait((coro, future))
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = MemoryObjectSendStream(_state=MemoryObjectStreamState(max_buffer_size=1, buffer=deque([]), open_send_channels=0, open_receive_channels=0, waiting_receivers=OrderedDict(), waiting_senders=OrderedDict()), _closed=True)
item = (<async_generator_asend object at 0x71c23487d2c0>, <Future pending>)
def send_nowait(self, item: T_contra) -> None:
"""
Send an item immediately if it can be done without waiting.
:param item: the item to send
:raises ~anyio.ClosedResourceError: if this send stream has been closed
:raises ~anyio.BrokenResourceError: if the stream has been closed from the
receiving end
:raises ~anyio.WouldBlock: if the buffer is full and there are no tasks waiting
to receive
"""
if self._closed:
> raise ClosedResourceError
E anyio.ClosedResourceError
.venv/lib/python3.12/site-packages/anyio/streams/memory.py:211: ClosedResourceError
========================================== FAILURES ===========================================
__________________________________________ test_try ___________________________________________
pyfuncitem = <Function test_try>
@pytest.hookimpl(tryfirst=True)
def pytest_pyfunc_call(pyfuncitem: Any) -> bool | None:
def run_with_hypothesis(**kwargs: Any) -> None:
with get_runner(backend_name, backend_options) as runner:
runner.run_test(original_func, kwargs)
backend = pyfuncitem.funcargs.get("anyio_backend")
if backend:
backend_name, backend_options = extract_backend_and_options(backend)
if hasattr(pyfuncitem.obj, "hypothesis"):
# Wrap the inner test function unless it's already wrapped
original_func = pyfuncitem.obj.hypothesis.inner_test
if original_func.__qualname__ != run_with_hypothesis.__qualname__:
if iscoroutinefunction(original_func):
pyfuncitem.obj.hypothesis.inner_test = run_with_hypothesis
return None
if iscoroutinefunction(pyfuncitem.obj):
funcargs = pyfuncitem.funcargs
testargs = {arg: funcargs[arg] for arg in pyfuncitem._fixtureinfo.argnames}
with get_runner(backend_name, backend_options) as runner:
try:
> runner.run_test(pyfuncitem.obj, testargs)
.venv/lib/python3.12/site-packages/anyio/pytest_plugin.py:131:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
.venv/lib/python3.12/site-packages/anyio/_backends/_asyncio.py:2217: in run_test
self._raise_async_exceptions()
.venv/lib/python3.12/site-packages/anyio/_backends/_asyncio.py:2121: in _raise_async_exceptions
raise exceptions[0]
.venv/lib/python3.12/site-packages/anyio/_backends/_asyncio.py:2211: in run_test
self.get_loop().run_until_complete(
/usr/lib/python3.12/asyncio/base_events.py:687: in run_until_complete
return future.result()
.venv/lib/python3.12/site-packages/anyio/_backends/_asyncio.py:2170: in _call_in_runner_task
self._send_stream.send_nowait((coro, future))
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = MemoryObjectSendStream(_state=MemoryObjectStreamState(max_buffer_size=1, buffer=deque([]), open_send_channels=0, open_receive_channels=0, waiting_receivers=OrderedDict(), waiting_senders=OrderedDict()), _closed=True)
item = (<coroutine object test_try at 0x71c2350a3280>, <Future pending>)
def send_nowait(self, item: T_contra) -> None:
"""
Send an item immediately if it can be done without waiting.
:param item: the item to send
:raises ~anyio.ClosedResourceError: if this send stream has been closed
:raises ~anyio.BrokenResourceError: if the stream has been closed from the
receiving end
:raises ~anyio.WouldBlock: if the buffer is full and there are no tasks waiting
to receive
"""
if self._closed:
> raise ClosedResourceError
E anyio.ClosedResourceError
.venv/lib/python3.12/site-packages/anyio/streams/memory.py:211: ClosedResourceError
=================================== short test summary info ===================================
FAILED bar.py::test_try - anyio.ClosedResourceError
ERROR bar.py::test_try - anyio.ClosedResourceError
================================= 1 failed, 1 error in 0.30s ==================================
```
</details>
### 3
But if we make `anyio_backend` return `"trio"`, we instead get a hang. The KeyboardInterrupt traceback ends with:
<details>
```
self = <Condition(<unlocked _thread.lock object at 0x7951089ac5c0>, 0)>, timeout = None
def wait(self, timeout=None):
"""Wait until notified or until a timeout occurs.
If the calling thread has not acquired the lock when this method is
called, a RuntimeError is raised.
This method releases the underlying lock, and then blocks until it is
awakened by a notify() or notify_all() call for the same condition
variable in another thread, or until the optional timeout occurs. Once
awakened or timed out, it re-acquires the lock and returns.
When the timeout argument is present and not None, it should be a
floating point number specifying a timeout for the operation in seconds
(or fractions thereof).
When the underlying lock is an RLock, it is not released using its
release() method, since this may not actually unlock the lock when it
was acquired multiple times recursively. Instead, an internal interface
of the RLock class is used, which really unlocks it even when it has
been recursively acquired several times. Another internal interface is
then used to restore the recursion level when the lock is reacquired.
"""
if not self._is_owned():
raise RuntimeError("cannot wait on un-acquired lock")
waiter = _allocate_lock()
waiter.acquire()
self._waiters.append(waiter)
saved_state = self._release_save()
gotit = False
try: # restore state no matter what (e.g., KeyboardInterrupt)
if timeout is None:
> waiter.acquire()
E KeyboardInterrupt
/usr/lib/python3.12/threading.py:355: KeyboardInterrupt
====================================== 1 passed in 1.16s ======================================
Exception ignored in: <async_generator object my_simple_fixture at 0x7951089bd000>
Traceback (most recent call last):
File "/tmp/anyio_pytest/.venv/lib/python3.12/site-packages/trio/_core/_asyncgens.py", line 123, in finalizer
raise RuntimeError(
RuntimeError: Non-Trio async generator 'bar.my_simple_fixture' awaited something during finalization; install a finalization hook to support this, or wrap it in 'async with aclosing(...):'
```
</details> | open | 2024-10-11T13:46:03Z | 2025-03-21T23:02:59Z | https://github.com/agronholm/anyio/issues/805 | [
"bug"
] | jakkdl | 18 |
piskvorky/gensim | nlp | 3,500 | Vocabulary size is much smaller than requested | <!--
**IMPORTANT**:
- Use the [Gensim mailing list](https://groups.google.com/g/gensim) to ask general or usage questions. Github issues are only for bug reports.
- Check [Recipes&FAQ](https://github.com/RaRe-Technologies/gensim/wiki/Recipes-&-FAQ) first for common answers.
Github bug reports that do not include relevant information and context will be closed without an answer. Thanks!
-->
#### Problem description
I was training a w2v model on a rather large corpus (about 35B tokens). I set the `min_count` to 50 and `max_vocab_size` to 250,000. I expected at the end of the training to have a vocabulary of 250k words. Instead, I got one at around 70k.
The logs are telling:
```
PROGRESS: at sentence #0, processed 0 words, keeping 0 word types
PROGRESS: at sentence #10000, processed 1149505 words, keeping 149469 word types
PROGRESS: at sentence #20000, processed 2287292 words, keeping 232917 word types
pruned out 0 tokens with count <=1 (before 250001, after 250001)
pruned out 140618 tokens with count <=2 (before 250007, after 109389)
PROGRESS: at sentence #30000, processed 3442707 words, keeping 179514 word types
pruned out 148589 tokens with count <=3 (before 250005, after 101416)
...
pruned out 179627 tokens with count <=16330 (before 250006, after 70379)
PROGRESS: at sentence #301310000, processed 35302183879 words, keeping 92987 word types
collected 112874 word types from a corpus of 35302368099 raw words and 301311561 sentences
Creating a fresh vocabulary
Word2Vec lifecycle event {'msg': 'effective_min_count=50 retains 70380 unique words (62.35% of original 112874, drops 42494)', 'datetime': '2023-09-26T15:19:19.866236', 'gensim': '4.3.2', 'python': '3.11.5 (main, Sep 11 2023, 13:54:46) [GCC 11.2.0]', 'platform': 'Linux-5.4.0-150-generic-x86_64-with-glibc2.31', 'event' : 'prepare_vocab'}
Word2Vec lifecycle event {'msg': 'effective_min_count=50 leaves 30437195857 word corpus (100.00% of original 30437248987, drops 53130)', ' datetime': '2023-09-26T15:19:19.870161', 'gensim': '4.3.2', 'python': '3.11.5 (main, Sep 11 2023, 13:54:46) [GCC 11.2.0]', 'platform': 'Linux-5.4.0-150-generic-x86_64-with-glibc2.31 ', 'event': 'prepare_vocab'}
deleting the raw counts dictionary of 112874 items
sample=0.001 downsamples 21 most-common words
Word2Vec lifecycle event {'msg': 'downsampling leaves estimated 24482242512.167667 word corpus (80.4%% of prior 30437195857)', 'datetime': '2023-09-26T15:19:20.211104', 'gensim': '4.3.2', 'python': '3.11.5 (main, Sep 11 2023, 13:54:46) [GCC 11.2.0]', 'platform': 'Linux-5.4.0-150-generic-x86_64-with-glibc2.31', 'event' : 'prepare_vocab'}
estimated required memory for 70380 words and 300 dimensions: 204102000 bytes
```
So it seems as if `min_count` is only taken into consideration **after** the vocabulary has been pruned with a continuously increasing threshold. However, the threshold throws away a lot of words that otherwise should be in the vocabulary.
A few observations about this:
1. I am not sure thresholding works really well, especially in the latter stages: how could a word amass 16,000 occurrences if its previous (say, 15,998) occurrences have been pruned previously? Even if it occurs 100 times in the new batch, it will just be pruned again.
1. The log mentions `effective_min_count=50`, which then manages to prune 20k words at the end. I mean, if the final threshold was over 16,000, how could a threshold of 50 result in any more pruning?
1. [The documentation](https://radimrehurek.com/gensim/models/word2vec.html#gensim.models.word2vec.Word2Vec) only says this about `min_count`: _Ignores all words with total frequency lower than this._ Which is clearly not what it does.
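My reading of the logs suggests a pruning loop roughly like this sketch (pure guesswork on my side about gensim's internals; the function and variable names are mine, not gensim's):

```python
def scan_with_pruning(words, max_vocab_size):
    """Count words, pruning with an ever-growing threshold whenever
    the raw counts dict exceeds max_vocab_size (as the logs suggest)."""
    vocab, min_reduce = {}, 1
    for word in words:
        vocab[word] = vocab.get(word, 0) + 1
        if len(vocab) > max_vocab_size:
            vocab = {w: c for w, c in vocab.items() if c > min_reduce}
            min_reduce += 1  # next prune is stricter, matching the logs
    return vocab, min_reduce

words = ["a"] * 10 + ["b", "c", "d"]
print(scan_with_pruning(words, max_vocab_size=2))
```

If this is roughly right, a pruned word restarts from zero the next time it is seen, so it can only survive by re-accumulating more than the (now higher) threshold between two prunes, which matches observation 1.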
So the questions that naturally follow:
1. **Can we switch off the increasing thresholding?**
2. What does `min_count` **actually** do?
#### Steps/code/corpus to reproduce
```python
model = models.Word2Vec(
sentences=corpus, vector_size=300, min_count=50,
max_vocab_size=250000, workers=processes, epochs=1,
compute_loss=True, sg=int(args.sg)
)
```
#### Versions
```
Linux-5.4.0-150-generic-x86_64-with-glibc2.31
Python 3.11.5 (main, Sep 11 2023, 13:54:46) [GCC 11.2.0]
Bits 64
NumPy 1.25.2
SciPy 1.11.2
gensim 4.3.2
FAST_VERSION 0
``` | closed | 2023-10-09T07:30:21Z | 2023-10-17T08:40:52Z | https://github.com/piskvorky/gensim/issues/3500 | [] | DavidNemeskey | 2 |
docarray/docarray | pydantic | 1,152 | Dynamic class creation | **Is your feature request related to a problem? Please describe.**
In some scenarios users might want to dynamically create a class.
For example, in the search apps, data might be given through different sources, it might have a folder structure, and we are then responsible for converting it to a suitable docarray format, where this feature will come handy.
**Describe the solution you'd like**
Having a function that takes pairs of field names and their corresponding types and returns a class
**Describe alternatives you've considered**
Tried with [Pydantic's `create_model`](https://docs.pydantic.dev/usage/models/#dynamic-model-creation), and it works ok, but this doesn't return a `BaseDocument` so would be nice to have it handled inside docarray.
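For reference, the Pydantic workaround mentioned above looks roughly like this (the field names are made up for illustration):

```python
from pydantic import BaseModel, create_model

# Dynamic model from (name, (type, default)) pairs -- this works, but the
# result subclasses pydantic.BaseModel, not docarray's BaseDocument.
DynamicDoc = create_model(
    "DynamicDoc",
    title=(str, ...),      # required field
    score=(float, 0.0),    # field with a default
)

doc = DynamicDoc(title="hello")
print(isinstance(doc, BaseModel), doc.score)  # True 0.0
```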
| closed | 2023-02-20T13:44:25Z | 2023-02-28T14:19:51Z | https://github.com/docarray/docarray/issues/1152 | [] | jupyterjazz | 0 |
scikit-hep/awkward | numpy | 2,884 | `ak.from_arrow` broken for ChunkedArrays | ### Version of Awkward Array
2.5.0
### Description and code to reproduce
Tested with awkward 2.4.6 through 2.5.1rc1 and PyArrow 13.0.0 through 14.0.1. Conda environment created with, e.g., `mamba install -y jupyterlab awkward=2.5.0 pyarrow=14.0.1`.
```python
import pyarrow as pa
import awkward as ak
ary = pa.chunked_array([["foo", "bar"], ["blah", "bleh"]])
ak.from_arrow(ary)
```
results in `AttributeError: 'UnmaskedArray' object has no attribute '__pyarrow_original'`.
<details>
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[12], line 1
----> 1 ak.from_arrow(ary)
File ~/micromamba/envs/test/lib/python3.12/site-packages/awkward/_dispatch.py:39, in named_high_level_function.<locals>.dispatch(*args, **kwargs)
35 @wraps(func)
36 def dispatch(*args, **kwargs):
37 # NOTE: this decorator assumes that the operation is exposed under `ak.`
38 with OperationErrorContext(name, args, kwargs):
---> 39 gen_or_result = func(*args, **kwargs)
40 if isgenerator(gen_or_result):
41 array_likes = next(gen_or_result)
File ~/micromamba/envs/test/lib/python3.12/site-packages/awkward/operations/ak_from_arrow.py:45, in from_arrow(array, generate_bitmasks, highlevel, behavior, attrs)
15 @high_level_function()
16 def from_arrow(
17 array, *, generate_bitmasks=False, highlevel=True, behavior=None, attrs=None
18 ):
19 """
20 Args:
21 array (`pyarrow.Array`, `pyarrow.ChunkedArray`, `pyarrow.RecordBatch`, or `pyarrow.Table`):
(...)
43 See also #ak.to_arrow, #ak.to_arrow_table, #ak.from_parquet, #ak.from_arrow_schema.
44 """
---> 45 return _impl(array, generate_bitmasks, highlevel, behavior, attrs)
File ~/micromamba/envs/test/lib/python3.12/site-packages/awkward/operations/ak_from_arrow.py:67, in _impl(array, generate_bitmasks, highlevel, behavior, attrs)
65 if awkwardarrow_type is None:
66 if isinstance(out, ak.contents.UnmaskedArray):
---> 67 out = awkward._connect.pyarrow.remove_optiontype(out)
68 else:
69 if awkwardarrow_type.mask_type in (None, "IndexedArray"):
File ~/micromamba/envs/test/lib/python3.12/site-packages/awkward/_connect/pyarrow.py:913, in remove_optiontype(akarray)
912 def remove_optiontype(akarray):
--> 913 return akarray.__pyarrow_original
AttributeError: 'UnmaskedArray' object has no attribute '__pyarrow_original'
This error occurred while calling
ak.from_arrow(
ChunkedArray-instance
)
```
</details>
```python
import pyarrow as pa
import awkward as ak
ary = pa.chunked_array([[1,2,3], [4,5]])
ak.from_arrow(ary)
```
results in `ModuleNotFoundError: No module named 'pandas'`.
<details>
```
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
Cell In[2], line 5
3 #ary = pa.chunked_array([["foo", "bar"], ["blah", "bleh"]])
4 ary = pa.chunked_array([[1,2,3], [4,5]])
----> 5 ak.from_arrow(ary)
File ~/micromamba/envs/test/lib/python3.12/site-packages/awkward/_dispatch.py:39, in named_high_level_function.<locals>.dispatch(*args, **kwargs)
35 @wraps(func)
36 def dispatch(*args, **kwargs):
37 # NOTE: this decorator assumes that the operation is exposed under `ak.`
38 with OperationErrorContext(name, args, kwargs):
---> 39 gen_or_result = func(*args, **kwargs)
40 if isgenerator(gen_or_result):
41 array_likes = next(gen_or_result)
File ~/micromamba/envs/test/lib/python3.12/site-packages/awkward/operations/ak_from_arrow.py:45, in from_arrow(array, generate_bitmasks, highlevel, behavior, attrs)
15 @high_level_function()
16 def from_arrow(
17 array, *, generate_bitmasks=False, highlevel=True, behavior=None, attrs=None
18 ):
19 """
20 Args:
21 array (`pyarrow.Array`, `pyarrow.ChunkedArray`, `pyarrow.RecordBatch`, or `pyarrow.Table`):
(...)
43 See also #ak.to_arrow, #ak.to_arrow_table, #ak.from_parquet, #ak.from_arrow_schema.
44 """
---> 45 return _impl(array, generate_bitmasks, highlevel, behavior, attrs)
File ~/micromamba/envs/test/lib/python3.12/site-packages/awkward/operations/ak_from_arrow.py:55, in _impl(array, generate_bitmasks, highlevel, behavior, attrs)
51 pyarrow = awkward._connect.pyarrow.pyarrow
53 ctx = HighLevelContext(behavior=behavior, attrs=attrs).finalize()
---> 55 out = awkward._connect.pyarrow.handle_arrow(
56 array, generate_bitmasks=generate_bitmasks, pass_empty_field=True
57 )
59 if isinstance(array, (pyarrow.lib.Array, pyarrow.lib.ChunkedArray)):
60 (
61 awkwardarrow_type,
62 storage_type,
63 ) = awkward._connect.pyarrow.to_awkwardarrow_storage_types(array.type)
File ~/micromamba/envs/test/lib/python3.12/site-packages/awkward/_connect/pyarrow.py:933, in handle_arrow(obj, generate_bitmasks, pass_empty_field)
930 return out
932 elif isinstance(obj, pyarrow.lib.ChunkedArray):
--> 933 layouts = [handle_arrow(x, generate_bitmasks) for x in obj.chunks if len(x) > 0]
935 if len(layouts) == 1:
936 return layouts[0]
File ~/micromamba/envs/test/lib/python3.12/site-packages/awkward/_connect/pyarrow.py:926, in handle_arrow(obj, generate_bitmasks, pass_empty_field)
922 buffers = obj.buffers()
924 awkwardarrow_type, storage_type = to_awkwardarrow_storage_types(obj.type)
--> 926 out = popbuffers(
927 obj, awkwardarrow_type, storage_type, buffers, generate_bitmasks
928 )
929 assert len(buffers) == 0
930 return out
File ~/micromamba/envs/test/lib/python3.12/site-packages/awkward/_connect/pyarrow.py:667, in popbuffers(paarray, awkwardarrow_type, storage_type, buffers, generate_bitmasks)
665 data = numpy.astype(numpy.frombuffer(data, dtype=np.int32), dtype=np.int64)
666 if dt is None:
--> 667 dt = storage_type.to_pandas_dtype()
669 out = ak.contents.NumpyArray(
670 numpy.frombuffer(data, dtype=dt),
671 parameters=node_parameters(awkwardarrow_type),
672 backend=NumpyBackend.instance(),
673 )
674 return popbuffers_finalize(
675 out, paarray, validbits, awkwardarrow_type, generate_bitmasks
676 )
File ~/micromamba/envs/test/lib/python3.12/site-packages/pyarrow/types.pxi:336, in pyarrow.lib.DataType.to_pandas_dtype()
File ~/micromamba/envs/test/lib/python3.12/site-packages/pyarrow/types.pxi:154, in pyarrow.lib._to_pandas_dtype()
File ~/micromamba/envs/test/lib/python3.12/site-packages/pyarrow/pandas-shim.pxi:158, in pyarrow.lib._PandasAPIShim.is_v1()
File ~/micromamba/envs/test/lib/python3.12/site-packages/pyarrow/pandas-shim.pxi:100, in pyarrow.lib._PandasAPIShim._check_import()
File ~/micromamba/envs/test/lib/python3.12/site-packages/pyarrow/pandas-shim.pxi:53, in pyarrow.lib._PandasAPIShim._import_pandas()
File ~/micromamba/envs/test/lib/python3.12/site-packages/pyarrow/pandas-shim.pxi:48, in pyarrow.lib._PandasAPIShim._import_pandas()
ModuleNotFoundError: No module named 'pandas'
This error occurred while calling
ak.from_arrow(
ChunkedArray-instance
)
```
</details>
For the integer ChunkedArray case, if I install pandas, then I get the same error as for the string ChunkedArray.
For both integer and string arrays, `ak.from_arrow(ary.combine_chunks())` works. | closed | 2023-12-11T05:17:44Z | 2023-12-11T16:02:30Z | https://github.com/scikit-hep/awkward/issues/2884 | [
"bug"
] | shenker | 0 |
ansible/awx | django | 15,633 | Cannot build the containers when the awx_devel image has already been pulled from a private repository | ### Please confirm the following
- [X] I agree to follow this project's [code of conduct](https://docs.ansible.com/ansible/latest/community/code_of_conduct.html).
- [X] I have checked the [current issues](https://github.com/ansible/awx/issues) for duplicates.
- [X] I understand that AWX is open source software provided for free and that I might not receive a timely response.
- [X] I am **NOT** reporting a (potential) security vulnerability. (These should be emailed to `security@ansible.com` instead.)
### Bug Summary
1. ```make docker-compose-build``` is replaced by a ```docker pull ${private_registry_fqdn}:${port}/awx_devel:${awx_devel_version}```.
2. ```make docker-compose``` tries to pull the awx_devel image although it has already been pulled in the previous step, resulting in an error message ```pull access denied for awx_devel```.
### AWX version
24.6.1
### Select the relevant components
- [ ] UI
- [ ] UI (tech preview)
- [ ] API
- [ ] Docs
- [ ] Collection
- [ ] CLI
- [X] Other
### Installation method
docker development environment
### Modifications
yes
### Ansible version
2.17.6
### Operating system
Ubuntu 24.04
### Web browser
_No response_
### Steps to reproduce
```
git clone -b 24.6.1 https://github.com/ansible/awx.git git-awx && cd git-awx
git checkout -b $awx_devel_version 24.6.1
docker pull ${private_registry_fqdn}:${port}/awx_devel:${awx_devel_version}
export RECEPTOR_IMAGE=quay.io/ansible/receptor:v1.4.9
export COMPOSE_TAG=${awx_devel_version}
export DEVEL_IMAGE_NAME=awx_devel:${awx_devel_version}
sed -i '/^[[:blank:]]*DEV_DOCKER_TAG_BASE[[:blank:]]*?=[[:blank:]]*ghcr.io.*/d' Makefile
sed -Ei 's|^[[:blank:]]*DEVEL_IMAGE_NAME[[:blank:]]*\?=.*|DEVEL_IMAGE_NAME ?= awx_devel:$(COMPOSE_TAG)|g' Makefile
sed -Ei 's|^([[:blank:]]*-e[[:blank:]]+awx_image[[:blank:]]*=[[:blank:]]*)\$\(DEV_DOCKER_TAG_BASE\)/awx_devel[[:blank:]]*\\|\1awx_devel \\|g' Makefile
make docker-compose
```
### Expected results
Built & started AWX tools_* containers
### Actual results
```
ansible-playbook -e ansible_python_interpreter=python3.11 -i tools/docker-compose/inventory tools/docker-compose/ansible/sources.yml \
-e awx_image=awx_devel \
-e awx_image_tag=<awx_devel_version> \
-e receptor_image=quay.io/ansible/receptor:v1.4.9 \
-e control_plane_node_count=1 \
-e execution_node_count=0 \
-e minikube_container_group=false \
-e enable_pgbouncer=false \
-e enable_keycloak=false \
-e enable_ldap=false \
-e enable_splunk=false \
-e enable_prometheus=false \
-e enable_grafana=false \
-e enable_vault=false \
-e vault_tls=false \
-e enable_tacacs=false \
-e enable_otel=false \
-e enable_loki=false \
-e install_editable_dependencies=false \
-e pg_tls=false \
...
ansible-playbook -e ansible_python_interpreter=python3.11 -i tools/docker-compose/inventory tools/docker-compose/ansible/initialize_containers.yml \
-e enable_vault=false \
-e vault_tls=false \
-e enable_ldap=false; \
make docker-compose-up
...
docker compose -f tools/docker-compose/_sources/docker-compose.yml up --remove-orphans
...
[+] Running 3/3
✘ postgres Error context canceled 1.2s
✘ redis_1 Error context canceled 1.2s
✘ awx_1 Error pull access denied for awx_devel, repository does not exist or may require 'docker login': denied: requested access to... 1.2s
Error response from daemon: pull access denied for awx_devel, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
```
### Additional information
As shown in the `Steps to reproduce` section, the Makefile has been modified.
The modifications are probably incomplete.
What should be done to allow this workflow? | closed | 2024-11-13T16:51:12Z | 2024-11-14T13:40:38Z | https://github.com/ansible/awx/issues/15633 | [
"type:bug",
"needs_triage",
"community"
] | jean-christophe-manciot | 2 |
huggingface/datasets | computer-vision | 6,912 | Add MedImg for streaming | ### Feature request
Host the MedImg dataset (similar to Imagenet but for biomedical images).
### Motivation
There is a clear need for biomedical image foundation models and large scale biomedical datasets that are easily streamable. This would be an excellent tool for the biomedical community.
### Your contribution
MedImg can be found [here](https://www.cuilab.cn/medimg/#). | open | 2024-05-22T00:55:30Z | 2024-09-05T16:53:54Z | https://github.com/huggingface/datasets/issues/6912 | [
"dataset request"
] | lhallee | 8 |
horovod/horovod | machine-learning | 3,857 | Horovod with MPI and NCCL | I have installed NCCL and MPI, and I want to install Horovod from source, but I'm confused about some parameters:
**HOROVOD_GPU_OPERATIONS**, **HOROVOD_GPU_ALLREDUCE** and **HOROVOD_GPU_BROADCAST**
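For concreteness, these are environment variables read at build time by `pip install horovod`; a sketch of the two styles in question (the NCCL/MPI values shown are the options being asked about, not a recommendation, and in a real build one would normally pick only one of the two styles):

```shell
# Option A: one umbrella flag (controls allreduce/allgather/broadcast at once)
export HOROVOD_GPU_OPERATIONS=NCCL
# Option B: per-operation flags instead of the umbrella one
export HOROVOD_GPU_ALLREDUCE=NCCL
export HOROVOD_GPU_BROADCAST=MPI
echo "OPERATIONS=$HOROVOD_GPU_OPERATIONS ALLREDUCE=$HOROVOD_GPU_ALLREDUCE BROADCAST=$HOROVOD_GPU_BROADCAST"
```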
How should these three parameters be set? Which ones use NCCL and which use MPI? Can anyone help answer this question? Thanks a lot in advance!!! | closed | 2023-03-01T07:27:23Z | 2023-03-01T10:08:37Z | https://github.com/horovod/horovod/issues/3857 | [
"question"
] | yjiangling | 2 |
hbldh/bleak | asyncio | 720 | Devices that do not include a service UUID in advertising data cannot be detected on macOS 12.0 to 12.2 unless running in app created by py2app | * bleak version: 0.14.0
* Python version: all
* Operating System: macOS 12.x
### Description
This is basically the same issue as #635. That issue was closed because there is now a workaround available in Bleak v0.14.0 that works in most cases. However, there is one case that doesn't have an acceptable workaround. If a device does not include a service UUID in the advertising data, then the advertising data from that device cannot be received in Bleak.
### Call to action
This is a regression from previous versions of macOS. Please everyone report this issue to Apple using the Feedback Assistant app. If enough people report it, hopefully they will prioritize fixing it.
### Workaround
This isn't great, but it is possible to convert your Python script to an app using [py2app](https://py2app.readthedocs.io/en/latest/index.html) to work around the issue. Here is an example `setup.py` that can be uses as a starting point:
```python
from setuptools import setup
setup(
app=["my_script.py"],
setup_requires=["py2app"],
options=dict(
py2app=dict(
plist=dict(
NSBluetoothAlwaysUsageDescription="This app uses Bluetooth.",
),
),
),
)
```
Then run `python setup.py py2app` to build the .app.
If your script does not have a graphical user interface, you can run it with:
open -n -W --stdin $TTY --stdout $TTY --stderr $TTY dist/my_script.app
See `man open` for more info.
Control C doesn't work to stop the program, but you can secondary-click on the icon that is created in the dock and force quit if needed.
| closed | 2022-01-05T18:03:16Z | 2022-03-05T04:04:32Z | https://github.com/hbldh/bleak/issues/720 | [
"Backend: Core Bluetooth"
] | dlech | 6 |
pydantic/pydantic-ai | pydantic | 1,198 | Message history processing behavior | Does `pydantic-ai` include not only the `result_of_tool_call_msg` but also the `tool_call_request_msg` in the message history?
For example, in `pydantic-ai`, the message history appears to follow this structure:
`[system_msg, human_msg, tool_call_request_msg, result_of_tool_call_msg]`
This differs from how OpenAI and LangChain handle message history, where the structure seems to be:
`[system_msg, human_msg, result_of_tool_call_msg]`
It looks like OpenAI and LangChain include only `result_of_tool_call_msg` but not `tool_call_request_msg`.
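To make the comparison concrete, here are the two shapes as plain Python lists (schematic dicts only, not the actual pydantic-ai or LangChain message types):

```python
# pydantic-ai style: the assistant's tool-call request is kept in history.
pydantic_ai_history = [
    {"role": "system", "content": "..."},
    {"role": "user", "content": "..."},
    {"role": "assistant", "tool_calls": [{"name": "get_weather", "args": {}}]},
    {"role": "tool", "content": "sunny"},
]

# The structure this question originally assumed for OpenAI/LangChain.
assumed_history = [
    {"role": "system", "content": "..."},
    {"role": "user", "content": "..."},
    {"role": "tool", "content": "sunny"},
]

print(len(pydantic_ai_history), len(assumed_history))  # 4 3
```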
Is this design choice in `pydantic-ai` intentional?
---
It turns out they do include `tool_call_request_msg`: it is simply the AIMessage generated after invoking the LLM with [system_msg, human_msg]. | closed | 2025-03-21T07:54:26Z | 2025-03-21T08:38:24Z | https://github.com/pydantic/pydantic-ai/issues/1198 | [] | pleomax0730 | 0 |
arogozhnikov/einops | numpy | 130 | Torch `rearrange` throws warning about incorrect division when running `torch.jit.trace` | **Describe the bug**
When running `torch.jit.trace` on a `nn.Module` that contains a `rearrange` operation, the following warning is raised:
```
/home/shogg/.cache/pypoetry/virtualenvs/mldi-96vt4Weu-py3.8/lib/python3.8/site-packages/torch/_tensor.py:575: UserWarning: floor_divide is deprecated, and will be removed in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values.
To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor'). (Triggered internally at /pytorch/aten/src/ATen/native/BinaryOps.cpp:467.)
return torch.floor_divide(self, other)
```
Two questions arise from this:
1) Why is there a division step happening?
2) What is the potential impact?
**Reproduction steps**
Code to reproduce this is as follows:
```python
import einops
import torch
import torch.nn as nn


class TestModule(nn.Module):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return einops.rearrange(x, '1 i f x y -> 1 i x y f')


test_tensor = torch.rand([1, 2, 376, 16, 16])
test_module = TestModule()
output_module = torch.jit.trace(test_module, test_tensor)
```
**Expected behavior**
rearrange shouldn't be throwing this warning
**Your platform**
```
python==3.8.0
torch==1.9.0
einops==0.3.0
```
| closed | 2021-08-04T02:36:57Z | 2023-03-16T17:27:53Z | https://github.com/arogozhnikov/einops/issues/130 | [
"backend bug"
] | StephenHogg | 10 |
AUTOMATIC1111/stable-diffusion-webui | pytorch | 15,473 | [Bug]: Batch size, batch count | never mind | closed | 2024-04-09T18:37:00Z | 2024-04-09T18:38:34Z | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/15473 | [
"bug-report"
] | Petomai | 0 |
comfyanonymous/ComfyUI | pytorch | 6,844 | Loading Workflows with Missing Nodes | ### Expected Behavior
When loading a workflow, the interface should open up the workflow, and present missing nodes. Missing nodes are replaced by red squares.
### Actual Behavior
When loading a workflow, in certain cases, it only displays the missing nodes and then doesn't actually load the workflow. It would be helpful to just load the workflow and leave the missing nodes as red, so that the rest of the workflow could be inspected.

### Steps to Reproduce
**Example Workflow not amenable to loading:**

### Debug Logs
```powershell
N/A
```
### Other
_No response_ | closed | 2025-02-17T15:04:55Z | 2025-03-06T22:14:07Z | https://github.com/comfyanonymous/ComfyUI/issues/6844 | [
"Potential Bug"
] | brucew4yn3rp | 11 |
harry0703/MoneyPrinterTurbo | automation | 244 | search videos failed: {'status': 401, 'code': 'Unauthorized'} | 
After the "ship" video was generated, "search videos failed" occurred | closed | 2024-04-12T03:17:08Z | 2024-04-12T03:30:22Z | https://github.com/harry0703/MoneyPrinterTurbo/issues/244 | [] | a736875071 | 1 |
databricks/koalas | pandas | 2,030 | pip installed failed on EMR | FloatProgress(value=0.0, bar_style='info', description='Progress:', layout=Layout(height='25px', width='50%'),…
Collecting koalas
Downloading https://files.pythonhosted.org/packages/1d/91/58c88fc3221d7c21d854d7d9c0fe081bf1ac244c1e4496bb2b56e1f31e25/koalas-1.6.0-py3-none-any.whl (668kB)
Requirement already satisfied: numpy>=1.14 in /usr/local/lib64/python3.7/site-packages (from koalas)
Collecting pyarrow>=0.10 (from koalas)
Downloading https://files.pythonhosted.org/packages/62/d3/a482d8a4039bf931ed6388308f0cc0541d0cab46f0bbff7c897a74f1c576/pyarrow-3.0.0.tar.gz (682kB)
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/mnt/tmp/pip-build-if6ktjjx/pyarrow/setup.py", line 37, in <module>
from Cython.Distutils import build_ext as _build_ext
ModuleNotFoundError: No module named 'Cython'
----------------------------------------
Command "python setup.py egg_info" failed with error code 1 in /mnt/tmp/pip-build-if6ktjjx/pyarrow/ | closed | 2021-02-01T19:00:35Z | 2021-02-01T20:48:11Z | https://github.com/databricks/koalas/issues/2030 | [
"not a koalas issue"
] | pmittaldev | 3 |
pydantic/logfire | fastapi | 99 | Ability to configure additional span processors (e.g. to export to another sink) | ### Description
https://pydanticlogfire.slack.com/archives/C06EDRBSAH3/p1714686467955179
> feature request! being able to add another span processor while still including the default logfire one. what I’m doing right now is hacking it by not providing the second processor up front, then doing:
```python
from logfire._internal.config import GLOBAL_CONFIG
from logfire._internal.exporters.processor_wrapper import SpanProcessorWrapper
span_processor = SpanProcessorWrapper(SentrySpanProcessor(), GLOBAL_CONFIG.scrubber)
GLOBAL_CONFIG.processors = [span_processor]
```
| closed | 2024-05-02T22:31:35Z | 2024-06-04T17:48:58Z | https://github.com/pydantic/logfire/issues/99 | [
"Feature Request"
] | adriangb | 0 |
scikit-optimize/scikit-optimize | scikit-learn | 707 | Wrong iteration number after loading CheckPointSaver object | Hi,
I ran 2 iterations and got an object from CheckPointSaver. I then load the object and run the program again, and skopt starts with iteration number 2, not 3.
I want to know whether this is intended behaviour or not.
Thanks,
Tuan
| closed | 2018-08-06T04:50:17Z | 2018-12-04T00:07:31Z | https://github.com/scikit-optimize/scikit-optimize/issues/707 | [] | anhnt1 | 6 |
sigmavirus24/github3.py | rest-api | 941 | Read timed out. (read timeout=10) |
## Versions
- python 2.7
- pip 18.1
- github3 1.3.0
## Minimum Reproducible Example
Happened once when creating an issue, and another time when creating a repo. Tried again and the error did not occur.
## Exception information
```
Traceback (most recent call last):
...
File "env/python2.7/site-packages/github3/decorators.py", line 31, in auth_wrapper
return func(self, *args, **kwargs)
File "env/python2.7/site-packages/github3/github.py", line 725, in create_repository
json = self._json(self._post(url, data=data), 201)
File "env/python2.7/site-packages/github3/models.py", line 221, in _post
return self._request("post", url, data, **kwargs)
File "env/python2.7/site-packages/github3/models.py", line 201, in _request
raise exceptions.ConnectionError(exc)
ConnectionError: <class 'requests.exceptions.ReadTimeout'>: A connection-level exception occurred: HTTPSConnectionPool(host='api.github.com', port=443): Read timed out. (read timeout=10)
```
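For context, retrying with backoff is the usual mitigation for one-off read timeouts like the one above; a library-agnostic sketch (not part of github3.py, and the flaky function below only simulates the timeout):

```python
import time

def with_retries(fn, attempts=3, base_delay=1.0, exceptions=(Exception,)):
    """Call fn(), retrying on the given exceptions with exponential backoff."""
    for attempt in range(attempts):
        try:
            return fn()
        except exceptions:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))

calls = {"n": 0}

def flaky():
    # Simulates a request that times out twice, then succeeds.
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("simulated read timeout")
    return "ok"

print(with_retries(flaky, attempts=3, base_delay=0.0))  # ok
```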
| closed | 2019-05-22T16:54:26Z | 2019-05-26T13:08:05Z | https://github.com/sigmavirus24/github3.py/issues/941 | [] | unformatt | 3 |
iterative/dvc | machine-learning | 10,165 | Add tags to DVC experiments | with dvc experiments, allow the option to add tags to it so that we can easily group/filter them easily!
1. Creating an experiment `dvc exp run --name "name" --tag "tag1","tag2"`.
2. For existing experiments `dvc exp tag --name "name" --tag "tag3","tag4"`
3. To remove tags `dvc exp tag --name "name --tag "tag4" --untag`
4. For filtering `dvc exp list --tag "tag1","tag2"`, `dvc exp list --not-tag "tag1","tag2"`
This makes it very easy to use. Would be very useful especially when we need to clean up the experiments. | open | 2023-12-14T14:49:16Z | 2024-10-16T13:37:09Z | https://github.com/iterative/dvc/issues/10165 | [
"feature request",
"p2-medium",
"A: experiments"
] | legendof-selda | 1 |
zihangdai/xlnet | nlp | 120 | code errors in train_gpu.py | ```
train_input_fn, record_info_dict = data_utils.get_input_fn(
tfrecord_dir=FLAGS.record_info_dir,
split="train",
bsz_per_host=FLAGS.train_batch_size,
...)
```
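For context, the intended value is a per-host share of the global batch, i.e. an integer division (illustrative numbers only):

```python
# Illustrative numbers only: the global batch must be split across hosts.
train_batch_size = 32   # stands in for FLAGS.train_batch_size
num_hosts = 4           # stands in for FLAGS.num_hosts

bsz_per_host = train_batch_size // num_hosts
print(bsz_per_host)  # 8
```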
`bsz_per_host` should be ` bsz_per_host=FLAGS.train_batch_size // FLAGS.num_hosts` | closed | 2019-07-04T06:40:46Z | 2019-07-09T06:42:56Z | https://github.com/zihangdai/xlnet/issues/120 | [] | SuMarsss | 0 |
twelvedata/twelvedata-python | matplotlib | 64 | [Question] split query example | I am struggling to query split dates. I keep getting errors even with simple arguments. Have you implemented the SplitsEndpoint API?
I tried the following, but none of them is working:

`resp = td.get_splits(symbol="AAPL", exchange="NASDAQ", country="US", type="Stock")`
_TypeError: __init__() got an unexpected keyword argument 'type'_

`resp = td.get_splits("AAPL")`
_TypeError: get_splits() takes 1 positional argument but 2 were given_
Can you give an example of a split dates query?
PS: I am on the free basic version. | closed | 2023-05-19T16:26:32Z | 2023-05-19T17:52:35Z | https://github.com/twelvedata/twelvedata-python/issues/64 | [] | Sinansi | 1 |
nschloe/tikzplotlib | matplotlib | 224 | Mistakes in legend for fillbetween | Hi,
I'm using the following code to plot my figure:
```python
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
mpl.style.use('seaborn-colorblind')
from matplotlib2tikz import save as tikz_save
fig = plt.figure(figsize=(7,4))
ax = fig.add_subplot(111)
ax.fill_between([1, 2],[2, 2],[3, 3],color = 'red',alpha=0.2, label ='roh')
ax.fill_between([1, 2],[4, 4],[5, 5],color = 'blue',alpha=0.2, label ='kal')
ax.plot([1, 2],[2, 5],'k',label='ref')
ax.grid()
plt.legend()
tikz_save('minimalbeispiel.pgf', figureheight='4in', figurewidth='5in')
```
The python export of the figure looks fine:

But unfortunately the generated code in Tex looks like that:

I don't know how to deal with it.
Best regards,
Axel | closed | 2018-02-10T05:37:19Z | 2019-03-19T20:34:45Z | https://github.com/nschloe/tikzplotlib/issues/224 | [] | AxelS83 | 1 |
CorentinJ/Real-Time-Voice-Cloning | tensorflow | 934 | File structure for training (encoder, synthesizer (vocoder)) | I want to train my own model on the mozilla common voice dataset.
All .mp3s are delivered in one folder with accompanying .tsv lists. I understood that the corresponding .txt has to reside next to each utterance.
But what about the folder structure? Can I leave all .mp3s in that one folder, or do I have to split them into one subdirectory per speaker (I'd hate to do that)?
I would be very thankful if somebody could help me with the code adjustments since I am quite new to all of this :)
| open | 2021-12-02T06:38:49Z | 2022-09-01T14:43:28Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/934 | [] | Dannypeja | 24 |
Kludex/mangum | asyncio | 262 | Overwriting read-only Lambda@Edge headers | Hooking up a Mangum lambda to CloudFront as `EventType: origin-request` returns a 502 response: "The Lambda function result failed validation: The function tried to add, delete, or change a read-only header."
According to [the documentation](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/edge-functions-restrictions.html), the `Content-Length` Header is one of the read-only headers for `origin-request` events. Not quite sure why. But it's certainly one of the headers returned when calling the lambda. I use the lambda to handle API requests, so it needs `IncludeBody`, which is only available with origin-request.
I was able to get around this by hijacking the response:
```python
from mangum import Mangum

# `app` is the ASGI application being served (e.g. a FastAPI or Starlette instance)

def handler(event, context):
    response = Mangum(app, lifespan="off")(event, context)
    if 'headers' in response:
        response['headers'].pop('content-length', None)
    return response
``` | open | 2022-05-05T23:03:15Z | 2023-04-11T02:46:29Z | https://github.com/Kludex/mangum/issues/262 | [
"help wanted",
"improvement"
] | UnderSampled | 3 |
tflearn/tflearn | tensorflow | 989 | Can't load trained SequenceGenerator. | Hi, when I try to load() my SequenceGenerator (which was saved simply using model.save(path)), I encounter the following error:
`NotFoundError (see above for traceback): Key BinaryAccuracy/Mean/moving_avg not found in checkpoint`
I've tried load(path), and load(path, trainable_variable_only=True). In the latter case, I encounter a different error:
`InvalidArgumentError (see above for traceback): Assign requires shapes of both tensors to match. lhs shape= [1] rhs shape= [7316]`
What special options do I need to pass to load() and/or save() in order to load the trained generator? | open | 2018-01-02T23:06:51Z | 2018-01-02T23:06:51Z | https://github.com/tflearn/tflearn/issues/989 | [] | simra | 0 |
praw-dev/praw | api | 1,076 | time_filter week might be broken | ## Issue Description
When getting posts from a subreddit, there are fewer entries with `time_filter='week'` than with `time_filter='day'`, which seems broken. A few days ago my code worked as expected.
Consider this example:
```
import praw

reddit = praw.Reddit([Censored for privacy])
posts_iter = reddit.subreddit('worldnews').top(time_filter='week', limit=1000)
entries = 0
for post in posts_iter:
entries += 1
print(entries)
```
Result currently is 255 for `day` but only 17 for `week`.
## System Information
- PRAW Version: 6.2.0 (latest dev version)
- Python Version: 3.4.3
- Operating System: Ubuntu 14.04
| closed | 2019-05-28T16:17:57Z | 2019-05-28T16:30:25Z | https://github.com/praw-dev/praw/issues/1076 | [] | derSeddy | 1 |
akfamily/akshare | data-science | 5,670 | AKShare Interface Issue Report | 1. Name of the interface and the corresponding call code: stock_zh_a_hist
```
import akshare as ak
stock_zh_a_hist_df = ak.stock_zh_a_hist(symbol="000001", period="daily", start_date="20170301", end_date='20240528', adjust="")
print(stock_zh_a_hist_df)
```
2. Screenshot or description of the error
```
File "/home/rshun/.local/lib/python3.11/site-packages/akshare/stock_feature/stock_hist_em.py", line 1051, in stock_zh_a_hist
"secid": f"{code_id_dict[symbol]}.{symbol}",
~~~~~~~~~~~~^^^^^^^^
KeyError: '000001'
```
3. The akshare version is 1.15.2.
This started appearing over the weekend; please take a look, thanks.
| closed | 2025-02-17T12:11:51Z | 2025-02-17T12:21:35Z | https://github.com/akfamily/akshare/issues/5670 | [
"bug"
] | rshun | 1 |
roboflow/supervision | deep-learning | 990 | [RichLabelAnnotator] - add support for unicode labels | ### Description
[`LabelAnnotator`](https://github.com/roboflow/supervision/blob/b68e7c2059f1da9eee8c3cdc66f50f3898fcb6ba/supervision/annotators/core.py#L902) uses OpenCV as a rendering engine. Unfortunately, `cv2.putText`, which we use underneath, only supports ASCII characters. A solution to this problem would be the implementation of a new annotator that adds support for Unicode characters based on Pillow.
We previously considered a similar idea, but we decided not to implement it due to time constraints. https://github.com/roboflow/supervision/pull/159
### API
```python
class RichLabelAnnotator:
def __init__(
self,
color: Union[Color, ColorPalette] = ColorPalette.DEFAULT,
text_color: Color = Color.WHITE,
font_path: str = "path/to/default/font.ttf",
font_size: int = 10,
text_padding: int = 10,
text_position: Position = Position.TOP_LEFT,
color_lookup: ColorLookup = ColorLookup.CLASS,
):
pass
def annotate(
self,
scene: ImageType,
detections: Detections,
labels: List[str] = None,
custom_color_lookup: Optional[np.ndarray] = None,
) -> ImageType:
pass
```
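The key enabler here is that Pillow's text renderer accepts Unicode strings, unlike `cv2.putText`. A minimal standalone sketch (it uses Pillow's built-in bitmap font, which only covers Latin-1, so the sample label is accented Latin rather than CJK; a real implementation would load a full TTF via `ImageFont.truetype`):

```python
from PIL import Image, ImageDraw, ImageFont

# Minimal demonstration that Pillow can draw non-ASCII label text, which is
# the capability cv2.putText lacks.
image = Image.new("RGB", (200, 50), color=(0, 0, 0))
draw = ImageDraw.Draw(image)
font = ImageFont.load_default()  # stand-in; real use: ImageFont.truetype(path, size)
draw.text((10, 10), "label: café", font=font, fill=(255, 255, 255))
print(image.size)  # (200, 50)
```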
### Additional
- Note: Please share a Google Colab with minimal code to test the new feature. We know it's additional work, but it will speed up the review process. The reviewer must test each change. Setting up a local environment to do this is time-consuming. Please ensure that Google Colab can be accessed without any issues (make it public). Thank you! 🙏🏻 | closed | 2024-03-12T06:38:31Z | 2024-07-03T11:39:11Z | https://github.com/roboflow/supervision/issues/990 | [
"enhancement",
"api:annotator",
"Q2.2024"
] | Ying-Kang | 17 |
deeppavlov/DeepPavlov | nlp | 1,080 | FAQ new dataset | Здравствуйте, не могли бы вы уточнить?
Вопрос: тренируется ли **уже загруженная** модель?
Описание: В конфиге tfidf_autofaq заменяю data_path на путь с данными формата csv. После этого я могу построить и тренировать эту модель, однако после загрузки этой модели она остается **фиксированной**. Если в дальнейшем тренировать модель по тому же конфигу, но подгружать в агент именно ту модель, что загружена ранее, то и поведение остается старое.
Возможно ли, и если да, то как, изменять загруженную модель, при редактировании файла с информацией?
Из ответов, которые нашел, был только способ удаления предыдущей модели. Почти уверен, что существуют альтернативы, но не могу найти | closed | 2019-11-24T00:50:06Z | 2019-11-26T12:01:19Z | https://github.com/deeppavlov/DeepPavlov/issues/1080 | [] | Elfreezy | 4 |
gunthercox/ChatterBot | machine-learning | 1,645 | Is there a way to set language for tagging.py other then English and how? Or is English the only supported language? | closed | 2019-02-28T08:06:59Z | 2020-01-17T16:16:19Z | https://github.com/gunthercox/ChatterBot/issues/1645 | [] | DuffyTheDuck | 2 | |
xuebinqin/U-2-Net | computer-vision | 35 | About inference speed | Thanks for your great job!
From the paper I know that U2-Net runs at 30 FPS (with an input size of 320×320×3) and U2-Net† (4.7 MB) runs at 40 FPS, but which GPU was U2-Net benchmarked on? A 1080Ti?
| closed | 2020-06-18T04:50:53Z | 2020-10-06T01:34:31Z | https://github.com/xuebinqin/U-2-Net/issues/35 | [] | phy12321 | 3 |
pydantic/pydantic-core | pydantic | 837 | Mismatch in `TypedDict` from `typing` versioning between pydantic and pydantic-core | `pydantic-core` uses `TypedDict` from `typing` if the Python version is 3.9 or higher. See code in `pydantic_core/__init__.py`:
```python
if _sys.version_info < (3, 9):
from typing_extensions import TypedDict as _TypedDict
else:
from typing import TypedDict as _TypedDict
```
But Pydantic throws a `PydanticUserError` in `pydantic/_internal/_generate_schema.py` if `TypedDict` from `typing` is used from python versions below 3.12.
```python
if not _SUPPORTS_TYPEDDICT and type(typed_dict_cls).__module__ == 'typing':
raise PydanticUserError(
'Please use `typing_extensions.TypedDict` instead of `typing.TypedDict` on Python < 3.12.',
code='typed-dict-version',
)
```
I am not sure if `pydantic-core` or `pydantic` needs to be updated?
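In the meantime, a version-gated import on the application side sidesteps the mismatch. This is a hedged sketch (the `ImportError` fallback branch is my assumption, not something either library documents):

```python
import sys

# Match pydantic's documented requirement: typing_extensions.TypedDict on
# Python < 3.12, typing.TypedDict on 3.12 and newer.
if sys.version_info >= (3, 12):
    from typing import TypedDict
else:
    try:
        from typing_extensions import TypedDict  # third-party backport
    except ImportError:  # fallback if typing_extensions is not installed;
        from typing import TypedDict  # pydantic will still reject this pre-3.12

class Movie(TypedDict):
    title: str
    year: int

m: Movie = {"title": "Blade Runner", "year": 1982}
```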
Selected Assignee: @dmontagu | closed | 2023-07-28T07:02:36Z | 2023-09-20T15:03:04Z | https://github.com/pydantic/pydantic-core/issues/837 | [
"unconfirmed"
] | nss-csis | 1 |
huggingface/pytorch-image-models | pytorch | 1,794 | resample_abs_pos_embed does not apply when checkpoint_path is used | I'm creating a ViT model with following code:
```python
model = timm.create_model('vit_base_patch16_224', pretrained=False, img_size=64,
checkpoint_path=local_checkpoint_path,
global_pool='avg')
```
where I use an img_size of 64.
I expect resample_abs_pos_embed to also apply in this use case, just as it does when loading a pretrained model from timm.
"bug"
] | Luciennnnnnn | 1 |
Nekmo/amazon-dash | dash | 29 | Migrate from argparse to click | closed | 2018-02-06T00:02:54Z | 2018-02-21T18:29:49Z | https://github.com/Nekmo/amazon-dash/issues/29 | [
"enhancement"
] | Nekmo | 0 | |
streamlit/streamlit | data-science | 9,906 | Add 'key' to static widgets like st.link_button, st.popover | ### Checklist
- [X] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar feature requests.
- [X] I added a descriptive title and summary to this issue.
### Summary
Most widgets have a `key` parameter - but not `st.link_button`. I can see why - because it is mostly "static", but it's still useful to be able to give an explicit key especially to map a specific style now that key can be used in the element class.
### Why?
_No response_
### How?
_No response_
### Additional Context
_No response_ | open | 2024-11-22T15:14:15Z | 2025-01-03T15:39:23Z | https://github.com/streamlit/streamlit/issues/9906 | [
"type:enhancement",
"feature:st.link_button"
] | arnaudmiribel | 4 |
marshmallow-code/flask-smorest | rest-api | 692 | Handling multiple dynamic nested query parameters in Flask-Smorest | ## Context
I'm trying to handle multiple dynamic filters in query parameters with Flask-Smorest. The query string looks like this:
```
filters[status][0]=value1&filters[status][1]=value2&filters[type][0]=valueA
```
## Question
What's the recommended way in Flask-Smorest to handle this type of query parameters without manually parsing `request.args`? Specifically:
1. Is there a built-in method to automatically parse these into a nested dictionary or list structure?
2. Can we configure a schema to handle this pattern of parameters?
3. What's the most idiomatic approach in Flask-Smorest for this scenario?
## Desired Outcome
Ideally, I'd like to end up with a structure like this:
```python
{
"filters": {
"status": ["value1", "value2"],
"type": ["valueA"]
}
}
```
Any insights or best practices for handling such dynamic, nested query parameters in Flask-Smorest would be greatly appreciated. | closed | 2024-10-09T19:23:27Z | 2024-12-11T16:14:10Z | https://github.com/marshmallow-code/flask-smorest/issues/692 | [
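As far as I can tell there is no built-in flask-smorest/webargs parser for this bracket syntax, so one pragmatic option is a small pre-processing helper over `request.args`. The sketch below is plain Python and the names are purely illustrative:

```python
import re
from collections import defaultdict

BRACKET_RE = re.compile(r"^(\w+)\[(\w+)\]\[(\d+)\]$")  # e.g. filters[status][0]

def parse_bracket_filters(args):
    """Turn {'filters[status][0]': 'value1', ...} into
    {'filters': {'status': ['value1', ...], ...}}."""
    out = defaultdict(lambda: defaultdict(dict))
    for key, value in args.items():
        m = BRACKET_RE.match(key)
        if not m:
            continue  # ignore non-bracketed parameters
        top, field, index = m.group(1), m.group(2), int(m.group(3))
        out[top][field][index] = value
    # collapse the index dicts into lists ordered by their numeric index
    return {
        top: {field: [vals[i] for i in sorted(vals)] for field, vals in fields.items()}
        for top, fields in out.items()
    }
```

Sorting by the numeric index keeps `...[1]=value2` after `...[0]=value1` even if the query string arrives out of order.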
"question"
] | mbernabo | 2 |
hbldh/bleak | asyncio | 1,285 | Writing more than 20 bytes to a device causes the device to disconnect | * bleak version: 0.20.1
* Python version: 3.11.0
* Operating System: Windows 11
### Description
I am using bleak to write to a characteristic on a Nordic device. Anything up to 20 bytes can be written, but as soon as I send more than 20 bytes, bleak throws this error.
`OSError: [WinError -2147023673] The operation was canceled by the user`
At first I thought it was the MTU size but when I used the bleak library to check MTU size, it says that the size is 247 bytes. This is also indicated by the Nordic device.
I am also able to read more than 20 bytes from other characteristics on the device.
### What I Did
```python
import asyncio
from bleak import BleakClient, BleakScanner
address = 'AA:BB:CC:DD:EE:FF'
UUID = '00000000-0000-0000-0000-000000000000'
async def main():
client = BleakClient(address)
await asyncio.sleep(1)
await client.connect()
await asyncio.sleep(1)
await client.write_gatt_char(UUID, b'11111111111111111111', response=True)
asyncio.run(main())
```
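A common workaround while debugging this kind of failure is to split the payload into ATT-sized chunks (20 bytes is the safe floor for the default 23-byte MTU). The helper below is plain Python; the commented write loop assumes the `client` and `UUID` from the snippet above:

```python
def chunked(data: bytes, size: int):
    """Yield successive slices of `data`, each at most `size` bytes long."""
    for offset in range(0, len(data), size):
        yield data[offset:offset + size]

# Hypothetical use inside the connected client from the snippet above:
# for chunk in chunked(b"1" * 45, 20):
#     await client.write_gatt_char(UUID, chunk, response=True)
```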
### Logs
```
Traceback (most recent call last):
File "write_test.py", line 14, in <module>
asyncio.run(main())
File "AppData\Local\Programs\Python\Python311\Lib\asyncio\runners.py", line 190, in run
return runner.run(main)
^^^^^^^^^^^^^^^^
File "AppData\Local\Programs\Python\Python311\Lib\asyncio\runners.py", line 118, in run
return self._loop.run_until_complete(task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "AppData\Local\Programs\Python\Python311\Lib\asyncio\base_events.py", line 650, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "write_test.py", line 12, in main
await client.write_gatt_char(UUID, b'111111111111111111111', response=True)
File "AppData\Local\Programs\Python\Python311\Lib\site-packages\bleak\__init__.py", line 659, in write_gatt_char
await self._backend.write_gatt_char(char_specifier, data, response)
File "AppData\Local\Programs\Python\Python311\Lib\site-packages\bleak\backends\winrt\client.py", line 869, in write_gatt_char
await characteristic.obj.write_value_with_result_async(buf, response),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
OSError: [WinError -2147023673] The operation was canceled by the user
```
| open | 2023-04-20T19:34:37Z | 2023-07-26T20:22:44Z | https://github.com/hbldh/bleak/issues/1285 | [
"3rd party issue",
"Backend: WinRT"
] | michaelFluid | 4 |
jupyter-incubator/sparkmagic | jupyter | 551 | Create official Docker images (was: docker-compose build failure) | docker-compose build
Not able to build spark core.
Error logs:
```
[error] /apps/build/spark/core/src/test/java/test/org/apache/spark/JavaAPISuite.java:36: warning: [deprecation] Accumulator in org.apache.spark has been deprecated
[error] import org.apache.spark.Accumulator;
[error] ^
[error] /apps/build/spark/core/src/test/java/test/org/apache/spark/JavaAPISuite.java:37: warning: [deprecation] AccumulatorParam in org.apache.spark has been deprecated
[error] import org.apache.spark.AccumulatorParam;
[error] ^
[error] Compile failed at Jul 4, 2019 8:47:28 AM [1:25.010s]
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] Spark Project Parent POM ........................... SUCCESS [05:03 min]
[INFO] Spark Project Tags ................................. SUCCESS [ 35.560 s]
[INFO] Spark Project Sketch ............................... SUCCESS [ 4.462 s]
[INFO] Spark Project Local DB ............................. SUCCESS [ 20.445 s]
[INFO] Spark Project Networking ........................... SUCCESS [01:03 min]
[INFO] Spark Project Shuffle Streaming Service ............ SUCCESS [ 5.563 s]
[INFO] Spark Project Unsafe ............................... SUCCESS [ 24.045 s]
[INFO] Spark Project Launcher ............................. SUCCESS [01:39 min]
[INFO] Spark Project Core ................................. FAILURE [05:06 min]
[INFO] Spark Project ML Local Library ..................... SUCCESS [ 38.587 s]
[INFO] Spark Project GraphX ............................... SKIPPED
[INFO] Spark Project Streaming ............................ SKIPPED
[INFO] Spark Project Catalyst ............................. SKIPPED
[INFO] Spark Project SQL .................................. SKIPPED
[INFO] Spark Project ML Library ........................... SKIPPED
[INFO] Spark Project Tools ................................ SUCCESS [ 34.015 s]
[INFO] Spark Project Hive ................................. SKIPPED
[INFO] Spark Project REPL ................................. SKIPPED
[INFO] Spark Project YARN Shuffle Service ................. SUCCESS [ 31.793 s]
[INFO] Spark Project YARN ................................. SKIPPED
[INFO] Spark Project Hive Thrift Server ................... SKIPPED
[INFO] Spark Project Assembly ............................. SKIPPED
[INFO] Spark Integration for Kafka 0.10 ................... SKIPPED
[INFO] Kafka 0.10 Source for Structured Streaming ......... SKIPPED
[INFO] Spark Project Examples ............................. SKIPPED
[INFO] Spark Integration for Kafka 0.10 Assembly .......... SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 12:26 min (Wall Clock)
[INFO] Finished at: 2019-07-04T08:47:28+00:00
[INFO] Final Memory: 57M/764M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal net.alchim31.maven:scala-maven-plugin:3.2.2:testCompile (scala-test-compile-first) on project spark-core_2.11: Execution scala-test-compile-first of goal net.alchim31.maven:scala-maven-plugin:3.2.2:testCompile failed. CompileFailed -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/PluginExecutionException
[ERROR]
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR] mvn <goals> -rf :spark-core_2.11
ERROR: Service 'spark' failed to build: The command '/bin/sh -c mkdir -p /apps/build && cd /apps/build && git clone https://github.com/apache/spark.git spark && cd $SPARK_BUILD_PATH && git checkout v$SPARK_BUILD_VERSION && dev/make-distribution.sh --name spark-$SPARK_BUILD_VERSION -Phive -Phive-thriftserver -Pyarn && cp -r /apps/build/spark/dist $SPARK_HOME && rm -rf $SPARK_BUILD_PATH' returned a non-zero code: 1
```
"kind:enhancement"
] | devender-yadav | 17 |
mckinsey/vizro | pydantic | 675 | [Tech] Consider enabling html in markdown components | I'll outline my thoughts here for us to discuss properly:
With the new header and footer arguments being enabled, we want to allow users to add more information to their charts. Currently, doing everything in pure Markdown feels cumbersome because many tasks are either impossible or not straightforward to implement. Here are some limitations where enabling HTML would be beneficial:
## 1. Customising fonts (e.g. apply coloring, apply different font-sizes, etc)
Markdown does not support text coloring: [Markdown Font Color](https://bookdown.org/yihui/rmarkdown-cookbook/font-color.html)
However, I believe text coloring is an essential feature that HTML could easily solve. Here are some visual examples of why I'd like to be able to color fonts within Markdown. It's mainly to emphasize a narrative within the visualization or even replace a legend. Remember, AgGrid and DataTable do not come with an embedded color scale legend natively if you apply cell coloring. With AgGrid, there's a [workaround by installing a third-party library](https://github.com/plotly/dash-ag-grid/issues/309), but I haven't explored a solution for DataTable yet. Enabling font colors in the footer could easily address this without requiring another library or a hacky solution. I also often see colors used in subtitles to emphasize certain data points.
<img width="387" alt="Screenshot 2024-09-03 at 10 04 44" src="https://github.com/user-attachments/assets/4894fd2f-36a6-49d1-bcc8-723a760607d1">:
<img width="247" alt="Screenshot 2024-09-03 at 10 04 51" src="https://github.com/user-attachments/assets/cb020c6a-2870-40ab-81b8-26d2e88605f4">
While coloring is the most crucial argument for me, other considerations include applying custom font families, sizes, text alignments etc, which are either only possible via HTML or just simpler to do.
## 2. Apply CSS styling
This one is normally simpler to do in HTML, since you can provide classNames or inline styles directly, which is easier for CSS beginners. However, as soon as someone wants to style something with pure Markdown, it is possible, but they have to know CSS selectors really well. They have to understand the hierarchy and how to selectively target elements within it (p, h1, h2, img) if they do not want to apply the style to everything.
For me either of this works, but I am wondering how intuitive this actually is for CSS beginners and whether they even need more sophisticated styling. Or if we can assume that people who would like more sophisticated styling are automatically familiar with CSS?
## 3. Embedding interactive elements (e.g. buttons)
This actually seems fine in both markdown and HTML. For example, to create a button that proceeds to a link when clicked, use the following format:
`[](https://example.com/...)`
However, positioning it and styling it properly requires point 2) applying CSS via selectors.
| open | 2024-09-03T07:52:59Z | 2024-09-18T09:18:41Z | https://github.com/mckinsey/vizro/issues/675 | [
"Feature Request :nerd_face:"
] | huong-li-nguyen | 1 |
robinhood/faust | asyncio | 230 | Feature request: expose non-windowed table with expiration | I'm using windowed tables as a local cache for an agent. In order to get expiration I have to use a windowed table with an expiration of a day, but this results in multiple versions of a key being stored over the course of the day. In my instance I'm caching last message from an IoT device so that I can compute deltas. Storing last key results in 10s of GB stored per node. Storing N versions of a key for the entire day will result in possible terabytes of storage per node. Using current abstractions I'd need to implement my own background GC/expiration process for expired keys to reclaim space from non-reporting users.
The current expiration implementation doesn't really make sense for this use case because it's tied to windows. RocksDB does have a native TTL enforcement mechanism that could be used in its stead, I'm not sure how difficult it would be to integrate given current design assumptions in the existing table code. | open | 2018-11-29T17:30:17Z | 2020-05-17T10:17:23Z | https://github.com/robinhood/faust/issues/230 | [
"Component: Table",
"Component: RocksDB Store"
] | mstump | 2 |
yzhao062/pyod | data-science | 121 | HBOS model failed when sample size > 10000 compared to isolated forest and LOF models | Hi, YZhao;
I am writing to ask for advice on how to get the HBOS model to produce a more accurate outlier score.
PyOD is an awesome outlier-detection toolkit for the Python environment. I have already integrated it with Airflow on large datasets to find batch & stream job outliers.
I tried HBOS on sample sizes larger than 20,000 and it identified far fewer points than the other models. Looking into the source code, I suspect this is caused by a low-accuracy outlier score from the `_calculate_outlier_scores` function.
Here are the details:
The sklearn-based models worked well regardless of dataset size, large or small. However, certain models failed when handling large datasets. I previously raised a similar question and gave up on some LOF-based modes. Recently I focused on Isolation Forest, LOF, and HBOS; they all worked well on small and medium datasets.
However, HBOS keeps failing to identify outliers once the dataset exceeds 20,000 rows.
When the sample size is <= 10,000, it identifies similar outliers to Isolation Forest and LOF.
When the sample size is >= 20,000, it only identifies roughly as many outliers as there are features.
I have tried adjusting the bin size from 10 to 1000, alpha from 0.1 to 0.0001, as well as tol; none of these parameters helped.
I have looked into the source code and noticed that the outlier score is calculated from the bins, alpha, features, and tol, and I wonder if the issue comes from that code.
For example, with 50,000 GPS samples, some outliers can be identified by eye and some cannot. With HBOS, the outlier scores take only a handful of unique values, roughly equal to the number of features. This is very different from the Isolation Forest and LOF detection results.
I uploaded the info below for your reference:
1. HBOS parameters:

2. Data:


above data has change category data into dummy and standardized later.
3. Outlier detect result:

4. score value_counts

I roughtly check the source code of HBOS and found the score was by _calculate_outlier_scores functions. I tried but not figure out how to more accurate outlier_score
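For reference, the core HBOS idea can be reproduced in a few lines, which makes it easier to see why scores can collapse to a handful of unique values when most samples land in the same bins. This one-dimensional sketch is illustrative only and is not PyOD's `_calculate_outlier_scores`:

```python
import math

def hbos_scores(values, n_bins=10):
    """1-D sketch of the HBOS idea: build a histogram, then score each point
    by -log of its bin's density, so points in rare bins get high scores."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / n_bins or 1.0  # guard against all-equal input
    counts = [0] * n_bins
    for v in values:
        counts[min(int((v - lo) / width), n_bins - 1)] += 1
    densities = [c / len(values) for c in counts]
    return [
        -math.log(densities[min(int((v - lo) / width), n_bins - 1)] + 1e-12)
        for v in values
    ]
```

Because every sample in the same bin receives the same score, a large dataset whose mass concentrates in a few bins produces only a few unique score values, which matches the value_counts screenshot above.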
| open | 2019-07-03T09:34:45Z | 2019-07-08T03:08:38Z | https://github.com/yzhao062/pyod/issues/121 | [] | Wang-Yong2018 | 4 |
exaloop/codon | numpy | 301 | @par inside .py @codon.jit function | a `@par` loop that works inside a codon compiled file fails to compile inside a `.py` file inside a` @codon.jit`, no debug info available, fails with syntax error on line following `@par` decorator.
is `@par` not available yet in `@codon.jit`? or do I need to do something else to get to work?
```python
import codon
@codon.jit(debug=True)
def is_prime_codon_par(n):
if n <= 1:
return False
@codon.par(schedule='dynamic', chunk_size=10000, num_threads=16)
for i in range(2, n):
if n % i == 0:
return False
return True
```
```
for i in range(2, n):
^^^
SyntaxError: invalid syntax
```
thanks | closed | 2023-03-28T02:42:57Z | 2024-11-09T20:10:04Z | https://github.com/exaloop/codon/issues/301 | [] | pjcrosbie | 2 |
jina-ai/clip-as-service | pytorch | 548 | ImportError: libcuda.so.1: cannot open shared object file: No such file or directory | **Prerequisites**
> Please fill in by replacing `[ ]` with `[x]`.
* [x] Are you running the latest `bert-as-service`?
* [x] Did you follow [the installation](https://github.com/hanxiao/bert-as-service#install) and [the usage](https://github.com/hanxiao/bert-as-service#usage) instructions in `README.md`?
* [x] Did you check the [FAQ list in `README.md`](https://github.com/hanxiao/bert-as-service#speech_balloon-faq)?
* [x] Did you perform [a cursory search on existing issues](https://github.com/hanxiao/bert-as-service/issues)?
**System information**
> Some of this information can be collected via [this script](https://github.com/tensorflow/tensorflow/tree/master/tools/tf_env_collect.sh).
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04):
- TensorFlow installed from (source or binary): source
- TensorFlow version: 1.14.0
- Python version: 3.7.4
- `bert-as-service` version: 1.10
- GPU model and memory:
- CPU model and memory:
---
### Description
Bert service is working fine for me in the local.
However, when I am trying to run a docker image, I am facing an issue.
Then this issue shows up:
ImportError: libcuda.so.1: cannot open shared object file: No such file or directory.
What can be done about this?
davidsandberg/facenet | computer-vision | 321 | Embeddings for a single image | I am new to deep learning. I wanted to calculate embeddings for a single person in the image like this.
```
feed_dict = { images_placeholder: image , phase_train_placeholder:False}
emb = sess.run(embeddings, feed_dict=feed_dict)
```
Here `image` contains only a single image,
but this results in the same embeddings for different people when running with a single image.
The model i am using is the pre-trained model
> 20170216-091149
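One frequent cause of identical embeddings in this setup is skipping facenet's prewhitening step before feeding the image. The function below is a pure-Python restatement of that normalization (facenet itself does this with NumPy), shown here for illustration:

```python
import math
import statistics

def prewhiten(pixels):
    """Facenet-style prewhitening: subtract the mean and divide by an
    adjusted standard deviation (floored at 1/sqrt(n) to avoid blow-up)."""
    mean = statistics.fmean(pixels)
    std = statistics.pstdev(pixels)
    std_adj = max(std, 1.0 / math.sqrt(len(pixels)))
    return [(p - mean) / std_adj for p in pixels]
```

The other thing worth checking, as far as I can tell, is the input shape: the placeholder expects a 4-D batch, i.e. shape (1, height, width, 3) for a single image, not a 3-D array.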
| closed | 2017-06-08T06:56:31Z | 2018-08-10T12:14:22Z | https://github.com/davidsandberg/facenet/issues/321 | [] | rishiraicom | 5 |
JaidedAI/EasyOCR | deep-learning | 667 | How do I train only text detection algorithm? (CRAFT) | Hi, First of all, Amazing work!
Second, I would like to train only the text detection part of EasyOCR (which is CRAFT) to detect specific text (like only dates in an image). Is there a way I can achieve this?
Please can anyone guide me on the right path?
Thank you. | closed | 2022-02-15T12:39:43Z | 2022-08-25T10:52:30Z | https://github.com/JaidedAI/EasyOCR/issues/667 | [] | martian1231 | 0 |
flasgger/flasgger | rest-api | 510 | External YAML with multiple specs not working | I'm trying to document multiple endpoints by reference to the same YAML file but it just won't work. Is this not supported? Or am I doing something wrong?
# app.py
```python
from flask import Flask, jsonify
from flasgger import Swagger
app = Flask(__name__)
Swagger(app)
@app.route('/colors1/<palette>/')
def colors1(palette):
"""
file: ../colors.yaml
"""
all_colors = {
'cmyk': ['cyan', 'magenta', 'yellow', 'black'],
'rgb': ['red', 'green', 'blue']
}
if palette == 'all':
result = all_colors
else:
result = {palette: all_colors.get(palette)}
return jsonify(result)
@app.route('/colors2/<palette>/')
def colors2(palette):
"""
file: ../colors.yaml
"""
all_colors = {
'cmyk': ['cyan', 'magenta', 'yellow', 'black'],
'rgb': ['red', 'green', 'blue']
}
if palette == 'all':
result = all_colors
else:
result = {palette: all_colors.get(palette)}
return jsonify(result)
```
# colors.yaml
```yaml
Multiple endpoint specs
---
paths:
/colors1/:
get:
parameters:
- name: palette
in: path
type: string
enum: ['all', 'rgb', 'cmyk']
required: true
default: all
responses:
200:
description: A list of colors (may be filtered by palette)
schema:
$ref: '#/components/schema/Palette'
examples:
rgb: ['red', 'green', 'blue']
/colors2/:
get:
parameters:
- name: palette
in: path
type: string
enum: ['all', 'rgb', 'cmyk']
required: true
default: all
responses:
200:
description: A list of colors (may be filtered by palette)
schema:
$ref: '#/components/schema/Palette'
examples:
rgb: ['red', 'green', 'blue']
components:
schemas:
Palette:
type: object
properties:
palette_name:
type: array
items:
$ref: '#/components/schema/Color'
Color:
type: string
```
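One thing that stands out in the YAML above (my reading, not something confirmed in this thread): flasgger defaults to Swagger 2.0, where reusable schemas live under a top-level `definitions` key and are referenced as `#/definitions/...`; the `#/components/schema/...` refs are OpenAPI 3 style and also misspell `schemas`. Under that assumption, the schema section would look like this illustrative fragment:

```yaml
definitions:
  Palette:
    type: object
    properties:
      palette_name:
        type: array
        items:
          $ref: '#/definitions/Color'
  Color:
    type: string
```

with each response's `schema` pointing at `$ref: '#/definitions/Palette'` instead.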
# Expected result
Two endpoints with full specs (examples, response, parameters, etc), just like the specs in the screenshot in the README.
# Actual result
Incomplete specs with a lot of details missing:

| open | 2021-12-14T13:36:25Z | 2022-07-11T16:35:01Z | https://github.com/flasgger/flasgger/issues/510 | [] | arielpontes | 4 |
iterative/dvc | machine-learning | 9,730 | pull: The specified blob does not exist | # Bug Report
## Description
I am pushing the files with `dvc push` to an Azure blob. I then try to `dvc pull` the files from another machine/repo using the exact same `dvc.lock` and authentication method to Azure. Now I get:
```
ERROR: unexpected error - : The specified blob does not exist.
RequestId:8fefd37f-601e-0047-12b6-b494cc000000
Time:2023-07-12T11:43:23.9683742Z
ErrorCode:BlobNotFound
Content: <?xml version="1.0" encoding="utf-8"?><Error><Code>BlobNotFound</Code><Message>The specified blob does not exist.
RequestId:8fefd37f-601e-0047-12b6-b494cc000000
Time:2023-07-12T11:43:23.9683742Z</Message></Error>
Having any troubles? Hit us up at https://dvc.org/support, we are always happy to help!
```
If I try previous versions of the `dvc.lock` in this other machine/repo, it will work as expected. The latest version just fails.
I think the files are being pushed correctly because if I revert to a previous version of the `dvc.lock` file on my local machine (the one where I originally pushed the data) and do pull on that, I will get the old version of the data, and if I then go back to the current version, the new version of the data will be pulled and replaces the old one.
So this error message is super confusing because it seems like the data has been pushed in the remote.
So to sum up:
- Dvc remote config is the same
- `dvc.lock` is the same
- Azure authentication method is the same
- Dvc version is the same (also tried updating to the latest version `3.5.1` but didn't make a diff)
But `dvc pull` doesn't work as expected.
### Environment information
```
❯ dvc doctor
DVC version: 3.4.0 (pip)
------------------------
Platform: Python 3.10.6 on Linux-5.10.102.1-microsoft-standard-WSL2-x86_64-with-glibc2.35
Subprojects:
dvc_data = 2.3.3
dvc_objects = 0.23.0
dvc_render = 0.5.3
dvc_task = 0.3.0
scmrepo = 1.0.4
Supports:
azure (adlfs = 2023.4.0, knack = 0.10.1, azure-identity = 1.13.0),
http (aiohttp = 3.8.4, aiohttp-retry = 2.8.3),
https (aiohttp = 3.8.4, aiohttp-retry = 2.8.3)
Config:
Global: /home/rlleshi/.config/dvc
System: /etc/xdg/dvc
Cache types: hardlink, symlink
Cache directory: ext4 on /dev/sdb
Caches: local
Remotes: azure
Workspace directory: ext4 on /dev/sdb
Repo: dvc, git
Repo.site_cache_dir: /var/tmp/dvc/repo/251a55eb30f4d39de05d3ba1c5c045e9
```
| closed | 2023-07-12T11:59:44Z | 2023-08-10T02:27:05Z | https://github.com/iterative/dvc/issues/9730 | [
"fs: azure"
] | rlleshi | 3 |
marcomusy/vedo | numpy | 329 | save to STL file for 3d printing | Hi,
This is a question about VEDO, not an 'issue'.
I am completely new to vedo and VTK. Following the cutWithMesh example, I created a cube with an isosurface (gyroid). It looks nice. However, how can I export it to an STL file so I can modify it in other CAD software and 3D-print it? I looked at the IO module but did not get any idea. I tried `vedo.io.write(c, 'cube.stl')` but it did not work.
The cutWithMesh example is in the following link.
https://github.com/marcomusy/vedo/blob/master/examples/advanced/cutWithMesh2.py
Thank you! | closed | 2021-03-03T17:08:58Z | 2021-03-10T17:21:34Z | https://github.com/marcomusy/vedo/issues/329 | [
"help wanted"
] | ylada | 9 |
gradio-app/gradio | data-science | 10,681 | The properties and event design of this UI are simply a piece of dog poop | The properties and event design of this UI are simply a piece of dog poop! | closed | 2025-02-26T09:00:12Z | 2025-02-26T18:58:43Z | https://github.com/gradio-app/gradio/issues/10681 | [] | string199 | 0 |
zwczou/weixin-python | flask | 88 | Refund returns: 400 No required SSL certificate was sent | I'm using FastAPI. Payments and queries all work fine, but when I request a refund I get `400 No required SSL certificate was sent`. Both .pem files use absolute paths, in local testing as well as in server deployment, and the error is the same. How should I solve this?
Also, among the downloaded certificate files, one .pem file is reported as an invalid PEM when checked at https://myssl.com/cert_decode.html. Could that file be causing the error? | closed | 2022-02-17T05:28:28Z | 2022-04-26T02:05:07Z | https://github.com/zwczou/weixin-python/issues/88 | [] | sky-chy | 19 |
flairNLP/fundus | web-scraping | 397 | [Bug]: WAZ not parsing properly | ### Describe the bug
This is an example article that does not get parsed (plaintext is None) https://www.waz.de/sport/lokalsport/bochum/article241975812/Tabubruch-Wattenscheid-Fans-schreiben-an-die-Stadt-Bochum.html
### How to reproduce
```python
from fundus import Crawler, PublisherCollection
crawler = Crawler(PublisherCollection.de)
for article in crawler.crawl(max_articles=50, only_complete=False):
print(article.title)
print(article.free_access)
print(article.html.responded_url)
print(article.authors)
print(article.topics)
print(article.plaintext)
print("\n")
```
### Expected behavior.
Parsed Plaintext
### Logs and Stack traces
```stacktrace
„Tabubruch“: Wattenscheid-Fans schreiben Offenen Brief an die Stadt Bochum
True
https://www.waz.de/sport/lokalsport/bochum/article241975812/Tabubruch-Wattenscheid-Fans-schreiben-an-die-Stadt-Bochum.html
['Philipp Ziser']
['Sporteinrichtung', 'Sportpolitik', 'Verein', 'Bochum', 'Milton Keynes', 'SG Wattenscheid 09', 'VfL Bochum', 'WAZ-Gruppe']
None
```
### Screenshots
_No response_
### Additional Context
_No response_
### Environment
```markdown
MACOS
```
| closed | 2024-03-26T17:22:40Z | 2024-03-28T17:03:58Z | https://github.com/flairNLP/fundus/issues/397 | [
"bug"
] | addie9800 | 2 |
hbldh/bleak | asyncio | 1,061 | access denied error in windows backend due to GattServicesChanged event | I had a very similar problem while writing data to a Nordic UART service. I had many access denies errors when I started with bleak version 0.13 and after updating to 0.15 it change more to "The object is already closed" or "A method was called at an unexpected time" (don't know the exact error messages as my system reported them in German)
The problem for me was that the device kept updating the service list during service discovery. I added a gatt_services_changed listener to the WinRT client and started over with service discovery if it is was currently running or started a new discovery before accessing the services list the next time.
The documentation of the GattServicesChanged event (https://docs.microsoft.com/en-us/uwp/api/windows.devices.bluetooth.bluetoothledevice.gattserviceschanged) says:
> This event is raised when the remote device changes its services, or an unpaired device is disconnecting. All services are cleared because unpaired device services can't be cached between connections.
The object parameter in this event is null for every event that is raised.
In your handler for this event, do the following in order to get the services available. Call [BluetoothLEDevice.GetGattServicesAsync](https://docs.microsoft.com/en-us/uwp/api/windows.devices.bluetooth.bluetoothledevice.getgattservicesasync) on the [BluetoothLEDevice](https://docs.microsoft.com/en-us/uwp/api/windows.devices.bluetooth.bluetoothledevice) that's passed to your handler. When calling GetGattServicesAsync, use the [Cached](https://docs.microsoft.com/en-us/uwp/api/windows.devices.bluetooth.bluetoothcachemode) option. This can be used to refresh the list of services and re-initialize the services on the device if they are gone.
For my device I get the event 11 times during the first service discovery and if I reload them after I got the event everything works as expected.
I can open a PR in the next days with a suggestion how to deal with this issue.
_Originally posted by @jochenjagers in https://github.com/hbldh/bleak/issues/849#issuecomment-1245549149_
| closed | 2022-10-03T23:58:00Z | 2022-10-13T16:23:25Z | https://github.com/hbldh/bleak/issues/1061 | [
"bug",
"Backend: WinRT"
] | dlech | 1 |
open-mmlab/mmdetection | pytorch | 11,896 | How do I train with two GPUs at the same time? Is it CUDA_VISIBLE_DEVICES=0,1? It doesn't seem to take effect | open | 2024-08-05T09:47:09Z | 2024-08-31T01:41:30Z | https://github.com/open-mmlab/mmdetection/issues/11896 | [] | 1999luodi | 1 |
modoboa/modoboa | django | 3,072 | High Cpu Usage after new installation during this week | # Impacted versions
* OS Type: Debian
* OS Version: Bullseye
* Database Type: PostgreSQL
* Database version: 13
* Modoboa: Latest
* installer used: Yes
* Webserver: Nginx
# Steps to reproduce
Complete a fresh installation and look at the task manager: you will see the amavis foreground process running at 99% all the time.
# Current behavior
This occurred during this week; with last week's install I had a normal system.
# Expected behavior
The system doesn't run at 100%.
# Video/Screenshot link (optional)
| closed | 2023-09-22T01:02:09Z | 2024-10-08T14:41:34Z | https://github.com/modoboa/modoboa/issues/3072 | [
"stale"
] | samuraikid0 | 8 |
allenai/allennlp | data-science | 5,628 | Missing `f` prefix on f-strings | Some strings look like they're meant to be f-strings but are missing the `f` prefix, meaning variable interpolation won't happen.
https://github.com/allenai/allennlp/blob/0d25f967c7996ad4980c7ee2f4c71294f51fef80/allennlp/nn/util.py#L758
https://github.com/allenai/allennlp/blob/0d25f967c7996ad4980c7ee2f4c71294f51fef80/scripts/py2md.py#L112
https://github.com/allenai/allennlp/blob/0d25f967c7996ad4980c7ee2f4c71294f51fef80/scripts/py2md.py#L113
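For anyone unfamiliar with the failure mode: without the `f` prefix the braces are kept literally, so the message never interpolates. A quick illustration:

```python
size = 3

without_prefix = "expected embedding dim {size}"  # no interpolation happens
with_prefix = f"expected embedding dim {size}"

print(without_prefix)  # expected embedding dim {size}
print(with_prefix)     # expected embedding dim 3
```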
I found this issue automatically. I'm a bot. Beep Boop 🦊. See other issues I found in your repo [here](https://codereview.doctor/allenai/allennlp) | closed | 2022-04-23T22:18:39Z | 2022-04-25T16:21:52Z | https://github.com/allenai/allennlp/issues/5628 | [] | code-review-doctor | 0 |
liangliangyy/DjangoBlog | django | 384 | ModuleNotFoundError: No module named 'mdeditor' | Hi, whenever I start this project or port it over, I get this error. What's going on? Any help would be appreciated, thank you. | closed | 2020-03-19T04:39:13Z | 2024-10-28T08:48:32Z | https://github.com/liangliangyy/DjangoBlog/issues/384 | [] | DaobinZhu | 11 |
mage-ai/mage-ai | data-science | 4,862 | [BUG] Triggers duplicating after renaming | ### Mage version
0.9.68
### Describe the bug
When I rename a trigger, it gets duplicated.
example:
https://demo.mage.ai/pipelines/test_ntf_2/triggers
I created two triggers and renamed them; after the renames I ended up with four.

### To reproduce
1. Open triggers
2. Create new
3. Save
4. Open trigger
5. Rename
6. Save
7. You have two triggers instead of one.
### Video
https://github.com/mage-ai/mage-ai/assets/13833432/9e0b1777-2fa3-4140-aff2-4ed3492ab84d
### Expected behavior
### Screenshots
_No response_
### Operating system
_No response_
### Additional context
setting Save triggers in code automatically - is ON | closed | 2024-04-01T19:08:35Z | 2024-04-29T18:42:58Z | https://github.com/mage-ai/mage-ai/issues/4862 | [
"bug"
] | mrykin | 3 |
chainer/chainer | numpy | 8,560 | Release Tasks for v8.0.0b3 / v7.5.0 | Previous release (v7.4.0): #8548
## Chainer v7.6.0 (no v8.x / v7.5.0 release)
- [x]
## CuPy v8.0.0b3 / v7.5.0
- [ ] CUB source tree integration https://github.com/cupy/cupy/pull/2584
- [ ] CUB Test https://github.com/cupy/cupy/pull/2598
## Release Plans (subject to change)
* v8.0.0b3 / v7.5.0: May 28th
* v8.0.0b4 / v7.6.0: June
* v8.0.0rc1 / v7.7.0: July
* v8.0.0: August | closed | 2020-04-28T07:10:25Z | 2020-06-02T08:54:35Z | https://github.com/chainer/chainer/issues/8560 | [] | kmaehashi | 3 |
nschloe/tikzplotlib | matplotlib | 271 | encoding option of save is not documented | In the docstring of matplotlib2tikz.save the optional parameter `encoding` should be mentioned. | closed | 2019-02-18T07:51:28Z | 2019-03-07T13:30:21Z | https://github.com/nschloe/tikzplotlib/issues/271 | [] | Aikhjarto | 0 |
python-restx/flask-restx | flask | 64 | Add pre-flight requests | Hello,
Whenever I make a request using axios from Vue, I have problems due to the pre-flight call, as it sends an OPTIONS request before doing the GET/POST/etc.
I think it would be interesting to have an annotation that allows me to enable a pre-flight check on my endpoints.
If you had something like this it would be great:
```
class SignIn(Resource):
    """Sign in user and return a JWT token."""

    @api.expect(sign_in)
    @api.marshal_with(auth_user, code=200)
    @preflight.enabled()  # the proposed (not yet existing) decorator
    def post(self):
        """Return the logged-in user."""
        api_payload = api.payload
        return signin(api_payload)
```
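For what it's worth, here is a rough sketch of what such a decorator could look like. Everything in it is illustrative; `preflight_enabled` is not an existing flask-restx API, and the HTTP method is passed in explicitly so the sketch stays framework-free (a real version would read `flask.request.method`):

```python
import functools

def preflight_enabled(allowed_methods=("GET", "POST", "OPTIONS")):
    """Hypothetical decorator: answer CORS pre-flight (OPTIONS) requests directly."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(self, method="POST", *args, **kwargs):
            if method == "OPTIONS":
                # short-circuit the pre-flight instead of hitting the handler
                headers = {
                    "Access-Control-Allow-Methods": ", ".join(allowed_methods),
                    "Access-Control-Allow-Headers": "Content-Type, Authorization",
                }
                return {}, 200, headers
            return func(self, *args, **kwargs)
        return wrapper
    return decorator

class SignIn:
    @preflight_enabled()
    def post(self):
        return {"user": "demo"}

resource = SignIn()
print(resource.post(method="OPTIONS")[1])  # 200 (pre-flight answered)
print(resource.post())                     # {'user': 'demo'} (handler ran)
```

With flask-restx itself the check would more likely live inside the resource method or in a request hook rather than taking `method` as an argument; the sketch only illustrates the short-circuit idea.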
| closed | 2020-02-18T12:28:52Z | 2020-02-26T15:59:08Z | https://github.com/python-restx/flask-restx/issues/64 | [
"enhancement"
] | jonatasoli | 4 |
huggingface/transformers | pytorch | 36,773 | Inconsistent Documentation for `dataset_index` Requirement Across ViTPose Models | ### System Info
## Description
There's confusion regarding the `dataset_index` parameter requirement across the ViTPose model family. The documentation only mentions this requirement for **some** of the models; initially, when the model family was released, it was only documented for the `usyd-community/vitpose-plus-base` checkpoint.
But now other checkpoints also fail without it. I'm concerned about:
1. Whether ALL models in the family require this parameter, even if some don't explicitly fail without it
2. Whether adding this parameter to models that don't explicitly require it could affect inference results
## Current Behavior
When following the examples in the model cards, these checkpoints fail with:
```
ValueError: dataset_index must be provided when using multiple experts (num_experts=6). Please provide dataset_index to the forward pass.
```
Affected checkpoints:
- `usyd-community/vitpose-plus-small`
- `usyd-community/vitpose-plus-large`
- `usyd-community/vitpose-plus-huge`
## Questions
1. Should the `dataset_index` parameter be added to ALL ViTPose+ models for consistency?
2. Will adding `inputs["dataset_index"] = torch.tensor([0], device=device)` to models that don't explicitly require it affect inference results?
3. Is there a recommended approach for handling this parameter in applications that need to work with multiple ViTPose+ variants?
## Context
I've implemented a plugin using these models, and it's failing on certain checkpoints. I want to ensure any fix I implement won't compromise results for other checkpoints that currently work without this parameter.
## Suggested Documentation Update
If all models require this parameter, please update all model cards to include:
```python
inputs["dataset_index"] = torch.tensor([0], device=device)
```
If only certain models require it, please clearly indicate which ones do and which ones don't.
### Who can help?
_No response_
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
## Steps to Reproduce
1. Load any of the affected models:
- `usyd-community/vitpose-plus-small`
- `usyd-community/vitpose-plus-large`
- `usyd-community/vitpose-plus-huge`
2. Process an image following the model card example and run inference
3. Observe the error:
```
ValueError: dataset_index must be provided when using multiple experts (num_experts=6). Please provide dataset_index to the forward pass.
```
### Expected behavior
1. All model cards should document whether the `dataset_index` parameter is required for that specific checkpoint.
2. Documentation should be consistent across all models in the ViTPose+ family that share the same requirements.
3. Ideally, the model cards for all affected checkpoints should include the necessary code snippet:
```python
inputs["dataset_index"] = torch.tensor([0], device=device)
```
4. Clear guidance on whether this parameter affects inference results and if it should be applied universally across all ViTPose models for consistency.
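As a stopgap in application code, one defensive pattern would be to set `dataset_index` only when the loaded checkpoint actually uses multiple experts. Below is a pure-Python sketch with stand-in config objects; in real code the flag would come from something like `model.config.backbone_config.num_experts`, and that attribute path is an assumption on my part:

```python
from types import SimpleNamespace

def maybe_add_dataset_index(inputs, backbone_config, index=0):
    """Add dataset_index only for mixture-of-experts checkpoints (sketch)."""
    if getattr(backbone_config, "num_experts", 1) > 1:
        # real code: inputs["dataset_index"] = torch.tensor([index], device=device)
        inputs["dataset_index"] = [index]
    return inputs

plus_config = SimpleNamespace(num_experts=6)  # e.g. the vitpose-plus-* checkpoints
base_config = SimpleNamespace(num_experts=1)  # e.g. the plain vitpose checkpoints

print(maybe_add_dataset_index({}, plus_config))  # {'dataset_index': [0]}
print(maybe_add_dataset_index({}, base_config))  # {}
```

This should be a no-op for checkpoints that don't use experts, but I haven't verified whether passing the parameter anyway changes results, hence question 2 above.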
| open | 2025-03-17T18:37:19Z | 2025-03-22T15:27:53Z | https://github.com/huggingface/transformers/issues/36773 | [
"bug"
] | harpreetsahota204 | 2 |
ARM-DOE/pyart | data-visualization | 998 | User defined sweep in det_sys_phase | Hi,
Our radar volumes in Australia now use a top-down scanning strategy, so the lowest sweep is last. When reading odimh5 files in pyart, it preserves this order. Most functions in pyart are not impacted (as far as I'm aware), except for det_sys_phase, which defaults to the first sweep. I've made some changes to this function and the internal function it calls to permit a user-defined sweep. Let me know if this can be merged into pyart! See below:
```python
def det_sys_phase(radar, ncp_lev=0.4, rhohv_lev=0.6,
                  ncp_field=None, rhv_field=None, phidp_field=None, sweep=0):
    """
    Determine the system phase.

    Parameters
    ----------
    radar : Radar
        Radar object for which to determine the system phase.
    ncp_lev : float, optional
        Minimum normal coherent power level. Regions below this value will
        not be included in the phase calculation.
    rhohv_lev : float, optional
        Minimum copolar coefficient level. Regions below this value will not
        be included in the phase calculation.
    ncp_field, rhv_field, phidp_field : str, optional
        Field names within the radar object which represent the normal
        coherent power, the copolar coefficient, and the differential phase
        shift. A value of None for any of these parameters will use the
        default field name as defined in the Py-ART configuration file.
    sweep : int, optional
        Sweep index from the radar object to use for detecting the system
        phase. Defaults to the first sweep.

    Returns
    -------
    sys_phase : float or None
        Estimate of the system phase. None if no estimate can be made.

    """
    # parse the field parameters
    if ncp_field is None:
        ncp_field = get_field_name('normalized_coherent_power')
    if rhv_field is None:
        rhv_field = get_field_name('cross_correlation_ratio')
    if phidp_field is None:
        phidp_field = get_field_name('differential_phase')
    ncp = radar.fields[ncp_field]['data'][:, 30:]
    rhv = radar.fields[rhv_field]['data'][:, 30:]
    phidp = radar.fields[phidp_field]['data'][:, 30:]
    first_ray_idx = radar.sweep_start_ray_index['data'][sweep]
    last_ray_idx = radar.sweep_end_ray_index['data'][sweep]
    return _det_sys_phase(ncp, rhv, phidp, first_ray_idx, last_ray_idx,
                          ncp_lev, rhohv_lev)


def _det_sys_phase(ncp, rhv, phidp, first_ray_idx, last_ray_idx, ncp_lev=0.4,
                   rhv_lev=0.6):
    """ Determine the system phase, see :py:func:`det_sys_phase`. """
    good = False
    phases = []
    for radial in range(first_ray_idx, last_ray_idx + 1):
        meteo = np.logical_and(ncp[radial, :] > ncp_lev,
                               rhv[radial, :] > rhv_lev)
        mpts = np.where(meteo)
        if len(mpts[0]) > 25:
            good = True
            msmth_phidp = smooth_and_trim(phidp[radial, mpts[0]], 9)
            phases.append(msmth_phidp[0:25].min())
    if not good:
        return None
    return np.median(phases)
```
Cheers,
Joshua | open | 2021-06-09T06:55:49Z | 2024-05-15T19:51:30Z | https://github.com/ARM-DOE/pyart/issues/998 | [] | joshua-wx | 3 |
ultralytics/yolov5 | deep-learning | 13,185 | mAP of nano and small models for different image sizes | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
I have trained and tested YOLOv5 models on my custom dataset at different image sizes. For the medium and large models the mAP increases with image size, but for the nano and small models it decreases at larger image dimensions. Do the nano and small models have difficulty processing higher-resolution images?
### Additional
_No response_ | closed | 2024-07-11T14:48:54Z | 2024-10-20T19:49:59Z | https://github.com/ultralytics/yolov5/issues/13185 | [
"question",
"Stale"
] | Avishek-Das-Gupta | 5 |
glumpy/glumpy | numpy | 243 | offscreen rendering | Hi, I'm trying to use glumpy for offscreen rendering. I found there's only a makecurrent function, which belongs to the window, but there's no GUI in my environment. How can I do offscreen rendering? | open | 2020-04-12T19:20:20Z | 2020-04-13T10:56:51Z | https://github.com/glumpy/glumpy/issues/243 | [] | ocean1100 | 1 |
miguelgrinberg/python-socketio | asyncio | 1,240 | Message queue optimizations | When using a message queue, it would be an interesting optimization for the originating node to handle a requested operation directly, instead of publishing it on the queue and acting on it when it receives it back along with the other nodes. | closed | 2023-09-15T23:16:37Z | 2023-09-16T19:20:19Z | https://github.com/miguelgrinberg/python-socketio/issues/1240 | [
"enhancement"
] | miguelgrinberg | 0 |
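The optimization described in the python-socketio issue above can be sketched as a toy example: the originating node applies the operation immediately and tags the published message so it can skip its own copy when the queue echoes it back.

```python
import itertools

class Node:
    _ids = itertools.count()

    def __init__(self, queue):
        self.node_id = next(Node._ids)
        self.queue = queue
        self.applied = []

    def emit(self, payload):
        # optimization: act locally right away instead of waiting for the
        # message to round-trip through the queue
        self.apply(payload)
        self.queue.publish({"origin": self.node_id, "payload": payload})

    def on_message(self, message):
        if message["origin"] == self.node_id:
            return  # already handled locally, skip the echoed copy
        self.apply(message["payload"])

    def apply(self, payload):
        self.applied.append(payload)

class Queue:
    """Stand-in for the pubsub backend: every subscriber gets every message."""

    def __init__(self):
        self.nodes = []

    def publish(self, message):
        for node in self.nodes:
            node.on_message(message)

queue = Queue()
a, b = Node(queue), Node(queue)
queue.nodes = [a, b]
a.emit("hello")
print(a.applied, b.applied)  # ['hello'] ['hello']
```

In the real extension this would presumably live in the pubsub manager, where messages already carry enough metadata to identify the originating node; the sketch only illustrates the skip-own-echo idea.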