repo_name (string, lengths 9-75) | topic (string, 30 classes) | issue_number (int64, 1-203k) | title (string, lengths 1-976) | body (string, lengths 0-254k) | state (string, 2 classes) | created_at (string, length 20) | updated_at (string, length 20) | url (string, lengths 38-105) | labels (list, lengths 0-9) | user_login (string, lengths 1-39) | comments_count (int64, 0-452) |
|---|---|---|---|---|---|---|---|---|---|---|---|
ultralytics/yolov5 | deep-learning | 13,252 | Hi @7rkMnpl, | Hi @7rkMnpl,
To integrate a custom callback with early stopping in YOLOv5, you would need to modify the training script to include your custom callback logic. Here's a general outline of how you can achieve this:
1. **Create Your Custom Callback:**
Define your custom callback class. For example, you might want to create a callback that monitors a specific metric and stops training based on that metric.
```python
class CustomEarlyStopping:
    def __init__(self, patience=10, min_delta=0):
        self.patience = patience
        self.min_delta = min_delta
        self.best_score = None
        self.counter = 0

    def __call__(self, current_score):
        if self.best_score is None:
            self.best_score = current_score
        elif current_score < self.best_score + self.min_delta:
            self.counter += 1
            if self.counter >= self.patience:
                return True
        else:
            self.best_score = current_score
            self.counter = 0
        return False
```
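Before wiring the callback into `train.py`, you can sanity-check it in isolation against a hand-made score sequence (a standalone sketch; the class is restated here so the snippet runs on its own, and the score values are made up for illustration):

```python
class CustomEarlyStopping:
    def __init__(self, patience=10, min_delta=0):
        self.patience = patience
        self.min_delta = min_delta
        self.best_score = None
        self.counter = 0

    def __call__(self, current_score):
        # First call just records the score; later calls count epochs
        # without a sufficient improvement over the best score so far.
        if self.best_score is None:
            self.best_score = current_score
        elif current_score < self.best_score + self.min_delta:
            self.counter += 1
            if self.counter >= self.patience:
                return True
        else:
            self.best_score = current_score
            self.counter = 0
        return False


stopper = CustomEarlyStopping(patience=3, min_delta=0)
scores = [0.50, 0.60, 0.55, 0.54, 0.53, 0.52]  # recall improves once, then plateaus
stopped_at = None
for epoch, score in enumerate(scores):
    if stopper(score):
        stopped_at = epoch
        break

print(stopped_at)  # 4 -- three non-improving epochs after the best score at epoch 1
```

With `patience=3`, the callback fires on the third consecutive epoch without improvement, so training stops at epoch 4 here.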
2. **Integrate the Callback into the Training Loop:**
Modify the training loop in `train.py` to include your custom callback. You will need to check the callback condition at the end of each epoch.
```python
from train import train

# Initialize your custom callback
custom_early_stopping = CustomEarlyStopping(patience=10, min_delta=0.01)

# Modify the training loop to include the callback check
for epoch in range(epochs):
    # Training code...

    # Calculate your custom metric (e.g., recall)
    current_score = calculate_recall()

    # Check the custom early stopping condition
    if custom_early_stopping(current_score):
        print(f"Early stopping at epoch {epoch}")
        break
```
3. **Run Your Training Script:**
Execute your modified training script to train your YOLOv5 model with the custom early stopping callback.
This is a basic example to get you started. Depending on your specific requirements, you might need to adjust the callback logic and how you integrate it into the training loop.
Feel free to ask if you have any further questions or need additional assistance. Happy training! 😊
_Originally posted by @glenn-jocher in https://github.com/ultralytics/yolov5/issues/5561#issuecomment-2275619501_
| open | 2024-08-09T04:57:44Z | 2024-10-27T13:30:46Z | https://github.com/ultralytics/yolov5/issues/13252 | [
"documentation",
"enhancement"
] | 7rkMnpl | 2 |
AutoGPTQ/AutoGPTQ | nlp | 609 | [BUG] Regression due to merged PR #607 | Changing the `tests/test_quantization.py` examples from 1 row to 2 rows makes the test fail. The same 2-row examples pass with the pre-#607 code.
@fxmarty
```python
examples = [
tokenizer(
"auto-gptq is an easy-to-use model quantization library with user-friendly apis, based on GPTQ algorithm."
),
tokenizer(
"auto-gptq is an easy-to-use model quantization library with user-friendly apis, based on GPTQ algorithm."
),
]
```
Stacktrace
```
___________________________ TestQuantization.test_quantize_1 ____________________________
a = (<tests.test_quantization.TestQuantization testMethod=test_quantize_1>,), kw = {}
@wraps(func)
def standalone_func(*a, **kw):
> return func(*(a + p.args), **p.kwargs, **kw)
/root/miniconda3/envs/autogptq/lib/python3.11/site-packages/parameterized/parameterized.py:620:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
test_quantization.py:37: in test_quantize
model.quantize(examples)
/root/miniconda3/envs/autogptq/lib/python3.11/site-packages/torch/utils/_contextlib.py:115: in decorate_context
return func(*args, **kwargs)
../auto_gptq/modeling/_base.py:448: in quantize
layer(*layer_input, **additional_layer_inputs)
/root/miniconda3/envs/autogptq/lib/python3.11/site-packages/torch/nn/modules/module.py:1511: in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = LlamaDecoderLayer(
(self_attn): LlamaSdpaAttention(
(q_proj): Linear(in_features=2048, out_features=2048, bias=F...bias=False)
(act_fn): SiLU()
)
(input_layernorm): LlamaRMSNorm()
(post_attention_layernorm): LlamaRMSNorm()
)
args = (tensor([[[-0.0026, -0.0094, 0.0219, ..., -0.0062, -0.0137, -0.0159],
[ 0.0197, -0.0070, -0.0347, ..., 0....
[-0.0059, 0.0028, 0.0032, ..., 0.0060, 0.0083, -0.0062]]],
device='cuda:0', dtype=torch.float16))
kwargs = {'attention_mask': tensor([[[[ -0., -65504., -65504., ..., -65504., -65504., -65504.],
[ -0., -0.... 23, 24, 25, 26, 27, 28, 29, 30, 31],
device='cuda:0'), 'output_attentions': False, 'past_key_value': None, ...}
forward_call = <bound method LlamaDecoderLayer.forward of LlamaDecoderLayer(
(self_attn): LlamaSdpaAttention(
(q_proj): Linear(...ias=False)
(act_fn): SiLU()
)
(input_layernorm): LlamaRMSNorm()
(post_attention_layernorm): LlamaRMSNorm()
)>
def _call_impl(self, *args, **kwargs):
forward_call = (self._slow_forward if torch._C._get_tracing_state() else self.forward)
# If we don't have any hooks, we want to skip the rest of the logic in
# this function, and just call forward.
if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
or _global_backward_pre_hooks or _global_backward_hooks
or _global_forward_hooks or _global_forward_pre_hooks):
> return forward_call(*args, **kwargs)
E TypeError: LlamaDecoderLayer.forward() got multiple values for argument 'attention_mask'
/root/miniconda3/envs/autogptq/lib/python3.11/site-packages/torch/nn/modules/module.py:1520: TypeError
``` | closed | 2024-03-26T08:28:33Z | 2024-03-26T09:14:16Z | https://github.com/AutoGPTQ/AutoGPTQ/issues/609 | [
"bug"
] | Qubitium | 1 |
huggingface/transformers | nlp | 35,959 | Cannot import name 'LRScheduler' from 'torch.optim.lr_scheduler' | Running the following command produces an error:
<img width="361" alt="Image" src="https://github.com/user-attachments/assets/9565b42c-6aba-486b-8713-f05766ca5acb" />
```
RuntimeError: Failed to import transformers.trainer because of the following error (look up to see its traceback):
cannot import name 'LRScheduler' from 'torch.optim.lr_scheduler'
```
How can I fix this issue? Any suggestions are welcome!
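For context (my reading, not a confirmed diagnosis): this error usually means the installed torch predates 2.0. The public `LRScheduler` name was only added to `torch.optim.lr_scheduler` in PyTorch 2.0; older releases expose only the private `_LRScheduler` base class, while recent transformers versions import the public name directly. Upgrading torch is the clean fix; as a stopgap in your own code, a compatibility import like this sketch works (assuming torch is installed):

```python
try:
    # PyTorch >= 2.0 exposes the public name
    from torch.optim.lr_scheduler import LRScheduler
except ImportError:
    # Older releases only have the private base class
    from torch.optim.lr_scheduler import _LRScheduler as LRScheduler

print(LRScheduler)
```

Note this shim only helps code you control; it cannot patch the import inside `transformers.trainer` itself, so upgrading torch remains the real fix.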
Attach pytorch and transformer version information
<img width="411" alt="Image" src="https://github.com/user-attachments/assets/94eabc8a-f650-41bb-bbf4-7e2f21560730" /> | closed | 2025-01-29T13:47:41Z | 2025-01-29T14:10:10Z | https://github.com/huggingface/transformers/issues/35959 | [] | maverick-2030 | 1 |
qubvel-org/segmentation_models.pytorch | computer-vision | 648 | Substantive differences between pytorch & tensorflow versions of FPN? | Reviewing https://github.com/qubvel/segmentation_models.pytorch/blob/master/segmentation_models_pytorch/decoders/fpn/model.py and comparing it to https://github.com/qubvel/segmentation_models/blob/master/segmentation_models/models/fpn.py a couple of things stand out:
* The pytorch FPN produces a `[classes, h/4, w/4]`-sized intermediate result that is then upsampled bilinearly to `[classes, h, w]` and output. That seems to inevitably produce blurry output!?
* This can be seen in the [SegmentationHead](https://github.com/qubvel/segmentation_models.pytorch/blob/master/segmentation_models_pytorch/base/heads.py#L5) implementation where we Conv2d to `[classes, w/4, h/4]` before upsampling.
* By contrast, the tensorflow FPN produces `[segmentation_filters, h/2, w/2]`-sized intermediate result that is upsampled bilinearly to `[segmentation_filters, h, w]` and then `Conv2d -> [classes, h, w]`. That has a decent shot at producing sharp, high-res results.
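The ordering difference described above can be sketched in a few lines (illustrative only, not the libraries' actual code; both heads here use a 4x factor for simplicity, even though the tensorflow version reportedly works at 2x):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

classes, filters, h, w = 4, 128, 64, 64
feat = torch.randn(1, filters, h // 4, w // 4)  # merged FPN feature map

# pytorch-package style: conv down to class logits first, then bilinearly
# upsample the low-resolution logits -- fine detail is interpolated away.
pt_head = nn.Conv2d(filters, classes, kernel_size=3, padding=1)
pt_out = F.interpolate(pt_head(feat), scale_factor=4,
                       mode="bilinear", align_corners=False)

# tensorflow-package style: upsample the feature map first, then run the
# final conv at full resolution, so it can still produce sharp boundaries.
up = F.interpolate(feat, scale_factor=4, mode="bilinear", align_corners=False)
tf_head = nn.Conv2d(filters, classes, kernel_size=3, padding=1)
tf_out = tf_head(up)

print(pt_out.shape, tf_out.shape)  # both torch.Size([1, 4, 64, 64])
```

Both orderings produce the same output shape; the question is only where the final classification conv runs relative to the upsampling.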
Any particular reason it was done this way? And for the 2x->4x resolution difference? Was the tensorflow FPN implementation "wrong" compared to the paper? Something else? Thanks! | closed | 2022-08-29T23:46:35Z | 2022-11-06T02:17:35Z | https://github.com/qubvel-org/segmentation_models.pytorch/issues/648 | [
"Stale"
] | jxtps | 5 |
supabase/supabase-py | fastapi | 882 | should_create_user isn't working on otp sign in | - [X] I confirm this is a bug with Supabase, not with my own application.
- [X] I confirm I have searched the [Docs](https://docs.supabase.com), GitHub [Discussions](https://github.com/supabase/supabase/discussions), and [Discord](https://discord.supabase.com).
## Describe the bug
When using the "should_create_user" option with the "sign_in_with_otp" method, it has no effect (a user is created every time regardless).
## To Reproduce
```python
supabase.auth.sign_in_with_otp({
    "email": email,
    "options": {"should_create_user": False}
})
```
## Expected behavior
When the flag is False, a new user should not be created.
## Screenshots
NA
## System information
Python library
## Additional context
NA | closed | 2024-08-09T11:51:18Z | 2024-08-23T12:17:17Z | https://github.com/supabase/supabase-py/issues/882 | [
"bug"
] | hhubble | 3 |
yt-dlp/yt-dlp | python | 12,437 | downloading twitch vods chat replays. any working workarounds? | ### Checklist
- [x] I'm asking a question and **not** reporting a bug or requesting a feature
- [x] I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme)
- [x] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [x] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766), [the FAQ](https://github.com/yt-dlp/yt-dlp/wiki/FAQ), and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=is%3Aissue%20-label%3Aspam%20%20) for similar questions **including closed ones**. DO NOT post duplicates
### Please make sure the question is worded well enough to be understood
Right now, it is not possible to download Twitch chat replays when downloading a VOD. I have read all the related discussions, but they are all relatively old.
Are there current, working workarounds to download chat replays, or is this issue still open?
### Provide verbose output that clearly demonstrates the problem
- [ ] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [ ] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
``` | closed | 2025-02-22T12:42:47Z | 2025-02-22T16:05:39Z | https://github.com/yt-dlp/yt-dlp/issues/12437 | [
"duplicate",
"incomplete"
] | 111100001 | 1 |
codertimo/BERT-pytorch | nlp | 31 | Single Sentence Input support | In the paper, they note that they optionally use single sentence input for some classification tasks. I'll try to take a look at doing it myself, as it looks like it is not currently supported. | closed | 2018-10-23T08:40:36Z | 2018-10-23T09:17:21Z | https://github.com/codertimo/BERT-pytorch/issues/31 | [] | nateraw | 2 |
gradio-app/gradio | data-science | 10,519 | [Gradio 5.15 container] - Width size: Something changed | ### Describe the bug
I was controlling the width of the main interface with custom CSS on the `.gradio-container` class, but in this new version it is not working.
### Have you searched existing issues? 🔎
- [x] I have searched and found no existing issues
### Reproduction
```python
import gradio as gr
css ="""
.gradio-container {width: 95% !important}
div.gradio-container{
max-width: unset !important;
}
"""
with gr.Blocks(css=css) as app:
with gr.Tabs():
with gr.TabItem("Test"):
gallery = gr.Gallery(label="Generated Images", interactive=True, show_label=True, preview=True, allow_preview=True)
app.launch(inbrowser=True)
```
### Screenshot

### Logs
```shell
N/A
```
### System Info
```shell
Gradio Environment Information:
------------------------------
Operating System: Windows
gradio version: 5.15.0
gradio_client version: 1.7.0
------------------------------------------------
gradio dependencies in your environment:
aiofiles: 23.2.1
anyio: 4.4.0
audioop-lts is not installed.
fastapi: 0.115.4
ffmpy: 0.4.0
gradio-client==1.7.0 is not installed.
httpx: 0.27.0
huggingface-hub: 0.28.1
jinja2: 3.1.3
markupsafe: 2.1.5
numpy: 1.26.3
orjson: 3.10.6
packaging: 24.1
pandas: 2.2.2
pillow: 11.0.0
pydantic: 2.8.2
pydub: 0.25.1
python-multipart: 0.0.19
pyyaml: 6.0.1
ruff: 0.9.4
safehttpx: 0.1.6
semantic-version: 2.10.0
starlette: 0.41.2
tomlkit: 0.12.0
typer: 0.12.3
typing-extensions: 4.12.2
urllib3: 2.2.2
uvicorn: 0.30.5
authlib; extra == 'oauth' is not installed.
itsdangerous; extra == 'oauth' is not installed.
gradio_client dependencies in your environment:
fsspec: 2024.2.0
httpx: 0.27.0
huggingface-hub: 0.28.1
packaging: 24.1
typing-extensions: 4.12.2
websockets: 12.0
```
### Severity
I can work around it | closed | 2025-02-05T21:34:22Z | 2025-02-27T07:03:10Z | https://github.com/gradio-app/gradio/issues/10519 | [
"bug"
] | elismasilva | 4 |
rio-labs/rio | data-visualization | 141 | Handle special characters in URLs | It should be possible to use most characters in page urls (i.e. `rio.Page(url_segment=...)`). We must take care to properly handle (escape/unescape) weird characters in URLs.
- When a user connects and we figure out the current page
- In `Session.navigate_to`
- When updating the URL in the browser
- ... more? | open | 2024-09-15T11:23:10Z | 2024-09-15T11:23:11Z | https://github.com/rio-labs/rio/issues/141 | [] | Aran-Fey | 0 |
Miserlou/Zappa | flask | 1,694 | Zappa Dash Deployment: A GET request to '/' yielded a 504 response code. | <!--- Provide a general summary of the issue in the Title above -->
## Context
I'm trying to deploy my Dash app through Zappa, but it keeps reporting this error.
The app connects to a database located on the same AWS EC2 instance; I have already configured the VPC.
<!--- Provide a more detailed introduction to the issue itself, and why you consider it to be a bug -->
<!--- Also, please make sure that you are running Zappa _from a virtual environment_ and are using Python 2.7/3.6 -->
## Expected Behavior
<!--- Tell us what should happen -->
Fetch the data from a database on the same AWS EC2 instance and generate the Dash app from it.
## Actual Behavior
<!--- Tell us what happens instead -->
## Possible Fix
<!--- Not obligatory, but suggest a fix or reason for the bug -->
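One common cause worth checking (an assumption, not a confirmed diagnosis): API Gateway returns 504 when the Lambda does not answer within its ~30-second integration timeout, which is easy to hit if the database connection is opened at import time or the `slim_handler` cold start is slow. Zappa's `timeout_seconds` setting controls the Lambda-side timeout, but it cannot raise the API Gateway cap, so long-running work should move out of the request path (e.g. lazy-load data on first query). A hypothetical settings fragment:

```json
{
    "dev": {
        "timeout_seconds": 30,
        "memory_size": 1024,
        "keep_warm": true
    }
}
```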
## Steps to Reproduce
<!--- Provide a link to a live example, or an unambiguous set of steps to -->
<!--- reproduce this bug include code to reproduce, if relevant -->
## Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
* Zappa version used:
zappa==0.46.1
* Operating System and Python version:
Mac 10.12.6, Python 3.6
* The output of `pip freeze`:
argcomplete==1.9.3
asn1crypto==0.24.0
base58==1.0.0
bcrypt==3.1.4
boto3==1.9.37
botocore==1.12.37
certifi==2018.10.15
cffi==1.11.5
cfn-flip==1.0.3
chardet==3.0.4
Click==7.0
cryptography==2.3.1
dash==0.29.0
dash-core-components==0.36.0
dash-html-components==0.13.2
dash-renderer==0.14.3
dash-table-experiments==0.6.0
decorator==4.3.0
docutils==0.14
durationpy==0.5
Flask==0.12.2
Flask-Compress==1.4.0
future==0.16.0
hjson==3.0.1
idna==2.7
ipython-genutils==0.2.0
itsdangerous==1.1.0
Jinja2==2.10
jmespath==0.9.3
jsonschema==2.6.0
jupyter-core==4.4.0
kappa==0.6.0
lambda-packages==0.20.0
MarkupSafe==1.0
nbformat==4.4.0
numpy==1.15.4
pandas==0.23.4
paramiko==2.4.2
pipenv==2018.10.13
placebo==0.8.2
plotly==3.3.0
pyasn1==0.4.4
pycparser==2.19
PyMySQL==0.9.2
PyNaCl==1.3.0
python-dateutil==2.6.1
python-slugify==1.2.4
pytz==2018.7
PyYAML==3.12
requests==2.20.0
retrying==1.3.3
s3transfer==0.1.13
six==1.11.0
sshtunnel==0.1.4
toml==0.10.0
tqdm==4.19.1
traitlets==4.3.2
troposphere==2.3.3
Unidecode==1.0.22
urllib3==1.24.1
virtualenv==16.1.0
virtualenv-clone==0.4.0
Werkzeug==0.14.1
wsgi-request-logger==0.4.6
zappa==0.46.1
* Link to your project (optional):
* Your `zappa_settings.py`:
```json
{
    "dev": {
        "app_function": "app.server",
        "profile_name": "eb-cli",
        "project_name": "test2",
        "runtime": "python3.6",
        "s3_bucket": "zappa-churn",
        "aws_region": "us-west-2",
        "slim_handler": "True",
        "environment_variables": {
            "host": "XXXXXXXXX.us-west-2.rds.amazonaws.com",
            "user": "XXXX",
            "password": "XXXX"
        }
    }
}
```
 | open | 2018-11-07T23:31:23Z | 2018-11-19T15:52:16Z | https://github.com/Miserlou/Zappa/issues/1694 | [] | amy09 | 1 |
PokemonGoF/PokemonGo-Bot | automation | 5,730 | NotLoggedInException | ### Expected Behavior
Working without errors.
### Actual Behavior
```
[2016-09-28 10:49:12] [pgoapi.pgoapi] [ERROR] Request for new Access Token failed! Logged out...
[2016-09-28 10:49:13] [PokemonGoBot] [INFO] Not logged in, reconnecting in 900 seconds
Exception in thread Thread-898:
Traceback (most recent call last):
  File "/usr/lib/python2.7/threading.py", line 810, in __bootstrap_inner
    self.run()
  File "/usr/lib/python2.7/threading.py", line 1082, in run
    self.function(*self.args, **self.kwargs)
  File "/home/taken/PokemonGo-Bot/pokemongo_bot/__init__.py", line 1335, in heartbeat
    responses = request.call()
  File "/home/taken/PokemonGo-Bot/pokemongo_bot/api_wrapper.py", line 192, in call
    if not self.can_call():
  File "/home/taken/PokemonGo-Bot/pokemongo_bot/api_wrapper.py", line 151, in can_call
    raise NotLoggedInException()
NotLoggedInException
```
### Steps to Reproduce
1. Run the bot.
2. Errors appear every few minutes.
### Other Information
OS: Debian
Branch: Master
Git Commit: fd495448a6393e886a74d846ad086e0fed45f986
Python Version: Python 2.7.9
| open | 2016-09-29T09:18:02Z | 2016-10-01T13:10:54Z | https://github.com/PokemonGoF/PokemonGo-Bot/issues/5730 | [] | takenek | 4 |
iperov/DeepFaceLab | machine-learning | 512 | OOM but i got enough memory | Running trainer.
Loading model...
Model first run.
Enable autobackup? (y/n ?:help skip:n) : y
Write preview history? (y/n ?:help skip:n) : n
Target iteration (skip:unlimited/default) : n
0
Batch_size (?:help skip:0) : 0
Feed faces to network sorted by yaw? (y/n ?:help skip:n) : n
Flip faces randomly? (y/n ?:help skip:y) : y
Src face scale modifier % ( -30...30, ?:help skip:0) : 0
Use lightweight autoencoder? (y/n, ?:help skip:n) : n
Use pixel loss? (y/n, ?:help skip: n/default ) : n
Using TensorFlow backend.
Loading: 100%|########################################################################| 65/65 [00:00<00:00, 181.68it/s]
Loading: 100%|####################################################################| 3688/3688 [00:16<00:00, 228.42it/s]
============== Model Summary ===============
== ==
== Model name: H64 ==
== ==
== Current iteration: 0 ==
== ==
==------------ Model Options -------------==
== ==
== autobackup: True ==
== sort_by_yaw: False ==
== random_flip: True ==
== lighter_ae: False ==
== pixel_loss: False ==
== batch_size: 4 ==
== ==
==-------------- Running On --------------==
== ==
== Device index: 0 ==
== Name: GeForce GTX 1050 Ti ==
== VRAM: 4.00GB ==
== ==
============================================
Starting. Press "Enter" to stop training and save model.
[03:00:51][#000001][10.98s][2.7535][2.8256]
Error: OOM when allocating tensor with shape[2048,1024,3,3] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[{{node training/Adam/gradients/model_1_1/conv2d_5/convolution_grad/Conv2DBackpropFilter}} = Conv2DBackpropFilter[T=DT_FLOAT, _class=["loc:@training/Adam/gradients/AddN_25"], data_format="NCHW", dilations=[1, 1, 1, 1], padding="SAME", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true, _device="/job:localhost/replica:0/task:0/device:GPU:0"](training/Adam/gradients/model_1_1/conv2d_5/convolution_grad/Conv2DBackpropFilter-0-TransposeNHWCToNCHW-LayoutOptimizer, ConstantFolding/training/Adam/gradients/model_1_1/conv2d_5/convolution_grad/ShapeN-matshapes-1, training/Adam/gradients/AddN_22)]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
Traceback (most recent call last):
File "C:\Users\*****\Downloads\deepfacelab\DeepFaceLab_CUDA_9.2_SSE\_internal\DeepFaceLab\mainscripts\Trainer.py", line 109, in trainerThread
iter, iter_time = model.train_one_iter()
File "C:\Users\*****\Downloads\deepfacelab\DeepFaceLab_CUDA_9.2_SSE\_internal\DeepFaceLab\models\ModelBase.py", line 525, in train_one_iter
losses = self.onTrainOneIter(sample, self.generator_list)
File "C:\Users\*****\Downloads\deepfacelab\DeepFaceLab_CUDA_9.2_SSE\_internal\DeepFaceLab\models\Model_H64\Model.py", line 89, in onTrainOneIter
total, loss_src_bgr, loss_src_mask, loss_dst_bgr, loss_dst_mask = self.ae.train_on_batch( [warped_src, target_src_full_mask, warped_dst, target_dst_full_mask], [target_src, target_src_full_mask, target_dst, target_dst_full_mask] )
File "C:\Users\*****\Downloads\deepfacelab\DeepFaceLab_CUDA_9.2_SSE\_internal\python-3.6.8\lib\site-packages\keras\engine\training.py", line 1217, in train_on_batch
outputs = self.train_function(ins)
File "C:\Users\*****\Downloads\deepfacelab\DeepFaceLab_CUDA_9.2_SSE\_internal\python-3.6.8\lib\site-packages\keras\backend\tensorflow_backend.py", line 2715, in __call__
return self._call(inputs)
File "C:\Users\*****\Downloads\deepfacelab\DeepFaceLab_CUDA_9.2_SSE\_internal\python-3.6.8\lib\site-packages\keras\backend\tensorflow_backend.py", line 2675, in _call
fetched = self._callable_fn(*array_vals)
File "C:\Users\*****\Downloads\deepfacelab\DeepFaceLab_CUDA_9.2_SSE\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1439, in __call__
run_metadata_ptr)
File "C:\Users\*****\Downloads\deepfacelab\DeepFaceLab_CUDA_9.2_SSE\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\errors_impl.py", line 528, in __exit__
c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[2048,1024,3,3] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[{{node training/Adam/gradients/model_1_1/conv2d_5/convolution_grad/Conv2DBackpropFilter}} = Conv2DBackpropFilter[T=DT_FLOAT, _class=["loc:@training/Adam/gradients/AddN_25"], data_format="NCHW", dilations=[1, 1, 1, 1], padding="SAME", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true, _device="/job:localhost/replica:0/task:0/device:GPU:0"](training/Adam/gradients/model_1_1/conv2d_5/convolution_grad/Conv2DBackpropFilter-0-TransposeNHWCToNCHW-LayoutOptimizer, ConstantFolding/training/Adam/gradients/model_1_1/conv2d_5/convolution_grad/ShapeN-matshapes-1, training/Adam/gradients/AddN_22)]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
Done.
Press any key to continue . . . | closed | 2019-12-06T01:05:06Z | 2019-12-06T17:34:48Z | https://github.com/iperov/DeepFaceLab/issues/512 | [] | N1el132 | 2 |
skypilot-org/skypilot | data-science | 4,192 | [Core][Tests] Several smoke test failed on latest master | <!-- Describe the bug report / feature request here -->
`test_cancel_azure`, `test_gcp_force_enable_external_ips`, and `test_managed_jobs_cancellation_gcp` failed on the latest master.
<details>
<summary>Error log for `test_cancel_azure`:</summary>
<pre><code class="language-bash">
+ sky launch -c t-cancel-azure-f6 examples/resnet_app.yaml --cloud azure -y -d
I 10-26 12:39:50 optimizer.py:881] [1mConsidered resources (1 node):[0m
I 10-26 12:39:50 optimizer.py:951] -----------------------------------------------------------------------------------------------
I 10-26 12:39:50 optimizer.py:951] CLOUD INSTANCE vCPUs Mem(GB) ACCELERATORS REGION/ZONE COST ($) CHOSEN
I 10-26 12:39:50 optimizer.py:951] -----------------------------------------------------------------------------------------------
I 10-26 12:39:50 optimizer.py:951] Azure Standard_NC6s_v3 6 112 V100:1 eastus 3.06 [32m ✔[0m
I 10-26 12:39:50 optimizer.py:951] -----------------------------------------------------------------------------------------------
Running task on cluster t-cancel-azure-f6...
D 10-26 12:39:50 cloud_vm_ray_backend.py:4423] cluster_ever_up: False
D 10-26 12:39:50 cloud_vm_ray_backend.py:4424] record: None
D 10-26 12:39:51 azure_catalog.py:203] Refreshing the image catalog and trying again.
[?25hTraceback (most recent call last):
File "/home/memory/install/miniconda3/envs/sky/bin/sky", line 8, in <module>
sys.exit(cli())
File "/home/memory/install/miniconda3/envs/sky/lib/python3.9/site-packages/click/core.py", line 1157, in __call__
return self.main(*args, **kwargs)
File "/home/memory/install/miniconda3/envs/sky/lib/python3.9/site-packages/click/core.py", line 1078, in main
rv = self.invoke(ctx)
File "/home/memory/skypilot/sky/utils/common_utils.py", line 366, in _record
return f(*args, **kwargs)
File "/home/memory/skypilot/sky/cli.py", line 818, in invoke
return super().invoke(ctx)
File "/home/memory/install/miniconda3/envs/sky/lib/python3.9/site-packages/click/core.py", line 1688, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/home/memory/install/miniconda3/envs/sky/lib/python3.9/site-packages/click/core.py", line 1434, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/home/memory/install/miniconda3/envs/sky/lib/python3.9/site-packages/click/core.py", line 783, in invoke
return __callback(*args, **kwargs)
File "/home/memory/skypilot/sky/utils/common_utils.py", line 386, in _record
return f(*args, **kwargs)
File "/home/memory/skypilot/sky/cli.py", line 1131, in launch
_launch_with_confirm(task,
File "/home/memory/skypilot/sky/cli.py", line 609, in _launch_with_confirm
sky.launch(
File "/home/memory/skypilot/sky/utils/common_utils.py", line 386, in _record
return f(*args, **kwargs)
File "/home/memory/skypilot/sky/utils/common_utils.py", line 386, in _record
return f(*args, **kwargs)
File "/home/memory/skypilot/sky/execution.py", line 454, in launch
return _execute(
File "/home/memory/skypilot/sky/execution.py", line 280, in _execute
handle = backend.provision(task,
File "/home/memory/skypilot/sky/utils/common_utils.py", line 386, in _record
return f(*args, **kwargs)
File "/home/memory/skypilot/sky/utils/common_utils.py", line 366, in _record
return f(*args, **kwargs)
File "/home/memory/skypilot/sky/backends/backend.py", line 60, in provision
return self._provision(task, to_provision, dryrun, stream_logs,
File "/home/memory/skypilot/sky/backends/cloud_vm_ray_backend.py", line 2825, in _provision
config_dict = retry_provisioner.provision_with_retries(
File "/home/memory/skypilot/sky/utils/common_utils.py", line 386, in _record
return f(*args, **kwargs)
File "/home/memory/skypilot/sky/backends/cloud_vm_ray_backend.py", line 1988, in provision_with_retries
config_dict = self._retry_zones(
File "/home/memory/skypilot/sky/backends/cloud_vm_ray_backend.py", line 1399, in _retry_zones
config_dict = backend_utils.write_cluster_config(
File "/home/memory/skypilot/sky/utils/common_utils.py", line 386, in _record
return f(*args, **kwargs)
File "/home/memory/skypilot/sky/backends/backend_utils.py", line 801, in write_cluster_config
resources_vars = to_provision.make_deploy_variables(
File "/home/memory/skypilot/sky/resources.py", line 1033, in make_deploy_variables
cloud_specific_variables = self.cloud.make_deploy_resources_variables(
File "/home/memory/skypilot/sky/clouds/azure.py", line 333, in make_deploy_resources_variables
if image_id.startswith(
AttributeError: 'NoneType' object has no attribute 'startswith'
D 10-26 12:39:52 skypilot_config.py:228] Using config path: /home/memory/.sky/config.yaml
D 10-26 12:39:52 skypilot_config.py:233] Config loaded:
D 10-26 12:39:52 skypilot_config.py:233] {'kubernetes': {'remote_identity': 'SERVICE_ACCOUNT'},
D 10-26 12:39:52 skypilot_config.py:233] 'serve': {'controller': {'resources': {'cloud': 'aws'}}}}
D 10-26 12:39:52 skypilot_config.py:245] Config syntax check passed.
Cluster t-cancel-azure-f6 not found.
Cluster(s) not found (tip: see `sky status`).
[31mFailed[0m.
Reason: sky launch -c t-cancel-azure-f6 examples/resnet_app.yaml --cloud azure -y -d
Log: less /tmp/azure-cancel-task-ukwza7py.log
</code></pre></details>
This seems to have been introduced by a PR that changed the default image two days ago.
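A minimal sketch of the kind of defensive fix the failing Azure `make_deploy_resources_variables` path may need (hypothetical helper and `"skypilot:"` prefix; the real resolution could instead restore a non-None default `image_id`):

```python
def is_skypilot_image(image_id):
    """Return True only for SkyPilot-tagged image ids; tolerate a None
    image_id instead of raising AttributeError on .startswith()."""
    return image_id is not None and image_id.startswith("skypilot:")


print(is_skypilot_image(None))            # False
print(is_skypilot_image("skypilot:v100")) # True
```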
<details>
<summary>Error log for `test_gcp_force_enable_external_ips`</summary>
<pre><code class="language-bash">
+ sky launch -y -c t-gcp-force-enable-ex-es-8c --cloud gcp --cpus 2 tests/test_yamls/minimal.yaml
D 10-26 12:11:34 skypilot_config.py:228] Using config path: tests/test_yamls/force_enable_external_ips_config.yaml
D 10-26 12:11:34 skypilot_config.py:233] Config loaded:
D 10-26 12:11:34 skypilot_config.py:233] {'gcp': {'force_enable_external_ips': True,
D 10-26 12:11:34 skypilot_config.py:233] 'use_internal_ips': True,
D 10-26 12:11:34 skypilot_config.py:233] 'vpc_name': 'default'}}
D 10-26 12:11:34 skypilot_config.py:245] Config syntax check passed.
Task from YAML spec: tests/test_yamls/minimal.yaml
I 10-26 12:11:37 optimizer.py:881] [1mConsidered resources (1 node):[0m
I 10-26 12:11:37 optimizer.py:951] ----------------------------------------------------------------------------------------------
I 10-26 12:11:37 optimizer.py:951] CLOUD INSTANCE vCPUs Mem(GB) ACCELERATORS REGION/ZONE COST ($) CHOSEN
I 10-26 12:11:37 optimizer.py:951] ----------------------------------------------------------------------------------------------
I 10-26 12:11:37 optimizer.py:951] GCP n2-standard-2 2 8 - us-central1-a 0.10 [32m ✔[0m
I 10-26 12:11:37 optimizer.py:951] ----------------------------------------------------------------------------------------------
Running task on cluster t-gcp-force-enable-ex-es-8c...
D 10-26 12:11:37 cloud_vm_ray_backend.py:4408] cluster_ever_up: False
D 10-26 12:11:37 cloud_vm_ray_backend.py:4409] record: None
D 10-26 12:11:39 skypilot_config.py:146] User config: gcp.force_enable_external_ips -> True
D 10-26 12:11:39 backend_utils.py:864] Using ssh_proxy_command: None
D 10-26 12:11:39 skypilot_config.py:146] User config: gcp.use_internal_ips -> True
D 10-26 12:11:39 skypilot_config.py:146] User config: gcp.vpc_name -> default
I 10-26 12:11:40 cloud_vm_ray_backend.py:1505] [0m⚙︎ Launching on GCP us-central1[0m (us-central1-a).
D 10-26 12:11:40 provisioner.py:135] SkyPilot version: 1.0.0-dev0; commit: 0e915d3430d8027aa40b766605bb13c889ffc62f
D 10-26 12:11:40 provisioner.py:137]
D 10-26 12:11:40 provisioner.py:137]
D 10-26 12:11:40 provisioner.py:137] ==================== Provisioning ====================
D 10-26 12:11:40 provisioner.py:137]
D 10-26 12:11:40 provisioner.py:138] Provision config:
D 10-26 12:11:40 provisioner.py:138] {
D 10-26 12:11:40 provisioner.py:138] "provider_config": {
D 10-26 12:11:40 provisioner.py:138] "type": "external",
D 10-26 12:11:40 provisioner.py:138] "module": "sky.provision.gcp",
D 10-26 12:11:40 provisioner.py:138] "region": "us-central1",
D 10-26 12:11:40 provisioner.py:138] "availability_zone": "us-central1-a",
D 10-26 12:11:40 provisioner.py:138] "cache_stopped_nodes": true,
D 10-26 12:11:40 provisioner.py:138] "project_id": "skypilot-375900",
D 10-26 12:11:40 provisioner.py:138] "vpc_name": "default",
D 10-26 12:11:40 provisioner.py:138] "use_internal_ips": true,
D 10-26 12:11:40 provisioner.py:138] "force_enable_external_ips": true,
D 10-26 12:11:40 provisioner.py:138] "disable_launch_config_check": true,
D 10-26 12:11:40 provisioner.py:138] "use_managed_instance_group": false
D 10-26 12:11:40 provisioner.py:138] },
D 10-26 12:11:40 provisioner.py:138] "authentication_config": {
D 10-26 12:11:40 provisioner.py:138] "ssh_user": "gcpuser",
D 10-26 12:11:40 provisioner.py:138] "ssh_private_key": "~/.ssh/sky-key"
D 10-26 12:11:40 provisioner.py:138] },
D 10-26 12:11:40 provisioner.py:138] "docker_config": {},
D 10-26 12:11:40 provisioner.py:138] "node_config": {
D 10-26 12:11:40 provisioner.py:138] "labels": {
D 10-26 12:11:40 provisioner.py:138] "skypilot-user": "memory",
D 10-26 12:11:40 provisioner.py:138] "use-managed-instance-group": "0"
D 10-26 12:11:40 provisioner.py:138] },
D 10-26 12:11:40 provisioner.py:138] "machineType": "n2-standard-2",
D 10-26 12:11:40 provisioner.py:138] "disks": [
D 10-26 12:11:40 provisioner.py:138] {
D 10-26 12:11:40 provisioner.py:138] "boot": true,
D 10-26 12:11:40 provisioner.py:138] "autoDelete": true,
D 10-26 12:11:40 provisioner.py:138] "type": "PERSISTENT",
D 10-26 12:11:40 provisioner.py:138] "initializeParams": {
D 10-26 12:11:40 provisioner.py:138] "diskSizeGb": 256,
D 10-26 12:11:40 provisioner.py:138] "sourceImage": "projects/sky-dev-465/global/images/skypilot-gcp-cpu-ubuntu-20241017184242",
D 10-26 12:11:40 provisioner.py:138] "diskType": "zones/us-central1-a/diskTypes/pd-balanced"
D 10-26 12:11:40 provisioner.py:138] }
D 10-26 12:11:40 provisioner.py:138] }
D 10-26 12:11:40 provisioner.py:138] ],
D 10-26 12:11:40 provisioner.py:138] "metadata": {
D 10-26 12:11:40 provisioner.py:138] "items": [
D 10-26 12:11:40 provisioner.py:138] {
D 10-26 12:11:40 provisioner.py:138] "key": "ssh-keys",
D 10-26 12:11:40 provisioner.py:138] "value": "gcpuser:ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDRdTMAFJrCxdseY/K9xfnUO2LoR/h5V0B1zT87foT31tHVSRebgtzuMjKJ/gQf7wAHgZPRUJOZW2OAi1IYgSLyPHIbP/C2kq9vMv/JUiAW7thN4bBPOJL6qtNo1EGzQJ727yvCHSRwgobXqj3B9PvoUg31RszKNN0DH7owic1mwltYVmzVdM2wmYXtzHI4eKL+XRCuh1630P4fgwiOgjt2zRDvo1BG9IW9IWrasOFQhBHMSzu3iTN2TWHmwu2gqvLrn5avcclfwDp2cM1k4F5iW54f9Bsz+8ghnRH0vLASmXF9Dw/XKgGf1EQKBC/hObIDjUA+ceHX0v8FsL6/LX+b"
D 10-26 12:11:40 provisioner.py:138] }
D 10-26 12:11:40 provisioner.py:138] ]
D 10-26 12:11:40 provisioner.py:138] }
D 10-26 12:11:40 provisioner.py:138] },
D 10-26 12:11:40 provisioner.py:138] "count": 1,
D 10-26 12:11:40 provisioner.py:138] "tags": {},
D 10-26 12:11:40 provisioner.py:138] "resume_stopped_nodes": true,
D 10-26 12:11:40 provisioner.py:138] "ports_to_open_on_launch": null
D 10-26 12:11:40 provisioner.py:138] }
D 10-26 12:11:43 skypilot_config.py:146] User config: gcp.vpc_name -> default
D 10-26 12:12:03 provisioner.py:69]
D 10-26 12:12:03 provisioner.py:69] Waiting for instances of 't-gcp-force-enable-ex-es-8c' to be ready...
D 10-26 12:12:04 provisioner.py:89] Instances of 't-gcp-force-enable-ex-es-8c' are ready after 0 retries.
D 10-26 12:12:04 provisioner.py:92]
D 10-26 12:12:04 provisioner.py:92] Provisioning 't-gcp-force-enable-ex-es-8c' took 23.60 seconds.
D 10-26 12:12:05 skypilot_config.py:146] User config: gcp.force_enable_external_ips -> True
D 10-26 12:12:05 provisioner.py:575]
D 10-26 12:12:05 provisioner.py:575]
D 10-26 12:12:05 provisioner.py:575] ==================== System Setup After Provision ====================
D 10-26 12:12:05 provisioner.py:575]
D 10-26 12:12:08 provisioner.py:411] Provision record:
D 10-26 12:12:08 provisioner.py:411] {
D 10-26 12:12:08 provisioner.py:411] "provider_name": "gcp",
D 10-26 12:12:08 provisioner.py:411] "region": "us-central1",
D 10-26 12:12:08 provisioner.py:411] "zone": "us-central1-a",
D 10-26 12:12:08 provisioner.py:411] "cluster_name": "t-gcp-force-enable-ex-es-8c-402b",
D 10-26 12:12:08 provisioner.py:411] "head_instance_id": "t-gcp-force-enable-ex-es-8c-402b-head-es5u8e28-compute",
D 10-26 12:12:08 provisioner.py:411] "resumed_instance_ids": [],
D 10-26 12:12:08 provisioner.py:411] "created_instance_ids": [
D 10-26 12:12:08 provisioner.py:411] "t-gcp-force-enable-ex-es-8c-402b-head-es5u8e28-compute"
D 10-26 12:12:08 provisioner.py:411] ]
D 10-26 12:12:08 provisioner.py:411] }
D 10-26 12:12:08 provisioner.py:411] Cluster info:
D 10-26 12:12:08 provisioner.py:411] {
D 10-26 12:12:08 provisioner.py:411] "instances": {
D 10-26 12:12:08 provisioner.py:411] "t-gcp-force-enable-ex-es-8c-402b-head-es5u8e28-compute": [
D 10-26 12:12:08 provisioner.py:411] {
D 10-26 12:12:08 provisioner.py:411] "instance_id": "t-gcp-force-enable-ex-es-8c-402b-head-es5u8e28-compute",
D 10-26 12:12:08 provisioner.py:411] "internal_ip": "10.128.0.10",
D 10-26 12:12:08 provisioner.py:411] "external_ip": "34.136.220.79",
D 10-26 12:12:08 provisioner.py:411] "tags": {
D 10-26 12:12:08 provisioner.py:411] "skypilot-user": "memory",
D 10-26 12:12:08 provisioner.py:411] "use-managed-instance-group": "0",
D 10-26 12:12:08 provisioner.py:411] "ray-cluster-name": "t-gcp-force-enable-ex-es-8c-402b",
D 10-26 12:12:08 provisioner.py:411] "skypilot-cluster-name": "t-gcp-force-enable-ex-es-8c-402b",
D 10-26 12:12:08 provisioner.py:411] "ray-node-type": "head",
D 10-26 12:12:08 provisioner.py:411] "skypilot-head-node": "1"
D 10-26 12:12:08 provisioner.py:411] },
D 10-26 12:12:08 provisioner.py:411] "ssh_port": 22
D 10-26 12:12:08 provisioner.py:411] }
D 10-26 12:12:08 provisioner.py:411] ]
D 10-26 12:12:08 provisioner.py:411] },
D 10-26 12:12:08 provisioner.py:411] "head_instance_id": "t-gcp-force-enable-ex-es-8c-402b-head-es5u8e28-compute",
D 10-26 12:12:08 provisioner.py:411] "provider_name": "gcp",
D 10-26 12:12:08 provisioner.py:411] "provider_config": {
D 10-26 12:12:08 provisioner.py:411] "type": "external",
D 10-26 12:12:08 provisioner.py:411] "module": "sky.provision.gcp",
D 10-26 12:12:08 provisioner.py:411] "region": "us-central1",
D 10-26 12:12:08 provisioner.py:411] "availability_zone": "us-central1-a",
D 10-26 12:12:08 provisioner.py:411] "cache_stopped_nodes": true,
D 10-26 12:12:08 provisioner.py:411] "project_id": "skypilot-375900",
D 10-26 12:12:08 provisioner.py:411] "vpc_name": "default",
D 10-26 12:12:08 provisioner.py:411] "use_internal_ips": true,
D 10-26 12:12:08 provisioner.py:411] "force_enable_external_ips": true,
D 10-26 12:12:08 provisioner.py:411] "disable_launch_config_check": true,
D 10-26 12:12:08 provisioner.py:411] "use_managed_instance_group": false
D 10-26 12:12:08 provisioner.py:411] },
D 10-26 12:12:08 provisioner.py:411] "docker_user": null,
D 10-26 12:12:08 provisioner.py:411] "ssh_user": null,
D 10-26 12:12:08 provisioner.py:411] "custom_ray_options": null
D 10-26 12:12:08 provisioner.py:411] }
D 10-26 12:12:08 provisioner.py:436]
D 10-26 12:12:08 provisioner.py:436] Waiting for SSH to be available for 't-gcp-force-enable-ex-es-8c' ...
D 10-26 12:12:09 provisioner.py:304] Waiting for SSH to 10.128.0.10. Try: ssh -T -i '~/.ssh/sky-key' gcpuser@10.128.0.10 -p 22 -o StrictHostKeyChecking=no -o PasswordAuthentication=no -o ConnectTimeout=10s -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o AddKeysToAgent=yes -o ExitOnForwardFailure=yes -o ServerAliveInterval=5 -o ServerAliveCountMax=3 uptime. Timeout: SSH connection to 10.128.0.10 is not ready.
D 10-26 12:12:09 provisioner.py:380] Retrying in 1 second...
D 10-26 12:12:11 provisioner.py:304] Waiting for SSH to 10.128.0.10. Try: ssh -T -i '~/.ssh/sky-key' gcpuser@10.128.0.10 -p 22 -o StrictHostKeyChecking=no -o PasswordAuthentication=no -o ConnectTimeout=10s -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o AddKeysToAgent=yes -o ExitOnForwardFailure=yes -o ServerAliveInterval=5 -o ServerAliveCountMax=3 uptime. Timeout: SSH connection to 10.128.0.10 is not ready.
D 10-26 12:12:11 provisioner.py:380] Retrying in 1 second...
D 10-26 12:12:13 provisioner.py:304] Waiting for SSH to 10.128.0.10. Try: ssh -T -i '~/.ssh/sky-key' gcpuser@10.128.0.10 -p 22 -o StrictHostKeyChecking=no -o PasswordAuthentication=no -o ConnectTimeout=10s -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o AddKeysToAgent=yes -o ExitOnForwardFailure=yes -o ServerAliveInterval=5 -o ServerAliveCountMax=3 uptime. Timeout: SSH connection to 10.128.0.10 is not ready.
D 10-26 12:12:13 provisioner.py:380] Retrying in 1 second...
D 10-26 12:12:15 provisioner.py:304] Waiting for SSH to 10.128.0.10. Try: ssh -T -i '~/.ssh/sky-key' gcpuser@10.128.0.10 -p 22 -o StrictHostKeyChecking=no -o PasswordAuthentication=no -o ConnectTimeout=10s -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o AddKeysToAgent=yes -o ExitOnForwardFailure=yes -o ServerAliveInterval=5 -o ServerAliveCountMax=3 uptime. Timeout: SSH connection to 10.128.0.10 is not ready.
D 10-26 12:12:15 provisioner.py:380] Retrying in 1 second...
D 10-26 12:12:17 provisioner.py:304] Waiting for SSH to 10.128.0.10. Try: ssh -T -i '~/.ssh/sky-key' gcpuser@10.128.0.10 -p 22 -o StrictHostKeyChecking=no -o PasswordAuthentication=no -o ConnectTimeout=10s -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o AddKeysToAgent=yes -o ExitOnForwardFailure=yes -o ServerAliveInterval=5 -o ServerAliveCountMax=3 uptime. Timeout: SSH connection to 10.128.0.10 is not ready.
D 10-26 12:12:17 provisioner.py:380] Retrying in 1 second...
D 10-26 12:12:19 provisioner.py:304] Waiting for SSH to 10.128.0.10. Try: ssh -T -i '~/.ssh/sky-key' gcpuser@10.128.0.10 -p 22 -o StrictHostKeyChecking=no -o PasswordAuthentication=no -o ConnectTimeout=10s -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o AddKeysToAgent=yes -o ExitOnForwardFailure=yes -o ServerAliveInterval=5 -o ServerAliveCountMax=3 uptime. Timeout: SSH connection to 10.128.0.10 is not ready.
D 10-26 12:12:19 provisioner.py:380] Retrying in 1 second...
D 10-26 12:12:21 provisioner.py:304] Waiting for SSH to 10.128.0.10. Try: ssh -T -i '~/.ssh/sky-key' gcpuser@10.128.0.10 -p 22 -o StrictHostKeyChecking=no -o PasswordAuthentication=no -o ConnectTimeout=10s -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o AddKeysToAgent=yes -o ExitOnForwardFailure=yes -o ServerAliveInterval=5 -o ServerAliveCountMax=3 uptime. Timeout: SSH connection to 10.128.0.10 is not ready.
D 10-26 12:12:21 provisioner.py:380] Retrying in 1 second...
D 10-26 12:12:23 provisioner.py:304] Waiting for SSH to 10.128.0.10. Try: ssh -T -i '~/.ssh/sky-key' gcpuser@10.128.0.10 -p 22 -o StrictHostKeyChecking=no -o PasswordAuthentication=no -o ConnectTimeout=10s -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o AddKeysToAgent=yes -o ExitOnForwardFailure=yes -o ServerAliveInterval=5 -o ServerAliveCountMax=3 uptime. Timeout: SSH connection to 10.128.0.10 is not ready.
D 10-26 12:12:23 provisioner.py:380] Retrying in 1 second...
D 10-26 12:12:25 provisioner.py:304] Waiting for SSH to 10.128.0.10. Try: ssh -T -i '~/.ssh/sky-key' gcpuser@10.128.0.10 -p 22 -o StrictHostKeyChecking=no -o PasswordAuthentication=no -o ConnectTimeout=10s -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o AddKeysToAgent=yes -o ExitOnForwardFailure=yes -o ServerAliveInterval=5 -o ServerAliveCountMax=3 uptime. Timeout: SSH connection to 10.128.0.10 is not ready.
D 10-26 12:12:25 provisioner.py:380] Retrying in 1 second...
D 10-26 12:12:27 provisioner.py:304] Waiting for SSH to 10.128.0.10. Try: ssh -T -i '~/.ssh/sky-key' gcpuser@10.128.0.10 -p 22 -o StrictHostKeyChecking=no -o PasswordAuthentication=no -o ConnectTimeout=10s -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o AddKeysToAgent=yes -o ExitOnForwardFailure=yes -o ServerAliveInterval=5 -o ServerAliveCountMax=3 uptime. Timeout: SSH connection to 10.128.0.10 is not ready.
D 10-26 12:12:27 provisioner.py:380] Retrying in 1 second...
D 10-26 12:12:29 provisioner.py:304] Waiting for SSH to 10.128.0.10. Try: ssh -T -i '~/.ssh/sky-key' gcpuser@10.128.0.10 -p 22 -o StrictHostKeyChecking=no -o PasswordAuthentication=no -o ConnectTimeout=10s -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o AddKeysToAgent=yes -o ExitOnForwardFailure=yes -o ServerAliveInterval=5 -o ServerAliveCountMax=3 uptime. Timeout: SSH connection to 10.128.0.10 is not ready.
D 10-26 12:12:29 provisioner.py:380] Retrying in 1 second...
D 10-26 12:12:31 provisioner.py:304] Waiting for SSH to 10.128.0.10. Try: ssh -T -i '~/.ssh/sky-key' gcpuser@10.128.0.10 -p 22 -o StrictHostKeyChecking=no -o PasswordAuthentication=no -o ConnectTimeout=10s -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o AddKeysToAgent=yes -o ExitOnForwardFailure=yes -o ServerAliveInterval=5 -o ServerAliveCountMax=3 uptime. Timeout: SSH connection to 10.128.0.10 is not ready.
D 10-26 12:22:08 provisioner.py:380] Retrying in 1 second...
D 10-26 12:22:10 provisioner.py:304] Waiting for SSH to 10.128.0.10. Try: ssh -T -i '~/.ssh/sky-key' gcpuser@10.128.0.10 -p 22 -o StrictHostKeyChecking=no -o PasswordAuthentication=no -o ConnectTimeout=10s -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o AddKeysToAgent=yes -o ExitOnForwardFailure=yes -o ServerAliveInterval=5 -o ServerAliveCountMax=3 uptime. Timeout: SSH connection to 10.128.0.10 is not ready.
# Keeps retrying forever
E 10-26 12:22:10 provisioner.py:582] ⨯ Failed to set up SkyPilot runtime on cluster. View logs at: ~/sky_logs/sky-2024-10-26-12-11-35-464541/provision.log
D 10-26 12:22:10 provisioner.py:586] Stacktrace:
D 10-26 12:22:10 provisioner.py:586] Traceback (most recent call last):
D 10-26 12:22:10 provisioner.py:586] File "/home/memory/skypilot/sky/provision/provisioner.py", line 576, in post_provision_runtime_setup
D 10-26 12:22:10 provisioner.py:586] return _post_provision_setup(cloud_name,
D 10-26 12:22:10 provisioner.py:586] File "/home/memory/skypilot/sky/provision/provisioner.py", line 438, in _post_provision_setup
D 10-26 12:22:10 provisioner.py:586] wait_for_ssh(cluster_info, ssh_credentials)
D 10-26 12:22:10 provisioner.py:586] File "/home/memory/skypilot/sky/provision/provisioner.py", line 387, in wait_for_ssh
D 10-26 12:22:10 provisioner.py:586] _retry_ssh_thread((ip, ssh_port))
D 10-26 12:22:10 provisioner.py:586] File "/home/memory/skypilot/sky/provision/provisioner.py", line 377, in _retry_ssh_thread
D 10-26 12:22:10 provisioner.py:586] raise RuntimeError(
D 10-26 12:22:10 provisioner.py:586] RuntimeError: Failed to SSH to 10.128.0.10 after timeout 600s, with Timeout: SSH connection to 10.128.0.10 is not ready.
D 10-26 12:22:10 provisioner.py:586]
Traceback (most recent call last):
File "/home/memory/install/miniconda3/envs/sky/bin/sky", line 8, in <module>
sys.exit(cli())
File "/home/memory/install/miniconda3/envs/sky/lib/python3.9/site-packages/click/core.py", line 1157, in __call__
return self.main(*args, **kwargs)
File "/home/memory/install/miniconda3/envs/sky/lib/python3.9/site-packages/click/core.py", line 1078, in main
rv = self.invoke(ctx)
File "/home/memory/skypilot/sky/utils/common_utils.py", line 366, in _record
return f(*args, **kwargs)
File "/home/memory/skypilot/sky/cli.py", line 818, in invoke
return super().invoke(ctx)
File "/home/memory/install/miniconda3/envs/sky/lib/python3.9/site-packages/click/core.py", line 1688, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/home/memory/install/miniconda3/envs/sky/lib/python3.9/site-packages/click/core.py", line 1434, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/home/memory/install/miniconda3/envs/sky/lib/python3.9/site-packages/click/core.py", line 783, in invoke
return __callback(*args, **kwargs)
File "/home/memory/skypilot/sky/utils/common_utils.py", line 386, in _record
return f(*args, **kwargs)
File "/home/memory/skypilot/sky/cli.py", line 1131, in launch
_launch_with_confirm(task,
File "/home/memory/skypilot/sky/cli.py", line 609, in _launch_with_confirm
sky.launch(
File "/home/memory/skypilot/sky/utils/common_utils.py", line 386, in _record
return f(*args, **kwargs)
File "/home/memory/skypilot/sky/utils/common_utils.py", line 386, in _record
return f(*args, **kwargs)
File "/home/memory/skypilot/sky/execution.py", line 454, in launch
return _execute(
File "/home/memory/skypilot/sky/execution.py", line 280, in _execute
handle = backend.provision(task,
File "/home/memory/skypilot/sky/utils/common_utils.py", line 386, in _record
return f(*args, **kwargs)
File "/home/memory/skypilot/sky/utils/common_utils.py", line 366, in _record
return f(*args, **kwargs)
File "/home/memory/skypilot/sky/backends/backend.py", line 60, in provision
return self._provision(task, to_provision, dryrun, stream_logs,
File "/home/memory/skypilot/sky/backends/cloud_vm_ray_backend.py", line 2873, in _provision
cluster_info = provisioner.post_provision_runtime_setup(
File "/home/memory/skypilot/sky/provision/provisioner.py", line 576, in post_provision_runtime_setup
return _post_provision_setup(cloud_name,
File "/home/memory/skypilot/sky/provision/provisioner.py", line 438, in _post_provision_setup
wait_for_ssh(cluster_info, ssh_credentials)
File "/home/memory/skypilot/sky/provision/provisioner.py", line 387, in wait_for_ssh
_retry_ssh_thread((ip, ssh_port))
File "/home/memory/skypilot/sky/provision/provisioner.py", line 377, in _retry_ssh_thread
raise RuntimeError(
RuntimeError: Failed to SSH to 10.128.0.10 after timeout 600s, with Timeout: SSH connection to 10.128.0.10 is not ready.
D 10-26 12:22:10 skypilot_config.py:228] Using config path: /home/memory/.sky/config.yaml
D 10-26 12:22:10 skypilot_config.py:233] Config loaded:
D 10-26 12:22:10 skypilot_config.py:233] {'kubernetes': {'remote_identity': 'SERVICE_ACCOUNT'},
D 10-26 12:22:10 skypilot_config.py:233] 'serve': {'controller': {'resources': {'cloud': 'aws'}}}}
D 10-26 12:22:10 skypilot_config.py:245] Config syntax check passed.
D 10-26 12:22:12 cloud_vm_ray_backend.py:3895] Provisioner version: ProvisionerVersion.SKYPILOT using new provisioner for teardown.
D 10-26 12:22:12 instance.py:36] handlers: [<class 'sky.provision.gcp.instance_utils.GCPComputeInstance'>]
D 10-26 12:22:13 instance.py:47] handler_to_instances: defaultdict(<class 'list'>, {<class 'sky.provision.gcp.instance_utils.GCPComputeInstance'>: ['t-gcp-force-enable-ex-es-8c-402b-head-es5u8e28-compute']})
D 10-26 12:22:13 instance.py:36] handlers: [<class 'sky.provision.gcp.instance_utils.GCPComputeInstance'>]
D 10-26 12:22:14 instance.py:47] handler_to_instances: defaultdict(<class 'list'>, {<class 'sky.provision.gcp.instance_utils.GCPComputeInstance'>: ['t-gcp-force-enable-ex-es-8c-402b-head-es5u8e28-compute']})
D 10-26 12:22:44 instance.py:36] handlers: [<class 'sky.provision.gcp.instance_utils.GCPComputeInstance'>]
D 10-26 12:22:45 instance.py:47] handler_to_instances: defaultdict(<class 'list'>, {<class 'sky.provision.gcp.instance_utils.GCPComputeInstance'>: ['t-gcp-force-enable-ex-es-8c-402b-head-es5u8e28-compute']})
D 10-26 12:22:45 instance.py:554] Terminating instance: t-gcp-force-enable-ex-es-8c-402b-head-es5u8e28-compute.
D 10-26 12:22:47 instance.py:122] wait_for_compute_zone_operation: Waiting for operation operation-1729970566912-62566291cd887-ea4731a6-9ceba6c4 to finish...
D 10-26 12:22:47 instance_utils.py:431] Waiting GCP operation operation-1729970566912-62566291cd887-ea4731a6-9ceba6c4 to be ready ...
D 10-26 12:23:07 metadata_utils.py:115] Remove metadata of cluster t-gcp-force-enable-ex-es-8c-402b.
D 10-26 12:23:07 common_utils.py:505] Tried to remove /home/memory/.sky/generated/ssh/t-gcp-force-enable-ex-es-8c but failed to find it. Skip.
D 10-26 12:23:07 metadata_utils.py:115] Remove metadata of cluster t-gcp-force-enable-ex-es-8c.
Terminating cluster t-gcp-force-enable-ex-es-8c...done.
Terminating 1 cluster ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% 0:00:00
Failed.
Reason: sky launch -y -c t-gcp-force-enable-ex-es-8c --cloud gcp --cpus 2 tests/test_yamls/minimal.yaml
Log: less /tmp/gcp_force_enable_external_ips-j0yxzuu3.log
</code></pre></details>
This still seems to be using the private IP.
Error log for `test_managed_jobs_cancellation_gcp`: https://gist.github.com/cblmemo/a5c33a9a46ceececebc1b13fbdaa9617
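For reference, the retry behavior in the log above (probe every second until a 600 s deadline, then raise) boils down to a bounded retry loop. A minimal sketch of that pattern, not SkyPilot's actual implementation; the `clock` and `sleep` parameters are injectable so it can be exercised without real waiting:

```python
import itertools
import time

def wait_until_ready(check, timeout_s=600, retry_interval_s=1,
                     clock=time.monotonic, sleep=time.sleep):
    """Call `check()` until it returns True or `timeout_s` elapses.

    Returns the number of attempts on success; raises RuntimeError
    once the deadline has passed (mirroring wait_for_ssh in the log).
    """
    deadline = clock() + timeout_s
    for attempt in itertools.count(1):
        if check():
            return attempt
        if clock() >= deadline:
            raise RuntimeError(
                f"not ready after timeout {timeout_s}s ({attempt} attempts)")
        sleep(retry_interval_s)
```

In the failing run above the check (an `ssh ... uptime` probe) never succeeds, so the loop always exits via the `RuntimeError` path after the 600 s deadline.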
<!-- If relevant, fill in versioning info to help us troubleshoot -->
_Version & Commit info:_
* `sky -c`: 0e915d3430d8027aa40b766605bb13c889ffc62f
| closed | 2024-10-26T20:29:25Z | 2025-02-19T02:14:27Z | https://github.com/skypilot-org/skypilot/issues/4192 | [] | cblmemo | 5 |
ultralytics/ultralytics | deep-learning | 18,879 | imgsz mistakes in documentation and warnings | In some places, some of the permitted forms of the size argument are missing from the documentation.
In https://docs.ultralytics.com/modes/benchmark/#arguments the imgsz can be a string "height,width".
Similarly, in https://docs.ultralytics.com/modes/train/#train-settings, you can pass a str.
The warning on training is also wrong:
```
WARNING ⚠️ updating to 'imgsz=320'. 'train' and 'val' imgsz must be an integer, while 'predict' and 'export' imgsz may be a [h, w] list or an integer, i.e. 'yolo export imgsz=640,480' or 'yolo export imgsz=640'
```
Train can apparently accept a list or a string (according to the docs?), and there are other mistakes as well.
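The accepted spellings can be normalized with a small parser. This is only a sketch of the documented forms (an int, a `[h, w]` list, or an `"h,w"` string), not Ultralytics' actual implementation:

```python
def parse_imgsz(imgsz):
    """Normalize an imgsz argument (int, list/tuple, or 'h,w' string) to a list of ints."""
    if isinstance(imgsz, int):
        return [imgsz]
    if isinstance(imgsz, str):
        # "640,480" -> [640, 480]
        return [int(part) for part in imgsz.split(",")]
    # already a sequence, e.g. [640, 480] or (640, 480)
    return [int(v) for v in imgsz]
```

For example, `parse_imgsz(640)` gives `[640]` and `parse_imgsz("640,480")` gives `[640, 480]`, covering all three forms the docs mention.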
This has become really confusing. | open | 2025-01-25T13:10:32Z | 2025-01-26T15:06:08Z | https://github.com/ultralytics/ultralytics/issues/18879 | [
"documentation"
] | EmmanuelMess | 9 |
klen/mixer | sqlalchemy | 97 | Using Mixer with django-taggit | Hi!
I use [django-taggit](https://github.com/alex/django-taggit) in my project, but I'm encountering problems with mixer. When I blend models with the TaggableManager attached I get the following message:
> AttributeError: Mixer (transactions.Transaction): type object 'TaggableManager' has no attribute '_meta'
When i add **mixer.SKIP** to the field I get another error message:
> AttributeError: Mixer (transactions.Transaction): 'TaggableManager' object has no attribute 'm2m_field_name'
Here's some pseudo code showing how I use the TaggableManager:

    from django.db import models
    from taggit.managers import TaggableManager

    class Transaction(models.Model):
        # some standard django fields here
        tags = TaggableManager()

    # I add values like this
    trx = Transaction()
    trx.save()  # the instance needs a primary key before tags can be added
    trx.tags.add("TagA", "TagB")
Can I skip blending for this field completely or better yet define my own handler? Thank you. | closed | 2018-04-01T09:46:33Z | 2021-01-11T09:12:24Z | https://github.com/klen/mixer/issues/97 | [] | fusion44 | 1 |
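For what it's worth, the skip/custom-handler semantics being asked for can be expressed generically. This is an illustration of the idea only, not mixer's real API; `SKIP` and `blend` here are hypothetical names:

```python
SKIP = object()  # sentinel: leave the field completely untouched

def blend(cls, handlers=None, **values):
    """Build an instance, routing fields through optional handlers.

    Fields set to SKIP are never assigned, so descriptor-backed
    attributes (like a TaggableManager) are not shadowed by a value.
    """
    handlers = handlers or {}
    obj = cls()
    for field, value in values.items():
        if value is SKIP:
            continue  # skip blending for this field entirely
        if field in handlers:
            value = handlers[field](value)  # user-defined handler
        setattr(obj, field, value)
    return obj
```

A fixture library that supported both behaviors would let you pass `tags=SKIP` to leave the manager alone, or register a handler that calls `instance.tags.add(...)` after the instance is saved.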
BeanieODM/beanie | asyncio | 340 | State management error with decimal.Decimal | Decimal types get retrieved as a bson Decimal128 type and that is causing a ValueError on retrieval if use_state_management is set to True.
ValueError: [TypeError("'Decimal128' object is not iterable"), TypeError('vars() argument must have __dict__ attribute')]
**To Reproduce**
```python
import decimal

import beanie
import pydantic


class Test(beanie.Document):
amt: decimal.Decimal
other_amt: pydantic.condecimal(decimal_places=1, multiple_of=decimal.Decimal("0.5")) = 0
class Settings:
name = "amounts"
use_revision = True
use_state_management = True
```
It saves fine to the DB; in fact, you can save and retrieve without issue with use_state_management set to False, but turn it on and retrieval fails. | closed | 2022-09-03T20:02:41Z | 2023-03-31T19:43:45Z | https://github.com/BeanieODM/beanie/issues/340 | [
"bug"
] | nickleman | 7 |
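The underlying mismatch in the Beanie report above is that the MongoDB driver hands back `bson.Decimal128`, which exposes a `to_decimal()` method, while the model expects `decimal.Decimal`. One possible workaround shape is a duck-typed converter applied in a validator; sketched here with a stand-in class so it runs without `bson` installed:

```python
from decimal import Decimal

class FakeDecimal128:
    """Stand-in for bson.Decimal128, which likewise exposes to_decimal()."""
    def __init__(self, value):
        self._value = Decimal(value)

    def to_decimal(self):
        return self._value

def to_plain_decimal(value):
    """Coerce Decimal128-like objects (anything with to_decimal) to Decimal."""
    if hasattr(value, "to_decimal"):
        return value.to_decimal()
    return Decimal(str(value))
```

With the real `bson.Decimal128`, the same `to_plain_decimal` would normalize values coming back from MongoDB before state comparison sees them.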
sanic-org/sanic | asyncio | 2,999 | Is there a plan for compatibility with Python 3.13 no GIL? | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Is your feature request related to a problem? Please describe.
_No response_
### Describe the solution you'd like
Python 3.13 introduces free threading. Although it will certainly take a long time to reach a usable state, I still want to know whether Sanic has any plans to be compatible with the free-threaded (no-GIL) build.
### Additional context
_No response_ | closed | 2024-09-21T08:18:52Z | 2024-11-13T19:10:57Z | https://github.com/sanic-org/sanic/issues/2999 | [
"feature request"
] | yuWorm | 3 |
cookiecutter/cookiecutter-django | django | 4,867 | Split the `docs` service out of `local.yml` | ## Description
Related to #4865: by default, the docs service starts alongside the app when running `docker compose -f local.yml up`. I find this to be a surprising behaviour. When running that command, I want to start the app, not the docs.
I'm suggesting to split out the docs service into its own docker compose config `docs.yml`, which could be combined with the main `local.yml`, if needed: `docker compose -f local.yml -f docs.yml up docs`.
## Rationale
Be more intentional about which services are being started.
The main inconvenience I can see is that the command to run the docs is a bit longer, but that could be solved with an alias or a Makefile.
| closed | 2024-02-16T10:01:30Z | 2024-03-18T19:26:19Z | https://github.com/cookiecutter/cookiecutter-django/issues/4867 | [
"enhancement"
] | browniebroke | 2 |
ultralytics/yolov5 | pytorch | 13,507 | How to solve the problem that the original YOLO model's mAP50-95 results get better with every run | ### Search before asking
- [x] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
I have trained the original YOLOv5s model about three or four times, and the results get better with every run. The precision/recall/mAP50 values are almost identical each time, so the comparison comes down to mAP50-95, but for the original model that value keeps improving: the first 100-epoch run gave a mAP50-95 of about 0.86, while the latest run gives about 0.89. I want to reproduce the first result, because although my improved model trains to precision/recall/mAP50 values almost the same as the original (already around 0.98), its mAP50-95 comes out around 0.87. How can I solve this problem and reproduce the first result?
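Run-to-run drift like this usually comes from nondeterminism (random seeds, cuDNN autotuning, data-loader shuffling). A common mitigation is to pin every seed before training; this is a general sketch, not YOLOv5's own init code, with the framework imports guarded so it also runs where numpy/torch are absent:

```python
import os
import random

def seed_everything(seed=42):
    """Pin every RNG we can find so repeated runs start from the same state."""
    random.seed(seed)
    os.environ["PYTHONHASHSEED"] = str(seed)
    try:  # numpy is optional here
        import numpy as np
        np.random.seed(seed)
    except ImportError:
        pass
    try:  # torch is optional here
        import torch
        torch.manual_seed(seed)
        torch.cuda.manual_seed_all(seed)
        torch.backends.cudnn.deterministic = True  # trade speed for repeatability
        torch.backends.cudnn.benchmark = False
    except ImportError:
        pass
```

Even with all seeds pinned, some GPU kernels remain nondeterministic, so small mAP50-95 differences between runs can persist; seeding narrows the gap rather than guaranteeing bit-identical results.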
### Additional
_No response_ | open | 2025-02-12T01:36:53Z | 2025-02-12T01:37:36Z | https://github.com/ultralytics/yolov5/issues/13507 | [
"question",
"detect"
] | lroy615 | 1 |
fastapi/sqlmodel | sqlalchemy | 69 | sqlalchemy.exc.CompileError: Can't generate DDL for NullType(); did you forget to specify a type on this Column? | ### First Check
- [X] I added a very descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the SQLModel documentation, with the integrated search.
- [X] I already searched in Google "How to X in SQLModel" and didn't find any information.
- [X] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to SQLModel but to [Pydantic](https://github.com/samuelcolvin/pydantic).
- [X] I already checked if it is not related to SQLModel but to [SQLAlchemy](https://github.com/sqlalchemy/sqlalchemy).
### Commit to Help
- [X] I commit to help with one of those options 👆
### Example Code
```python
from datetime import datetime
from typing import Optional, List, Tuple, Dict, Union
from typing_extensions import TypedDict
from sqlmodel import Field, SQLModel, create_engine
from sqlalchemy_utils.functions import database_exists, create_database
class InnerSemanticSearchDict(TypedDict):
acquis_code: str
code: str
level: int
title: str
similarity_score: float
class SemanticSearchDict(TypedDict):
rank: int
value: InnerSemanticSearchDict
class SemanticSearch(SQLModel, table=True):
id: Optional[int] = Field(default=None, primary_key=True)
id_user: int
date_time: datetime
query: str
clean_query: str
semantic_search_result: List[SemanticSearchDict]
## sqlite
sqlite_file_name = "database.db"
sqlite_url = f"sqlite:///{sqlite_file_name}"
if not database_exists('postgresql://postgres:postgres@localhost:5432/embeddings_sts_tf'):
create_database('postgresql://postgres:postgres@localhost:5432/embeddings_sts_tf')
engine = create_engine('postgresql://postgres:postgres@localhost:5432/embeddings_sts_tf', echo=True)
SQLModel.metadata.create_all(engine)
```
### Description
I would like to create a `semanticsearch` table in the `embeddings_sts_tf` postgresql database with the 6 fields specified.
But I got the following error code:
```
2021-09-01 15:35:32,439 INFO sqlalchemy.engine.Engine select version()
2021-09-01 15:35:32,440 INFO sqlalchemy.engine.Engine [raw sql] {}
2021-09-01 15:35:32,440 INFO sqlalchemy.engine.Engine select current_schema()
2021-09-01 15:35:32,440 INFO sqlalchemy.engine.Engine [raw sql] {}
2021-09-01 15:35:32,441 INFO sqlalchemy.engine.Engine show standard_conforming_strings
2021-09-01 15:35:32,441 INFO sqlalchemy.engine.Engine [raw sql] {}
2021-09-01 15:35:32,441 INFO sqlalchemy.engine.Engine BEGIN (implicit)
2021-09-01 15:35:32,442 INFO sqlalchemy.engine.Engine select relname from pg_class c join pg_namespace n on n.oid=c.relnamespace where pg_catalog.pg_table_is_visible(c.oid) and relname=%(name)s
2021-09-01 15:35:32,442 INFO sqlalchemy.engine.Engine [generated in 0.00015s] {'name': 'semanticsearch'}
2021-09-01 15:35:32,443 INFO sqlalchemy.engine.Engine ROLLBACK
Traceback (most recent call last):
File "/home/matthieu/anaconda3/envs/sts-transformers-gpu-fresh/lib/python3.8/site-packages/sqlalchemy/sql/compiler.py", line 4143, in visit_create_table
processed = self.process(
File "/home/matthieu/anaconda3/envs/sts-transformers-gpu-fresh/lib/python3.8/site-packages/sqlalchemy/sql/compiler.py", line 489, in process
return obj._compiler_dispatch(self, **kwargs)
File "/home/matthieu/anaconda3/envs/sts-transformers-gpu-fresh/lib/python3.8/site-packages/sqlalchemy/sql/visitors.py", line 82, in _compiler_dispatch
return meth(self, **kw)
File "/home/matthieu/anaconda3/envs/sts-transformers-gpu-fresh/lib/python3.8/site-packages/sqlalchemy/sql/compiler.py", line 4177, in visit_create_column
text = self.get_column_specification(column, first_pk=first_pk)
File "/home/matthieu/anaconda3/envs/sts-transformers-gpu-fresh/lib/python3.8/site-packages/sqlalchemy/dialects/postgresql/base.py", line 2509, in get_column_specification
colspec += " " + self.dialect.type_compiler.process(
File "/home/matthieu/anaconda3/envs/sts-transformers-gpu-fresh/lib/python3.8/site-packages/sqlalchemy/sql/compiler.py", line 521, in process
return type_._compiler_dispatch(self, **kw)
File "/home/matthieu/anaconda3/envs/sts-transformers-gpu-fresh/lib/python3.8/site-packages/sqlalchemy/sql/visitors.py", line 82, in _compiler_dispatch
return meth(self, **kw)
File "/home/matthieu/anaconda3/envs/sts-transformers-gpu-fresh/lib/python3.8/site-packages/sqlalchemy/sql/compiler.py", line 4724, in visit_null
raise exc.CompileError(
sqlalchemy.exc.CompileError: Can't generate DDL for NullType(); did you forget to specify a type on this Column?
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/matthieu/Code/Python/fastapi-graphql/embeddings_sts_tf_postgresql_db.py", line 65, in <module>
SQLModel.metadata.create_all(engine)
File "/home/matthieu/anaconda3/envs/sts-transformers-gpu-fresh/lib/python3.8/site-packages/sqlalchemy/sql/schema.py", line 4740, in create_all
bind._run_ddl_visitor(
File "/home/matthieu/anaconda3/envs/sts-transformers-gpu-fresh/lib/python3.8/site-packages/sqlalchemy/future/engine.py", line 342, in _run_ddl_visitor
conn._run_ddl_visitor(visitorcallable, element, **kwargs)
File "/home/matthieu/anaconda3/envs/sts-transformers-gpu-fresh/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 2082, in _run_ddl_visitor
visitorcallable(self.dialect, self, **kwargs).traverse_single(element)
File "/home/matthieu/anaconda3/envs/sts-transformers-gpu-fresh/lib/python3.8/site-packages/sqlalchemy/sql/visitors.py", line 520, in traverse_single
return meth(obj, **kw)
File "/home/matthieu/anaconda3/envs/sts-transformers-gpu-fresh/lib/python3.8/site-packages/sqlalchemy/sql/ddl.py", line 846, in visit_metadata
self.traverse_single(
File "/home/matthieu/anaconda3/envs/sts-transformers-gpu-fresh/lib/python3.8/site-packages/sqlalchemy/sql/visitors.py", line 520, in traverse_single
return meth(obj, **kw)
File "/home/matthieu/anaconda3/envs/sts-transformers-gpu-fresh/lib/python3.8/site-packages/sqlalchemy/sql/ddl.py", line 890, in visit_table
self.connection.execute(
File "/home/matthieu/anaconda3/envs/sts-transformers-gpu-fresh/lib/python3.8/site-packages/sqlalchemy/future/engine.py", line 280, in execute
return self._execute_20(
File "/home/matthieu/anaconda3/envs/sts-transformers-gpu-fresh/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1583, in _execute_20
return meth(self, args_10style, kwargs_10style, execution_options)
File "/home/matthieu/anaconda3/envs/sts-transformers-gpu-fresh/lib/python3.8/site-packages/sqlalchemy/sql/ddl.py", line 77, in _execute_on_connection
return connection._execute_ddl(
File "/home/matthieu/anaconda3/envs/sts-transformers-gpu-fresh/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1350, in _execute_ddl
compiled = ddl.compile(
File "/home/matthieu/anaconda3/envs/sts-transformers-gpu-fresh/lib/python3.8/site-packages/sqlalchemy/sql/elements.py", line 489, in compile
return self._compiler(dialect, **kw)
File "/home/matthieu/anaconda3/envs/sts-transformers-gpu-fresh/lib/python3.8/site-packages/sqlalchemy/sql/ddl.py", line 29, in _compiler
return dialect.ddl_compiler(dialect, self, **kw)
File "/home/matthieu/anaconda3/envs/sts-transformers-gpu-fresh/lib/python3.8/site-packages/sqlalchemy/sql/compiler.py", line 454, in __init__
self.string = self.process(self.statement, **compile_kwargs)
File "/home/matthieu/anaconda3/envs/sts-transformers-gpu-fresh/lib/python3.8/site-packages/sqlalchemy/sql/compiler.py", line 489, in process
return obj._compiler_dispatch(self, **kwargs)
File "/home/matthieu/anaconda3/envs/sts-transformers-gpu-fresh/lib/python3.8/site-packages/sqlalchemy/sql/visitors.py", line 82, in _compiler_dispatch
return meth(self, **kw)
File "/home/matthieu/anaconda3/envs/sts-transformers-gpu-fresh/lib/python3.8/site-packages/sqlalchemy/sql/compiler.py", line 4153, in visit_create_table
util.raise_(
File "/home/matthieu/anaconda3/envs/sts-transformers-gpu-fresh/lib/python3.8/site-packages/sqlalchemy/util/compat.py", line 207, in raise_
raise exception
sqlalchemy.exc.CompileError: (in table 'semanticsearch', column 'semantic_search_result'): Can't generate DDL for NullType(); did you forget to specify a type on this Column?
```
### Operating System
Linux
### Operating System Details
Ubuntu 18.04 LTS
### SQLModel Version
0.0.4
### Python Version
3.8.8
### Additional Context
_No response_ | open | 2021-09-01T13:48:31Z | 2021-09-01T13:53:42Z | https://github.com/fastapi/sqlmodel/issues/69 | [
"question"
] | Matthieu-Tinycoaching | 1 |
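The `CompileError` above can be reproduced and resolved with plain SQLAlchemy: a `Column` declared without a type gets `NullType`, for which no DDL can be generated. The sketch below is a minimal illustration, not the reporter's actual model; `JSON` is an assumption about what the `semantic_search_result` column should hold (in SQLModel you would pass the type via `sa_column` on the field).

```python
import sqlalchemy as sa

# A Column declared without a type gets NullType -> CompileError on create_all()
bad_meta = sa.MetaData()
sa.Table("semanticsearch", bad_meta, sa.Column("semantic_search_result"))
try:
    bad_meta.create_all(sa.create_engine("sqlite://"))
except sa.exc.CompileError as exc:
    print(exc)  # "Can't generate DDL for NullType() ..."

# Fix: give the column an explicit type (JSON is an assumption here)
good_meta = sa.MetaData()
sa.Table("semanticsearch", good_meta,
         sa.Column("semantic_search_result", sa.JSON))
good_engine = sa.create_engine("sqlite://")
good_meta.create_all(good_engine)  # succeeds
```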
MaartenGr/BERTopic | nlp | 1,639 | Value of min_similarity attribute is overwritten in merge_models function | In https://github.com/MaartenGr/BERTopic/blob/bcb3ca2ee0e691fe041da5db71bb076e2d5835e9/bertopic/_bertopic.py#L3020 you are able to pass the variable `min_similarity`, which is 0.7 by default.
However, in https://github.com/MaartenGr/BERTopic/blob/bcb3ca2ee0e691fe041da5db71bb076e2d5835e9/bertopic/_bertopic.py#L3089 the value is overwritten with 0.7. Immediately after the assignment, the variable is used, which means the value passed to the function doesn't influence the merging process. | closed | 2023-11-21T09:57:48Z | 2023-12-22T10:38:43Z | https://github.com/MaartenGr/BERTopic/issues/1639 | [] | Ceglowa | 2 |
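The pattern described — a keyword argument silently shadowed by a hard-coded reassignment — can be shown in isolation. The functions below are simplified stand-ins, not BERTopic's actual `merge_models`:

```python
def merge_models_buggy(models, min_similarity=0.7):
    min_similarity = 0.7  # bug: overwrites whatever the caller passed
    return min_similarity  # the merge would then use this shadowed value

def merge_models_fixed(models, min_similarity=0.7):
    return min_similarity  # honors the caller's value

print(merge_models_buggy([], min_similarity=0.9))  # 0.7 — caller's 0.9 ignored
print(merge_models_fixed([], min_similarity=0.9))  # 0.9
```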
mljar/mljar-supervised | scikit-learn | 96 | Use internal early stopping | When using algorithms' internal early stopping, fitting is much faster. Please use internal early stopping for algorithms where possible. We can disable learning curves. We can produce learning curves for `explain_level=1` | closed | 2020-05-28T10:26:18Z | 2020-06-02T10:49:47Z | https://github.com/mljar/mljar-supervised/issues/96 | [
"enhancement"
] | pplonski | 1 |
zappa/Zappa | flask | 692 | [Migrated] Broken unicode query parameters in django | Originally from: https://github.com/Miserlou/Zappa/issues/1770 by [ambientlight](https://github.com/ambientlight)
https://github.com/Miserlou/Zappa/pull/1311 for https://github.com/Miserlou/Zappa/issues/1199 broke passing unicode strings to Django, since Django REST framework, which does the URL decoding, does not expect an iso-8859-1 string.
## Possible Fix
https://github.com/GeoThings/Zappa/commit/cba59878d97be10a9e70257d8ce34658ca1e03e2
## Steps to Reproduce
1. Make a request with query parameters containing unicode (`空氣盒子`): `/some_apis?filter=%E7%A9%BA%E6%B0%A3%E7%9B%92%E5%AD%90`
2. write a handler matching `/some_api`
3. log inside your handler `request.query_params.get('filter', None)` to see `空氣çå`
## Your Environment
* Zappa version used: Zappa 0.47.1 (django 1.11.16)
* Operating System and Python version: Amazon Linux: 4.14.77-70.59.amzn1.x86_64, Python 2.7.15
If possible fix is acceptable, will create a pull request. | closed | 2021-02-20T12:33:01Z | 2024-04-13T18:14:09Z | https://github.com/zappa/Zappa/issues/692 | [
"no-activity",
"auto-closed"
] | jneves | 2 |
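The proposed fix boils down to the standard WSGI round-trip: percent-decoded text arrives as iso-8859-1 (per PEP 3333), so re-encoding it as iso-8859-1 and decoding as UTF-8 recovers the original string. A self-contained sketch using the query string from the reproduction steps:

```python
from urllib.parse import parse_qs

raw = "filter=%E7%A9%BA%E6%B0%A3%E7%9B%92%E5%AD%90"

# Decoding the percent-escapes as iso-8859-1 yields mojibake,
# which is what the Django handler observed.
mojibake = parse_qs(raw, encoding="iso-8859-1")["filter"][0]
print(mojibake)  # garbled latin-1 rendering of the UTF-8 bytes

# Round-tripping through iso-8859-1 recovers the UTF-8 text.
fixed = mojibake.encode("iso-8859-1").decode("utf-8")
print(fixed)  # 空氣盒子
```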
JaidedAI/EasyOCR | pytorch | 437 | How much training data do you recommend? | Hi I am trying to generate data and do some training. Do you have a recommendation on training data size? How much data should I generate for one language? | closed | 2021-05-27T11:44:36Z | 2021-05-30T02:32:49Z | https://github.com/JaidedAI/EasyOCR/issues/437 | [] | purplesword | 1 |
mwaskom/seaborn | pandas | 3,815 | Error when using the lineplot | `
import seaborn as sns
import pandas as pd
import matplotlib.pyplot as plt
fig = plt.figure()
#--------------------------------------- f1 -----------
df = pd.DataFrame({'x1': [0,0,1], 'y1': [0,1,1], 'x2':[0,0.001,1], 'x3':[0,0.3,1], 'y2':[0,0.8,1]})
print(df)
a1 = fig.add_subplot(221)
sns.lineplot(data=df, x='x1', y='y1', ax=a1)
a1 = plt.text(0.55,0.3, str(df[['x1', 'y1']]))
a1 = plt.text(0.55,0.1, 'sns.lineplot')
plt.subplots_adjust(wspace=0.3,hspace=0.3)
a2 = fig.add_subplot(222)
sns.lineplot(data=df, x='x2', y='y1', ax=a2)
a2 = plt.text(0.55,0.3, str(df[['x2', 'y1']]))
a2 = plt.text(0.55,0.1, 'sns.lineplot')
plt.subplots_adjust(wspace=0.3,hspace=0.3)
a3 = fig.add_subplot(223)
sns.lineplot(data=df, x='x3', y='y2', ax=a3)
a3 = plt.text(0.55,0.3, str(df[['x3', 'y2']]))
a3 = plt.text(0.55,0.1, 'sns.lineplot')
plt.subplots_adjust(wspace=0.3,hspace=0.3)
a4 = fig.add_subplot(224)
a4.plot(df['x1'], df['y1'])
a4 = plt.text(0.55,0.3, str(df[['x1', 'y1']]))
a4 = plt.text(0.55,0.1, 'matplotlib')
plt.show()
`

| closed | 2025-01-20T10:58:49Z | 2025-01-26T02:21:30Z | https://github.com/mwaskom/seaborn/issues/3815 | [] | whisper-to | 5 |
httpie/http-prompt | rest-api | 82 | Use pytest-httpbin to test | http://httpbin.org has been unstable recently. Time to consider using [pytest-httpbin](https://github.com/kevin1024/pytest-httpbin).
| open | 2016-10-21T01:27:53Z | 2019-10-04T20:50:29Z | https://github.com/httpie/http-prompt/issues/82 | [
"enhancement"
] | eliangcs | 1 |
ray-project/ray | tensorflow | 51,291 | [Autoscaler] Update YAML Example and Docs for NodeProvider | ### What happened + What you expected to happen
<img width="776" alt="Image" src="https://github.com/user-attachments/assets/05d1b87f-9507-460f-87bd-d5f4fb307b6d" />
The YAML example for the CoordinatorSenderNodeProvider (e.g., [example-minimal-automatic.yaml](https://github.com/ray-project/ray/blob/master/python/ray/autoscaler/local/example-minimal-automatic.yaml)) is outdated. When running the example, parameter errors occur, indicating that an update is needed to reflect the latest changes.
Additionally, there is no documentation available for NodeProvider. Although the information is available in
https://github.com/ray-project/ray/issues/26715, it has not yet been incorporated into the official documentation. This update should be reflected in the docs as well.
### Versions / Dependencies
Ray 2.43.0
### Reproduction script
- ray start --head --port=6379 --autoscaling-config=python/ray/autoscaler/local/example-minimal-automatic.yaml
- ray status
### Issue Severity
Low: It annoys or frustrates me. | open | 2025-03-12T06:54:30Z | 2025-03-14T16:17:26Z | https://github.com/ray-project/ray/issues/51291 | [
"bug",
"P2",
"core"
] | nadongjun | 0 |
RobertCraigie/prisma-client-py | pydantic | 124 | Global aliases cause confusing error message | <!--
Thanks for helping us improve Prisma Client Python! 🙏 Please follow the sections in the template and provide as much information as possible about your problem, e.g. by enabling additional logging output.
See https://prisma-client-py.readthedocs.io/en/latest/logging/ for how to enable additional logging output.
-->
## Bug description
<!-- A clear and concise description of what the bug is. -->
When a field overlaps with a global alias used for query building, a confusing error message is shown
## How to reproduce
<!--
Steps to reproduce the behavior:
1. Go to '...'
2. Change '....'
3. Run '....'
4. See error
-->
```py
import asyncio
from prisma import Client
async def main() -> None:
client = Client()
await client.connect()
user = await client.user.create(
data={
'name': 'Robert',
'order_by': 'age',
}
)
print(user)
if __name__ == '__main__':
asyncio.run(main())
```
Running the above script raises the following error:
```
prisma.errors.MissingRequiredValueError: Failed to validate the query: `Unable to match input value to any allowed input type for the field. Parse errors: [Query parsing/validation error at `Mutation.createOneUser.data.UserCreateInput.order_by`: A value is required but not set., Query parsing/validation error at `Mutation.createOneUser.data.UserUncheckedCreateInput.order_by`: A value is required but not set.]` at `Mutation.createOneUser.data`
```
## Expected behavior
<!-- A clear and concise description of what you expected to happen. -->
This should be a valid query, however supporting this would require a massive refactor of our query builder.
The part of this issue that is considered a bug is the confusing error message, we should disallow generating a client that will result in an invalid internal query being generated.
## Prisma information
<!-- Your Prisma schema, Prisma Client Python queries, ...
Do not include your database credentials when sharing your Prisma schema! -->
```prisma
datasource db {
provider = "sqlite"
url = "file:tmp.db"
}
generator db {
provider = "prisma-client-py"
interface = "asyncio"
recursive_type_depth = -1
}
model User {
id String @id @default(cuid())
name String
order_by String
}
``` | closed | 2021-11-16T12:56:09Z | 2021-11-18T08:23:05Z | https://github.com/RobertCraigie/prisma-client-py/issues/124 | [
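The proposed behavior — refuse to generate a client whose fields collide with internal query aliases — can be sketched as a simple generator-time check. This is a simplified stand-in; the alias names below are assumptions, not Prisma Client Python's actual list:

```python
# Hypothetical reserved query-builder aliases (assumed for illustration).
RESERVED_ALIASES = {"order_by", "take", "skip", "cursor"}

def validate_model_fields(model_name, field_names):
    """Reject model fields whose names shadow internal query aliases."""
    collisions = RESERVED_ALIASES & set(field_names)
    if collisions:
        raise ValueError(
            f"model {model_name!r}: field(s) {sorted(collisions)} collide "
            "with internal query aliases; rename or remap them"
        )

validate_model_fields("User", ["id", "name"])  # fine
try:
    validate_model_fields("User", ["id", "name", "order_by"])
except ValueError as exc:
    print(exc)
```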
"bug/2-confirmed",
"kind/bug"
] | RobertCraigie | 0 |
sgl-project/sglang | pytorch | 4,192 | [Bug] Cannot find this project on Hugging Face: lmsys/sglang-ci-dsv3-test | ### Checklist
- [x] 1. I have searched related issues but cannot get the expected help.
- [x] 2. The bug has not been fixed in the latest version.
- [x] 3. Please note that if the bug-related issue you submitted lacks corresponding environment info and a minimal reproducible demo, it will be challenging for us to reproduce and resolve the issue, reducing the likelihood of receiving feedback.
- [x] 4. If the issue you raised is not a bug but a question, please raise a discussion at https://github.com/sgl-project/sglang/discussions/new/choose Otherwise, it will be closed.
- [x] 5. Please use English, otherwise it will be closed.
### Describe the bug
I am trying to run test_mla, but lmsys/sglang-ci-dsv3-test was not found.
### Reproduction
python test_mla.py
### Environment
linux | closed | 2025-03-08T00:55:10Z | 2025-03-08T01:02:38Z | https://github.com/sgl-project/sglang/issues/4192 | [] | july8023 | 3 |
ShishirPatil/gorilla | api | 753 | [BFCL] bugs in function def _multi_threaded_inference(self, test_case, include_input_log: bool, include_state_log: bool): | **Describe the issue**
I encountered an error while running `bfcl generate`. It occurred in `gorilla/berkeley-function-call-leaderboard/bfcl/model_handler/oss_model/base_oss_handler.py`, in `def _multi_threaded_inference(self, test_case, include_input_log: bool, include_state_log: bool):`.
The line `model_responses, metadata = self.inference_single_turn_prompting(test_case, include_input_log, include_state_log)` caused a runtime error. The function `inference_single_turn_prompting` only accepts the parameters `test_case` and `include_input_log`, but the code additionally passes `include_state_log`, which leads to the runtime error. When I removed `include_state_log`, the code ran successfully.
**ID datapoint**
1. Datapoint / Model Handler permalink:
2. Issue:
2. Gorilla repo commit #:
**What is the issue**
The function inference_single_turn_prompting does not accept include_state_log as a parameter, causing a runtime error when it is passed.
**Proposed Changes**
{
'previous_datapoint':[],
'updated_datapoint':[]
}
**Additional context**
Add any other context about the problem here.
| closed | 2024-11-12T17:40:08Z | 2024-11-12T20:25:07Z | https://github.com/ShishirPatil/gorilla/issues/753 | [] | pikepokenew | 1 |
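The mismatch described above is an ordinary Python arity error and can be reproduced in isolation. The function below is a simplified, hypothetical stand-in for `inference_single_turn_prompting`, not the actual BFCL implementation:

```python
def inference_single_turn_prompting(test_case, include_input_log):
    # simplified stand-in: returns dummy responses and metadata
    return [f"response for {test_case}"], {"input_log": include_input_log}

try:
    # the buggy call site passes a third argument, include_state_log
    inference_single_turn_prompting("case-1", True, False)
except TypeError as exc:
    print(exc)  # ... takes 2 positional arguments but 3 were given

# dropping the extra argument makes the call succeed
responses, metadata = inference_single_turn_prompting("case-1", True)
print(responses, metadata)
```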
pinry/pinry | django | 100 | The loading icon is not hidden after installation | The variable API_LIMIT_PER_PAGE isn't available in the template base.html. I hardcoded the value and now it works fine.
The loading icon kept visible all the time. I wasn't able to pin a new image so I checked the javascript console and found that there was a javascript error.
Also, a minor suggestion: add this info to the technical documentation about how to add a new user from the terminal: `bin/python manage.py createsuperuser`
Thank you for this project!!!! | closed | 2017-01-15T21:14:38Z | 2017-02-11T04:42:27Z | https://github.com/pinry/pinry/issues/100 | [] | j4ndr4 | 1 |
statsmodels/statsmodels | data-science | 9,013 | ENH/reference: Zero-Inflated Nonnegative Continuous Data | mainly parking a reference:
Liu, Lei, Ya-Chen Tina Shih, Robert L. Strawderman, Daowen Zhang, Bankole A. Johnson, and Haitao Chai. “Statistical Analysis of Zero-Inflated Nonnegative Continuous Data: A Review.” Statistical Science 34, no. 2 (May 2019): 253–79. https://doi.org/10.1214/18-STS681.
https://github.com/joyfulstones/zero-inflated-continuous examples using SAS
compares Tobit with 2-part models
2-part models are similar to count models with a mixture of a mass-point distribution and a distribution for values (weakly) larger than the threshold, in this case the distribution is for non-negative continuous data, e.g. log-normal or gamma.
This is also similar to Tweedie.
The interesting cases are when we cannot estimate the two parts separately (which we do right now in hurdle count models).
Example: if there is correlation (and tools will be similar to sample selection models with unobserved heterogeneity and/or endogeneity, I guess)
| open | 2023-09-29T15:43:53Z | 2023-09-29T15:43:53Z | https://github.com/statsmodels/statsmodels/issues/9013 | [
"type-enh",
"comp-othermod"
] | josef-pkt | 0 |
zihangdai/xlnet | nlp | 163 | Cache problem during pretraining | During pretraining, after saving a checkpoint, the error below occurs.
```
I0712 06:47:22.892611 140596004366080 tf_logging.py:115] [99000] | gnorm 0.71 lr 0.000001 | loss 7.25 | pplx 1408.25, bpc 10.4597
I0712 07:13:05.624328 140596004366080 tf_logging.py:115] [100000] | gnorm 1.03 lr 0.000000 | loss 7.25 | pplx 1406.88, bpc 10.4583
I0712 07:13:34.885596 140596004366080 tf_logging.py:115] Model saved in path: /home/xlnet_exam/models_wiki_ja/model.ckpt
2019-07-12 07:13:34.961923: W tensorflow/core/kernels/data/cache_dataset_ops.cc:770] The calling iterator did not fully read the dataset being cached. In order to avoid unexpected truncation of the dataset, the partially cached contents of the datasetwill be discarded. This can happen if you have an input pipeline similar to `dataset.cache().take(k).repeat()`. You should use `dataset.take(k).cache().repeat()` instead.
```
In data_utils.py, the dataset cache order matches the pattern the warning above advises against
(though a code comment explains the choice):
```
...
# (zihang): since we are doing online preprocessing, the parsed result of
# the same input at each time will be different. Thus, cache processed data
# is not helpful. It will use a lot of memory and lead to contrainer OOM.
# So, change to cache non-parsed raw data instead.
dataset = dataset.cache().map(parser).repeat()
dataset = dataset.batch(bsz_per_core, drop_remainder=True)
dataset = dataset.prefetch(num_core_per_host * bsz_per_core)
...
```
my env
```
GPU: Tesla V100 32GB *4
CUDA_VERSION: 9.0.176
TENSORFLOW_VERSION: 1.11.0
```
my pretrain command
```
python train_gpu.py \
--record_info_dir=${TFRECORD_DIR} \
--num_core_per_host=1 \
--train_batch_size=4 \
--save_steps=10000 \
--model_dir=${MODEL_DIR} \
--seq_len=512 \
--reuse_len=256 \
--mem_len=384 \
--perm_size=256 \
--n_layer=24 \
--d_model=1024 \
--d_embed=1024 \
--n_head=16 \
--d_head=64 \
--d_inner=4096 \
--untie_r=True \
--mask_alpha=6 \
--mask_beta=1 \
--num_predict=85 \
--uncased=True
```
Is it okay? or problem? | open | 2019-07-16T04:25:46Z | 2024-05-02T03:54:59Z | https://github.com/zihangdai/xlnet/issues/163 | [] | rkcalnode | 5 |
deepspeedai/DeepSpeed | pytorch | 6,936 | [BUG] The higher the zero_stage, the more GPU memory is consumed | Thank you for your wonderful work. When I used a small model to test DeepSpeed's ZeRO strategy, the higher the stage, the more GPU memory was consumed. Are there any other operations that need to be enabled?
The model used is resnet152
| closed | 2025-01-09T09:35:16Z | 2025-01-20T00:13:28Z | https://github.com/deepspeedai/DeepSpeed/issues/6936 | [
"bug",
"training"
] | yangshenchang | 2 |
biolab/orange3 | scikit-learn | 5,977 | Inaccurate predictions from Test and Score in Regression and Curve Fit | **What's wrong?**
When plotting predictions of regression based on one feature and curve fit against the feature, a smooth line or curve would be expected, according to the model, e.g., _y = a + bx_. This only happens with Predictions from Test and Score, not from Predictions.
Instead, we see jagged lines. With positive coefficients, the prediction based on a higher feature value can actually be lower
This can be demonstrated both in a scatter plot and in a line chart (see screenshots)
**How can we reproduce the problem?**
see attached workflow using online input data
**What's your environment?**
- Operating system: Mac OS 11.6.5 Intel
- Orange version: 3.32.0
- How you installed Orange: from dmg download


[prdeiction-error-demo.ows.zip](https://github.com/biolab/orange3/files/8714354/prdeiction-error-demo.ows.zip)
Edit: added that this specifically applies to the Predictions Output of Test and Score | closed | 2022-05-18T08:06:39Z | 2022-05-18T08:34:33Z | https://github.com/biolab/orange3/issues/5977 | [
"bug report"
] | wvdvegte | 1 |
noirbizarre/flask-restplus | api | 190 | Swagger UI crashes when circular model defined | I have defined following circular model in my API:
```
tree_model = api.model('TreeModel', {
'node': fields.Nested(node_model)
})
tree_model['children'] = fields.List(fields.Nested(tree_model), default=[])
```
Marshalling such model with api.marshal_with works ok, but app crashes when I try to open swagger documentation:
> RecursionError: maximum recursion depth exceeded
> Fatal Python error: Cannot recover from stack overflow.
>
> Current thread 0x00007f8731cf2700 (most recent call first):
> File "/home/karol/App/pycharm-2016.2/helpers/pydev/_pydevd_bundle/pydevd_trace_dispatch_regular.py", line 139 in __call__
> File "/home/karol/test-api/venv/lib/python3.5/site-packages/werkzeug/utils.py", line 68 in `__get__`
> File "/home/karol/test-api/venv/lib/python3.5/site-packages/flask_restplus/fields.py", line 207 in nested
> File "/home/karol/test-api/venv/lib/python3.5/site-packages/flask_restplus/swagger.py", line 468 in register_field
> File "/home/karol/test-api/venv/lib/python3.5/site-packages/flask_restplus/swagger.py", line 460 in register_model
> File "/home/karol/test-api/venv/lib/python3.5/site-packages/flask_restplus/swagger.py", line 468 in register_field
> File "/home/karol/test-api/venv/lib/python3.5/site-packages/flask_restplus/swagger.py", line 470 in register_field
> File "/home/karol/test-api/venv/lib/python3.5/site-packages/flask_restplus/swagger.py", line 460 in register_model
> File "/home/karol/test-api/venv/lib/python3.5/site-packages/flask_restplus/swagger.py", line 468 in register_field
> File "/home/karol/test-api/venv/lib/python3.5/site-packages/flask_restplus/swagger.py", line 470 in register_field
> ...
>
> Process finished with exit code 134 (interrupted by signal 6: SIGABRT)
Swagger 2.0 accepts circular (recursive) models, so I think this library should handle such cases.
Quick fix could be to just check if model is already registered in Swagger.register_model method.
I am using 0.9.2 version
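The suggested quick fix — skip models that are already registered — is the standard way to make recursive traversal of a circular structure terminate. A self-contained sketch (plain dicts stand in for flask-restplus models; `register_model` here is an illustration, not the library's actual method):

```python
def register_model(registry, model):
    """Register a model and its nested models, tolerating cycles."""
    if model["name"] in registry:  # guard: already seen -> stop recursing
        return
    registry[model["name"]] = model
    for field in model["fields"].values():
        if isinstance(field, dict) and "name" in field:
            register_model(registry, field)

tree_model = {"name": "TreeModel", "fields": {}}
tree_model["fields"]["children"] = tree_model  # circular, like the model above

registry = {}
register_model(registry, tree_model)  # terminates instead of overflowing
print(sorted(registry))  # ['TreeModel']
```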
| open | 2016-08-10T11:52:23Z | 2017-11-30T16:11:43Z | https://github.com/noirbizarre/flask-restplus/issues/190 | [
"bug"
] | miszczu | 1 |
plotly/jupyter-dash | dash | 2 | Support dev tools stack traces | Stack-traces in the dev UI don't work in the notebook when there is no source code file to pull from. Look into providing this information from the notebook. | closed | 2020-05-02T09:26:45Z | 2020-05-14T11:09:27Z | https://github.com/plotly/jupyter-dash/issues/2 | [] | jonmmease | 1 |
iperov/DeepFaceLab | machine-learning | 655 | I have questions about version 07.03.2020 | I have questions about version 07.03.2020, when I use quick96, I spend a lot of time, but the loss is always greater than 4.0. But after a few minutes of SAEHD, the loss value is lower than 0.5. What's wrong with Quick96? | open | 2020-03-14T04:25:37Z | 2023-06-08T20:20:46Z | https://github.com/iperov/DeepFaceLab/issues/655 | [] | jiongjarjar | 2 |
NullArray/AutoSploit | automation | 808 | Divided by zero exception78 | Error: Attempted to divide by zero.78 | closed | 2019-04-19T16:01:02Z | 2019-04-19T16:37:45Z | https://github.com/NullArray/AutoSploit/issues/808 | [] | AutosploitReporter | 0 |
xzkostyan/clickhouse-sqlalchemy | sqlalchemy | 151 | ODBC engine support | **Describe the bug**
The ODBC table engine isn't supported.
**To Reproduce**
Create a table with the ODBC engine.
Reflecting it via `Table(name, metadata, autoload=True, autoload_with=self.engine)` then fails.
**Expected behavior**
Table object should be created
**Versions**
- Package version: 0.1.6
- Python version: 3.7.3
| closed | 2021-10-25T12:52:27Z | 2021-10-26T06:27:40Z | https://github.com/xzkostyan/clickhouse-sqlalchemy/issues/151 | [] | PolitePp | 3 |
davidsandberg/facenet | computer-vision | 1,058 | Running FaceNet and MTCNN models simultaneously on 2 cameras | I am trying to run the MTCNN and FaceNet models on 2 cameras simultaneously. I am not getting any error while doing this, but the code doesn't give me any results.
It just loads both the models and doesn't give me any predictions. Can anyone help me with this?
I have created 2 separate graphs and sessions using g=tf.Graph for MTCNN and FaceNet.
I think this error arises from multi-processing with TensorFlow, as it might try to feed MTCNN input into the FaceNet graph (this is my assumption).
Please let me know if you have any ideas about this. Thanks.
MTCNN:
```python
graph = tf.Graph()
with graph.as_default():
with open(model_path, 'rb') as f:
graph_def = tf.GraphDef.FromString(f.read())
tf.import_graph_def(graph_def, name='')
self.graph = graph
config = tf.ConfigProto(
allow_soft_placement=True,
intra_op_parallelism_threads=4,
inter_op_parallelism_threads=4)
config.gpu_options.allow_growth = True
self.sess = tf.Session(graph=graph, config=config)
```
FaceNet:
```python
with face_rec_graph.graph.as_default():
self.sess = tf.Session()
with self.sess.as_default():
self.__load_model(model_path)
self.x = tf.get_default_graph() \
.get_tensor_by_name("input:0")
self.embeddings = tf.get_default_graph() \
.get_tensor_by_name("embeddings:0")
self.phase_train_placeholder = tf.get_default_graph() \
.get_tensor_by_name("phase_train:0")
print("Model loaded")
```
face_rec_graph was created as follows:
```python
class FaceRecGraph(object):
def __init__(self):
self.graph = tf.Graph();
```
There is no error coming just both the cameras stop giving any result. | open | 2019-07-25T11:24:37Z | 2019-07-25T11:31:53Z | https://github.com/davidsandberg/facenet/issues/1058 | [] | apoorvopen | 0 |
ccxt/ccxt | api | 25,355 | Kucoin connection and ping-pong issues using watch_order_book_for_symbols() | ### Operating System
Windows 11 Pro
### Programming Languages
Python
### CCXT Version
4.4.62
### Description
Hi everyone,
First of all thanks a lot for the great library you are providing
My aim here is to get in real time the orderbook for c.300 trading pairs (spot and futures)
I have some issues with watch_order_book_for_symbols with Kucoin and Kucoinfutures, suffering from disconnections or ping pong issues quite often (see log attached)
At some point it also seems that data is stale, so I am a bit doubtful whether ccxt reconnects or not
I believe I am below all of KuCoin's limits, but my code may be wrong overall (feel free to correct me)
I also tried changing the batch size but did not see any difference
I just uploaded my code for futures but the one for spot is the same using ccxt.kucoin instead of ccxt.kucoinfutures
Happy to provide any complementary info or additional code if need be
Thanks a lot for your help
### Code
```
import asyncio
import ccxt.pro as ccxt
import pandas as pd
import xlwings as xw
import config0 as config
import tradingPairs_class_0 as TP0
import json
import uuid
import time
from ccxt.base.errors import InvalidNonce # noqa E402
from numpy import random
API_KEY = config.API_KEY
SECRET = config.SECRET
PASSWORD = config.PASSWORD
class KuCoinWebSocketFut:
MAX_TOPICS_PER_SESSION = 300 # Global session topic limit
MAX_PAIRS_PER_TICKERS_BATCH = 90 # Max 100 tickers per batch
MAX_PAIRS_PER_ORDERBOOKS_BATCH = 20 # Max 50 pairs per order book (50 * 2 = 100 topics)
def __init__(self, xls_path, sheet_name, trading_pairs, shared_dict=None):
self.xls_path = xls_path
self.sheet_name = sheet_name
self.trading_pairs = trading_pairs
self.exchange_tickers = []
self.exchange_orderbooks = []
self.wb = xw.Book(self.xls_path)
self.sheet = self.wb.sheets[self.sheet_name]
self.tickers = {}
self.order_books = {} # Dictionary storage for order books
self.shared_dict = shared_dict
self.processor = None
def batch_trading_pairs(self, batch_size):
"""Creates batches of trading pairs to optimize WebSocket usage."""
return [self.trading_pairs[i:i + batch_size] for i in range(0, len(self.trading_pairs), batch_size)]
async def write_(self):
while True:
print(dict(list(self.tickers.items())[:1]))
print(dict(list(self.order_books.items())[:1]))
await asyncio.sleep(5)
def safe_task(self, coro, name="Unnamed Task"):
"""Creates an asyncio task and ensures uncaught exceptions are logged."""
async def wrapper():
try:
return await coro
except Exception as e:
print(f"[KO] Uncaught Exception in Task {name}: {e}")
return asyncio.create_task(wrapper(), name=name) # ✅ Returns the task properly
async def _initialize_exchange(self):
""" Asynchronously initializes KuCoin WebSocket instances and loads markets """
total_symbols = len(self.trading_pairs)
num_ws_needed = max(1, (total_symbols // self.MAX_TOPICS_PER_SESSION) + 1) # ✅ Creates minimum WebSockets
for i in range(num_ws_needed):
try :
exchange = ccxt.kucoinfutures({
"enableRateLimit": True, #ok
'options': {
'adjustForTimeDifference': True,
'streaming': {
'pingInterval': 10000, # in ms, ping every 25 seconds
'pingTimeout': 20000
}
}
})
await exchange.load_markets() # ✅ Ensure markets are loaded
await exchange.watch_ticker('BTC/USDT:USDT')
self.exchange_tickers.append(exchange)
except Exception as e:
error_message = str(e)
if "nonce is behind cache" in error_message:
print(f"Nonce issue for Tickers instance {i}")
else: print(f"Error unkown initializing Tickers {i}: {error_message[:20]}")
for i in range(num_ws_needed):
try :
exchange = ccxt.kucoinfutures({
"enableRateLimit": True,
'options': {
'adjustForTimeDifference': True,
'streaming': {
'pingInterval': 10000, # in ms, ping every 25 seconds
'pingTimeout': 20000
}
}
})
await exchange.load_markets() # ✅ Ensure markets are loaded
await exchange.watch_ticker('BTC/USDT:USDT')
self.exchange_orderbooks.append(exchange)
# await asyncio.sleep(1)
except Exception as e:
error_message = str(e)
if "nonce is behind cache" in error_message:
print(f"Nonce issue for Orderbooks instance {i}")
else: print(f"Error unkown initializing Orderbooks {i}: {error_message[:20]}")
print(f"[OK] Created {len(self.exchange_tickers)} Futures Websocket Tickers instances.")
print(f"[OK] Created {len(self.exchange_orderbooks)} Futures Websocket Orderbooks instances.")
async def update_ticker_data(self, pairs_batch, batch_index):
exchange_instance = self.exchange_tickers[batch_index]
while True:
try:
tickers = await exchange_instance.watch_tickers(pairs_batch)
for symbol, ticker in tickers.items():
self.tickers[symbol] = {
"last price": ticker["last"],
"high": ticker["high"],
"low" : ticker["low"],
"volume" : ticker["quoteVolume"],
"timestamp" : ticker["timestamp"],
}
if self.shared_dict is not None:
if symbol not in self.shared_dict:
self.shared_dict[symbol] = {} # ✅ Ensure dictionary exists
self.shared_dict[symbol]["ticker"] = self.tickers[symbol] # ✅ Store order book data
except asyncio.CancelledError:
# print(f"🛑 Task Update Futures Tickers {batch_index} was canceled.")
raise # ✅ Re-raise to stop task properly
except Exception as e:
print(f"[KO] Error data for Futures Websocket Tickers batch {batch_index}")
pass
# self.down_ticker_websockets.add(batch_index)
async def update_order_book_data(self, pairs_batch, batch_index):
"""Updates order book metrics asynchronously while preventing conflicts with `keep_alive`."""
while True:
now = time.time()
try:
# order_book = await self.safe_fetch(self.fetch_order_books, pairs_batch, batch_index)
order_book = await self.exchange_orderbooks[batch_index].watch_order_book_for_symbols(pairs_batch)
if order_book :
# ✅ Extract Order Book Data
symbol = order_book["symbol"]
self.order_books[symbol] = {
"bids": order_book["bids"][:20],
"asks": order_book["asks"][:20],
}
if self.shared_dict is not None:
if symbol not in self.shared_dict:
self.shared_dict[symbol] = {} # ✅ Ensure dictionary exists
self.shared_dict[symbol]["order_book"] = self.order_books[symbol] # ✅ Store order book data
except asyncio.CancelledError:
# print(f"[KO] Task Update Futures Oderbooks {batch_index} was canceled.")
raise # ✅ Re-raise to stop task properly
except Exception as e:
print(f"[KO] Error Futures Oderbooks {batch_index}.")
await asyncio.sleep(3) # Wait before retrying
async def run(self):
"""Main function to start all WebSocket tasks and ensure they are properly monitored."""
ticker_batches = self.batch_trading_pairs(self.MAX_PAIRS_PER_TICKERS_BATCH)
order_book_batches = self.batch_trading_pairs(self.MAX_PAIRS_PER_ORDERBOOKS_BATCH)
await asyncio.sleep(60)
await self._initialize_exchange()
print(f"[OK] Running Futures with {len(self.exchange_tickers)} tickers WebSocket connections.")
print(f"[OK] Running Futures with {len(self.exchange_orderbooks)} order book WebSocket connections.")
try:
# 🛠️ Use safe_task() instead of wrapping with asyncio.create_task()
tasks = []
ws_index = 0
for _, batch in enumerate(ticker_batches):
tasks.append(self.safe_task(self.update_ticker_data(batch, ws_index), f"update_ticker_data-{ws_index}"))
tasks.append(self.safe_task(asyncio.sleep(0.1)))
ws_index = (ws_index + 1) % ((len(self.trading_pairs) // self.MAX_TOPICS_PER_SESSION) + 1) # ✅ Rotate WebSockets to balance load
ws_index = 0
for i, batch in enumerate(order_book_batches):
tasks.append(self.safe_task(self.update_order_book_data(batch, ws_index), f"update_order_book_data-{ws_index}"))
tasks.append(self.safe_task(asyncio.sleep(0.1)))
ws_index = (ws_index + 1) % ((len(self.trading_pairs) // self.MAX_TOPICS_PER_SESSION) + 1) # ✅ Rotate WebSockets to balance load
tasks.append(self.safe_task(self.write_(), "write"))
results = await asyncio.gather(*tasks, return_exceptions=True)
for result in results:
if isinstance(result, Exception):
print(f"[KO] Unhandled Exception in Async Task: {result}") # ✅ Ensures all tasks run without silent failures
except asyncio.CancelledError:
print("[KO] Stopping WebSockets... Cleaning up connections.")
finally:
# 🔥 Ensure all WebSockets are closed gracefully
for instance in self.exchange_orderbooks:
await instance.close()
for instance in self.exchange_tickers:
await instance.close()
print("[OK] All WebSocket connections closed.")
# Example Usage
if __name__ == "__main__":
XLS_PATH = "test"
SHEET_NAME = "test"
# TRADING_PAIRS = [
# '1CAT/USDT:USDT', '1INCH/USDT:USDT', 'ADA/USDT:USDT',
# ]
TP_fetcher = TP0.KuCoinTradingPairsFetcher()
TP_fetcher.fetch_trading_pairs()
TRADING_PAIRS = TP_fetcher.futures_pairs
client = KuCoinWebSocketFut(XLS_PATH, SHEET_NAME, TRADING_PAIRS)
try:
loop = asyncio.get_event_loop()
if loop.is_running():
print("...Event loop is already running, using create_task()")
loop.create_task(client.run()) # ✅ Safe for interactive environments
else:
asyncio.run(client.run()) # ✅ Safe for standalone scripts
except RuntimeError as e:
print(f"...RuntimeError: {e}")
```

[log.txt](https://github.com/user-attachments/files/18970299/log.txt)
| open | 2025-02-25T18:24:28Z | 2025-03-02T12:31:31Z | https://github.com/ccxt/ccxt/issues/25355 | [] | Philip-n7 | 5 |
Johnserf-Seed/TikTokDownload | api | 583 | On Mac 11.7 and Ubuntu 22.04, the terminal cannot render a scannable QR code; only the product name is shown [BUG] | On Mac 11.7 and Ubuntu 22.04 terminals, no scannable QR code is rendered; only the product name is displayed. | open | 2023-10-25T08:58:04Z | 2023-12-26T11:44:04Z | https://github.com/Johnserf-Seed/TikTokDownload/issues/583 | [
"重复(duplicate)",
"等待反馈(feedback)",
"已确认(confirmed)"
] | myrainbowandsky | 0 |
encode/httpx | asyncio | 2,239 | ASGI/WSGI transport does not honor timeouts, but should | For ASGI:
```python
import asyncio
import httpx
async def forever_app(*args):
forever = asyncio.get_event_loop().create_future()
await forever
async def main():
client = httpx.AsyncClient(app=forever_app, timeout=1e-6)
await asyncio.wait_for(client.get("/"), 1)
asyncio.run(main())
```
Expected: `httpx.TimeoutException`
Got: `asyncio.TimeoutError`
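For reference, the translation the transport would need can be sketched with the standard library alone; `TransportTimeout` and `call_app_with_timeout` below are illustrative names, not httpx internals:

```python
import asyncio

class TransportTimeout(Exception):
    """Stand-in for httpx.TimeoutException in this sketch."""

async def call_app_with_timeout(app_coro, timeout):
    # Bound the app call and translate asyncio's error into the
    # transport-level exception a caller would expect.
    try:
        return await asyncio.wait_for(app_coro, timeout)
    except asyncio.TimeoutError as exc:
        raise TransportTimeout("app call exceeded the configured timeout") from exc

async def forever_app():
    await asyncio.get_running_loop().create_future()

async def main():
    try:
        await call_app_with_timeout(forever_app(), 0.01)
    except TransportTimeout as exc:
        print(f"caught: {exc}")

asyncio.run(main())
```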
---
For WSGI:
```python
import httpx
import time
def forever_app(*args):
time.sleep(1e8)
def main():
client = httpx.Client(app=forever_app, timeout=1e-6)
client.get("http://test")
main()
```
Expected: `httpx.TimeoutException`
Got: hangs | closed | 2022-05-20T00:54:49Z | 2022-07-25T17:05:06Z | https://github.com/encode/httpx/issues/2239 | [
"bug"
] | AllSeeingEyeTolledEweSew | 5 |
aeon-toolkit/aeon | scikit-learn | 2,133 | [ENH] Density Peaks (DP) clusterer | ### Describe the feature or idea you want to propose
Density Peaks (DP) has been used in the TSCL literature, for example in a recent benchmark paper: https://www.sciencedirect.com/science/article/pii/S2666827020300013. It also has a popular "fast" variant called TADPole: https://dl.acm.org/doi/10.1145/2783258.2783286
### Describe your proposed solution
I propose we initially implement DP and then in the future we can look into implementing the specific pruning variant TADPole.
### Describe alternatives you've considered, if relevant
_No response_
### Additional context
_No response_ | open | 2024-10-02T11:57:25Z | 2025-02-24T15:14:55Z | https://github.com/aeon-toolkit/aeon/issues/2133 | [
"enhancement",
"clustering",
"implementing algorithms"
] | chrisholder | 3 |
ultralytics/ultralytics | computer-vision | 18,904 | Benchmark gives NaN for exportable models | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar bug report.
### Ultralytics YOLO Component
Other
### Bug
I am getting this result from benchmark:
```
1300.2s 2103 Benchmarks complete for best.pt on face-detection-dataset.yaml at imgsz=192,320 (536.99s)
1300.2s 2104 Format Status❔ Size (MB) metrics/mAP50-95(B) Inference time (ms/im) FPS
1300.2s 2105 0 PyTorch ✅ 0.3 0.2027 16.46 60.73
1300.2s 2106 1 TorchScript ❎ 0.7 NaN NaN NaN
1300.2s 2107 2 ONNX ❎ 0.5 NaN NaN NaN
1300.2s 2108 3 OpenVINO ❎ 0.6 NaN NaN NaN
1300.2s 2109 4 TensorRT ❌ 0.0 NaN NaN NaN
1300.2s 2110 5 CoreML ❎ 0.3 NaN NaN NaN
1300.2s 2111 6 TensorFlow SavedModel ❎ 1.4 NaN NaN NaN
1300.2s 2112 7 TensorFlow GraphDef ❎ 0.5 NaN NaN NaN
1300.2s 2113 8 TensorFlow Lite ❎ 0.5 NaN NaN NaN
1300.2s 2114 9 TensorFlow Edge TPU ❎ 0.3 NaN NaN NaN
1300.2s 2115 10 TensorFlow.js ❎ 0.5 NaN NaN NaN
1300.2s 2116 11 PaddlePaddle ❎ 1.0 NaN NaN NaN
1300.2s 2117 12 MNN ❎ 0.5 NaN NaN NaN
1300.2s 2118 13 NCNN ✅ 0.5 0.0003 5.97 167.53
1300.2s 2119 14 IMX ❌ 0.0 NaN NaN NaN
1300.2s 2120 15 RKNN ❌ 0.0 NaN NaN NaN
```
Why do I get so many NaN values, especially for TensorFlow Lite, even though I can export and run the model just fine?
Logs:
[logs.log](https://github.com/user-attachments/files/18550557/logs.log)
### Environment
```
Ultralytics 8.3.68 🚀 Python-3.10.14 torch-2.4.0 CUDA:0 (Tesla P100-PCIE-16GB, 16269MiB)
Setup complete ✅ (4 CPUs, 31.4 GB RAM, 6095.9/8062.4 GB disk)
OS Linux-6.6.56+-x86_64-with-glibc2.35
Environment Kaggle
Python 3.10.14
Install pip
RAM 31.35 GB
Disk 6095.9/8062.4 GB
CPU Intel Xeon 2.00GHz
CPU count 4
GPU Tesla P100-PCIE-16GB, 16269MiB
GPU count 1
CUDA 12.3
numpy ✅ 1.26.4>=1.23.0
numpy ✅ 1.26.4<2.0.0; sys_platform == "darwin"
matplotlib ✅ 3.7.5>=3.3.0
opencv-python ✅ 4.10.0.84>=4.6.0
pillow ✅ 11.0.0>=7.1.2
pyyaml ✅ 6.0.2>=5.3.1
requests ✅ 2.32.3>=2.23.0
scipy ✅ 1.14.1>=1.4.1
torch ✅ 2.4.0>=1.8.0
torch ✅ 2.4.0!=2.4.0,>=1.8.0; sys_platform == "win32"
torchvision ✅ 0.19.0>=0.9.0
tqdm ✅ 4.66.4>=4.64.0
psutil ✅ 5.9.3
py-cpuinfo ✅ 9.0.0
pandas ✅ 2.2.3>=1.1.4
seaborn ✅ 0.12.2>=0.11.0
ultralytics-thop ✅ 2.0.14>=2.0.0
```
### Minimal Reproducible Example
dataset:
```
%%writefile face-detection-dataset.yaml
# CC0: Public Domain license
# Face-Detection-Dataset dataset by Fares Elmenshawii
# Documentation: https://www.kaggle.com/datasets/fareselmenshawii/face-detection-dataset
# Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]
path: /kaggle/input/face-detection-dataset # dataset root dir
train: images/train # train images (relative to 'path')
val: images/val # val images (relative to 'path')
test: # test images (optional)
# Classes
names:
0: face
# Download script/URL (optional)
download: https://storage.googleapis.com/kaggle-data-sets/3345370/5891144/bundle/archive.zip
```
model:
```
%%writefile yolov6-face.yaml
# Ultralytics YOLO 🚀, AGPL-3.0 license
# YOLOv6 object detection model with P3-P5 outputs. For Usage examples see https://docs.ultralytics.com/models/yolov6
# Parameters
nc: 1 # number of classes
activation: nn.ReLU() # (optional) model default activation function
scales: # model compound scaling constants, i.e. 'model=yolov6n.yaml' will call yolov8.yaml with scale 'n'
# [depth, width, max_channels]
p: [0.33, 0.25, 8] # nano is [0.33, 0.25, 1024]
# YOLOv6-3.0s backbone
backbone:
# [from, repeats, module, args]
- [-1, 1, Conv, [64, 3, 2]] # 0-P1/2
- [-1, 1, Conv, [128, 3, 2]] # 1-P2/4
- [-1, 6, Conv, [128, 3, 1]]
- [-1, 1, Conv, [256, 3, 2]] # 3-P3/8
- [-1, 12, Conv, [256, 3, 1]]
- [-1, 1, Conv, [512, 3, 2]] # 5-P4/16
- [-1, 18, Conv, [512, 3, 1]]
- [-1, 1, Conv, [1024, 3, 2]] # 7-P5/32
- [-1, 6, Conv, [1024, 3, 1]]
- [-1, 1, SPPF, [1024, 5]] # 9
# YOLOv6-3.0s head
head:
- [-1, 1, Conv, [256, 1, 1]]
- [-1, 1, nn.ConvTranspose2d, [256, 2, 2, 0]]
- [[-1, 6], 1, Concat, [1]] # cat backbone P4
- [-1, 1, Conv, [256, 3, 1]]
- [-1, 9, Conv, [256, 3, 1]] # 14
- [-1, 1, Conv, [128, 1, 1]]
- [-1, 1, nn.ConvTranspose2d, [128, 2, 2, 0]]
- [[-1, 4], 1, Concat, [1]] # cat backbone P3
- [-1, 1, Conv, [128, 3, 1]]
- [-1, 9, Conv, [128, 3, 1]] # 19
- [[14, 19], 1, Detect, [nc]] # Detect(P3, P4, P5)
```
code:
```
model = YOLO("./yolov6-face.yaml")
r = model.train(data="face-detection-dataset.yaml", epochs=1, imgsz='192,320', single_cls=True, plots=True, batch=500)
from ultralytics.utils.benchmarks import benchmark
benchmark(model=model, data="face-detection-dataset.yaml", imgsz='192,320', device="cpu")
```
### Additional
_No response_
### Are you willing to submit a PR?
- [ ] Yes I'd like to help by submitting a PR! | closed | 2025-01-26T15:12:52Z | 2025-01-28T18:16:28Z | https://github.com/ultralytics/ultralytics/issues/18904 | [
"bug",
"fixed",
"exports"
] | EmmanuelMess | 8 |
babysor/MockingBird | pytorch | 53 | How can I properly tune CPU and GPU utilization? | A quick question: GPU and CPU utilization is only around 13% during training; how should I adjust the training parameters? | closed | 2021-08-27T08:46:42Z | 2021-10-01T03:18:35Z | https://github.com/babysor/MockingBird/issues/53 | [] | TypicalSpider | 3
deezer/spleeter | tensorflow | 9 | OSError: FFMPEG binary (ffmpeg) not found | Tried running spleeter on on my machine and kept getting this error
`OSError: FFMPEG binary (ffmpeg) not found` | closed | 2019-11-03T07:35:42Z | 2019-11-03T11:06:31Z | https://github.com/deezer/spleeter/issues/9 | [
"invalid",
"question",
"wontfix"
] | ajatau | 1 |
davidsandberg/facenet | tensorflow | 261 | When I convert the 20170216-091149 model to a frozen model, I have a problem; please help | Below is my transfer code:
```python
import os
import tensorflow as tf
import re
from tensorflow.python.framework import graph_util


def get_model_filenames(model_dir):
    .....................


def freeze_graph(model_folder):
    meta_file, ckpt_file = get_model_filenames(model_folder)
    output_graph = "./frozen_model.pb"
    output_node_names = "embeddings"
    clear_devices = True
    saver = tf.train.import_meta_graph(meta_file, clear_devices=clear_devices)
    graph = tf.get_default_graph()
    input_graph_def = graph.as_graph_def()
    with tf.Session() as sess:
        saver.restore(sess, ckpt_file)
        output_graph_def = graph_util.convert_variables_to_constants(
            sess, input_graph_def, output_node_names.split(","))
    with tf.gfile.GFile(output_graph, "wb") as f:
        f.write(output_graph_def.SerializeToString())


if __name__ == '__main__':
    freeze_graph('./models/20170216-091149')
```
It did work: I got the file frozen_model.pb. But when I use this file, I get some errors. Below are my code and the error:
```python
with tf.gfile.FastGFile('./frozen_model.pb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())
    _ = tf.import_graph_def(graph_def, name='')
images_placeholder = tf.get_default_graph().get_tensor_by_name("input:0")
embeddings = tf.get_default_graph().get_tensor_by_name("embeddings:0")
phase_train_placeholder = tf.get_default_graph().get_tensor_by_name("phase_train:0")
...
```
error:
```
Traceback (most recent call last):
  File "test_frozen.py", line 54, in <module>
    _ = tf.import_graph_def(graph_def, name='')
  File "/home/flyvideo/anaconda2/lib/python2.7/site-packages/tensorflow/python/framework/importer.py", line 388, in import_graph_def
    node, 'Input tensor %r %s' % (input_name, te)))
ValueError: graph_def is invalid at node u'InceptionResnetV1/Conv2d_1a_3x3/BatchNorm/cond/AssignMovingAvg/Switch': Input tensor 'InceptionResnetV1/Conv2d_1a_3x3/BatchNorm/moving_mean:0' Cannot convert a tensor of type float32 to an input of type float32_ref.
```
Who can help me, please? I don't know how to fix it.
sammchardy/python-binance | api | 1,066 | BinanceApiException - response code not available | **Describe the bug**
I sometimes receive the following exception, while invoking futures_place_batch_order:
` File "/usr/local/lib/python3.8/dist-packages/binance/exceptions.py", line 14, in __init__
self.code = json_res['code']
KeyError: 'code'`
**To Reproduce**
Seems hard to reproduce, since it only happens occasionally, when BinanceAPI is sending an unexpected response text.
Easiest would be to send a text not including the "code" key.
**Expected behavior**
Either capture the exception or have a default handling when "code" is not in text
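For illustration, a minimal stdlib-only sketch of that default handling; `SafeAPIError` and its attributes are my own names, not python-binance's actual API:

```python
import json

class SafeAPIError(Exception):
    """Illustrative name only; python-binance's real class is BinanceAPIException."""

    def __init__(self, status_code, text):
        super().__init__(text)
        self.status_code = status_code
        try:
            payload = json.loads(text)
        except ValueError:
            payload = {}
        if not isinstance(payload, dict):
            payload = {}
        self.code = payload.get('code', -1)      # sentinel when Binance omits it
        self.message = payload.get('msg', text)  # fall back to the raw body

err = SafeAPIError(504, '<html>Gateway Time-out</html>')
print(err.code, err.message)
```

With this shape, an unexpected non-JSON body (e.g. an HTML gateway error page) still produces a usable exception instead of a `KeyError`.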
**Environment (please complete the following information):**
- Python version: 3.8.10
- Virtual Env: no virtual env
- OS: Ubuntu 20.04.3 LTS
- python-binance version: 1.0.15
**Logs or Additional context**
`Exception in thread Thread-7:
Traceback (most recent call last):
File "PositionManager.py", line 63, in open_otoco
results = self.client.futures_place_batch_order(batchOrders=batch_order_as_json,timestamp=ts,recvWindow=config.RCV_WINDOW)
File "/usr/local/lib/python3.8/dist-packages/binance/client.py", line 5866, in futures_place_batch_order
return self._request_futures_api('post', 'batchOrders', True, data=params)
File "/usr/local/lib/python3.8/dist-packages/binance/client.py", line 339, in _request_futures_api
return self._request(method, uri, signed, True, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/binance/client.py", line 315, in _request
return self._handle_response(self.response)
File "/usr/local/lib/python3.8/dist-packages/binance/client.py", line 324, in _handle_response
raise BinanceAPIException(response, response.status_code, response.text)
File "/usr/local/lib/python3.8/dist-packages/binance/exceptions.py", line 14, in __init__
self.code = json_res['code']
KeyError: 'code'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/python3.8/threading.py", line 932, in _bootstrap_inner
self.run()
File "/usr/lib/python3.8/threading.py", line 870, in run
self._target(*self._args, **self._kwargs)
File "PositionManager.py", line 197, in start_trading
self.open_otoco("SELL",self.position_size(),self.macd_rsi_strategy.get_sl_price("SELL"),self.macd_rsi_strategy.get_tp_price("SELL"))
File "PositionManager.py", line 72, in open_otoco
if results:
UnboundLocalError: local variable 'results' referenced before assignment`
| open | 2021-10-27T11:04:19Z | 2021-10-27T11:04:19Z | https://github.com/sammchardy/python-binance/issues/1066 | [] | Kayne88 | 0 |
mwaskom/seaborn | pandas | 3,614 | seaborn.objects incorrectly plots secondary y-axis | Hello,
Thanks for the great work on seaborn.objects, I really like it.
I noticed an issue however when trying to plot on a secondary y-axis using Plot.on(). Essentially, the second axis seems to be double-plotted - once using the original axis, and again using the correct second axis. This leads to overlapping ticks and the grid lines being drawn above the first plot's line.
Minimal reproduction:
```python
from matplotlib import pyplot as plt
import seaborn as sns
import seaborn.objects as so
import pandas as pd
# create simple dataframe
df = pd.DataFrame(
{
"X": [1, 2, 3, 4],
"Y1": [1, 2, 3, 4],
"Y2": [0, 3, 9, 81],
}
)
sns.set_theme()
f = plt.figure(figsize=(8, 4))
# get the axis
ax = f.add_subplot(111)
ax2 = ax.twinx()
(
so.Plot(df, x="X", y="Y1")
.add(so.Line(color="C0"))
.on(ax)
.plot()
)
(
so.Plot(df, x="X", y="Y2")
.add(so.Line(color="C1"))
.on(ax2)
.plot()
)
plt.show()
```

I managed a temporary workaround using the following:
```python
ax2.grid(False)
ax2.yaxis.tick_right()
```
but it's still weird that it happens to begin with. I'd be happy to take a stab at fixing it if anyone could point me in the right direction.
strawberry-graphql/strawberry-django | graphql | 13 | Should strawberry-graphql-django be merged into strawberry? | This issue is to discuss whether strawberry-graphql-django should be merged into strawberry, or if it should remain a separate repo. | closed | 2021-03-22T15:01:30Z | 2021-06-14T19:07:23Z | https://github.com/strawberry-graphql/strawberry-django/issues/13 | [
"question"
] | joeydebreuk | 3 |
neuml/txtai | nlp | 807 | ImportError: cannot import name 'DuckDuckGoSearchTool' from 'transformers.agents' | When running the first example on Google Colab [01_Introducing_txtai.ipynb](https://colab.research.google.com/github/neuml/txtai/blob/master/examples/01_Introducing_txtai.ipynb)
I am receiving the following error.
> ImportError: cannot import name 'DuckDuckGoSearchTool' from 'transformers.agents' (/usr/local/lib/python3.10/dist-packages/transformers/agents/__init__.py)
Digging through the source code I found [this](https://github.com/huggingface/transformers/blob/33eef992503689ba1af98090e26d3e98865b2a9b/src/transformers/agents/search.py#L25-L34); however, installing `duckduckgo_search` does not seem to fix the error. Any help would be appreciated!
"bug"
] | blacksmithop | 2 |
alteryx/featuretools | scikit-learn | 2,416 | release Featuretools v1.20.0 | closed | 2022-12-19T16:04:59Z | 2023-01-05T18:47:16Z | https://github.com/alteryx/featuretools/issues/2416 | [] | gsheni | 1 | |
ultralytics/yolov5 | deep-learning | 12,878 | Expected performance on an Jetson Orin AGX | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
We have been using your YoloV5 model for a number of years, it provides impressive detection performance and the documentation is very thorough.
We have recently moved to testing our models on the Jetson Orin AGX unit. We haven't seen the improvement in performance over the Xavier AGX unit that we expected. With the YOLOv5s model running against a 640x640 image, we are seeing inference times of 10 ms/image on both units (FP32 or FP16, batch size 1). A [blog post](https://www.stereolabs.com/blog/performance-of-yolo-v5-v7-and-v8) reports performance approaching 3 ms/image, although they don't specify the precision (FP16 or INT8) or their batch size.
Do you have any benchmarks available on the Orin AGX unit? I'm looking to try and understand if there is some setting or optimisation I have missed, or whether it is expected that the YoloV5s model wouldn't see any speed improvement between the Xavier and Orin AGX units.
I appreciate that the Yolov5 model is not the current model and we are looking to move to YoloV7 or YoloV8. If you have benchmarks for these that would also be useful.
Thanks for your time.
### Additional
_No response_ | closed | 2024-04-03T09:19:18Z | 2024-05-07T15:43:57Z | https://github.com/ultralytics/yolov5/issues/12878 | [
"question",
"Stale"
] | AWilco | 4 |
JohnSnowLabs/nlu | streamlit | 70 | Error when loading match.datetime component | ```
import nlu
nlu.load('match.datetime').predict('In the years 2000/01/01 to 2010/01/01 a lot of things happened')
```
**Running it in colab** `pip install nlu pyspark==3.0.2`
Get this Error:
Exception: Something went wrong during loading and fitting the pipe. Check the other prints for more information and also verbose mode. Did you use a correct model reference? | open | 2021-09-06T11:41:31Z | 2021-09-11T10:23:38Z | https://github.com/JohnSnowLabs/nlu/issues/70 | [
"bug"
] | GladiatorX | 1 |
InstaPy/InstaPy | automation | 5,906 | InstaPy runs but doesn't do anything on DigitalOcean | ## Expected Behavior
InstaPy should like and comment on people's content by tag, as it always has.
## Current Behavior
Logged in successfully!
INFO [2020-11-16 15:20:57] [XXXX] Saving account progress...
INFO [2020-11-16 15:21:14] [XXXX] Tag [1/5]
INFO [2020-11-16 15:21:14] [XXXX] --> b'simplicitevolontaire'
INFO [2020-11-16 15:21:22] [XXXX] desired amount: 19 | top posts [disabled]: 9 | possible posts: 0
INFO [2020-11-16 15:21:28] [XXXX] Tag: b'simplicitevolontaire'
INFO [2020-11-16 15:21:28] [XXXX] Tag [2/5]
INFO [2020-11-16 15:21:28] [XXXX] --> b'viedeparents'
INFO [2020-11-16 15:21:36] [XXXX] desired amount: 19 | top posts [disabled]: 9 | possible posts: 0
INFO [2020-11-16 15:21:41] [XXXX] Tag: b'viedeparents'
INFO [2020-11-16 15:21:41] [XXXX] Tag [3/5]
INFO [2020-11-16 15:21:41] [XXXX] --> b'mamandedeux'
etc..
Nothing more happens; InstaPy acts as if its job is done, but it does nothing.
Just before this I had the same problem, but with the "Too few images, skip..." issue; now it just goes from one tag to another without liking or commenting on anything.
## Possible Solution (optional)
A ghost ban? Do I need to set up a proxy? The thing is, I'm not sure that's the cause, and if it is, is there a way to create a proxy with my home IP that my bot would use? That's the IP I normally use to connect to this Instagram account.
Thanks for your help, and sorry for my poor language; I'm a French InstaPy user who has been using your great bot for a long time.
## InstaPy configuration
```
# !/usr/bin/python2.7
import random
from instapy import InstaPy
from instapy import smart_run
from instapy import set_workspace
# Creating a workspace
set_workspace(path=None) # currently there is not working path
# get a session!
session = InstaPy(username='XXXXX', password='XXXXX', headless_browser=True)
# let's go! :>
with smart_run(session):
hashtags = ['simplicitevolontaire','lesregardergrandir','acompagnementparental','viedeparents','mamandedeux']
random.shuffle(hashtags)
my_hashtags = hashtags[:10]
# general settings
session.set_dont_like(['saddddd'])
session.set_do_follow(enabled=True, percentage=25, times=1)
session.set_do_comment(enabled=True, percentage=70)
session.set_comments([
u'My comments are here, I got so much so I just show this one haha'],
media='Photo')
session.set_do_like(True, percentage=45)
session.set_delimit_liking(enabled=True, max_likes=100, min_likes=0)
session.set_delimit_commenting(enabled=True, max_comments=20, min_comments=0)
session.set_relationship_bounds(enabled=True,
potency_ratio=None,
delimit_by_numbers=True,
max_followers=15000,
max_following=15000,
min_followers=70,
min_following=70)
session.set_quota_supervisor(enabled=True,
sleep_after=["likes", "follows"],
sleepyhead=True, stochastic_flow=True,
notify_me=True,
peak_likes_hourly=None,
peak_likes_daily=None,
peak_comments_hourly=8,
peak_comments_daily=224,
peak_follows_hourly=None,
peak_follows_daily=None,
peak_unfollows_hourly=None,
peak_unfollows_daily=None,
peak_server_calls_hourly=None,
peak_server_calls_daily=4700)
session.set_user_interact(amount=2, randomize=True, percentage=70)
# activity
session.like_by_tags(my_hashtags, amount=10, media=None)
session.unfollow_users(amount=10, instapy_followed_enabled=True, instapy_followed_param="nonfollowers",
style="FIFO",
unfollow_after=12 * 60 * 60, sleep_delay=501)
session.unfollow_users(amount=10, instapy_followed_enabled=True, instapy_followed_param="all",
style="FIFO", unfollow_after=24 * 60 * 60,
sleep_delay=501)
``` | closed | 2020-11-16T15:34:08Z | 2021-07-21T07:18:31Z | https://github.com/InstaPy/InstaPy/issues/5906 | [
"wontfix"
] | MundoBoss | 9 |
ultralytics/ultralytics | python | 18,982 | YOLOv11 vs SSD performance on 160x120 infrared images | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
Hello,
in our previous project, we successfully implemented object detection in images from 160x120 infrared camera on Raspberry Pi 4. We used SqueezeNet-SSD network trained with CaffeSSD framework. Its performance was very good for our needs (about 60 ms per frame) with excellent accuracy on normal-sized objects (confidence mostly over 90%) but lower accuracy on smaller objects (mostly detected correctly with very low confidence 30%).
Later, we stripped SqueezeNet's fire modules to simple squeeze-expand blocks, added feature-fusion for the first SSD layer and modified priorboxes ratios to match our dataset. We reached detection speed of about 30 ms per frame and excellent accuracy for all objects.
In our upcoming project, we are continuing with similar task but we would like to use more innovative approach, because Caffe framework has not been maintained for years anymore. We're experimenting with Ultralytics framework and it looks very modern to us. We're also thinking about switching to Raspberry Pi 5, maybe with Hailo8 kit which is not supported by CaffeSSD so Ultralytics seems to be good way to go.
Our dataset consists of 5000 training grayscale images and 1000 testing images with resolution of 160x120. Many augmented versions of each training image was added to the training dataset thus it has over 40000 images. We identify 5 types of objects - example: face (about 64x100) and eyes (45x10). It's exactly the same dataset that was used for training our SSD networks. Now we have trained several versions of YOLOv11 with batch size of 128 for 300 epochs. Results are good, but not as good as our original SSD network. Here, I would like to share our benchmarks with others:
**Detection speed**
```
RPi5 Rock 4B+ RPi4 RPi 5 + Hailo 8
----------------------------------------------------------------------------------------------------
SEnet-FSSD-160x120 7.542 ms 27.478 ms 29.263 ms -
SqueezeNet-SSD 10.074 ms 32.615 ms 38.491 ms -
Yolo11n-160 12.317 ms 49.212 ms 45.283 ms 4.252 ms
Yolo11n-320 48.207 ms 177.076 ms 178.268 ms 7.236 ms
Yolo11s-160 30.835 ms 129.767 ms 127.677 ms 10.999 ms
Yolo11m-320 313.738 ms 1121.319 ms 1180.839 ms 24.829 ms
```
As you can see, even the nano version of YOLOv11 is much slower than the original SqueezeNet-SSD. Although we would prefer better times, it is still usable for our needs, especially when we're thinking about Hailo8.
**Detection accuracy**
I don't have specific objective statistics here, but it is visually worse. Even the yolo11m-320 version provides worse results: rectangles are not as exact, confidences are lower, and there is a somewhat higher number of false results. Just for illustration, on 1000 validation images:
(mean wIoU is the confidence-weighted average IoU over all detections with an IoU threshold of 50%)
SEnet-FSSD-160x120 - total detections: 2014, false positives: 6, false negatives: 9, mean wIoU: 0.892
SqueezeNet-SSD - total detections: 2029, false positives: 59, false negatives: 47, mean wIoU: 0.855
Yolo11n-160 - total detections: 2027, false positives: 28, false negatives: 18, mean wIoU: 0.851
Yolo11s-160 - total detections: 2023, false positives: 26, false negatives: 20, mean wIoU: 0.859
Yolo11m-320 - total detections: 2078, false positives: 71, false negatives: 12, mean wIoU: 0.845
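A small self-contained sketch of the metric as defined above (treating the 50% IoU threshold as a hard cutoff is an assumption based on the parenthetical note):

```python
def mean_wiou(detections, iou_threshold=0.5):
    # detections: iterable of (confidence, IoU) pairs; keep only detections
    # that clear the IoU threshold, then average IoU weighted by confidence.
    kept = [(conf, iou) for conf, iou in detections if iou >= iou_threshold]
    total_conf = sum(conf for conf, _ in kept)
    if total_conf == 0:
        return 0.0
    return sum(conf * iou for conf, iou in kept) / total_conf

dets = [(0.9, 0.92), (0.7, 0.88), (0.3, 0.40)]  # last one falls below the cutoff
print(round(mean_wiou(dets), 4))  # 0.9025
```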
```
$ ./test image.png senet-fssd-160x120
0: class = 3, confidence = 1.000000, [61, 10, 124, 107]
1: class = 1, confidence = 0.999990, [72, 49, 115, 57]
$ ./test image.png squeezenet-ssd-160x120
0: class = 3, confidence = 1.000000, [61, 10, 123, 108]
1: class = 1, confidence = 0.774772, [72, 49, 114, 57]
$ ./test image.png yolo11n-160.onnx
0: class = 3, confidence = 0.920182, [60, 11, 123, 99]
1: class = 1, confidence = 0.766865, [71, 49, 115, 57]
$ ./test image.png yolo11m-320.onnx
0: class = 3, confidence = 0.895741, [61, 12, 123, 103]
1: class = 1, confidence = 0.745349, [72, 50, 115, 56]
```
Maybe the problem lies in training hyperparameters. We just set batch size to 128 and number of epochs to 300. I would appreciate any ideas. Thank you!
Meanwhile, I've been trying to simulate our SEnet-FSSD using a YAML model in Ultralytics. I don't know if it is a good idea; I just want to see if it changes anything. I made a pure copy of our network, but it is not possible to train it because of layer size mismatches. MaxPool2d layers don't seem to downscale the resolution in the same way as in the Caffe framework. There is also no Eltwise (element-wise sum) layer, so I had to change it to a Concat layer. Adding padding=1 to all MaxPool2d layers works, but it automatically changes the input resolution to 192. Even then, the results are practically very similar to the other YOLO models and not to our original network.
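One possible explanation for the pooling mismatch, offered as an assumption: Caffe rounds pooled output sizes up (ceil), while PyTorch's `nn.MaxPool2d` defaults to rounding down (`ceil_mode=False`). The output-size arithmetic for a 3x3 kernel with stride 2 on a 160x120 input illustrates the off-by-one:

```python
import math

def pool_out(size, k=3, s=2, ceil_mode=False):
    # Output-size formula for MaxPool2d with padding=0, dilation=1.
    rnd = math.ceil if ceil_mode else math.floor
    return rnd((size - k) / s) + 1

# 160x120 input through one 3x2 pool:
print(pool_out(120), pool_out(160))                                  # floor: 59 79
print(pool_out(120, ceil_mode=True), pool_out(160, ceil_mode=True))  # ceil: 60 80
```

If that is the cause, setting `ceil_mode=True` on the `nn.MaxPool2d` entries might reproduce Caffe's spatial sizes without the `padding=1` workaround, though I have not verified whether the Ultralytics YAML parser accepts that argument.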
Here is the model that is a 1:1 rewrite of our SSD network. Maybe someone will be able to fix it:
```
nc: 5
activation: nn.ReLU6()
backbone:
- [-1, 1, Conv, [64, 3, 2, 0]] # 0,conv1
- [-1, 1, nn.MaxPool2d, [3, 2]] # 1,pool1
- [-1, 1, Conv, [32, 1, 1, 0] ] # 2,fire2 squeeze
- [-1, 1, Conv, [64, 3, 1, 1] ] # 3,fire2 expand
- [-1, 1, Conv, [32, 1, 1, 0] ] # 4,fire3 squeeze
- [-1, 1, Conv, [64, 3, 1, 1] ] # 5,fire3 expand
- [-1, 1, nn.MaxPool2d, [3, 2]] # 6,pool3
- [-1, 1, Conv, [64, 1, 1, 0] ] # 7,fire4 squeeze
- [-1, 1, Conv, [128, 3, 1, 1] ] # 8,fire4 expand
- [-1, 1, Conv, [64, 1, 1, 0] ] # 9,fire5 squeeze
- [-1, 1, Conv, [128, 3, 1, 1] ] # 10,fire5 expand
- [-1, 1, nn.MaxPool2d, [3, 2]] # 11,pool5
- [-1, 1, Conv, [96, 1, 1, 0] ] # 12,fire6 squeeze
- [-1, 1, Conv, [192, 3, 1, 1] ] # 13,fire6 expand
- [-1, 1, Conv, [96, 1, 1, 0] ] # 14,fire7 squeeze
- [-1, 1, Conv, [192, 3, 1, 1] ] # 15,fire7 expand
- [-1, 1, Conv, [64, 1, 1, 0] ] # 16,fire8 squeeze
- [-1, 1, Conv, [256, 3, 1, 1] ] # 17,fire8 expand
- [-1, 1, Conv, [64, 1, 1, 0] ] # 18,fire9 squeeze
- [-1, 1, Conv, [256, 3, 1, 1] ] # 19,fire9 expand
head:
- [-1, 1, nn.MaxPool2d, [3, 2]] # 20,pool9
- [-1, 1, Conv, [96, 1, 1, 0] ] # 21,fire10 squeeze
- [-1, 1, Conv, [384, 3, 1, 1] ] # 22,fire10 expand
- [-1, 1, nn.MaxPool2d, [3, 2]] # 23,pool10
- [-1, 1, Conv, [64, 1, 1, 0] ] # 24,fire11 squeeze
- [-1, 1, Conv, [256, 3, 1, 1] ] # 25,fire11 expand
# feature-fusion layers
- [19, 1, Conv, [128, 1, 1, 0] ] # 26
- [-1, 1, nn.Upsample, [None, 2, "nearest"]] # 27
- [22, 1, Conv, [128, 1, 1, 0] ] # 28
- [-1, 1, nn.Upsample, [None, 4, "nearest"]] # 29
- [[29, 27, 10], 1, Concat, [1]] # 30
- [-1, 1, nn.BatchNorm2d, []] # 31
- [-1, 1, Conv, [64, 1, 1, 0] ] # 32
- [-1, 1, Conv, [128, 3, 1, 1] ] # 33
- [[25, 22, 19, 33], 1, Detect, [nc]]
```
### Additional
EDIT: I forgot to provide information that all detections are done using OpenCV in C++ | open | 2025-02-03T20:01:01Z | 2025-02-27T22:16:34Z | https://github.com/ultralytics/ultralytics/issues/18982 | [
"question",
"detect",
"embedded"
] | BigMuscle85 | 32 |
pydantic/logfire | pydantic | 786 | Logfire Only Logging FastAPI Requests, Not SQL Queries with SQLModel/SQLAlchemy | ### Question
Hello Logfire Community,
I'm integrating Logfire with a FastAPI backend that uses [SQLModel](https://sqlmodel.tiangolo.com/#a-sql-table) (built on SQLAlchemy) for database interactions. I aim to have Logfire display detailed logs for both incoming HTTP requests and the SQL queries executed by SQLModel. However, currently, only the FastAPI request data is being logged, and the SQL queries are not appearing in Logfire.
Here's an overview of my setup:
**Main Application (`main.py`):**
- Configured Logfire with service name, environment, and token.
- Instrumented Pydantic and FastAPI:

```python
logfire.instrument_pydantic()
logfire.instrument_fastapi(app)
```

- Included routers and set up the FastAPI app lifecycle.

**Database Configuration (`database.py`):**
- Created the SQLAlchemy engine:

```python
engine = create_engine(
    url=DATABASE_URL,
    echo=False if IS_CLOUD else True,
    connect_args={} if IS_CLOUD else {"check_same_thread": False},
)
```

- Instrumented SQLAlchemy with Logfire:

```python
logfire.instrument_sqlalchemy(engine=engine)
```

- Provided a session dependency for FastAPI endpoints.

**Endpoint Example:** Defined a FastAPI endpoint that uses a service and data access layer to execute database queries.
Issue: Despite the above setup, Logfire only captures and displays logs related to FastAPI requests. The SQL queries executed via SQLModel/SQLAlchemy are not being logged or visible in the Logfire dashboard.
What I've Tried:
- Ensured that `logfire.instrument_sqlalchemy(engine=engine)` is called after creating the engine.
- Verified that SQLAlchemy's `echo` parameter is set appropriately for logging.
- Checked Logfire configurations and tokens for correctness.
Questions:
1. Am I missing any additional configuration steps to enable SQL query logging with SQLModel/SQLAlchemy in Logfire?
2. Are there any compatibility considerations between Logfire and SQLModel that I should be aware of?
3. Could there be any issues with the order of instrumentation calls that might prevent SQL logs from being captured?
Additional Information:
- Logfire Version: logfire 3.0.0
- FastAPI Version: 0.111.1
- SQLModel Version: 0.0.19
- SQLAlchemy Version: 2.0.36
- Environment: Local
``` python
# main.py
import logfire
from fastapi import FastAPI
logfire.configure(
service_name="Recommendations",
environment="production",
token="YOUR_LOGFIRE_TOKEN",
)
logfire.instrument_pydantic()
logfire.instrument_fastapi(app)
# database.py
import logfire
from sqlmodel import create_engine
engine = create_engine(DATABASE_URL)
logfire.instrument_sqlalchemy(engine=engine)
```

Thank you for your assistance!

| open | 2025-01-08T14:36:25Z | 2025-01-21T08:12:52Z | https://github.com/pydantic/logfire/issues/786 | [
"Question"
] | alon710 | 6 |
sqlalchemy/alembic | sqlalchemy | 411 | auto generate not working for postgres JSONB | **Migrated issue, originally created by Paul van der Linden**
When I auto generate a migration for a model with a jsonb field i get the following error:
```
sa.Column('data', postgresql.JSONB(astext_type=Text()), nullable=True),
NameError: name 'Text' is not defined
```
This is with alembic `0.8.10` and SQLAlchemy `1.1.5`. It used to work, and would not add the `astext_type=Text()`, with alembic `0.8.9` and SQLAlchemy `1.0.15`.
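Until the renderer emits the import itself, a common manual fix is to qualify the type in the generated migration file. A hedged sketch (the column name is taken from the error above; everything else is illustrative):

```python
import sqlalchemy as sa
from sqlalchemy.dialects import postgresql

# The autogenerated line referenced a bare `Text`; qualifying it through the
# `sa` alias (or adding `from sqlalchemy import Text`) makes the migration importable.
column = sa.Column('data', postgresql.JSONB(astext_type=sa.Text()), nullable=True)
print(type(column.type).__name__)  # -> JSONB
```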
| closed | 2017-02-16T20:30:38Z | 2019-01-28T17:09:38Z | https://github.com/sqlalchemy/alembic/issues/411 | [
"bug",
"autogenerate - rendering",
"postgresql",
"awaiting info"
] | sqlalchemy-bot | 9 |
pyg-team/pytorch_geometric | pytorch | 8,747 | `torch.compile`:`Cannot call sizes() on tensor with symbolic sizes/strides` when using `knn_graph` | ### 🐛 Describe the bug
Compilation fails when using `knn_graph`. I am able to reproduce the problem with the following code:
```python
import torch
import torch_geometric
from torch_geometric.nn.pool import knn_graph
datas = [torch_geometric.data.Data(pos=torch.rand((50, 3))) for i in range(5)]
batch = torch_geometric.data.Batch.from_data_list(datas)
class MyModel(torch.nn.Module):
def __init__(self):
super().__init__()
def forward(self, pos, batch):
edge_index = knn_graph(pos, 5, batch=batch, loop=True)
return edge_index
model = MyModel()
model = torch_geometric.compile(model, dynamic=True)
output = model(batch.pos, batch.batch)
print(output.shape)
```
My environment contains:
```
# Name Version Build Channel
pytorch-lightning 2.1.2 pypi_0 pypi
torch 2.1.2+cu118 pypi_0 pypi
torch-cluster 1.6.3+pt21cu118 pypi_0 pypi
torch-geometric 2.4.0 pypi_0 pypi
torch-scatter 2.1.2+pt21cu118 pypi_0 pypi
torch-sparse 0.6.18+pt21cu118 pypi_0 pypi
torchmetrics 1.2.1 pypi_0 pypi
torchvision 0.16.2+cu118 pypi_0 pypi
```
Here is the full traceback of the exception:
```
/var/data/CGaydon/mambaforge/envs/myria3d_upgrade/lib/python3.9/site-packages/torch/overrides.py:110: UserWarning: 'has_cuda' is deprecated, please use 'torch.backends.cuda.is_built()'
torch.has_cuda,
/var/data/CGaydon/mambaforge/envs/myria3d_upgrade/lib/python3.9/site-packages/torch/overrides.py:111: UserWarning: 'has_cudnn' is deprecated, please use 'torch.backends.cudnn.is_available()'
torch.has_cudnn,
/var/data/CGaydon/mambaforge/envs/myria3d_upgrade/lib/python3.9/site-packages/torch/overrides.py:117: UserWarning: 'has_mps' is deprecated, please use 'torch.backends.mps.is_built()'
torch.has_mps,
/var/data/CGaydon/mambaforge/envs/myria3d_upgrade/lib/python3.9/site-packages/torch/overrides.py:118: UserWarning: 'has_mkldnn' is deprecated, please use 'torch.backends.mkldnn.is_available()'
torch.has_mkldnn,
Traceback (most recent call last):
File "/home/CGaydon/repositories/mre_compile.py", line 23, in <module>
output = model(batch.pos, batch.batch)
File "/var/data/CGaydon/mambaforge/envs/myria3d_upgrade/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/var/data/CGaydon/mambaforge/envs/myria3d_upgrade/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/var/data/CGaydon/mambaforge/envs/myria3d_upgrade/lib/python3.9/site-packages/torch/_dynamo/eval_frame.py", line 328, in _fn
return fn(*args, **kwargs)
File "/var/data/CGaydon/mambaforge/envs/myria3d_upgrade/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/var/data/CGaydon/mambaforge/envs/myria3d_upgrade/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/home/CGaydon/repositories/mre_compile.py", line 16, in forward
edge_index = knn_graph(pos, 5, batch=batch, loop=True)
File "/var/data/CGaydon/mambaforge/envs/myria3d_upgrade/lib/python3.9/site-packages/torch_geometric/nn/pool/__init__.py", line 169, in knn_graph
return torch_cluster.knn_graph(x, k, batch, loop, flow, cosine,
File "/var/data/CGaydon/mambaforge/envs/myria3d_upgrade/lib/python3.9/site-packages/torch_cluster/knn.py", line 132, in knn_graph
edge_index = knn(x, x, k if loop else k + 1, batch, batch, cosine,
File "/var/data/CGaydon/mambaforge/envs/myria3d_upgrade/lib/python3.9/site-packages/torch_cluster/knn.py", line 66, in knn
batch_size = int(batch_x.max()) + 1
File "/var/data/CGaydon/mambaforge/envs/myria3d_upgrade/lib/python3.9/site-packages/torch_cluster/knn.py", line 69, in <resume in knn>
batch_size = max(batch_size, int(batch_y.max()) + 1)
File "/var/data/CGaydon/mambaforge/envs/myria3d_upgrade/lib/python3.9/site-packages/torch/_dynamo/eval_frame.py", line 490, in catch_errors
return callback(frame, cache_entry, hooks, frame_state)
File "/var/data/CGaydon/mambaforge/envs/myria3d_upgrade/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 641, in _convert_frame
result = inner_convert(frame, cache_size, hooks, frame_state)
File "/var/data/CGaydon/mambaforge/envs/myria3d_upgrade/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 133, in _fn
return fn(*args, **kwargs)
File "/var/data/CGaydon/mambaforge/envs/myria3d_upgrade/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 389, in _convert_frame_assert
return _compile(
File "/var/data/CGaydon/mambaforge/envs/myria3d_upgrade/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 569, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/var/data/CGaydon/mambaforge/envs/myria3d_upgrade/lib/python3.9/site-packages/torch/_dynamo/utils.py", line 189, in time_wrapper
r = func(*args, **kwargs)
File "/var/data/CGaydon/mambaforge/envs/myria3d_upgrade/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 491, in compile_inner
out_code = transform_code_object(code, transform)
File "/var/data/CGaydon/mambaforge/envs/myria3d_upgrade/lib/python3.9/site-packages/torch/_dynamo/bytecode_transformation.py", line 1028, in transform_code_object
transformations(instructions, code_options)
File "/var/data/CGaydon/mambaforge/envs/myria3d_upgrade/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 458, in transform
tracer.run()
File "/var/data/CGaydon/mambaforge/envs/myria3d_upgrade/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 2069, in run
super().run()
File "/var/data/CGaydon/mambaforge/envs/myria3d_upgrade/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 719, in run
and self.step()
File "/var/data/CGaydon/mambaforge/envs/myria3d_upgrade/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 683, in step
getattr(self, inst.opname)(inst)
File "/var/data/CGaydon/mambaforge/envs/myria3d_upgrade/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 392, in wrapper
return inner_fn(self, inst)
File "/var/data/CGaydon/mambaforge/envs/myria3d_upgrade/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1110, in CALL_FUNCTION
self.call_function(fn, args, {})
File "/var/data/CGaydon/mambaforge/envs/myria3d_upgrade/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 557, in call_function
self.push(fn.call_function(self, args, kwargs))
File "/var/data/CGaydon/mambaforge/envs/myria3d_upgrade/lib/python3.9/site-packages/torch/_dynamo/variables/torch.py", line 729, in call_function
tensor_variable = wrap_fx_proxy(
File "/var/data/CGaydon/mambaforge/envs/myria3d_upgrade/lib/python3.9/site-packages/torch/_dynamo/variables/builder.py", line 1191, in wrap_fx_proxy
return wrap_fx_proxy_cls(
File "/var/data/CGaydon/mambaforge/envs/myria3d_upgrade/lib/python3.9/site-packages/torch/_dynamo/variables/builder.py", line 1278, in wrap_fx_proxy_cls
example_value = get_fake_value(proxy.node, tx)
File "/var/data/CGaydon/mambaforge/envs/myria3d_upgrade/lib/python3.9/site-packages/torch/_dynamo/utils.py", line 1376, in get_fake_value
raise TorchRuntimeError(str(e)).with_traceback(e.__traceback__) from None
File "/var/data/CGaydon/mambaforge/envs/myria3d_upgrade/lib/python3.9/site-packages/torch/_dynamo/utils.py", line 1337, in get_fake_value
return wrap_fake_exception(
File "/var/data/CGaydon/mambaforge/envs/myria3d_upgrade/lib/python3.9/site-packages/torch/_dynamo/utils.py", line 916, in wrap_fake_exception
return fn()
File "/var/data/CGaydon/mambaforge/envs/myria3d_upgrade/lib/python3.9/site-packages/torch/_dynamo/utils.py", line 1338, in <lambda>
lambda: run_node(tx.output, node, args, kwargs, nnmodule)
File "/var/data/CGaydon/mambaforge/envs/myria3d_upgrade/lib/python3.9/site-packages/torch/_dynamo/utils.py", line 1410, in run_node
raise RuntimeError(fn_str + str(e)).with_traceback(e.__traceback__) from e
File "/var/data/CGaydon/mambaforge/envs/myria3d_upgrade/lib/python3.9/site-packages/torch/_dynamo/utils.py", line 1397, in run_node
return node.target(*args, **kwargs)
File "/var/data/CGaydon/mambaforge/envs/myria3d_upgrade/lib/python3.9/site-packages/torch/_ops.py", line 692, in __call__
return self._op(*args, **kwargs or {})
torch._dynamo.exc.TorchRuntimeError: Failed running call_function torch_cluster.knn(*(FakeTensor(..., size=(s2, s3)), FakeTensor(..., size=(s2, s3)), FakeTensor(..., size=(s0 + 1,), dtype=torch.int64), FakeTensor(..., size=(s0 + 1,), dtype=torch.int64), 5, False, 1), **{}):
Cannot call sizes() on tensor with symbolic sizes/strides
from user code:
File "/var/data/CGaydon/mambaforge/envs/myria3d_upgrade/lib/python3.9/site-packages/torch_cluster/knn.py", line 81, in <resume in knn>
return torch.ops.torch_cluster.knn(x, y, ptr_x, ptr_y, k, cosine,
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
```
### Versions
PyTorch version: 2.1.2+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.9.18 | packaged by conda-forge | (main, Aug 30 2023, 03:49:32) [GCC 12.3.0] (64-bit runtime)
Python platform: Linux-5.15.0-79-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A40
GPU 1: NVIDIA A40
GPU 2: NVIDIA A40
Nvidia driver version: 535.104.05
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 96
On-line CPU(s) list: 0-95
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7413 24-Core Processor
CPU family: 25
Model: 1
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 2
Stepping: 1
Frequency boost: enabled
CPU max MHz: 3630,8101
CPU min MHz: 1500,0000
BogoMIPS: 5289.80
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca
Virtualization: AMD-V
L1d cache: 1,5 MiB (48 instances)
L1i cache: 1,5 MiB (48 instances)
L2 cache: 24 MiB (48 instances)
L3 cache: 256 MiB (8 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-23,48-71
NUMA node1 CPU(s): 24-47,72-95
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] flake8==6.1.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.2
[pip3] pytorch-lightning==2.1.2
[pip3] torch==2.1.2+cu118
[pip3] torch-cluster==1.6.3+pt21cu118
[pip3] torch_geometric==2.4.0
[pip3] torch-scatter==2.1.2+pt21cu118
[pip3] torch-sparse==0.6.18+pt21cu118
[pip3] torchmetrics==1.2.1
[pip3] torchvision==0.16.2+cu118
[pip3] triton==2.1.0
[conda] numpy 1.26.2 py39h474f0d3_0 conda-forge
[conda] pytorch-lightning 2.1.2 pypi_0 pypi
[conda] torch 2.1.2+cu118 pypi_0 pypi
[conda] torch-cluster 1.6.3+pt21cu118 pypi_0 pypi
[conda] torch-geometric 2.4.0 pypi_0 pypi
[conda] torch-scatter 2.1.2+pt21cu118 pypi_0 pypi
[conda] torch-sparse 0.6.18+pt21cu118 pypi_0 pypi
[conda] torchmetrics 1.2.1 pypi_0 pypi
[conda] torchvision 0.16.2+cu118 pypi_0 pypi
[conda] triton 2.1.0 pypi_0 pypi | closed | 2024-01-10T10:55:45Z | 2024-01-11T15:23:15Z | https://github.com/pyg-team/pytorch_geometric/issues/8747 | [
"bug",
"compile"
] | CharlesGaydon | 3 |
benlubas/molten-nvim | jupyter | 204 | [Help] [Molten] Cell not Found | **Molten cannot find cells** and using Quarto produces the following message:
```
No code chunks found for the current language, which is detected based on the current code block. Is your cursor in a code block?
```
Below are my plugin.lua files. _What am I doing wrong?_
**molten-nvim.lua:**
```lua
return {
{
"benlubas/molten-nvim",
version = "^1.0.0", -- use version <2.0.0 to avoid breaking changes
build = ":UpdateRemotePlugins",
init = function()
-- Configuration options
vim.g.molten_output_win_max_height = 12
vim.g.molten_auto_open_output = false
vim.g.molten_image_provider = "image.nvim"
vim.g.molten_wrap_output = true
vim.g.molten_virt_text_output = true
vim.g.molten_virt_lines_off_by_1 = true
-- Key mappings
vim.keymap.set("n", "<leader>mi", ":MoltenInit<CR>", { silent = true, desc = "Initialize Molten" })
vim.keymap.set(
"n",
"<leader>me",
":MoltenEvaluateOperator<CR>",
{ silent = true, desc = "Molten run operator selection" }
)
vim.keymap.set("n", "<leader>rl", ":MoltenEvaluateLine<CR>", { silent = true, desc = "Molten evaluate line" })
vim.keymap.set(
"n",
"<leader>rr",
":MoltenReevaluateCell<CR>",
{ silent = true, desc = "Molten re-evaluate cell" }
)
vim.keymap.set(
"v",
"<leader>r",
":<C-u>MoltenEvaluateVisual<CR>gv",
{ silent = true, desc = "Molten evaluate visual selection" }
)
vim.keymap.set(
"n",
"<leader>mo",
":noautocmd MoltenEnterOutput<CR>",
{ silent = true, desc = "Molten enter output" }
)
end,
},
{
-- see the image.nvim readme for more information about configuring this plugin
"3rd/image.nvim",
opts = {
backend = "kitty", -- whatever backend you would like to use
max_width = 100,
max_height = 12,
max_height_window_percentage = math.huge,
max_width_window_percentage = math.huge,
window_overlap_clear_enabled = true, -- toggles images when windows are overlapped
window_overlap_clear_ft_ignore = { "cmp_menu", "cmp_docs", "" },
},
},
}
```
**jupytext.lua:**
```lua
return {
"GCBallesteros/jupytext.nvim",
config = function()
require("jupytext").setup({
custom_language_formatting = {
python = {
extension = "md",
style = "markdown",
force_ft = "markdown",
},
})
end,
lazy = false,
}
```
**quarto.lua**
```lua
return {
{
"jmbuhr/otter.nvim",
lazy = false,
},
{
"quarto-dev/quarto-nvim",
ft = { "quarto", "markdown" },
config = function()
local runner = require("quarto.runner")
-- Set up key mappings for quarto.runner
vim.keymap.set("n", "<leader>rc", runner.run_cell, { desc = "run cell", silent = true })
vim.keymap.set("n", "<leader>ra", runner.run_above, { desc = "run cell and above", silent = true })
vim.keymap.set("n", "<leader>rA", runner.run_all, { desc = "run all cells", silent = true })
-- vim.keymap.set("n", "<leader>rl", runner.run_line, { desc = "run line", silent = true })
vim.keymap.set("v", "<leader>r", runner.run_range, { desc = "run visual range", silent = true })
vim.keymap.set("n", "<leader>RA", function()
runner.run_all(true)
end, { desc = "run all cells of all languages", silent = true })
end,
lazy = false,
},
}
```
| closed | 2024-06-04T16:09:31Z | 2024-06-05T00:35:26Z | https://github.com/benlubas/molten-nvim/issues/204 | [] | teimurlu | 3 |
dropbox/PyHive | sqlalchemy | 240 | "TSocket read 0 bytes" during a long running Hive insert query | I'm running a long-ish insert query in Hive using PyHive 0.6.1 and it fails with `thrift.transport.TTransport.TTransportException: TSocket read 0 bytes` after about 5 minutes running. On the server side the query keeps running until finishing successfully. I don't have this problem with fast queries.
The environment in which this happens is a Docker container based on `python:3.6-slim`. Among other things, I'm installing the `libsasl2-dev` and `libsasl2-modules` packages, and the `pyhive[hive]` Python package. I can't reproduce it locally on my Mac with the same Python version: the code correctly waits until the query finishes.
Any clue why this is happening? Thanks in advance.
The code I'm using is:
```python
import contextlib
from pyhive.hive import connect
def get_conn():
return connect(
host='my-host',
port=10000,
auth='NONE',
username='username',
database='database'
)
with contextlib.closing(get_conn()) as conn, \
contextlib.closing(conn.cursor()) as cur:
cur.execute('My long insert statement')
```
This is the full traceback
```
Traceback (most recent call last):
File "<stdin>", line 5, in <module>
File "/usr/local/lib/python3.6/site-packages/pyhive/hive.py", line 364, in execute
response = self._connection.client.ExecuteStatement(req)
File "/usr/local/lib/python3.6/site-packages/TCLIService/TCLIService.py", line 280, in ExecuteStatement
return self.recv_ExecuteStatement()
File "/usr/local/lib/python3.6/site-packages/TCLIService/TCLIService.py", line 292, in recv_ExecuteStatement
(fname, mtype, rseqid) = iprot.readMessageBegin()
File "/usr/local/lib/python3.6/site-packages/thrift/protocol/TBinaryProtocol.py", line 134, in readMessageBegin
sz = self.readI32()
File "/usr/local/lib/python3.6/site-packages/thrift/protocol/TBinaryProtocol.py", line 217, in readI32
buff = self.trans.readAll(4)
File "/usr/local/lib/python3.6/site-packages/thrift/transport/TTransport.py", line 60, in readAll
chunk = self.read(sz - have)
File "/usr/local/lib/python3.6/site-packages/thrift_sasl/__init__.py", line 166, in read
self._read_frame()
File "/usr/local/lib/python3.6/site-packages/thrift_sasl/__init__.py", line 170, in _read_frame
header = self._trans.readAll(4)
File "/usr/local/lib/python3.6/site-packages/thrift/transport/TTransport.py", line 60, in readAll
chunk = self.read(sz - have)
File "/usr/local/lib/python3.6/site-packages/thrift/transport/TSocket.py", line 132, in read
message='TSocket read 0 bytes')
thrift.transport.TTransport.TTransportException: TSocket read 0 bytes
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 5, in <module>
File "/usr/local/lib/python3.6/contextlib.py", line 185, in __exit__
self.thing.close()
File "/usr/local/lib/python3.6/site-packages/pyhive/hive.py", line 221, in close
response = self._client.CloseSession(req)
File "/usr/local/lib/python3.6/site-packages/TCLIService/TCLIService.py", line 218, in CloseSession
return self.recv_CloseSession()
File "/usr/local/lib/python3.6/site-packages/TCLIService/TCLIService.py", line 230, in recv_CloseSession
(fname, mtype, rseqid) = iprot.readMessageBegin()
File "/usr/local/lib/python3.6/site-packages/thrift/protocol/TBinaryProtocol.py", line 134, in readMessageBegin
sz = self.readI32()
File "/usr/local/lib/python3.6/site-packages/thrift/protocol/TBinaryProtocol.py", line 217, in readI32
buff = self.trans.readAll(4)
File "/usr/local/lib/python3.6/site-packages/thrift/transport/TTransport.py", line 60, in readAll
chunk = self.read(sz - have)
File "/usr/local/lib/python3.6/site-packages/thrift_sasl/__init__.py", line 166, in read
self._read_frame()
File "/usr/local/lib/python3.6/site-packages/thrift_sasl/__init__.py", line 170, in _read_frame
header = self._trans.readAll(4)
File "/usr/local/lib/python3.6/site-packages/thrift/transport/TTransport.py", line 60, in readAll
chunk = self.read(sz - have)
File "/usr/local/lib/python3.6/site-packages/thrift/transport/TSocket.py", line 132, in read
message='TSocket read 0 bytes')
thrift.transport.TTransport.TTransportException: TSocket read 0 bytes
``` | open | 2018-09-24T21:13:14Z | 2020-08-05T19:45:47Z | https://github.com/dropbox/PyHive/issues/240 | [] | gseva | 17 |
mwaskom/seaborn | data-science | 3,738 | Introducing `seaborn_objects_recipes` Library | I am excited to introduce a new library we've (@nickeubank) been working on, `seaborn_objects_recipes`. This library extends `seaborn.objects` by providing additional functionalities that we hope will be useful for your data visualization needs.
**Features**
The library includes the following recipes:
* **Rolling:** Apply rolling window calculations to your data.
* **LineLabel:** Add labels directly to your lines for better readability.
* **Lowess:** Perform locally-weighted regression smoothing, with support for confidence intervals.
* **PolyFitWithCI:** Fit polynomial regression models and include confidence intervals.
**Example Usage**
Here's a quick example using the `PolyFitWithCI` recipes:
```python
import seaborn.objects as so
import seaborn as sns
import seaborn_objects_recipes as sor
# Load the penguins dataset
penguins = sns.load_dataset("penguins")
# Prepare data
data = penguins.copy()
data = data[data["species"] == "Adelie"]
# Create the plot
plot = (
so.Plot(data, x="bill_length_mm", y="body_mass_g")
.add(so.Dot())
.add(so.Line(), PolyFitWithCI := sor.PolyFitWithCI(order=2, gridsize=100, alpha=0.05))
.add(so.Band(), PolyFitWithCI)
.label(x="Bill Length (mm)", y="Body Mass (g)", title="PolyFit Plot with Confidence Intervals")
)
# Display Plot
plot.show()
```
**Output**

**Acknowledgements**
We'd like to acknowledge and thank the following contributors from whom we've borrowed code:
* Special thanks to @JesseFarebro for [Rolling, LineLabel](https://github.com/mwaskom/seaborn/discussions/3133)
* Special thanks to @tbpassin and @kcarnold for [LOWESS Smoother](https://github.com/mwaskom/seaborn/issues/3320)
**Feedback Request**
We are looking for feedback on the following:
1. Integration: Should this library remain a separate extension, or would it be better to roll these features directly into seaborn.objects?
2. Confidence Intervals: We are particularly interested in feedback on how we're handling confidence intervals in our `Lowess` and `PolyFitWithCI` implementations.
You can find the library and more examples on our GitHub repository: [seaborn_objects_recipes](https://github.com/Ofosu-Osei/seaborn_objects_recipes).
Looking forward to your feedback!
| open | 2024-07-25T02:17:49Z | 2024-07-26T00:59:36Z | https://github.com/mwaskom/seaborn/issues/3738 | [] | Ofosu-Osei | 2 |
jupyterhub/zero-to-jupyterhub-k8s | jupyter | 3,595 | Modern labels not yet introduced for the pods controlled by deployments etc | The following labels are missing from the hub pod but are present in the singleuser pod. Is this the intended behavior?
```
app.kubernetes.io/name
app.kubernetes.io/instance
app.kubernetes.io/managed-by
```
You can verify this behavior by describing the resource or checking it manually with the command:
```
helm template chart-release . --values values.yaml
``` | closed | 2025-01-10T14:40:00Z | 2025-01-12T15:07:39Z | https://github.com/jupyterhub/zero-to-jupyterhub-k8s/issues/3595 | [
"bug"
] | samyuh | 4 |
pytest-dev/pytest-html | pytest | 130 | Text on successful execution | Awesome plugin, thanks for it!
I have a question: How can I show some text when the execution is successful?
My tests print() details and make assertions, but the report shows "No log output captured".
Thanks in advance for your help! | closed | 2017-09-05T10:49:21Z | 2017-09-05T12:39:51Z | https://github.com/pytest-dev/pytest-html/issues/130 | [] | lcofre | 2 |
predict-idlab/plotly-resampler | data-visualization | 229 | Does this work with px.strip | I was excited to hear of `plotly-resampler`, as it seems to address my use-case very well.
However, I've been benchmarking this on some plotly figures that use `px.strip` - and I'm not seeing performance improvements.
Does `plotly-resampler` work with `px.strip`?
https://plotly.com/python/strip-charts/ | open | 2023-06-23T11:00:26Z | 2023-07-26T17:10:54Z | https://github.com/predict-idlab/plotly-resampler/issues/229 | [] | mkleinbort-ic | 3 |
deezer/spleeter | deep-learning | 670 | Cannot Reproduce MusDB Evaluation Results | - [✅] I didn't find a similar issue already open.
- [✅] I read the documentation (README AND Wiki)
- [✅] I have installed FFMpeg
- [✅] My problem is related to Spleeter only, not a derivative product (such as Webapplication, or GUI provided by others)
## Description
Despite multiple training runs using the provided MusDB configuration, I cannot reproduce the results shown on the MusDB evaluation table. Training/evaluation is performed using the provided musdb_config.json file. My experiment results in ~10% lower average SDR performance (avg. SDR of 4.059 vs. the expected 4.43, the Spleeter team's result). The full result comparison can be found under the Output section.
## Step to reproduce
1. Installed using pip (Python 3.8 venv, Spleeter 2.2.2, Tensorflow-GPU 2.6.0, Nvidia RTX 2080Ti, CUDA 11.4, CuDNN Toolkit 11.2)
2. Run as (in virtual env):
- spleeter train -p configs/musdb_config.json -d /path/to/musdb18 --verbose
- check train/validation csvs and audio data loaded
- spleeter evaluate -p configs/musdb_config.json --musd_dir /path/to/musdb18 -o /some_output_eval_path/ --verbose
3. Evaluation result for avg. SDR is ~10% less than Spleeter's MusDB evaluation.
## Output
**My musdb_config.json Result**
- Vocals SDR: 4.668
- Bass SDR: 3.889
- Drums SDR: 4.496
- Other SDR: 3.181
- Average SDR: 4.059
**Expected: Spleeter Team's musdb_config.json Result**
- Vocals SDR: 5.10
- Bass SDR: 4.27
- Drums SDR: 5.15
- Other SDR: 3.21
- Average SDR: 4.43
## Environment
<!-- Fill the following table -->
| | |
| ----------------- | ------------------------------- |
| OS | Linux 18.04 |
| Installation type | pip |
| RAM available | 64GB |
| Hardware spec | Nvidia RTX 2080Ti, Intel i7-8700k |
## Additional context
Tools: Python 3.8 venv, Spleeter 2.2.2, Tensorflow-GPU 2.6.0, Nvidia RTX 2080Ti, CUDA 11.4, CuDNN Toolkit 11.2
| open | 2021-10-14T18:57:43Z | 2021-10-28T18:18:47Z | https://github.com/deezer/spleeter/issues/670 | [
"bug",
"invalid"
] | jaolan | 2 |
python-gitlab/python-gitlab | api | 3,112 | CLI ignores end points that don't define managed RESTObject | ## Description of the problem, including code/CLI snippet
Certain endpoints, like Sidekiq's, are skipped by the CLI parser because they don't manage any objects. This is because only RESTManager classes that define the `_obj_cls` attribute are parsed for endpoints.
I described this behaviour here: https://github.com/python-gitlab/python-gitlab/pull/3083#discussion_r1934000608
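A toy model of the described filtering (all names are hypothetical stand-ins, not the real python-gitlab classes):

```python
# Only managers that declare a managed RESTObject via `_obj_cls`
# get a CLI entry; managers like Sidekiq's are silently skipped.
class RESTManager:
    _obj_cls = None

class ProjectManager(RESTManager):
    _obj_cls = object   # manages a RESTObject -> parsed into the CLI

class SidekiqManager(RESTManager):
    pass                # no managed object -> ignored

managers = {"project": ProjectManager, "sidekiq": SidekiqManager}
cli_entries = {name: mgr for name, mgr in managers.items()
               if mgr._obj_cls is not None}
print(sorted(cli_entries))  # -> ['project']
```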
## Expected Behavior
All endpoints with managers are accessible.
## Actual Behavior
Missing endpoints:
* `group-registry-repository`: `/groups/{group_id}/registry/repositories`
* `project-audit`: `/projects/{project_id}/audit_events`
* `project-iteration`: `/projects/{project_id}/iterations`
* `sidekiq`: None
* `user-identity-provider`: `/users/{user_id}/identities`
This list is made by taking a diff between current main branch `python -m gitlab --help` and my quick experiment of adding all RESTManagers to the CLI parser. | open | 2025-01-30T15:33:57Z | 2025-02-12T23:27:10Z | https://github.com/python-gitlab/python-gitlab/issues/3112 | [] | igorp-collabora | 4 |
ClimbsRocks/auto_ml | scikit-learn | 239 | modify feature responses with feature_learning | right now i think we've got our feature_learning transformer as part of our transformation pipeline, which we don't invoke as part of feature responses. instead, we use the static output from our transformation pipeline, and only get the predictions from our trained final model, not the output from our feature_learning model. | open | 2017-06-13T01:05:12Z | 2017-06-13T01:05:12Z | https://github.com/ClimbsRocks/auto_ml/issues/239 | [] | ClimbsRocks | 0 |
adbar/trafilatura | web-scraping | 23 | Using headline instead of name in JSON-LD metadata | Trafilatura extracts some metadata from JSON-LD tag if it is available. In particular, it tries to search for the title in the "headline" property of the JSON-LD tag, but looks like the headline is not necessarily the title. For example, look at this wikipedia page: https://en.m.wikipedia.org/wiki/Semantic_satiation
The JSON-LD is:
```json
{
"@context":"https:\/\/schema.org","@type":"Article",
"name":"Semantic satiation",
"url":"https:\/\/en.wikipedia.org\/wiki\/Semantic_satiation",
"sameAs":"http:\/\/www.wikidata.org\/entity\/Q226007",
"mainEntity":"http:\/\/www.wikidata.org\/entity\/Q226007",
"author":{"@type":"Organization","name":"Contributors to Wikimedia projects"},
"publisher":{"@type":"Organization","name":"Wikimedia Foundation, Inc.","logo":{"@type":"ImageObject","url":"https:\/\/www.wikimedia.org\/static\/images\/wmf-hor-googpub.png"}},
"datePublished":"2006-07-12T09:27:14Z",
"dateModified":"2020-08-31T23:55:26Z",
"headline":"psychological phenomenon in which repetition causes a word to temporarily lose meaning for the listener"
}
```
Most of the wikipedia pages are like this.
The title of the page is in the "name" property, and the "headline" property contains a short tagline instead. So trafilatura gives the tagline instead of the title as the title of the page. It probably makes sense to search for the "name" property first? Though it would be hard to extract with a regex: "name" also appears in subfields, like in the "author" property above, so would need to parse the json properly.
There was even a proposal to get rid of the headline property and replace it with "name" or with "title": https://github.com/schemaorg/schemaorg/issues/205 | closed | 2020-10-18T02:04:35Z | 2020-10-19T16:43:23Z | https://github.com/adbar/trafilatura/issues/23 | [] | posobin | 1 |
deepset-ai/haystack | machine-learning | 9,059 | Introduce Chat Generator Protocol | closed | 2025-03-18T18:06:12Z | 2025-03-20T10:58:10Z | https://github.com/deepset-ai/haystack/issues/9059 | [] | anakin87 | 0 | |
open-mmlab/mmdetection | pytorch | 11,833 | Migrate the MAE pretrained vit in mmpretrain to the VitDet project | The vit implementation in the VitDet project is different from that in mmpretrain. When I directly replace the mae pre-trained vit from mmpretrain into the VitDet project, it doesn't use window attention resulting in a very memory intensive and much lower Map than the implementation in the project.
Can someone please provide the VitDet implementation config for a vitbased on MAE pretrained in mmpretrain or provide some guidance?
Backbone config is as follows:
```python
backbone=dict(
    arch='base',
    drop_path_rate=0.1,
    final_norm=False,
    img_size=1024,
    init_cfg=dict(
        checkpoint='/root/mmdetection/mycheckpoints/mae_vit-base-p16_8xb512-coslr-800e-fp16_in1k_20220825-5d81fbc4.pth',
        prefix='backbone.',
        type='Pretrained'),
    out_type='featmap',
    patch_size=16,
    type='mmpretrain.VisionTransformer')
```
| open | 2024-07-05T16:23:13Z | 2024-07-05T16:23:29Z | https://github.com/open-mmlab/mmdetection/issues/11833 | [] | Vanish1112 | 0 |
python-gitlab/python-gitlab | api | 2,258 | Upload attachment to Wiki | ## Description of the problem, including code/CLI snippet
The GitLab API supports attaching a file to a wiki page, which is handy for bots that generate wiki pages, such as dashboards.
See: https://docs.gitlab.com/ee/api/wikis.html#upload-an-attachment-to-the-wiki-repository
## Expected Behavior
Something like `proj.wikis.upload(id,file,branch)` would be ideal
## Actual Behavior
The API is not reflected in python-gitlab
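Until such a wrapper exists, a possible stopgap is to call the documented endpoint directly. A sketch with `requests` (the helper names are made up; the `file` and `branch` form fields follow the linked docs):

```python
def wiki_attachment_url(base_url, project_id):
    # Documented endpoint: POST /projects/:id/wikis/attachments
    return f"{base_url.rstrip('/')}/api/v4/projects/{project_id}/wikis/attachments"

def upload_wiki_attachment(base_url, token, project_id, file_path, branch="main"):
    import requests  # imported lazily so the URL helper stays dependency-free

    with open(file_path, "rb") as f:
        resp = requests.post(
            wiki_attachment_url(base_url, project_id),
            headers={"PRIVATE-TOKEN": token},
            data={"branch": branch},
            files={"file": f},
        )
    resp.raise_for_status()
    return resp.json()
```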
_If I am missing something, tell me, but I haven't found that API call_ | closed | 2022-08-25T12:36:54Z | 2024-11-25T01:50:11Z | https://github.com/python-gitlab/python-gitlab/issues/2258 | [
"feature",
"help wanted"
] | lhausermann | 9 |
horovod/horovod | deep-learning | 3,625 | ncclCommInitRank failed: unhandled cuda error | **Environment:**
1. Framework: Tensorflow
2. Framework version: 2.9.1
3. Horovod version: 0.25.0
4. MPI version: 4.1.3
5. CUDA version: 11.7
6. NCCL version: 2.13
7. Python version: 3.10.5
8. Spark / PySpark version:
9. Ray version:
10. OS and version: ubuntu 20.04.2
11. GCC version: gcc (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
12. CMake version: cmake version 3.23.2
**Bug report:**
Please describe erroneous behavior you're observing and steps to reproduce it.
I am testing on NVIDIA GeForce RTX 3090 GPUs with driver version 515. I installed NCCL with `apt install` and tested it using [nccl-tests](https://github.com/NVIDIA/nccl-tests), which succeeded.
Now if I set `CUDA_VISIBLE_DEVICES=0`, i.e., test against one GPU card by running `horovodrun -np 1 python3 main.py`, the sample demo runs without problems. Next, if I set `CUDA_VISIBLE_DEVICES=0,1` and run `horovodrun -np 2 python3 main.py`, i.e., test against two GPU cards, I encounter the following errors...
```
2022-07-29 00:06:58.785131: I tensorflow/core/util/util.cc:169] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
[1,0]<stderr>:2022-07-29 00:07:00.634827: I tensorflow/core/util/util.cc:169] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
[1,1]<stderr>:2022-07-29 00:07:00.634825: I tensorflow/core/util/util.cc:169] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
[1,0]<stdout>:hvd.local_rank(): 0
[1,1]<stdout>:hvd.local_rank(): 1
[1,1]<stdout>:gpu: [PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU'), PhysicalDevice(name='/physical_device:GPU:1', device_type='GPU')]
[1,0]<stdout>:gpu: [PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU'), PhysicalDevice(name='/physical_device:GPU:1', device_type='GPU')]
[1,1]<stderr>:2022-07-29 00:07:02.355240: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 AVX512F AVX512_VNNI FMA
[1,1]<stderr>:To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
[1,0]<stderr>:2022-07-29 00:07:02.361996: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 AVX512F AVX512_VNNI FMA
[1,0]<stderr>:To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
[1,1]<stderr>:2022-07-29 00:07:03.185713: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1532] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 22258 MB memory: -> device: 1, name: NVIDIA GeForce RTX 3090, pci bus id: 0000:3b:00.0, compute capability: 8.6
[1,0]<stderr>:2022-07-29 00:07:03.244172: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1532] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 22258 MB memory: -> device: 0, name: NVIDIA GeForce RTX 3090, pci bus id: 0000:18:00.0, compute capability: 8.6
[1,1]<stderr>:WARNING:tensorflow:From /home/liupan/CollaborationForce/Framework-Horovod/frameworkhorovod/supl/data.py:16: shuffle_and_repeat (from tensorflow.python.data.experimental.ops.shuffle_ops) is deprecated and will be removed in a future version.
[1,1]<stderr>:Instructions for updating:
[1,1]<stderr>:Use `tf.data.Dataset.shuffle(buffer_size, seed)` followed by `tf.data.Dataset.repeat(count)`. Static tf.data optimizations will take care of using the fused implementation.
[1,0]<stderr>:WARNING:tensorflow:From /home/liupan/CollaborationForce/Framework-Horovod/frameworkhorovod/supl/data.py:16: shuffle_and_repeat (from tensorflow.python.data.experimental.ops.shuffle_ops) is deprecated and will be removed in a future version.
[1,0]<stderr>:Instructions for updating:
[1,0]<stderr>:Use `tf.data.Dataset.shuffle(buffer_size, seed)` followed by `tf.data.Dataset.repeat(count)`. Static tf.data optimizations will take care of using the fused implementation.
[1,1]<stdout>:Broadcasting initial variable states from rank 0 to all other processes...
[1,0]<stdout>:Broadcasting initial variable states from rank 0 to all other processes...
[1,1]<stdout>:Broadcasting initial variable states from rank 0 to all other processes...
[1,0]<stdout>:Broadcasting initial variable states from rank 0 to all other processes...
[1,1]<stderr>:2022-07-29 00:07:21.059097: I tensorflow/stream_executor/cuda/cuda_dnn.cc:384] Loaded cuDNN version 8401
[1,0]<stderr>:2022-07-29 00:07:21.245936: I tensorflow/stream_executor/cuda/cuda_dnn.cc:384] Loaded cuDNN version 8401
[1,1]<stderr>:2022-07-29 00:07:23.090369: I tensorflow/stream_executor/cuda/cuda_blas.cc:1786] TensorFloat-32 will be used for the matrix multiplication. This will only be logged once.
[1,0]<stderr>:2022-07-29 00:07:23.319955: I tensorflow/stream_executor/cuda/cuda_blas.cc:1786] TensorFloat-32 will be used for the matrix multiplication. This will only be logged once.
[1,1]<stderr>:[biomind:587896:0:587978] Caught signal 11 (Segmentation fault: Sent by the kernel at address (nil))
[1,0]<stderr>:[biomind:587895:0:587975] Caught signal 11 (Segmentation fault: Sent by the kernel at address (nil))
[1,1]<stderr>:==== backtrace (tid: 587978) ====
[1,1]<stderr>: 0 /usr/local/ucx-1.13.0/lib/libucs.so.0(ucs_handle_error+0x2dc) [0x7f5a6c0b6bdc]
[1,1]<stderr>: 1 /usr/local/ucx-1.13.0/lib/libucs.so.0(+0x33dc7) [0x7f5a6c0b6dc7]
[1,1]<stderr>: 2 /usr/local/ucx-1.13.0/lib/libucs.so.0(+0x340f4) [0x7f5a6c0b70f4]
[1,1]<stderr>: 3 /usr/local/Python-3.10.5/lib/python3.10/site-packages/horovod/tensorflow/mpi_lib.cpython-310-x86_64-linux-gnu.so(+0x1f018f) [0x7f5a6e00518f]
[1,1]<stderr>: 4 /usr/local/Python-3.10.5/lib/python3.10/site-packages/horovod/tensorflow/mpi_lib.cpython-310-x86_64-linux-gnu.so(_ZN7horovod6common11NCCLContext10ErrorCheckENSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE12ncclResult_tRP8ncclComm+0x64) [0x7f5a6df6c824]
[1,1]<stderr>: 5 /usr/local/Python-3.10.5/lib/python3.10/site-packages/horovod/tensorflow/mpi_lib.cpython-310-x86_64-linux-gnu.so(_ZN7horovod6common13NCCLAllreduce7ExecuteERSt6vectorINS0_16TensorTableEntryESaIS3_EERKNS0_8ResponseE+0x246) [0x7f5a6df6dc56]
[1,1]<stderr>: 6 /usr/local/Python-3.10.5/lib/python3.10/site-packages/horovod/tensorflow/mpi_lib.cpython-310-x86_64-linux-gnu.so(_ZNK7horovod6common16OperationManager16ExecuteAllreduceERSt6vectorINS0_16TensorTableEntryESaIS3_EERKNS0_8ResponseE+0x7d) [0x7f5a6df2ebed]
[1,1]<stderr>: 7 /usr/local/Python-3.10.5/lib/python3.10/site-packages/horovod/tensorflow/mpi_lib.cpython-310-x86_64-linux-gnu.so(_ZNK7horovod6common16OperationManager16ExecuteOperationERSt6vectorINS0_16TensorTableEntryESaIS3_EERKNS0_8ResponseERNS0_10ProcessSetE+0x4c) [0x7f5a6df2f0cc]
[1,1]<stderr>: 8 /usr/local/Python-3.10.5/lib/python3.10/site-packages/horovod/tensorflow/mpi_lib.cpython-310-x86_64-linux-gnu.so(+0xeaee8) [0x7f5a6deffee8]
[1,1]<stderr>: 9 /usr/local/Python-3.10.5/lib/python3.10/site-packages/tensorflow/python/../libtensorflow_framework.so.2(+0x1a52490) [0x7f5af77d2490]
[1,1]<stderr>:10 /lib/x86_64-linux-gnu/libpthread.so.0(+0x8609) [0x7f5b28a4d609]
[1,1]<stderr>:11 /lib/x86_64-linux-gnu/libc.so.6(clone+0x43) [0x7f5b28b87133]
[1,1]<stderr>:=================================
[1,1]<stderr>:[biomind:587896] *** Process received signal ***
[1,1]<stderr>:[biomind:587896] Signal: Segmentation fault (11)
[1,1]<stderr>:[biomind:587896] Signal code: (-6)
[1,1]<stderr>:[biomind:587896] Failing at address: 0x3e50008f878
[1,1]<stderr>:[biomind:587896] [ 0] [1,1]<stderr>:/lib/x86_64-linux-gnu/libc.so.6(+0x43090)[0x7f5b28aab090]
[1,1]<stderr>:[biomind:587896] [ 1] [1,1]<stderr>:/usr/local/Python-3.10.5/lib/python3.10/site-packages/horovod/tensorflow/mpi_lib.cpython-310-x86_64-linux-gnu.so(+0x1f018f)[0x7f5a6e00518f]
[1,1]<stderr>:[biomind:587896] [ 2] [1,1]<stderr>:/usr/local/Python-3.10.5/lib/python3.10/site-packages/horovod/tensorflow/mpi_lib.cpython-310-x86_64-linux-gnu.so(_ZN7horovod6common11NCCLContext10ErrorCheckENSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE12ncclResult_tRP8ncclComm+0x64)[0x7f5a6df6c824]
[1,1]<stderr>:[biomind:587896] [ 3] [1,1]<stderr>:/usr/local/Python-3.10.5/lib/python3.10/site-packages/horovod/tensorflow/mpi_lib.cpython-310-x86_64-linux-gnu.so(_ZN7horovod6common13NCCLAllreduce7ExecuteERSt6vectorINS0_16TensorTableEntryESaIS3_EERKNS0_8ResponseE+0x246)[0x7f5a6df6dc56]
[1,1]<stderr>:[biomind:587896] [ 4] [1,1]<stderr>:/usr/local/Python-3.10.5/lib/python3.10/site-packages/horovod/tensorflow/mpi_lib.cpython-310-x86_64-linux-gnu.so(_ZNK7horovod6common16OperationManager16ExecuteAllreduceERSt6vectorINS0_16TensorTableEntryESaIS3_EERKNS0_8ResponseE+0x7d)[0x7f5a6df2ebed]
[1,1]<stderr>:[biomind:587896] [ 5] [1,1]<stderr>:/usr/local/Python-3.10.5/lib/python3.10/site-packages/horovod/tensorflow/mpi_lib.cpython-310-x86_64-linux-gnu.so(_ZNK7horovod6common16OperationManager16ExecuteOperationERSt6vectorINS0_16TensorTableEntryESaIS3_EERKNS0_8ResponseERNS0_10ProcessSetE+0x4c)[0x7f5a6df2f0cc]
[1,1]<stderr>:[biomind:587896] [ 6] [1,1]<stderr>:/usr/local/Python-3.10.5/lib/python3.10/site-packages/horovod/tensorflow/mpi_lib.cpython-310-x86_64-linux-gnu.so(+0xeaee8)[0x7f5a6deffee8]
[1,1]<stderr>:[biomind:587896] [ 7] [1,0]<stderr>:==== backtrace (tid: 587975) ====
[1,0]<stderr>: 0 /usr/local/ucx-1.13.0/lib/libucs.so.0(ucs_handle_error+0x2dc) [0x7f708465dbdc]
[1,0]<stderr>: 1 /usr/local/ucx-1.13.0/lib/libucs.so.0(+0x33dc7) [0x7f708465ddc7]
[1,0]<stderr>: 2 /usr/local/ucx-1.13.0/lib/libucs.so.0(+0x340f4) [0x7f708465e0f4]
[1,0]<stderr>: 3 /usr/local/Python-3.10.5/lib/python3.10/site-packages/horovod/tensorflow/mpi_lib.cpython-310-x86_64-linux-gnu.so(+0x1f018f) [0x7f70876fd18f]
[1,0]<stderr>: 4 /usr/local/Python-3.10.5/lib/python3.10/site-packages/horovod/tensorflow/mpi_lib.cpython-310-x86_64-linux-gnu.so(_ZN7horovod6common11NCCLContext10ErrorCheckENSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE12ncclResult_tRP8ncclComm+0x64) [0x7f7087664824]
[1,0]<stderr>: 5 /usr/local/Python-3.10.5/lib/python3.10/site-packages/horovod/tensorflow/mpi_lib.cpython-310-x86_64-linux-gnu.so(_ZN7horovod6common13NCCLAllreduce7ExecuteERSt6vectorINS0_16TensorTableEntryESaIS3_EERKNS0_8ResponseE+0x246) [0x7f7087665c56]
[1,0]<stderr>: 6 /usr/local/Python-3.10.5/lib/python3.10/site-packages/horovod/tensorflow/mpi_lib.cpython-310-x86_64-linux-gnu.so(_ZNK7horovod6common16OperationManager16ExecuteAllreduceERSt6vectorINS0_16TensorTableEntryESaIS3_EERKNS0_8ResponseE+0x7d) [0x7f7087626bed]
[1,0]<stderr>: 7 /usr/local/Python-3.10.5/lib/python3.10/site-packages/horovod/tensorflow/mpi_lib.cpython-310-x86_64-linux-gnu.so(_ZNK7horovod6common16OperationManager16ExecuteOperationERSt6vectorINS0_16TensorTableEntryESaIS3_EERKNS0_8ResponseERNS0_10ProcessSetE+0x4c) [0x7f70876270cc]
[1,0]<stderr>: 8 /usr/local/Python-3.10.5/lib/python3.10/site-packages/horovod/tensorflow/mpi_lib.cpython-310-x86_64-linux-gnu.so(+0xeaee8) [0x7f70875f7ee8]
[1,0]<stderr>: 9 /usr/local/Python-3.10.5/lib/python3.10/site-packages/tensorflow/python/../libtensorflow_framework.so.2(+0x1a52490) [0x7f7110eca490]
[1,0]<stderr>:10 /lib/x86_64-linux-gnu/libpthread.so.0(+0x8609) [0x7f7142145609]
[1,0]<stderr>:11 /lib/x86_64-linux-gnu/libc.so.6(clone+0x43) [0x7f714227f133]
[1,0]<stderr>:=================================
[1,0]<stderr>:[biomind:587895] *** Process received signal ***
[1,0]<stderr>:[biomind:587895] Signal: Segmentation fault (11)
[1,0]<stderr>:[biomind:587895] Signal code: (-6)
[1,0]<stderr>:[biomind:587895] Failing at address: 0x3e50008f877
[1,0]<stderr>:[biomind:587895] [ 0] [1,0]<stderr>:/lib/x86_64-linux-gnu/libc.so.6(+0x43090)[0x7f71421a3090]
[1,0]<stderr>:[biomind:587895] [ 1] [1,0]<stderr>:/usr/local/Python-3.10.5/lib/python3.10/site-packages/horovod/tensorflow/mpi_lib.cpython-310-x86_64-linux-gnu.so(+0x1f018f)[0x7f70876fd18f]
[1,0]<stderr>:[biomind:587895] [ 2] [1,0]<stderr>:/usr/local/Python-3.10.5/lib/python3.10/site-packages/horovod/tensorflow/mpi_lib.cpython-310-x86_64-linux-gnu.so(_ZN7horovod6common11NCCLContext10ErrorCheckENSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE12ncclResult_tRP8ncclComm+0x64)[0x7f7087664824]
[1,0]<stderr>:[biomind:587895] [ 3] [1,0]<stderr>:/usr/local/Python-3.10.5/lib/python3.10/site-packages/horovod/tensorflow/mpi_lib.cpython-310-x86_64-linux-gnu.so(_ZN7horovod6common13NCCLAllreduce7ExecuteERSt6vectorINS0_16TensorTableEntryESaIS3_EERKNS0_8ResponseE+0x246)[0x7f7087665c56]
[1,0]<stderr>:[biomind:587895] [ 4] [1,0]<stderr>:/usr/local/Python-3.10.5/lib/python3.10/site-packages/horovod/tensorflow/mpi_lib.cpython-310-x86_64-linux-gnu.so(_ZNK7horovod6common16OperationManager16ExecuteAllreduceERSt6vectorINS0_16TensorTableEntryESaIS3_EERKNS0_8ResponseE+0x7d)[0x7f7087626bed]
[1,0]<stderr>:[biomind:587895] [ 5] [1,1]<stderr>:/usr/local/Python-3.10.5/lib/python3.10/site-packages/tensorflow/python/../libtensorflow_framework.so.2(+0x1a52490)[0x7f5af77d2490]
[1,1]<stderr>:[biomind:587896] [ 8] /lib/x86_64-linux-gnu/libpthread.so.0(+0x8609)[0x7f5b28a4d609]
[1,1]<stderr>:[biomind:587896] [ 9] [1,0]<stderr>:/usr/local/Python-3.10.5/lib/python3.10/site-packages/horovod/tensorflow/mpi_lib.cpython-310-x86_64-linux-gnu.so(_ZNK7horovod6common16OperationManager16ExecuteOperationERSt6vectorINS0_16TensorTableEntryESaIS3_EERKNS0_8ResponseERNS0_10ProcessSetE+0x4c)[0x7f70876270cc]
[1,0]<stderr>:[biomind:587895] [ 6] [1,1]<stderr>:/lib/x86_64-linux-gnu/libc.so.6(clone+0x43)[0x7f5b28b87133]
[1,1]<stderr>:[biomind:587896] *** End of error message ***
[1,0]<stderr>:/usr/local/Python-3.10.5/lib/python3.10/site-packages/horovod/tensorflow/mpi_lib.cpython-310-x86_64-linux-gnu.so(+0xeaee8)[0x7f70875f7ee8]
[1,0]<stderr>:[biomind:587895] [ 7] [1,0]<stderr>:/usr/local/Python-3.10.5/lib/python3.10/site-packages/tensorflow/python/../libtensorflow_framework.so.2(+0x1a52490)[0x7f7110eca490]
[1,0]<stderr>:[biomind:587895] [ 8] /lib/x86_64-linux-gnu/libpthread.so.0(+0x8609)[0x7f7142145609]
[1,0]<stderr>:[biomind:587895] [ 9] [1,0]<stderr>:/lib/x86_64-linux-gnu/libc.so.6(clone+0x43)[0x7f714227f133]
[1,0]<stderr>:[biomind:587895] *** End of error message ***
--------------------------------------------------------------------------
Primary job terminated normally, but 1 process returned
a non-zero exit code. Per user-direction, the job has been aborted.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
mpirun noticed that process rank 1 with PID 0 on node biomind exited on signal 11 (Segmentation fault).
```
I also tried downgrading NCCL to 2.12.12; however, this time I cannot even run in single-GPU mode, i.e., `horovodrun -np 1 python3 main.py`.
The code I am running is as follows:
```python
import tensorflow as tf
import horovod.tensorflow as hvd

# (model, optimizer, loss_fun and ops come from elsewhere in main.py; snippet is partial)
hvd.init()
print(f'hvd.local_rank(): {hvd.local_rank()}')
# Pin GPU to be used to process local rank (one GPU per process)
gpus = tf.config.experimental.list_physical_devices('GPU')
for gpu in gpus:
tf.config.experimental.set_memory_growth(gpu, True)
if gpus:
tf.config.experimental.set_visible_devices(gpus[hvd.local_rank()], 'GPU')
print(f'gpu: {gpus}')
@tf.function
def training_step(images, labels, first_batch):
with tf.GradientTape() as tape:
pred = model(images)
loss = loss_fun(labels, pred)
# Horovod: add Horovod Distributed GradientTape.
tape = hvd.DistributedGradientTape(tape)
grads = tape.gradient(loss, ops.trainable_variables)
optimizer.apply_gradients(zip(grads, ops.trainable_variables))
# Horovod: broadcast initial variable states from rank 0 to all other processes.
# This is necessary to ensure consistent initialization of all workers when
# training is started with random weights or restored from a checkpoint.
#
# Note: broadcast should be done after the first gradient step to ensure optimizer
# initialization.
if first_batch:
print('Broadcasting initial variable states from rank 0 to all other processes...')
hvd.broadcast_variables(ops.variables, root_rank=0)
hvd.broadcast_variables(optimizer.variables(), root_rank=0)
return loss
```
This is very basic demo code; it worked in the NVIDIA deep learning framework container, but it failed in my local environment as described above.
Any help is really welcome!! Thanks!!
| open | 2022-07-28T16:13:46Z | 2022-11-10T07:13:54Z | https://github.com/horovod/horovod/issues/3625 | [
"bug"
] | WingsOfPanda | 21 |
koxudaxi/datamodel-code-generator | pydantic | 1,546 | Feature request: support JSON Schema's `prefixItems` with precisely typed tuples instead of imprecise lists | ## Is your feature request related to a problem? Please describe.
When generating models from JSON Schema it would be nice to be able to generate `Tuple[]` types using [Tuple Validation](https://json-schema.org/understanding-json-schema/reference/array.html#tuple-validation).
## Describe the solution you'd like
Consider the following schema:
```json
{
"$schema": "https://json-schema.org/draft-07/schema",
"type": "object",
"properties": {
"a": {
"type": "array",
"prefixItems": [
{ "type": "number" },
{ "type": "string" }
],
"minItems": 2,
"maxItems": 2
}
},
"required": ["a"]
}
```
Running `datamodel-codegen --input test.json --input-file-type jsonschema --output-model-type typing.TypedDict --output model.py`, this produces the output:
```py
from __future__ import annotations
from typing import List
from typing_extensions import TypedDict
class Model(TypedDict):
a: List
```
Ideally, it would produce the following instead:
```py
from __future__ import annotations
from typing import Tuple
from typing_extensions import TypedDict
class Model(TypedDict):
a: Tuple[float, str]
```
## Describe alternatives you've considered
The only real alternative to typing an array with heterogenous types is to replace the array with an object whose `properties` can be typed individually. But changing a schema to use an object instead of an array isn't always possible.
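For illustration, the object-based alternative might look like this (the property names `first` and `second` are made up):

```json
{
  "a": {
    "type": "object",
    "properties": {
      "first": { "type": "number" },
      "second": { "type": "string" }
    },
    "required": ["first", "second"]
  }
}
```

This types each position individually, but loses the array shape on the wire, which is exactly why it is not always an option.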
| open | 2023-09-12T21:26:11Z | 2024-12-04T10:00:34Z | https://github.com/koxudaxi/datamodel-code-generator/issues/1546 | [
"enhancement"
] | kylebebak | 2 |
amidaware/tacticalrmm | django | 1,618 | Incorrect display of Russian text in command output | **Server Info (please complete the following information):**
- OS: Debian 11 (bullseye)
- Browser: firefox, chrome, safari
- RMM Version: v.0.16.2
**Installation Method:**
- [x] Standard
- [ ] Docker
**Agent Info (please complete the following information):**
- Agent version: v2.4.11
- Agent OS: Windows 11 Pro, 64 bit v22H2
**Describe the bug**
When I send a command from the WebUI or execute a script, the content is displayed incorrectly (usernames and group names).
A similar issue was reported in #1472.
But in my case everything else displays correctly, except for the items listed.
**To Reproduce**
Steps to reproduce the behavior:
1. The client OS's language should be Russian
2. In the WebUI, right-click on a host => Send command `Get-LocalGroup`
3. Run a command that gives output in Russian (CMD or PowerShell)
4. I get incorrect content output.
**Expected behavior**
Correct display of command output in any language.
**Screenshots**
<img width="1002" alt="Снимок экрана 2023-08-30 в 16 38 42" src="https://github.com/amidaware/tacticalrmm/assets/137088704/5d78d911-2df7-4df7-b053-640776da311f">
**Additional context**
The standard output of scripts has the same incorrect display of the Russian language.
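A likely mechanism (an assumption, not confirmed): Windows console tools emit their output in the OEM code page (cp866 for Russian), and the bytes are then decoded as a Western code page somewhere along the way. A quick sketch of the difference:

```python
raw = b"\x8f\xe0\xa8\xa2\xa5\xe2"  # the word "Привет" encoded in cp866

print(raw.decode("cp866"))                     # Привет (correct)
print(raw.decode("cp1252", errors="replace"))  # mojibake, much like the screenshot
```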
| open | 2023-08-30T13:56:50Z | 2024-11-04T12:28:17Z | https://github.com/amidaware/tacticalrmm/issues/1618 | [] | crackco00n | 3 |
ultralytics/yolov5 | deep-learning | 13,363 | detect.py is processing only 50 files | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and found no similar bug report.
### YOLOv5 Component
Detection
### Bug
I am running this command:
`
/Users/columbo/Documents/dev/cam-analyzer/pytorch_metal_env/bin/python detect.py --save-txt --save-conf --weights /Users/columbo/Documents/dev/yolo_model/cam_analyzer/weights/best.pt --source /Users/columbo/Documents/dev/yolo_images --conf 0.1 --project /Users/columbo/Documents/dev/yolo_results --name detections --device mps
`
In the output it processes only 50 images. Am I missing anything?
### Environment
YOLOv5 🚀 v7.0-356-g2070b303 Python-3.12.7 torch-2.4.0 MPS
### Minimal Reproducible Example
```YOLOv5 🚀 v7.0-356-g2070b303 Python-3.12.7 torch-2.4.0 MPS```
### Additional
_No response_
### Are you willing to submit a PR?
- [ ] Yes I'd like to help by submitting a PR! | closed | 2024-10-16T21:51:05Z | 2024-10-22T14:52:05Z | https://github.com/ultralytics/yolov5/issues/13363 | [
"bug"
] | guybashan | 4 |
google-research/bert | tensorflow | 474 | IMDB classification | I am trying to use the IMDB model (**predicting_movie_reviews_with_bert_on_tf_hub.ipynb**) on my own dataset. It has two labels, and I want the prediction probabilities for both of them.
I see that in the IMDB notebook output the probabilities do not add up to 1. Since we are using softmax as our last layer, I would expect the probabilities to add up to 1. Also, the values are negative, so they obviously don't look like probabilities to me.
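For what it's worth, if the notebook follows BERT's `run_classifier.py`, the model returns `tf.nn.log_softmax` outputs, i.e. log-probabilities: negative values that do not sum to 1. Exponentiating them recovers proper probabilities. A stdlib-only sketch with made-up logits:

```python
import math

def log_softmax(logits):
    m = max(logits)
    lse = m + math.log(sum(math.exp(x - m) for x in logits))
    return [x - lse for x in logits]

log_probs = log_softmax([2.0, -0.5])        # negative values, don't sum to 1
probs = [math.exp(lp) for lp in log_probs]  # proper probabilities, sum to 1
```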
Please help me interpret the probability output, and also tell me how I can get probability values as output. | closed | 2019-03-04T06:01:35Z | 2019-03-04T06:13:28Z | https://github.com/google-research/bert/issues/474 | [] | platinum736 | 1 |
numba/numba | numpy | 9,723 | we need native progress bar for numba @numba.jit for loop. like python tqdm library | ## Feature request
## We need a native progress bar for Numba `@numba.jit` for-loops, like Python's tqdm library
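There is no built-in hook for this today. One crude stopgap is to print from inside the jitted loop, since `print` is supported in nopython mode; a sketch (with a plain-Python fallback so it also runs where Numba isn't installed):

```python
try:
    from numba import njit
except ImportError:           # plain-Python fallback for the sketch
    def njit(func):
        return func

@njit
def summed_with_progress(n):
    step = max(1, n // 10)    # report roughly ten times per run
    total = 0
    for i in range(n):
        if i % step == 0:
            print("progress:", i)  # print() works in nopython mode
        total += i
    return total
```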
| closed | 2024-09-12T06:13:45Z | 2024-09-24T08:59:24Z | https://github.com/numba/numba/issues/9723 | [
"feature_request"
] | ganbaaelmer | 5 |
tableau/server-client-python | rest-api | 1,312 | Workbooks.download fails for Tableau Server v.2023.1.7 | **Describe the bug**
Workbooks.download fails with an error regarding 'filename'.
Versions: Tableau Server 2023.1.7 (API 3.19), tableauserverclient-0.14.1, python 3.7.9
(tested also: Tableau Server 2023.1.7 (API 3.19), tableauserverclient-0.28, python 3.9.13)
**To Reproduce**
```python
import tableauserverclient as TSC  # implied by the snippet

server_int = TSC.Server("https://yousite", use_server_version=True)
tableau_auth_int = TSC.TableauAuth("username", "password", site_id='sitename')
with server_int.auth.sign_in(tableau_auth_int):
    workbook = server_int.workbooks.get_by_id('fca33a9a-0139-41b9-b1e7-841a92bf5f92')
    wkbk = workbook.name
    print(workbook.name)
    print(workbook.id)
    file_path = server_int.workbooks.download('fca33a9a-0139-41b9-b1e7-841a92bf5f92')
    print("\nDownloaded the file to {0}.".format(file_path))
```
**Results**
WorkbookName
fca33a9a-0139-41b9-b1e7-841a92bf5f92
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
~\AppData\Local\Temp\1\ipykernel_7340\2448852531.py in <module>
5 print(workbook.name)
6 print(workbook.id)
----> 7 file_path = server_int.workbooks.download('fca33a9a-0139-41b9-b1e7-841a92bf5f92')
8 print("\nDownloaded the file to {0}.".format(file_path))
E:\Anaconda3\envs\Tableau_analytics\lib\site-packages\tableauserverclient\server\endpoint\endpoint.py in wrapper(self, *args, **kwargs)
290 def wrapper(self, *args, **kwargs):
291 self.parent_srv.assert_at_least_version(version, self.__class__.__name__)
--> 292 return func(self, *args, **kwargs)
293
294 return wrapper
E:\Anaconda3\envs\Tableau_analytics\lib\site-packages\tableauserverclient\server\endpoint\endpoint.py in wrapper(self, *args, **kwargs)
332 error = "{!r} not available in {}, it will be ignored. Added in {}".format(p, server_ver, min_ver)
333 warnings.warn(error)
--> 334 return func(self, *args, **kwargs)
335
336 return wrapper
E:\Anaconda3\envs\Tableau_analytics\lib\site-packages\tableauserverclient\server\endpoint\endpoint.py in wrapper(self, *args, **kwargs)
332 error = "{!r} not available in {}, it will be ignored. Added in {}".format(p, server_ver, min_ver)
333 warnings.warn(error)
--> 334 return func(self, *args, **kwargs)
335
336 return wrapper
E:\Anaconda3\envs\Tableau_analytics\lib\site-packages\tableauserverclient\server\endpoint\workbooks_endpoint.py in download(self, workbook_id, filepath, include_extract, no_extract)
182 no_extract: Optional[bool] = None,
183 ) -> str:
--> 184 return self.download_revision(workbook_id, None, filepath, include_extract, no_extract)
185
186 # Get all views of workbook
E:\Anaconda3\envs\Tableau_analytics\lib\site-packages\tableauserverclient\server\endpoint\endpoint.py in wrapper(self, *args, **kwargs)
290 def wrapper(self, *args, **kwargs):
291 self.parent_srv.assert_at_least_version(version, self.__class__.__name__)
--> 292 return func(self, *args, **kwargs)
293
294 return wrapper
E:\Anaconda3\envs\Tableau_analytics\lib\site-packages\tableauserverclient\server\endpoint\workbooks_endpoint.py in download_revision(self, workbook_id, revision_number, filepath, include_extract, no_extract)
488 return_path = filepath
489 else:
--> 490 filename = to_filename(os.path.basename(params["filename"]))
491 download_path = make_download_path(filepath, filename)
492 with open(download_path, "wb") as f:
KeyError: 'filename'
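For context, the `params["filename"]` in the traceback presumably comes from parsing the download response's `Content-Disposition` header; if the server's response omits the `filename=` parameter, the lookup raises exactly this `KeyError`. A toy illustration of that kind of parsing (not the library's actual code):

```python
def content_disposition_params(value):
    """Tiny parser for 'attachment; filename="wb.twbx"'-style header values."""
    params = {}
    for part in value.split(";")[1:]:
        if "=" in part:
            key, _, val = part.partition("=")
            params[key.strip()] = val.strip().strip('"')
    return params

ok = content_disposition_params('attachment; filename="wb.twbx"')
bad = content_disposition_params("attachment")   # no filename= parameter
print("filename" in ok, "filename" in bad)       # → True False
```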
**NOTE:** Be careful not to post user names, passwords, auth tokens or any other private or sensitive information.
| closed | 2023-11-02T21:42:30Z | 2024-02-14T21:27:06Z | https://github.com/tableau/server-client-python/issues/1312 | [
"bug"
] | ElenasemDA | 22 |
NullArray/AutoSploit | automation | 508 | Unhandled Exception (a4af9e0ad) | Autosploit version: `3.0`
OS information: `Linux-4.15.0-45-generic-x86_64-with-Ubuntu-18.04-bionic`
Running context: `autosploit.py --whitelist **********`
Error message: `global name 'Except' is not defined`
Error traceback:
```
Traceback (most recent call):
File "/home/azad/Desktop/AutoSploit/autosploit/main.py", line 103, in main
loaded_exploits = load_exploits(EXPLOIT_FILES_PATH)
File "/home/azad/Desktop/AutoSploit/lib/jsonize.py", line 61, in load_exploits
except Except:
NameError: global name 'Except' is not defined
```
Metasploit launched: `False`
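The root cause is visible in the traceback: `lib/jsonize.py` writes `except Except:`, and `Except` is not a defined name, so the handler itself fails with a `NameError` the moment any exception reaches it. A self-contained illustration (the intended name is presumably the built-in `Exception`):

```python
def broken_load():
    try:
        raise ValueError("boom")   # any error raised inside the try block...
    except Except:                 # ...hits this undefined name and dies with NameError
        return "handled"

def fixed_load():
    try:
        raise ValueError("boom")
    except Exception:              # the built-in the code presumably meant
        return "handled"

print(fixed_load())                # → handled
```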
| closed | 2019-02-25T20:50:51Z | 2019-03-03T03:31:52Z | https://github.com/NullArray/AutoSploit/issues/508 | [] | AutosploitReporter | 0 |
Lightning-AI/pytorch-lightning | pytorch | 20,335 | Unreadable font color theme of YAML files | ### 📚 Documentation
The color theme for YAML files in the documentation is unreadable (dark blue on a black background), as seen [on this site](https://lightning.ai/docs/pytorch/stable/cli/lightning_cli_advanced.html). Here is an image:

cc @borda | open | 2024-10-10T16:43:00Z | 2024-10-14T14:32:30Z | https://github.com/Lightning-AI/pytorch-lightning/issues/20335 | [
"docs",
"needs triage"
] | MrWhatZitToYaa | 3 |
open-mmlab/mmdetection | pytorch | 11,256 | `Evaluating bbox` information is not written to the log file in mmdetection | Hello: during validation, the `Evaluating bbox` information is printed to the screen, like this:
```
...
12/06 10:51:37 - mmengine - INFO - Epoch(val) [15][5/7] eta: 0:00:00 time: 0.2168 data_time: 0.0651 memory: 2317
12/06 10:51:37 - mmengine - INFO - Epoch(val) [15][6/7] eta: 0:00:00 time: 0.2141 data_time: 0.0620 memory: 2317
12/06 10:51:37 - mmengine - INFO - Epoch(val) [15][7/7] eta: 0:00:00 time: 0.2066 data_time: 0.0592 memory: 1657
12/06 10:51:38 - mmengine - INFO - Evaluating bbox...
Loading and preparing results...
DONE (t=0.18s)
creating index...
index created!
Running per image evaluation...
Evaluate annotation type *bbox*
DONE (t=1.40s).
Accumulating evaluation results...
DONE (t=0.16s).
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.014
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.057
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.003
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.017
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.030
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.120
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.262
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.300
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.262
12/06 10:51:40 - mmengine - INFO - bbox_mAP_copypaste: 0.014 0.057 0.003 -1.000 0.000 0.017
12/06 10:51:41 - mmengine - INFO - Epoch(val) [15][7/7] coco/bbox_mAP: 0.0140 coco/bbox_mAP_50: 0.0570 coco/bbox_mAP_75: 0.0030 coco/bbox_mAP_s: -1.0000 coco/bbox_mAP_m: 0.0000 coco/bbox_mAP_l: 0.0170 data_time: 0.0440 time: 0.1794
...
```
but in the log file, the `Evaluating bbox` contents are missing, like this:
```
2023/12/06 10:51:36 - mmengine - INFO - Epoch(val) [15][1/7] eta: 0:00:03 time: 0.2319 data_time: 0.0812 memory: 2317
2023/12/06 10:51:36 - mmengine - INFO - Epoch(val) [15][2/7] eta: 0:00:01 time: 0.2277 data_time: 0.0766 memory: 2317
2023/12/06 10:51:37 - mmengine - INFO - Epoch(val) [15][3/7] eta: 0:00:01 time: 0.2237 data_time: 0.0723 memory: 2317
2023/12/06 10:51:37 - mmengine - INFO - Epoch(val) [15][4/7] eta: 0:00:00 time: 0.2200 data_time: 0.0684 memory: 2317
2023/12/06 10:51:37 - mmengine - INFO - Epoch(val) [15][5/7] eta: 0:00:00 time: 0.2168 data_time: 0.0651 memory: 2317
2023/12/06 10:51:37 - mmengine - INFO - Epoch(val) [15][6/7] eta: 0:00:00 time: 0.2141 data_time: 0.0620 memory: 2317
2023/12/06 10:51:37 - mmengine - INFO - Epoch(val) [15][7/7] eta: 0:00:00 time: 0.2066 data_time: 0.0592 memory: 1657
2023/12/06 10:51:38 - mmengine - INFO - Evaluating bbox...
2023/12/06 10:51:40 - mmengine - INFO - bbox_mAP_copypaste: 0.014 0.057 0.003 -1.000 0.000 0.017
2023/12/06 10:51:41 - mmengine - INFO - Epoch(val) [15][7/7] coco/bbox_mAP: 0.0140 coco/bbox_mAP_50: 0.0570 coco/bbox_mAP_75: 0.0030 coco/bbox_mAP_s: -1.0000 coco/bbox_mAP_m: 0.0000 coco/bbox_mAP_l: 0.0170 data_time: 0.0440 time: 0.1794
```
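For context: the per-metric table is printed by pycocotools with plain `print()` calls, which bypass the logger; that is why only the `mmengine - INFO` lines reach the `.log` file. A generic (not mmdetection-specific) sketch of routing captured stdout through a logger:

```python
import contextlib
import io
import logging

logger = logging.getLogger("mmengine")
logger.setLevel(logging.INFO)

def log_stdout(fn, *args, **kwargs):
    """Run fn, then replay everything it print()ed through the logger."""
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        result = fn(*args, **kwargs)
    for line in buf.getvalue().splitlines():
        logger.info(line)
    return result
```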
How can I fix this problem? Can you give some advice? | open | 2023-12-06T03:00:42Z | 2024-01-23T11:43:43Z | https://github.com/open-mmlab/mmdetection/issues/11256 | [] | PapaMadeleine2022 | 4 |
dmlc/gluon-nlp | numpy | 727 | pytest 4 support | Tests are failing with current pytest 4.5
This does not affect CI as pytest < 4 is specified in the environment, but affects contributors running tests outside of CI. | closed | 2019-05-24T14:46:15Z | 2019-05-24T18:20:11Z | https://github.com/dmlc/gluon-nlp/issues/727 | [
"bug"
] | leezu | 2 |
deedy5/primp | web-scraping | 12 | TypeError: argument 'data': 'list' object cannot be converted to 'PyString' | When I try to post JSON data like this (a list inside the JSON), I get a TypeError: argument 'data': 'list' object cannot be converted to 'PyString' error.
Code Example:
`data = {
"device_id": "123",
"auth": ["auth_options", "sms_auth_v2", "two_factor_auth"]
}`
`resp = client.post(url="https://httpbin.org/anything", data=data)`
| closed | 2024-05-06T21:08:04Z | 2024-05-13T01:04:48Z | https://github.com/deedy5/primp/issues/12 | [] | farukekim | 4 |
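A hedged workaround sketch for the error above: if a client's `data=` parameter only accepts flat string values, the nested payload can be serialized up front and sent as a ready-made body. The `client.post` call below is left commented and its header argument is an assumption about the API, not confirmed behavior; only the stdlib serialization is shown running.

```python
import json

# Hypothetical workaround sketch: serialize the nested payload yourself so
# the client only ever sees a plain string body.
payload = {
    "device_id": "123",
    "auth": ["auth_options", "sms_auth_v2", "two_factor_auth"],
}
body = json.dumps(payload)  # a plain str the client can send as-is

# Assumed usage (untested against primp's actual signature):
# resp = client.post(url="https://httpbin.org/anything",
#                    data=body,
#                    headers={"Content-Type": "application/json"})
```

If the client exposes a dedicated `json=` parameter, passing the dict there is usually the cleaner fix, since the library then handles serialization and the content type itself.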
huggingface/datasets | tensorflow | 7,024 | Streaming dataset not returning data | ### Describe the bug
I've decided to post here because I'm still not sure what the issue is, or whether I'm using IterableDatasets incorrectly.
I'm following the guide on here https://huggingface.co/learn/cookbook/en/fine_tuning_code_llm_on_single_gpu pretty much to a tee and have verified that it works when I'm fine-tuning on the provided dataset.
However, I'm doing some data preprocessing steps (filtering out entries), and when I try to swap out the dataset for mine, it fails to train. I eventually fixed this by simply setting `streaming=False` in `load_dataset`.
Could this be some sort of network / firewall issue I'm facing?
### Steps to reproduce the bug
I made a post with greater description about how I reproduced this problem before I found my workaround: https://discuss.huggingface.co/t/problem-with-custom-iterator-of-streaming-dataset-not-returning-anything/94551
Here is the problematic dataset snippet, which works when streaming=False (and with buffer keyword removed from shuffle)
```
commitpackft = load_dataset(
"chargoddard/commitpack-ft-instruct", split="train", streaming=True
).filter(lambda example: example["language"] == "Python")
def form_template(example):
"""Forms a template for each example following the alpaca format for CommitPack"""
example["content"] = (
"### Human: " + example["instruction"] + " " + example["input"] + " ### Assistant: " + example["output"]
)
return example
dataset = commitpackft.map(
form_template,
remove_columns=["id", "language", "license", "instruction", "input", "output"],
).shuffle(
seed=42, buffer_size=10000
) # remove everything since its all inside "content" now
validation_data = dataset.take(4000)
train_data = dataset.skip(4000)
```
The annoying part about this is that it only fails during training and I don't know when it will fail, except that it always fails during evaluation.
### Expected behavior
The expected behavior is that I should be able to get something from the iterator when called instead of getting nothing / stuck in a loop somewhere.
### Environment info
- `datasets` version: 2.20.0
- Platform: Linux-5.4.0-121-generic-x86_64-with-glibc2.31
- Python version: 3.11.7
- `huggingface_hub` version: 0.23.4
- PyArrow version: 16.1.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.5.0
| open | 2024-07-04T07:21:47Z | 2024-07-04T07:21:47Z | https://github.com/huggingface/datasets/issues/7024 | [] | johnwee1 | 0 |
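One network-free way to narrow down a stalling streamed pipeline like the one above is to materialize a few items from the iterator before handing it to the trainer: if that hangs or returns nothing, the problem is in the stream/filter stage, not in training. The sketch below uses a stand-in generator in place of the real hub-backed `IterableDataset` (an assumption made purely so it runs offline).

```python
from itertools import islice

# Stand-in for a streamed dataset: any iterable of dict examples.
# (With `datasets`, you would pass the IterableDataset itself.)
def fake_stream():
    for i in range(100):
        yield {"language": "Python" if i % 2 == 0 else "Go", "content": f"ex {i}"}

filtered = (ex for ex in fake_stream() if ex["language"] == "Python")

# Pull a handful of items up front: an empty or hanging result here points
# at the filter/stream, not the trainer.
sample = list(islice(filtered, 4))
```

Running the same `islice` probe against the real filtered stream (before `shuffle`, then after) can also reveal whether the `filter` predicate matches anything at all, or whether the stall only appears once shuffling buffers are involved.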
ivy-llc/ivy | numpy | 28,705 | Fix Frontend Failing Test: paddle - creation.jax.numpy.triu | To-do List: https://github.com/unifyai/ivy/issues/27500 | closed | 2024-03-29T12:18:54Z | 2024-03-29T16:45:11Z | https://github.com/ivy-llc/ivy/issues/28705 | [
"Sub Task"
] | ZJay07 | 0 |
streamlit/streamlit | deep-learning | 10,265 | Unexpected margin above the last checkbox or toggle of a column | ### Checklist
- [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar issues.
- [x] I added a very descriptive title to this issue.
- [x] I have provided sufficient information below to help reproduce this issue.
### Summary
When checkboxes or toggles are placed inside a column, the last one gets an unexpected `margin-top: 0.5rem`.
### Reproducible Code Example
```Python
import streamlit as st
st.checkbox("A")
st.checkbox("B")
st.checkbox("C")
left, right = st.columns(2)
with left:
st.checkbox("D")
st.checkbox("E")
st.checkbox("F")
with right:
st.checkbox("G")
st.checkbox("H")
st.checkbox("I")
st.empty() # Work around spacing issue with the last item in the column.
```
### Steps To Reproduce
Try the above code example. It applies to `st.toggle` as well as `st.checkbox`.
### Expected Behavior
I would expect the checkbox/toggle `F` to be spaced equally with the rest of the checkboxes/toggles.
### Current Behavior
The checkbox `F` has an unexpected `margin-top: 0.5rem`.
<img width="640" alt="Image" src="https://github.com/user-attachments/assets/d9504893-3fd6-4e96-abd4-0fb22b9489b8" />
<img width="640" alt="Image" src="https://github.com/user-attachments/assets/d63fad18-59a8-437d-b7f1-20d1ec24ec6f" />
### Is this a regression?
- [ ] Yes, this used to work in a previous version.
### Debug info
- Streamlit version: 1.41.1
- Python version: 3.11.9
- Operating System: MacOS
- Browser: Chrome
### Additional Information
A possible workaround is to add an `st.empty()` after the last checkbox so it is no longer the last element of the column. That workaround isn't super obvious until you inspect the CSS and notice that the `margin-top: 0.5rem` is applied with a `:last-of-type` CSS selector. | closed | 2025-01-27T22:03:15Z | 2025-01-27T23:25:26Z | https://github.com/streamlit/streamlit/issues/10265 | [
"type:bug",
"status:confirmed",
"priority:P3",
"feature:st.checkbox",
"feature:st.toggle"
] | JosephMarinier | 3 |
wemake-services/django-test-migrations | pytest | 33 | Add check to forbid auto migrations | More context: https://adamj.eu/tech/2020/02/24/how-to-disallow-auto-named-django-migrations/
Discussion: https://twitter.com/AdamChainz/status/1231895529686208512 | closed | 2020-02-24T19:10:04Z | 2020-02-25T14:04:22Z | https://github.com/wemake-services/django-test-migrations/issues/33 | [
"enhancement",
"good first issue",
"help wanted"
] | sobolevn | 0 |
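The check discussed in the linked post boils down to pattern-matching migration names: Django's auto-generated names look like `0002_auto_20200224_1824`, so a small regex can flag them. The sketch below is a plain illustration of that idea, not the package's actual implementation.

```python
import re

# Auto-generated Django migration names follow `NNNN_auto_YYYYMMDD[_HHMM]`,
# so a simple pattern can flag them (e.g. in a CI check).
AUTO_NAME = re.compile(r"^\d{4}_auto_\d{8}(_\d{4})?$")

def is_auto_named(migration_name: str) -> bool:
    return bool(AUTO_NAME.match(migration_name))
```

A CI job could walk each app's `migrations/` directory and fail the build whenever `is_auto_named` matches a filename stem, nudging contributors to pass `--name` to `makemigrations`.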
Avaiga/taipy | data-visualization | 2,257 | [🐛 BUG] Weird syntax when editing a boolean in Table | ### What went wrong? 🤔
Editing a cell representing a boolean displays weird syntax on the UI while editing.

### Expected Behavior
We should not see this syntax.
### Steps to Reproduce Issue
Run this code and try to edit:
```python
import taipy.gui.builder as tgb
import pandas as pd
from taipy import Gui
data = pd.DataFrame(
{"Include": [True, False, True], "A": [1, 2, 3], "B": [4, 5, 6], "C": [7, 8, 9]}
)
with tgb.Page() as page:
tgb.table("{data}", editable__Include=True)
tgb.table("{data}", editable__Include=True, use_checkbox=True)
Gui(page).run()
```
### Version of Taipy
4.0.*
### Acceptance Criteria
- [ ] A unit test reproducing the bug is added.
- [ ] Any new code is covered by a unit tested.
- [ ] Check code coverage is at least 90%.
- [ ] The bug reporter validated the fix.
- [ ] Related issue(s) in taipy-doc are created for documentation and Release Notes are updated.
### Code of Conduct
- [X] I have checked the [existing issues](https://github.com/Avaiga/taipy/issues?q=is%3Aissue+).
- [ ] I am willing to work on this issue (optional) | closed | 2024-11-18T16:46:17Z | 2024-12-18T13:41:14Z | https://github.com/Avaiga/taipy/issues/2257 | [
"🖰 GUI",
"💥Malfunction",
"🟧 Priority: High"
] | FlorianJacta | 2 |
ultralytics/yolov5 | machine-learning | 13,279 | Silicon Mac GPU Support for training | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
Hello. I have set up and trained my first model with YOLOv5s (object detection) and I'm enjoying it so far. However, my training is quite slow on my MacBook Pro (M2 CPU/GPU). The log says
`YOLOv5 🚀 v7.0-358-gc07b9a8b Python-3.12.3 torch-2.4.0 CPU`
I saw other posts regarding GPU acceleration on Mac, but I didn't find any related to the training process. Currently, only my CPU is hitting <90% usage (total) while my GPU is pretty much idle. Training 30 epochs of a ~100-image dataset takes almost an hour. I used to use MediaPipe and TFLite, which train the same dataset (30 epochs) in about 5 minutes. The `train.py` code only has options for `cuda` and `cpu`. I know that the Metal GPU backend is `mps`, but I'm not sure if I can just replace the device type with mps and call it a day.
### Additional
_No response_ | closed | 2024-08-24T23:21:55Z | 2024-09-02T06:07:09Z | https://github.com/ultralytics/yolov5/issues/13279 | [
"enhancement",
"question"
] | oliver408i | 1 |
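The fallback order in question (CUDA, then Metal's `mps`, then CPU) is easy to sketch without touching the framework. The real check would use `torch.backends.mps.is_available()`; the hypothetical helper below takes the capability flags as plain booleans so the selection logic itself is visible and testable on any machine.

```python
# Hypothetical device-selection sketch: the real availability check lives in
# torch.backends, but the fallback logic is plain Python.
def select_device(cuda_available: bool, mps_available: bool) -> str:
    if cuda_available:
        return "cuda"
    if mps_available:
        return "mps"  # Apple-silicon Metal backend
    return "cpu"

# On an M2 Mac without CUDA this would pick "mps":
device = select_device(cuda_available=False, mps_available=True)
```

Whether `--device mps` actually speeds up YOLOv5 training depends on the installed torch build and on per-op MPS support — some operations may silently fall back to CPU — so this should be treated as something to benchmark, not a guaranteed win.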
opengeos/leafmap | plotly | 498 | Left/right labels not displaying in split map | <!-- Please search existing issues to avoid creating duplicates. -->
### Environment Information
- leafmap version: 0.23.0
- Python version: 3.10.12
- Operating System: macOS Ventura (13.4.1)
### Description
I am using `leafmap.split_map()` to make some split-map visualizations with both locally stored TIFFs and COGs accessed via HTTPS URLs. Everything functions as expected, except that the left/right labels are not displaying on the split map. I am encountering this issue both locally and in a Google Colab runtime. The attached example is the output for two locally stored TIFFs:

### What I Did
For simplicity, the issue may be reproduced using the example code provided in the `leafmap.split_map` [docs](https://leafmap.org/notebooks/12_split_map/).
```
leafmap.split_map(
left_layer="NLCD 2001 CONUS Land Cover",
right_layer="NLCD 2016 CONUS Land Cover",
left_label="2001",
right_label="2016",
label_position="bottom",
center=[36.1, -114.9],
zoom=10,
)
```
The split map displays as expected, but the labels are not showing up currently. Could this be an issue with my current version of `ipyleaflet` (v.0.17.3)?
Thanks in advance for any help you may be able to provide! | closed | 2023-07-24T19:17:39Z | 2023-07-24T20:17:31Z | https://github.com/opengeos/leafmap/issues/498 | [
"bug"
] | cmspeed | 2 |
microsoft/qlib | machine-learning | 1,380 | test_contrib_model.py running error | ## Environment
$python collect_info.py all
Darwin
x86_64
macOS-10.16-x86_64-i386-64bit
Darwin Kernel Version 22.1.0: Sun Oct 9 20:15:09 PDT 2022; root:xnu-8792.41.9~2/RELEASE_ARM64_T6000
Python version: 3.10.8 (main, Nov 4 2022, 08:45:18) [Clang 12.0.0 ]
Qlib version: 0.8.6.99
numpy==1.21.5
pandas==1.5.1
scipy==1.7.3
requests==2.28.1
sacred==0.8.2
python-socketio==5.7.2
redis==4.3.5
python-redis-lock==4.0.0
schedule==1.1.0
cvxpy==1.2.0
hyperopt==0.1.2
fire==0.4.0
statsmodels==0.13.2
xlrd==2.0.1
plotly==5.9.0
matplotlib==3.5.3
tables==3.7.0
pyyaml==6.0
mlflow==1.30.0
tqdm==4.64.1
loguru==0.6.0
lightgbm==3.3.3
tornado==6.2
joblib==1.1.1
fire==0.4.0
ruamel.yaml==0.17.21
##Error
Reason: tried: '/usr/local/opt/libomp/lib/libomp.dylib' (no such file), '/System/Volumes/Preboot/Cryptexes/OS/usr/local/opt/libomp/lib/libomp.dylib' (no such file), '/usr/local/opt/libomp/lib/libomp.dylib' (no such file), '/usr/local/lib/libomp.dylib' (no such file), '/usr/lib/libomp.dylib' (no such file, not in dyld cache)
thanks for your help, team. | closed | 2022-12-02T15:03:58Z | 2023-10-24T02:44:10Z | https://github.com/microsoft/qlib/issues/1380 | [
"bug"
] | libeiheng | 3 |
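The dyld error above means LightGBM tried to `dlopen` `libomp.dylib` and the loader could not find it anywhere on its search path; on macOS the common remedy is installing OpenMP (e.g. `brew install libomp`). A small diagnostic sketch, assuming nothing about qlib itself, is to ask the loader where (or whether) it can locate the library before importing LightGBM:

```python
from ctypes.util import find_library

# Diagnostic sketch: LightGBM dlopens the OpenMP runtime; if the loader
# can't find it you get exactly the "tried: .../libomp.dylib (no such file)"
# error from the report above.
omp_path = find_library("omp")

if omp_path is None:
    hint = "libomp not found - e.g. `brew install libomp` on macOS"
else:
    hint = f"libomp located at {omp_path}"
```

On Apple-silicon machines, also note whether Python and Homebrew are both arm64 or both x86_64 — a path like `/usr/local/opt/libomp` in the error suggests an Intel-prefix lookup, which won't see an arm64 Homebrew install under `/opt/homebrew`.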
flairNLP/flair | pytorch | 3,465 | [Question]: Compatibility with old models trained with 0.7 | ### Question
Hi Team,
Is there a way to transform models trained with v0.7 so that it can run with the latest version?
I am not able to load the old model under 0.13.1.
Thanks,
Neil | open | 2024-06-04T13:20:12Z | 2024-06-14T12:57:07Z | https://github.com/flairNLP/flair/issues/3465 | [
"question"
] | neilx4 | 1 |
piskvorky/gensim | data-science | 3,445 | Using linux nohup to execute mallet error report | Dear author:
I have checked all the issues about returning 127 error, but I can't provide much help for my problem, so I created a new issue for help.
The current situation is that I have no problem executing it directly on linux, but when using nohup to run scheduled tasks in the background, an error will be reported.
My code is:
```python
os.environ['MALLET_HOME'] = '/data/code/social-computing-2022/model/topic_computing/mallet-2.0.8'
mallet_path = '/data/code/social-computing-2022/model/topic_computing/mallet-2.0.8/bin/mallet'
ldamallet = gensim.models.wrappers.LdaMallet(mallet_path,
                                             corpus=corpus,          # corpus
                                             id2word=id2word,        # index-to-word mapping dictionary
                                             num_topics=num_topics,  # number of topics
                                             iterations=500,         # number of iterations
                                             workers=4,              # defaults to 4 threads
                                             )
```
And the following is the error message:
`subprocess.CalledProcessError: Command './topic_computing/mallet-2.0.8/bin/mallet import-file --preserve-case --keep-sequence --remove-stopwords --token-regex "\S+" --input /tmp/3540a7_corpus.txt --output /tmp/3540a7_corpus.mallet' returned non-zero exit status 127.`
Please help | closed | 2023-02-21T10:12:34Z | 2023-02-22T03:32:44Z | https://github.com/piskvorky/gensim/issues/3445 | [] | Buaasinong | 3 |
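Exit status 127 is the shell's "command not found" code, and the failing command in the traceback uses the relative path `./topic_computing/.../bin/mallet` — under nohup/cron the working directory and PATH differ from an interactive shell, so that relative path no longer resolves. A hedged pre-flight sketch (the mallet path below is an assumed layout, and `sh` stands in for a binary known to exist):

```python
import os
import shutil

# Exit status 127 == "command not found". Resolve the mallet binary to an
# absolute path and verify it is executable before handing it to gensim.
mallet_path = os.path.abspath("topic_computing/mallet-2.0.8/bin/mallet")  # assumed layout

def check_executable(path: str) -> bool:
    return os.path.isfile(path) and os.access(path, os.X_OK)

# Example with a binary that does exist on the PATH:
sh_path = shutil.which("sh")
```

Running `check_executable(mallet_path)` at the top of the nohup script, and using the absolute `mallet_path` (not a relative one) in `LdaMallet`, should either fix the 127 or turn it into a clear error message about which path is wrong.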
ultralytics/ultralytics | computer-vision | 18,711 | Why the mAP increase only 0.001 percent every epoch. Any suggestion how to make fast? | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/ultralytics/ultralytics/discussions) and found no similar questions.
### Question
Hello,
I’ve been training a YOLO model on a custom dataset and have noticed that the mean Average Precision (mAP) increases by approximately 0.001% with each epoch. The training process doesn't provide clear guidance on when to stop, and I'm concerned that the model might be overfitting. However, the confusion matrix at epoch 400 doesn't seem to indicate overfitting.
Do you have any suggestions on how to determine the optimal stopping point or strategies to prevent potential overfitting?
Thank you!
<img width="855" alt="Image" src="https://github.com/user-attachments/assets/3cd039bc-5ed8-4ea2-b646-1b47bfd0c1f5" />
Thanks
### Additional
_No response_ | open | 2025-01-16T12:15:37Z | 2025-01-16T13:59:07Z | https://github.com/ultralytics/ultralytics/issues/18711 | [
"question",
"detect"
] | khandriod | 2 |
skforecast/skforecast | scikit-learn | 610 | Backtest and hyperparameter tuning | What is the difference between the two?
How do you integrate both to improve your forecast model? | closed | 2023-12-19T13:02:15Z | 2024-05-06T07:14:07Z | https://github.com/skforecast/skforecast/issues/610 | [
"question"
] | yeongnamtan | 1 |
ets-labs/python-dependency-injector | asyncio | 217 | Issue resetting ThreadLocalSingletons | Hi!
I'm running into an issue with resetting a ThreadLocalSingleton.
When I call `reset` on the provider, I expect the next `_provide` to return a new instance of the specified type.
However, as can be seen in this excerpt of the code:
```
cdef class ThreadLocalSingleton(BaseSingleton):
...
def reset(self):
self.__storage.instance = None
cpdef object _provide(self, tuple args, dict kwargs):
cdef object instance
try:
instance = self.__storage.instance
except AttributeError:
instance = __factory_call(self.__instantiator, args, kwargs)
self.__storage.instance = instance
finally:
return instance
```
When `reset` is called, it sets the `__storage.instance` to `None`.
However, when the `_provide` method is called, it returns the instance, unless the attribute is not present. In other words, after a `reset`, a new instance will never be initialized, because the attribute is present, so no `AttributeError` will be raised.
Am I approaching this correctly, or should the `instance` attribute actually be deleted?
Thanks! | closed | 2019-03-20T08:19:32Z | 2019-03-22T11:01:19Z | https://github.com/ets-labs/python-dependency-injector/issues/217 | [
"bug"
] | jeroenrietveld | 5 |
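The report above can be reproduced in plain Python: because `_provide` only re-creates the instance when the attribute is *missing* (it catches `AttributeError`), a `reset` that merely sets `instance = None` leaves the attribute present and the provider keeps returning `None`/the stale value. The minimal sketch below (a stand-in for the real Cython provider, not its actual code) shows the `del`-based reset that makes re-creation work:

```python
import threading

# Minimal pure-Python reproduction of the reported behaviour: `__call__`
# only re-creates when the attribute is missing, so `reset` must delete
# the attribute rather than set it to None.
class ThreadLocalSingleton:
    def __init__(self, factory):
        self._factory = factory
        self._storage = threading.local()

    def reset(self):
        # The buggy variant would be: self._storage.instance = None
        if hasattr(self._storage, "instance"):
            del self._storage.instance

    def __call__(self):
        try:
            return self._storage.instance
        except AttributeError:
            self._storage.instance = self._factory()
            return self._storage.instance

provider = ThreadLocalSingleton(object)
first = provider()
provider.reset()
second = provider()  # a fresh instance after reset
```

So the reporter's intuition looks right: with an attribute-presence check in `_provide`, the attribute should be deleted on reset (or `_provide` should treat `None` as "not created yet").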
SCIR-HI/Huatuo-Llama-Med-Chinese | nlp | 24 | Where should the model files be placed | After downloading the LoRA weights, which path should they be placed under? | closed | 2023-05-07T07:57:33Z | 2023-05-07T14:23:22Z | https://github.com/SCIR-HI/Huatuo-Llama-Med-Chinese/issues/24 | [] | caichuang0415 | 2 |