| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
deepinsight/insightface | pytorch | 2,550 | New Button to be translated | ## The "New" tag is a `<span>` added by the theme when the item is marked as NEW on the product page
``` html
<span class="new product-label">New</span>
``` | open | 2024-03-27T19:29:55Z | 2024-03-27T19:29:55Z | https://github.com/deepinsight/insightface/issues/2550 | [] | Delev94 | 0 |
huggingface/datasets | numpy | 7,107 | load_dataset broken in 2.21.0 | ### Describe the bug
`eval_set = datasets.load_dataset("tatsu-lab/alpaca_eval", "alpaca_eval_gpt4_baseline", trust_remote_code=True)`
worked up to 2.20.0 but fails in 2.21.0
In 2.20.0:

in 2.21.0:

### Steps to reproduce the bug
1. Spin up a new Google Colab notebook
2. `pip install datasets==2.21.0`
3. `import datasets`
4. `eval_set = datasets.load_dataset("tatsu-lab/alpaca_eval", "alpaca_eval_gpt4_baseline", trust_remote_code=True)`
5. The call throws an error.
### Expected behavior
Repeat steps 1-5 with `datasets==2.20.0` instead; the call succeeds.
### Environment info
- `datasets` version: 2.21.0
- Platform: Linux-6.1.85+-x86_64-with-glibc2.35
- Python version: 3.10.12
- `huggingface_hub` version: 0.23.5
- PyArrow version: 17.0.0
- Pandas version: 2.1.4
- `fsspec` version: 2024.5.0
| closed | 2024-08-16T14:59:51Z | 2024-08-18T09:28:43Z | https://github.com/huggingface/datasets/issues/7107 | [] | anjor | 4 |
encode/apistar | api | 489 | ASyncApp + annotation on the handler function results does not work | I have the following code:
```python
import apistar.http as http
from apistar import ASyncApp, Route


def hello_world(r: http.Request) -> dict:
    print('inside hello_world')
    return {'message': 'hello world'}


routes = [
    Route('/', method='GET', handler=hello_world)
]

app = ASyncApp(routes=routes)

if __name__ == '__main__':
    app.serve('127.0.0.1', 8000, debug=True)
```
Running this simple server with `./app.py` and then issuing
```bash
curl -X GET -i http://localhost:8000
```
results in an "infinite loop", i.e., curl just hangs and waits for an answer.
Interestingly, when I kill the curl process, the running server outputs:
```
inside hello_world
127.0.0.1 - - [25/Apr/2018 17:07:49] "GET / HTTP/1.1" 200 -
```
When I use App instead of ASyncApp, everything works without problems.
python version: 3.6.5
package versions:
apistar 0.5.10
certifi 2018.4.16
chardet 3.0.4
idna 2.6
Jinja2 2.10
MarkupSafe 1.0
pip 10.0.1
requests 2.18.4
setuptools 39.0.1
urllib3 1.22
Werkzeug 0.14.1
wheel 0.31.0
whitenoise 3.3.1
| closed | 2018-04-25T15:11:28Z | 2018-09-25T14:26:37Z | https://github.com/encode/apistar/issues/489 | [] | marekkosta | 8 |
dynaconf/dynaconf | django | 604 | Allow dotted first level variables on .ini and .properties [was:how to load simple config without sections?] | I have the following config sample:
```
#
# Use this file to override default entries in /usr/share/web/WEB-INF/classes/app.properties
#
app.web.serverURL=https://app.host.org
securitySalt=l56MPNX9I1XnTghgkRaCjlxfzyPZJR6zOjCQ3vBF8
```
I cannot load it with any parser. | closed | 2021-06-26T14:05:03Z | 2022-06-02T19:21:50Z | https://github.com/dynaconf/dynaconf/issues/604 | [
"RFC"
] | amg-web | 2 |
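Until dynaconf grows support for sectionless files, the sample in the issue above can be read with the standard library by injecting a dummy section header before parsing — a sketch of a workaround, not dynaconf API (the section name `properties` is arbitrary):

```python
import configparser

# The sectionless .properties content from the report above.
raw = """\
#
# Use this file to override default entries in /usr/share/web/WEB-INF/classes/app.properties
#
app.web.serverURL=https://app.host.org
securitySalt=l56MPNX9I1XnTghgkRaCjlxfzyPZJR6zOjCQ3vBF8
"""

parser = configparser.ConfigParser()
# Prepend a synthetic section so configparser accepts the sectionless file.
parser.read_string("[properties]\n" + raw)

# Note: configparser lower-cases option names by default (optionxform).
print(parser["properties"]["app.web.serverurl"])  # https://app.host.org
```

Setting `parser.optionxform = str` before `read_string` would preserve the original key casing.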
3b1b/manim | python | 2,003 | An example of manim pi creature scene with different emotions of them | Here we found some pi creatures: https://www.3blue1brown.com/images/pi-creatures/happy.svg and so on; the "happy.svg" can be replaced by other "modes" that appear in 3b1b's video source code (like "hooray" and "sad"). These pi creatures are shown [here](https://github.com/CaftBotti/manim_pi_creatures). Do not use for commercial purposes.
https://user-images.githubusercontent.com/111475301/224537098-3a42075d-371b-4c15-9e84-a3ef16f9ceb4.mp4
So far we have found these pi creature modes:
1. alien
2. angry
3. awe
4. concentrating
5. concerned_musician
6. confused
7. conniving
8. dance_1
9. dance_2
10. dance_3
11. dance_kick
12. dejected
13. erm
14. frustrated
15. gracious
16. guilty
17. happy
18. hesitant
19. hooray
20. horrified
21. maybe
22. miner
23. monster
24. plain
25. pleading
26. pondering
27. raise_left_hand
28. raise_right_hand
29. sad
30. sassy
31. shruggie
32. sick
33. speaking
34. surprised
35. tease
36. thinking
37. tired
38. wave_1
39. wave_2
40. wave_3
41. well | closed | 2023-03-12T09:59:59Z | 2023-03-13T02:18:24Z | https://github.com/3b1b/manim/issues/2003 | [] | CaftBotti | 0 |
feature-engine/feature_engine | scikit-learn | 232 | Is it possible to generalize PRatioEncoder as VarEncoder? | To encode categories, PRatioEncoder uses the formula
p(1)/p(0)
and is only suitable for y ~ Bernoulli (classification).
But here is an idea: the variance of a Bernoulli-distributed X equals p(1)*p(0), which is closely related to the first formula and should be correlated with it. Computing a variance should also allow the encoder to be used in regression tasks (when y is not {0,1}), and it opens the door to options for different dispersion measures, like var, entropy, mad, etc. | closed | 2021-01-27T14:58:19Z | 2021-02-08T12:03:57Z | https://github.com/feature-engine/feature_engine/issues/232 | [] | glevv | 1 |
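The relationship behind the idea above is easy to check on a toy category: for {0, 1} targets the population variance equals p(1)*p(0), so it summarizes the same two probabilities as p(1)/p(0). A plain-Python sketch (names are illustrative, not feature_engine API):

```python
# Binary targets observed within one category of a feature.
y = [0, 1, 1, 0, 1]

p1 = sum(y) / len(y)      # P(y = 1) within the category
p0 = 1 - p1
ratio = p1 / p0           # PRatioEncoder-style statistic
bernoulli_var = p1 * p0   # variance of a Bernoulli variable

# For {0, 1} data the population variance equals p(1) * p(0).
mean = p1
pop_var = sum((v - mean) ** 2 for v in y) / len(y)
print(ratio, bernoulli_var, pop_var)  # 1.5, 0.24, 0.24 (up to float rounding)
```

Unlike the ratio, the variance stays finite when p(0) = 0, and a dispersion measure like this is well defined for continuous y too, which is what would make a VarEncoder usable in regression.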
flairNLP/flair | nlp | 3,270 | [Feature]: Add Documentation link to Github repo | ### Problem statement
It's always nice to quickly reference documentation when coding. Currently users have to scroll down "below the fold" to click one of several links to access the documentation.
### Solution
Add the [documentation link](https://flairnlp.github.io/) to the sidebar of the github page. This is a ~30 second fix!
### Additional Context
Here's an example from one of my repositories:

| closed | 2023-06-15T21:31:18Z | 2023-06-20T02:58:18Z | https://github.com/flairNLP/flair/issues/3270 | [
"feature"
] | DecafSunrise | 2 |
coqui-ai/TTS | deep-learning | 4,135 | [Bug] When I generate a TTS model and play it, I only hear noise. | ### Describe the bug
Hello,
I wanted to create a TTS model using my voice with Coqui TTS, so I followed the tutorial to implement it.
I wrote a train.py file to train the voice model, but when I try to play TTS using the model I created, I only hear noise.
I thought the issue might be with my audio files, so I tried modeling with 100 samples from the LJSpeech Dataset instead, but I still only hear noise.
### To Reproduce
Here is my train.py source code:
```python
import os

from trainer import Trainer, TrainerArgs

from TTS.tts.configs.glow_tts_config import GlowTTSConfig
from TTS.tts.configs.shared_configs import BaseDatasetConfig
from TTS.tts.datasets import load_tts_samples
from TTS.tts.models.glow_tts import GlowTTS
from TTS.tts.utils.text.tokenizer import TTSTokenizer
from TTS.utils.audio import AudioProcessor

output_path = os.path.dirname(os.path.abspath(__file__))

dataset_config = BaseDatasetConfig(
    formatter="ljspeech", meta_file_train="metadata.csv", path=os.path.join(output_path, "files/LJSpeech-1.1")
)

config = GlowTTSConfig(
    batch_size=32,
    eval_batch_size=16,
    num_loader_workers=4,
    num_eval_loader_workers=4,
    run_eval=True,
    test_delay_epochs=-1,
    epochs=10,
    text_cleaner="phoneme_cleaners",
    use_phonemes=True,
    phoneme_language="en-us",
    phoneme_cache_path=os.path.join(output_path, "phoneme_cache"),
    print_step=25,
    print_eval=False,
    mixed_precision=True,
    output_path=output_path,
    datasets=[dataset_config],
)

ap = AudioProcessor.init_from_config(config)
tokenizer, config = TTSTokenizer.init_from_config(config)

train_samples, eval_samples = load_tts_samples(
    dataset_config,
    eval_split=True,
    eval_split_max_size=config.eval_split_max_size,
    eval_split_size=config.eval_split_size,
)

model = GlowTTS(config, ap, tokenizer, speaker_manager=None)

trainer = Trainer(
    TrainerArgs(), config, output_path, model=model, train_samples=train_samples, eval_samples=eval_samples
)

if __name__ == '__main__':
    trainer.fit()
```
And here is the command I used to play the TTS:
```shell
tts --text "Text for TTS" --model_path ./files/best_model.pth --config_path ./files/config.json --out_path output.wav
```
### Expected behavior
_No response_
### Logs
```shell
> Training Environment:
| > Backend: Torch
| > Mixed precision: True
| > Precision: fp16
| > Num. of CPUs: 16
| > Num. of Torch Threads: 8
| > Torch seed: 54321
| > Torch CUDNN: True
| > Torch CUDNN deterministic: False
| > Torch CUDNN benchmark: False
| > Torch TF32 MatMul: False
> Start Tensorboard: tensorboard --logdir=./output
> Model has 28610257 parameters
 > EPOCH: 0/10
--> ./output
| > avg_loader_time: 0.0013079643249511719 (+0)
| > avg_loss: 3.632810592651367 (+0)
| > avg_log_mle: 0.8002523481845856 (+0)
| > avg_loss_dur: 2.8325581550598145 (+0)
| > avg_loader_time: 0.00635981559753418 (+0.005051851272583008)
| > avg_loss: 3.632810592651367 (+0.0)
| > avg_log_mle: 0.8002523481845856 (+0.0)
| > avg_loss_dur: 2.8325581550598145 (+0.0)
| > avg_loader_time: 0.0036824941635131836 (-0.002677321434020996)
| > avg_loss: 3.632810592651367 (+0.0)
| > avg_log_mle: 0.8002523481845856 (+0.0)
| > avg_loss_dur: 2.8325581550598145 (+0.0)
| > avg_loader_time: 0.007590651512145996 (+0.0039081573486328125)
| > avg_loss: 3.6250685453414917 (-0.007742047309875488)
| > avg_log_mle: 0.798242598772049 (-0.002009749412536621)
| > avg_loss_dur: 2.826825976371765 (-0.005732178688049316)
| > avg_loader_time: 0.0026175975799560547 (-0.004973053932189941)
| > avg_loss: 3.6232705116271973 (-0.0017980337142944336)
| > avg_log_mle: 0.7982403934001923 (-2.205371856689453e-06)
| > avg_loss_dur: 2.8250300884246826 (-0.0017958879470825195)
| > avg_loader_time: 0.009380459785461426 (+0.006762862205505371)
| > avg_loss: 3.6205027103424072 (-0.002767801284790039)
| > avg_log_mle: 0.7982217967510223 (-1.8596649169921875e-05)
| > avg_loss_dur: 2.822281002998352 (-0.0027490854263305664)
| > avg_loader_time: 0.01347208023071289 (+0.004091620445251465)
| > avg_loss: 3.6190003156661987 (-0.001502394676208496)
| > avg_log_mle: 0.798188179731369 (-3.361701965332031e-05)
| > avg_loss_dur: 2.820812225341797 (-0.0014687776565551758)
| > avg_loader_time: 0.003623485565185547 (-0.009848594665527344)
| > avg_loss: 3.6169681549072266 (-0.002032160758972168)
| > avg_log_mle: 0.7981387376785278 (-4.9442052841186523e-05)
| > avg_loss_dur: 2.8188294172286987 (-0.0019828081130981445)
| > avg_loader_time: 0.005441427230834961 (+0.001817941665649414)
| > avg_loss: 3.6120744943618774 (-0.004893660545349121)
| > avg_log_mle: 0.7980725467205048 (-6.619095802307129e-05)
| > avg_loss_dur: 2.8140019178390503 (-0.0048274993896484375)
| > avg_loader_time: 0.012539029121398926 (+0.007097601890563965)
| > avg_loss: 3.6240179538726807 (+0.011943459510803223)
| > avg_log_mle: 0.7979885637760162 (-8.398294448852539e-05)
| > avg_loss_dur: 2.8260293006896973 (+0.012027382850646973)
```
### Environment
```shell
- 🐸TTS Version: 0.22
- PyTorch Version: 2.2.2
- Python version: 3.11.6
- OS: macOS Sequoia 15.0
- CUDA/cuDNN version: N/A
- GPU models and configuration: Radeon pro 575X
- How you installed PyTorch: pip
- Any other relevant information: Intel Core i9 8 Core, 48GB Ram
```
### Additional context
_No response_ | closed | 2025-01-22T06:44:25Z | 2025-02-16T23:11:42Z | https://github.com/coqui-ai/TTS/issues/4135 | [
"bug"
] | chuyeonhak | 5 |
benbusby/whoogle-search | flask | 921 | [BUG] locally hosted services with IP addresses are still prepended with m. and mobile. | When setting the social media redirect to a locally hosted service, e.g. http://192.168.0.209:3401, any mobile links that appear will point to http://mobile.192.168.0.209:3401 or http://m.192.168.0.209:3401. I think this requires the same solution as #913.
**To Reproduce**
Steps to reproduce the behavior:
1. Set one of the social media redirect environment variables to a locally hosted service
2. Enable social media redirects in the service
3. Search for something with mobile links (I searched "twitter dead by daylight")
4. Hover over mobile links and see the URL
**Deployment Method**
- [ ] Heroku (one-click deploy)
- [x] Docker
- [ ] `run` executable
- [ ] pip/pipx
- [ ] Other: [describe setup]
**Version of Whoogle Search**
- [x] Latest build from [source] (i.e. GitHub, Docker Hub, pip, etc)
- [ ] Version [version number]
- [ ] Not sure
**Desktop (please complete the following information):**
- OS: unRAID
- Browser: Firefox
- Version: Latest build from source | closed | 2023-01-02T02:53:31Z | 2023-01-03T17:19:41Z | https://github.com/benbusby/whoogle-search/issues/921 | [
"bug"
] | cazwacki | 1 |
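As with #913, the fix for the issue above presumably comes down to skipping the `m.`/`mobile.` prefix whenever the redirect host is a bare IP address rather than a domain name. A minimal sketch of such a check with the standard library (the function name and wiring are illustrative, not Whoogle's actual code):

```python
import ipaddress
from urllib.parse import urlparse


def is_ip_host(url: str) -> bool:
    """Return True when the URL's host is a bare IP address."""
    host = urlparse(url).hostname or ""
    try:
        ipaddress.ip_address(host)
        return True
    except ValueError:
        return False


# A redirect rewriter could then skip the "m." / "mobile." subdomain:
print(is_ip_host("http://192.168.0.209:3401"))  # True
print(is_ip_host("https://twitter.com"))        # False
```

`ipaddress.ip_address` accepts both IPv4 and IPv6 literals, so `http://[::1]:8080` would be exempted as well.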
mwaskom/seaborn | data-science | 3,023 | Error during legend creation with mixture of marks | Here's a minimal example; it seems you need all three layers to trigger the error:
```python
(
so.Plot(penguins, "bill_length_mm", "bill_depth_mm", color="species")
.add(so.Dots())
.add(so.Line(), so.PolyFit(1))
.add(so.Line(), so.PolyFit(2))
)
```
<details><summary>Traceback</summary>
```python-traceback
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
File ~/miniconda3/envs/seaborn-py39-latest/lib/python3.9/site-packages/IPython/core/formatters.py:343, in BaseFormatter.__call__(self, obj)
341 method = get_real_method(obj, self.print_method)
342 if method is not None:
--> 343 return method()
344 return None
345 else:
File ~/code/seaborn/seaborn/_core/plot.py:275, in Plot._repr_png_(self)
273 def _repr_png_(self) -> tuple[bytes, dict[str, float]]:
--> 275 return self.plot()._repr_png_()
File ~/code/seaborn/seaborn/_core/plot.py:814, in Plot.plot(self, pyplot)
810 """
811 Compile the plot spec and return the Plotter object.
812 """
813 with theme_context(self._theme_with_defaults()):
--> 814 return self._plot(pyplot)
File ~/code/seaborn/seaborn/_core/plot.py:847, in Plot._plot(self, pyplot)
844 plotter._plot_layer(self, layer)
846 # Add various figure decorations
--> 847 plotter._make_legend(self)
848 plotter._finalize_figure(self)
850 return plotter
File ~/code/seaborn/seaborn/_core/plot.py:1608, in Plotter._make_legend(self, p)
1605 for i, artist in enumerate(existing_artists):
1606 # Matplotlib accepts a tuple of artists and will overlay them
1607 if isinstance(artist, tuple):
-> 1608 artist += artist[i],
1609 else:
1610 existing_artists[i] = artist, artists[i]
IndexError: tuple index out of range
<seaborn._core.plot.Plot at 0x144c1f6a0>
```
</details> | closed | 2022-09-13T21:24:39Z | 2022-10-04T23:44:57Z | https://github.com/mwaskom/seaborn/issues/3023 | [
"bug",
"objects-plot"
] | mwaskom | 1 |
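The failing line in the traceback above can be reduced to a self-contained sketch: with three layers there are three legend entries, and `artist += artist[i],` indexes into the two-element tuple itself instead of the parallel `artists` list, so it blows up once `i` reaches 2 (the values below are stand-ins, not real matplotlib artists):

```python
# Stand-ins for the variables in Plotter._make_legend.
artists = ["dots", "fit1", "fit2"]                       # new artists, one per layer
existing_artists = [("a", "b"), ("c", "d"), ("e", "f")]  # tuples from earlier merges

error = None
try:
    for i, artist in enumerate(existing_artists):
        if isinstance(artist, tuple):
            artist += artist[i],                         # bug: indexes the tuple itself
except IndexError as exc:
    error = exc

print(type(error).__name__)  # IndexError, once i == 2 exceeds len(("e", "f")) - 1
```

The intended behaviour is presumably `existing_artists[i] = artist + (artists[i],)`, mirroring the `else` branch shown in the traceback.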
ultralytics/ultralytics | python | 19,553 | Build a dataloader without training | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
Hi. I would like to examine data in a train dataloader. How can I build one without starting training? I plan to train a detection model. Here is my attempt:
``` python
from ultralytics.models.yolo.detect import DetectionTrainer
import os

# Define paths
DATA_YAML = f"{os.environ['DATASETS']}/drone_tiny/data.yaml"  # Path to dataset YAML file
WEIGHTS_PATH = f"{os.environ['WEIGHTS']}/yolo11n.pt"  # Path to local weights file
SAVE_IMAGES_DIR = f"{os.environ['PROJECT_ROOT']}/saved_images"

# Ensure save directory exists
os.makedirs(SAVE_IMAGES_DIR, exist_ok=True)

# Load the model
trainer = DetectionTrainer(
    overrides=dict(
        data=DATA_YAML
    )
)

train_data, test_data = trainer.get_dataset()
dataloader = trainer.get_dataloader(train_data)
```
But this fails with the following error:
```bash
Ultralytics 8.3.77 🚀 Python-3.10.12 torch-2.3.1+cu121 CUDA:0 (NVIDIA GeForce GTX 1080 Ti, 11169MiB)
engine/trainer: task=detect, mode=train, model=None, data=/home/daniel/drone_detection/datasets/drone_tiny/data.yaml, epochs=100, time=None, patience=100, batch=16, imgsz=640, save=True, save_period=-1, cache=False, device=None, workers=8, project=None, name=train9, exist_ok=False, pretrained=True, optimizer=auto, verbose=True, seed=0, deterministic=True, single_cls=False, rect=False, cos_lr=False, close_mosaic=10, resume=False, amp=True, fraction=1.0, profile=False, freeze=None, multi_scale=False, overlap_mask=True, mask_ratio=4, dropout=0.0, val=True, split=val, save_json=False, save_hybrid=False, conf=None, iou=0.7, max_det=300, half=False, dnn=False, plots=True, source=None, vid_stride=1, stream_buffer=False, visualize=False, augment=False, agnostic_nms=False, classes=None, retina_masks=False, embed=None, show=False, save_frames=False, save_txt=False, save_conf=False, save_crop=False, show_labels=True, show_conf=True, show_boxes=True, line_width=None, format=torchscript, keras=False, optimize=False, int8=False, dynamic=False, simplify=True, opset=None, workspace=None, nms=False, lr0=0.01, lrf=0.01, momentum=0.937, weight_decay=0.0005, warmup_epochs=3.0, warmup_momentum=0.8, warmup_bias_lr=0.1, box=7.5, cls=0.5, dfl=1.5, pose=12.0, kobj=1.0, nbs=64, hsv_h=0.015, hsv_s=0.7, hsv_v=0.4, degrees=0.0, translate=0.1, scale=0.5, shear=0.0, perspective=0.0, flipud=0.0, fliplr=0.5, bgr=0.0, mosaic=1.0, mixup=0.0, copy_paste=0.0, copy_paste_mode=flip, auto_augment=randaugment, erasing=0.4, crop_fraction=1.0, cfg=None, tracker=botsort.yaml, save_dir=/home/daniel/drone_detection/runs/detect/train9
train: Scanning /home/daniel/drone_detection/datasets/drone_tiny/train/labels.cache... 14180 images, 5 backgrounds, 304 corrupt: 100%|██████████| 14180/14180 [00:00<?, ?it/s]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_008_0080.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0372]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_008_0085.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0828 1.0406]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_008_0090.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1289 1.1016]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_008_0160.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0086]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_008_0165.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0617]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_008_0170.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1147]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_008_0175.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1697]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_008_0180.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2278]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_008_0185.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2805]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_008_0190.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.3251]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_008_0195.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.373]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_008_0200.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.4189]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_008_0210.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.46]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_008_0215.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1533]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_008_0245.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0034]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_008_0250.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0331]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_008_0255.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0627]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_008_0260.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0924]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_008_0265.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1218]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_008_0270.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.15]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_008_0275.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1782]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_008_0280.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2064]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_008_0285.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2345]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_008_0290.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2627]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_008_0295.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2885]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_008_0300.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0089 1.3143]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_009_0085.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1051]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_009_0095.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0496]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_009_0100.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0596]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_009_0105.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0697]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_009_0110.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0792]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_009_0115.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0879]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_009_0120.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0966]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_009_0125.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1074]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_009_0130.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.3334]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_009_0135.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.327]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_009_0140.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.3212]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_009_0145.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.3178]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_009_0150.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.3144]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_009_0155.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.3085]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_009_0160.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.3027]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_009_0165.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2968]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_009_0170.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2906]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_009_0175.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2838]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_009_0180.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2763]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_009_0185.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2688]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_009_0190.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2613 1.0031]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_009_0195.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2536 1.0071]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_009_0200.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2454 1.0103]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_009_0205.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2239 1.0308]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_009_0210.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2292 1.0312]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_009_0215.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2344 1.0314]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_009_0220.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2396 1.0316]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_009_0225.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2448 1.0317]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_009_0230.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.25 1.0319]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_009_0235.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2552 1.0321]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_009_0240.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2613 1.0322]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_009_0245.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2675 1.0324]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_009_0250.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2738 1.0326]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_009_0255.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.28 1.0327]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_009_0260.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2863 1.0329]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_009_0265.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2925 1.0331]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_009_0270.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0394 1.4043]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_009_0275.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0459 1.4003]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_009_0280.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0519 1.3962]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_009_0285.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0578 1.392]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_009_0290.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0638 1.3876]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_009_0295.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0696 1.3826]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_009_0300.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0755 1.3777]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_009_0305.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0981 1.1988]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_010_0195.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.4141]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_010_0200.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.4111]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_010_0205.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.3877]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_010_0210.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1133]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_011_0135.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1162]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_011_0140.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2451]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_011_0145.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2451]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_011_0150.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2197]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_011_0155.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1699]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_011_0160.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1377]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_011_0165.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0446]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_012_0140.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0225]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_012_0145.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.041]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_012_0150.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0664]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_012_0155.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0732]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_014_0080.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0008]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_014_0085.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0109]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_014_0090.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0109]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_014_0095.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0039]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_014_0100.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0234]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_014_0105.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0102]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_014_0110.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0187]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_014_0120.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.291]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_014_0125.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.3008]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_014_0130.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.3008]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_014_0135.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2998]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_014_0140.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.3066]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_014_0145.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.3008]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_014_0150.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2988]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_014_0155.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.3008]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_014_0160.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.3057]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_014_0165.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.3125]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_014_0195.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.501]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_014_0200.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.4619]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_014_0205.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.4033]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_014_0210.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.3584]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_014_0215.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.3203]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_014_0220.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2812]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_014_0225.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2119]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_014_0230.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2979]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_014_0300.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1805]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_014_0305.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1513]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_014_0310.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1203]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_016_0240.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0264]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_017_0040.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0022]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_017_0130.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0332]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_017_0135.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1279]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_017_0140.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2119]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_017_0145.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.3174]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_017_0150.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.46 1.1621]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_017_0155.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0641 1.1924]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_017_0160.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1625 1.2627]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_019_0100.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0953]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_019_0105.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0253]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_019_0130.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.074]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_019_0135.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1273]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_019_0140.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2088]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_019_0145.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2828]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_019_0150.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.3647]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_019_0155.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.4316]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_020_0005.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1328]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_020_0010.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.074]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_020_0015.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1471]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_020_0020.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2915]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_020_0025.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.4306]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_020_0075.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0109]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_020_0080.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0616]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_020_0085.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1024]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_020_0090.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1336]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_020_0095.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1503]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_020_0100.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0135]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_020_0105.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0301]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_020_0110.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0064]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_020_0190.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2914]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_020_0195.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0749 1.6481]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_020_0200.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1635]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_020_0205.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1814]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_020_0210.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1368]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_020_0215.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0764]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_020_0220.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0011]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_020_0230.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0615]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_020_0260.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.3175]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_020_0265.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1808]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_020_0270.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1024]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_020_0275.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1924]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_020_0280.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.212]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_020_0285.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2217]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_020_0290.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2113]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_020_0295.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2198]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_020_0300.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2273]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_020_0305.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2393]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_020_0310.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2576]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_020_0315.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2777]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_020_0320.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.3006]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_020_0325.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.3257]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_021_0030.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1551]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_021_0035.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2778]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_021_0050.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0009]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_021_0055.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0416]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_021_0060.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0731]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_021_0065.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1222]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_021_0070.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1693]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_021_0130.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1211]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_021_0135.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0438]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_021_0140.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1514]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_021_0195.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.4294]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_021_0200.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.4373]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_022_0005.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.3317]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_022_0010.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.3545]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_022_0015.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.3887]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_022_0020.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.4316]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_022_0070.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.5488]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_022_0075.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.4951]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_022_0080.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.4268]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_022_0085.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.3955]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_022_0090.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.332]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_022_0095.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.293]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_022_0100.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2324]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_022_0105.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.209]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_022_0110.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1582]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_022_0115.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.126]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_022_0120.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0879]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_022_0125.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0615]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_022_0130.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0186]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_022_0135.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0029]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_022_0225.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0159]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_022_0230.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0475]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_022_0240.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.219]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_022_0245.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.3054]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_023_0090.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1074]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_023_0095.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1396]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_023_0100.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0781]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_023_0120.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.001]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_023_0125.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0254]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_023_0130.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0459]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_023_0135.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.083]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_023_0140.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1084]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_023_0145.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1288]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_023_0150.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1608]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_023_0155.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1904]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_023_0160.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2158]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_023_0165.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.248]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_023_0170.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2832]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_023_0175.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.3164]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_023_0230.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0358]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_023_0235.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.058]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_023_0290.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0137]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_040_0205.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.5144]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_040_0210.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.499]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_040_0215.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.4838]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_040_0220.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.4711]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_040_0225.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.4583]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_040_0230.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.4449]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_040_0235.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.4316]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_040_0240.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.4185]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_040_0245.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.4062]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_040_0250.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.394]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_040_0255.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.3825]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_040_0260.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.3711]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_040_0265.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.3597]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_040_0270.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.3492]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_040_0275.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.339]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_040_0280.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.3288]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_040_0285.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.3185]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_040_0290.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.3091]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_040_0295.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.3002]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_040_0300.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2913]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_047_0180.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0093]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_047_0185.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.02]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_047_0190.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0762]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_047_0195.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0915]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_047_0200.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0918]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_047_0205.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0795]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_047_0210.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0635]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_047_0215.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0572]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_047_0220.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.058]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_047_0225.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0629]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_047_0230.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0651]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_047_0235.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0686]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_047_0240.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0741]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_047_0245.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.086]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_047_0250.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0996]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_047_0255.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1134]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_047_0260.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1227]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_047_0265.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1309]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_047_0270.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1419]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_047_0275.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1574]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_047_0280.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1729]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_047_0285.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1965]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_047_0290.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2201]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_047_0295.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.248]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_047_0300.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.279]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_047_0305.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.3171]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_048_0045.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.013]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_048_0050.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0361]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_048_0055.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0654]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_048_0060.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0947]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_048_0065.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1305]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_048_0070.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.168]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_048_0075.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2059]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_048_0080.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2466]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_048_0085.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0159 1.2979]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_BIRD_048_0090.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0567 1.3491]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_DRONE_106_0210.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0331]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_DRONE_106_0215.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0688]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_DRONE_106_0220.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1057]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_DRONE_106_0225.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1506]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_DRONE_106_0230.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1969]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_DRONE_106_0235.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2488]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_DRONE_106_0240.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2977]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_DRONE_106_0245.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.3359]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_DRONE_106_0250.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.375]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_DRONE_106_0255.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.4082]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_DRONE_106_0260.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.4092]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_DRONE_106_0265.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.3945]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_DRONE_106_0270.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.3677]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_DRONE_106_0275.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.3319]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_DRONE_106_0280.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2922]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_DRONE_106_0285.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.254]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_DRONE_106_0290.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2168]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_DRONE_106_0295.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1918]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_DRONE_106_0300.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1668]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_DRONE_107_0005.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.2162]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_DRONE_107_0010.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1862]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_DRONE_107_0015.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1634]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_DRONE_107_0020.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.139]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_DRONE_107_0025.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.1135]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_DRONE_107_0030.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0799]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_DRONE_107_0035.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0442]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_DRONE_107_0040.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0207]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_HELICOPTER_040_0220.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0618]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_HELICOPTER_040_0225.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0493]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_HELICOPTER_040_0230.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0368]
train: WARNING ⚠️ /home/daniel/drone_detection/datasets/drone_tiny/train/images/V_HELICOPTER_040_0235.png: ignoring corrupt image/label: non-normalized or out of bounds coordinates [ 1.0206]
Traceback (most recent call last):
File "/home/daniel/drone_detection/visualizations/dataloader.py", line 23, in <module>
dataloader = trainer.get_dataloader(train_data)
File "/home/daniel/drone_detection/ultralytics_src/ultralytics/models/yolo/detect/train.py", line 55, in get_dataloader
return build_dataloader(dataset, batch_size, workers, shuffle, rank) # return dataloader
File "/home/daniel/drone_detection/ultralytics_src/ultralytics/data/build.py", line 144, in build_dataloader
sampler = None if rank == -1 else distributed.DistributedSampler(dataset, shuffle=shuffle)
File "/home/daniel/.local/lib/python3.10/site-packages/torch/utils/data/distributed.py", line 68, in __init__
num_replicas = dist.get_world_size()
File "/home/daniel/.local/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 1769, in get_world_size
return _get_group_size(group)
File "/home/daniel/.local/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 841, in _get_group_size
default_pg = _get_default_group()
File "/home/daniel/.local/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 1008, in _get_default_group
raise ValueError(
ValueError: Default process group has not been initialized, please make sure to call init_process_group.
```
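The warnings above are about label coordinates slightly greater than 1 (e.g. `1.0093`): Ultralytics discards the whole image/label pair when any normalized value leaves `[0, 1]`. If the labels are only marginally out of bounds, one hedged option (a sketch, not part of Ultralytics itself) is to clip them back into range before training:

```python
def clip_yolo_label_line(line: str) -> str:
    """Clip the coordinate fields of one YOLO label line into [0, 1].

    YOLO labels look like "<class> <cx> <cy> <w> <h>" with normalized
    values; anything outside [0, 1] (like the 1.0093 in the warnings)
    makes the loader discard the whole sample.
    """
    parts = line.split()
    cls = parts[0]
    coords = [min(max(float(v), 0.0), 1.0) for v in parts[1:]]
    return " ".join([cls] + [f"{v:.6f}" for v in coords])

print(clip_yolo_label_line("0 0.5 0.5 1.0093 0.2"))
# -> 0 0.500000 0.500000 1.000000 0.200000
```

Separately, the `ValueError` at the end is unrelated to the warnings: the quoted `build_dataloader` line constructs a `DistributedSampler` whenever `rank != -1`, which requires `torch.distributed.init_process_group` to have been called first.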
### Additional
_No response_ | closed | 2025-03-06T12:17:34Z | 2025-03-06T14:01:49Z | https://github.com/ultralytics/ultralytics/issues/19553 | [
"question",
"dependencies",
"detect"
] | daniellehot | 3 |
mckinsey/vizro | plotly | 541 | Mobile version layout bugs | ### Description
Here's some configurations where layout is not working as expected:
1. Table in one container, graph in second
<img width="294" alt="image" src="https://github.com/mckinsey/vizro/assets/35569332/f4a6c52f-72d0-4392-b678-340486a39cf5">
2. Table in one container, graph in second in horizontal orientation
<img width="843" alt="image" src="https://github.com/mckinsey/vizro/assets/35569332/ecde59a2-1c55-42bf-a40f-0ef1d117a9fa">
3. Two graphs in horizontal orientation plus card
<img width="841" alt="image" src="https://github.com/mckinsey/vizro/assets/35569332/1e165435-3a88-4714-9659-f873d7d634be">
4. AgGrid title overlaps the second container tab
<img width="295" alt="image" src="https://github.com/mckinsey/vizro/assets/35569332/f694f78f-88a2-42f6-ac78-d738a0facb1e">
5. AgGrid is unreachable with another graph in horizontal orientation
<img width="840" alt="image" src="https://github.com/mckinsey/vizro/assets/35569332/f8f94bf8-8d96-4d9c-90ab-978aa6f8df0d">
6. No graph displayed with two tables on the same page
<img width="290" alt="image" src="https://github.com/mckinsey/vizro/assets/35569332/6a03b35f-171e-4597-970b-fec933359b1c">
7. No graph displayed with lots of components in one container
<img width="292" alt="image" src="https://github.com/mckinsey/vizro/assets/35569332/8d41f06a-ca49-453c-ae9e-41250070ac5a">
### Expected behavior
_No response_
### Which package?
vizro
### Package version
0.1.17
### Python version
3.9
### OS
Mac, Linux
### How to Reproduce
Just run the test examples from `vizro-qa`
### Output
_No response_
### Code of Conduct
- [X] I agree to follow the [Code of Conduct](https://github.com/mckinsey/vizro/blob/main/CODE_OF_CONDUCT.md). | open | 2024-06-21T13:06:28Z | 2024-06-27T08:28:34Z | https://github.com/mckinsey/vizro/issues/541 | [
"Bug Report :bug:"
] | l0uden | 0 |
Neoteroi/BlackSheep | asyncio | 63 | Correct error happening with latest pip-20.3.1 | ```bash
ERROR: Cannot install blacksheep and blacksheep==0.2.8 because these package versions have conflicting dependencies.
The conflict is caused by:
blacksheep 0.2.8 depends on essentials==1.1.4
essentials-openapi 0.0.9 depends on essentials==1.1.3
blacksheep 0.2.8 depends on essentials==1.1.4
essentials-openapi 0.0.2 depends on essentials==1.1.3
``` | closed | 2020-12-11T20:16:48Z | 2020-12-11T23:39:12Z | https://github.com/Neoteroi/BlackSheep/issues/63 | [] | RobertoPrevato | 0 |
TencentARC/GFPGAN | deep-learning | 275 | GFPGAN to TorchScript/TensorRT | Hello, I am trying to convert the GFPGAN model to TorchScript/TensorRT to increase model performance. Have there been any efforts on this yet?
So far I have made a successful conversion to ONNX (including the StyleGAN Decoder).
However, the conversion to TorchScript (or even just tracing) results in errors in the StyleGAN Decoder part. | open | 2022-09-27T10:24:54Z | 2023-11-29T13:57:11Z | https://github.com/TencentARC/GFPGAN/issues/275 | [] | lschaupp | 10 |
gee-community/geemap | jupyter | 1,352 | the properties of draw_features is {} | <!-- Please search existing issues to avoid creating duplicates. -->
### Environment Information
python ==3.8.8 Windows
```python
Map = geemap.Map(center=[34, 99], zoom=4, add_google_map=True)
Map
Map.draw_features[0].getInfo()
```
### Description
I upgraded geemap from 0.17.3 to 0.18.1. I have drawn a vector on the map, but the displayed feature is empty.
`{'type': 'Feature', 'geometry': None, 'properties': {}}`
| closed | 2022-12-01T03:57:14Z | 2022-12-02T00:04:12Z | https://github.com/gee-community/geemap/issues/1352 | [
"bug"
] | wurenzhe163 | 3 |
pinry/pinry | django | 310 | Upload and api auth | Hi, I'm trying to upload a set of images to my Pinry instance running on Docker, using the script you advised in #305:
https://github.com/winkidney/PickTrue/blob/feature/import-to-pinry/src/picktrue/pinry/uploader.py
The problem is that I'm not able to authenticate.
Calling /api/v2/profile/login/ returns 403.
Looking at https://github.com/pinry/pinry/blob/master/pinry-spa/src/components/api.js I'm sure I'm missing something.
But what?
From debug I got:
> Forbidden (Referer checking failed - no Referer.): /api/v2/profile/login/
I've tried adding my calling IP/domain to ALLOWED_HOSTS, but it doesn't work.
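For reference, a stdlib sketch of the missing piece (the base URL and credentials below are placeholders, and your instance may differ): Django's CSRF protection over HTTPS rejects requests that lack a same-origin `Referer` header, which is exactly what the debug line above says, so a scripted client usually has to send one explicitly:

```python
import urllib.request

BASE = "https://pinry.example.com"  # placeholder: your instance's URL

req = urllib.request.Request(
    BASE + "/api/v2/profile/login/",
    data=b'{"username": "...", "password": "..."}',  # placeholders
    headers={
        "Content-Type": "application/json",
        # Django's CSRF middleware over HTTPS rejects requests without a
        # same-origin Referer -- exactly the "Referer checking failed"
        # line in the debug output above.
        "Referer": BASE + "/",
    },
    method="POST",
)
# urllib.request.urlopen(req) would send it; a csrftoken cookie plus an
# X-CSRFToken header are typically needed as well (see api.js).
```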
Thank you | closed | 2021-12-13T10:00:19Z | 2021-12-16T08:32:12Z | https://github.com/pinry/pinry/issues/310 | [] | mbelletti | 2 |
littlecodersh/ItChat | api | 641 | Search chat history feature | open | 2018-04-19T14:22:36Z | 2018-06-06T05:15:56Z | https://github.com/littlecodersh/ItChat/issues/641 | [
"help wanted"
] | imporseble | 0 | |
pydata/xarray | pandas | 9,647 | Could we defer to flox for `GroupBy.first`? | ### Is your feature request related to a problem?
I was wondering why a `groupby("foo").first()` call was going so slowly — I think we run a python loop for this, rather than calling into flox:
https://github.com/pydata/xarray/blob/b9780e7a32b701736ebcf33d9cb0b380e92c91d5/xarray/core/groupby.py#L1218-L1231
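For intuition, here is a plain-Python sketch (not xarray's or flox's actual code) of the difference: the linked fallback effectively runs one reduction per group, whereas a vectorized `first` can be a single pass over the data:

```python
def group_first(keys, values):
    """Single-pass 'first value per group': O(n) over the data."""
    out = {}
    for k, v in zip(keys, values):
        out.setdefault(k, v)  # only the first occurrence of each key sticks
    return out

print(group_first(["a", "b", "a", "c", "b"], [1, 2, 3, 4, 5]))
# -> {'a': 1, 'b': 2, 'c': 4}
```

flox/numbagg implement the same idea as array kernels, which is what the issue proposes deferring to.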
### Describe the solution you'd like
Could we call into flox? Numbagg has the routines...
### Describe alternatives you've considered
_No response_
### Additional context
_No response_ | closed | 2024-10-18T20:55:43Z | 2025-03-19T14:48:04Z | https://github.com/pydata/xarray/issues/9647 | [
"enhancement",
"topic-groupby"
] | max-sixty | 4 |
OpenBB-finance/OpenBB | python | 6,848 | [🕹️] Starry-eyed Supporter | ### What side quest or challenge are you solving?
Starry-eyed Supporter
### Points
150
### Description
github accounts:
https://github.com/umairullah0905
https://github.com/akaswang
https://github.com/umeshs25
https://github.com/giteshsarvaiya
https://github.com/Hamsegg
### Provide proof that you've completed the task





| closed | 2024-10-23T07:30:21Z | 2024-10-23T12:37:26Z | https://github.com/OpenBB-finance/OpenBB/issues/6848 | [] | rajeevDewangan | 2 |
zappa/Zappa | flask | 696 | [Migrated] Attribute not found decorator | Originally from: https://github.com/Miserlou/Zappa/issues/1777 by [dadrake3](https://github.com/dadrake3)
<!--- Provide a general summary of the issue in the Title above -->
## Context
```python
def zappa_async(func):
    print('here')

    @wraps(func)
    @task(capture_response=True)
    def func_wrap_async(*args, **kwargs):
        return func(*args, **kwargs)

    def func_wrap_async_response_id(*args, **kwargs):
        return func_wrap_async(*args, **kwargs).response_id

    return func_wrap_async_response_id
```
<!--- Provide a more detailed introduction to the issue itself, and why you consider it to be a bug -->
<!--- Also, please make sure that you are running Zappa _from a virtual environment_ and are using Python 2.7/3.6 -->
3.6
## Expected Behavior
<!--- Tell us what should happen -->
Take a function and return a new function that is asynchronous and returns its response id
## Actual Behavior
<!--- Tell us what happens instead -->
lambda throws module 'rap_stats.MapReduce' has no attribute 'func_wrap_async': AttributeError
## Possible Fix
<!--- Not obligatory, but suggest a fix or reason for the bug -->
I'm not sure, but I think it is losing the reference to the first closure on subsequent invocations.
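Judging by the `AttributeError` above, the async dispatch resolves the task as a module attribute by name. A stdlib sketch (not Zappa's actual code) of why that fails for functions created inside a decorator: decorators apply bottom-up, so `@task` sees the inner function while it is still named `func_wrap_async`, a name that is never bound at module level:

```python
import functools

def make_wrapper(func):
    # Stand-in for the decorator in the issue: in that code @wraps(func)
    # sits above @task(...), and decorators apply bottom-up, so the task
    # layer records the inner name 'func_wrap_async'.
    @functools.wraps(func)
    def func_wrap_async(*args, **kwargs):
        return func(*args, **kwargs)
    return func_wrap_async

@make_wrapper
def job(x):
    return x * 2

print(job(3))                          # 6 -- calling it directly works
print("func_wrap_async" in globals())  # False -- lookup by that name fails
```

A common workaround under this assumption is to define the `@task`-decorated function at module level and have the wrapper call it, so the recorded name stays importable.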
## Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
* Zappa version used:
0.47.1
* Operating System and Python version:
python3.6 macOSX mojave 14.1
* The output of `pip freeze`:
argcomplete==1.9.3
atomicwrites==1.3.0
attrs==18.2.0
beautifulsoup4==4.7.1
boto3==1.9.89
botocore==1.12.89
certifi==2018.11.29
cfn-flip==1.1.0.post1
chardet==3.0.4
Click==7.0
docutils==0.14
durationpy==0.5
Flask==1.0.2
future==0.16.0
hjson==3.0.1
idna==2.8
itsdangerous==1.1.0
Jinja2==2.10
jmespath==0.9.3
kappa==0.6.0
lambda-packages==0.20.0
markovify==0.7.1
MarkupSafe==1.1.0
more-itertools==5.0.0
numpy==1.16.1
placebo==0.8.2
pluggy==0.8.1
py==1.7.0
pytest==4.2.0
python-dateutil==2.8.0
python-slugify==1.2.4
PyYAML==3.13
requests==2.21.0
s3transfer==0.2.0
six==1.12.0
soupsieve==1.7.3
toml==0.10.0
tqdm==4.19.1
troposphere==2.4.2
Unidecode==1.0.23
urllib3==1.24.1
Werkzeug==0.14.1
wsgi-request-logger==0.4.6
zappa==0.47.1
* Your `zappa_settings.py`:
```
{
    "dev": {
        "app_function": "main.app",
        "aws_region": "us-east-1",
        "profile_name": "default",
        "project_name": "rap-stats",
        "runtime": "python3.6",
        "s3_bucket": "rap-stats-api",
        "environment_variables": {
            "DATA-BUCKET": "rap-stats-data",
            "SERVERTYPE": "AWS Lambda"
        },
        "async_resources": "true",
        "async_response_table": "rap-stats-async-response-table",
        "timeout_seconds": 300,
        "certificate_arn": "arn:aws:acm:us-east-1:513448149218:certificate/e8a3691f-b297-4fcf-a7a2-19e657bf501c",
        "domain": "rap-stats.com",
        "manage_roles": false, // Disable Zappa client managing roles.
        "role_name": "rap-stats-dev-ZappaLambdaExecutionRole", // Name of your Zappa execution role. Optional, default: <project_name>-<env>-ZappaExecutionRole.
        "role_arn": "arn:aws:iam::513448149218:role/rap-stats-dev-ZappaLambdaExecutionRole"
    }
}
```
| closed | 2021-02-20T12:33:04Z | 2022-07-16T06:36:58Z | https://github.com/zappa/Zappa/issues/696 | [] | jneves | 1 |
psf/requests | python | 6,015 | Possible issue with proxies and TLS versions when using a session. | Using a session or a request object with the same parameters should yield the same results.
When a proxy is used, and when the target website supports TLS 1.0 and TLS 1.1 (or does not support TLS 1.3; I could not figure out which), a request object works fine, whereas a session throws an SSL error.
## Expected Result
```python
import requests
proxies = {
'http': 'http://127.0.0.1:8888',
'https': 'http://127.0.0.1:8888',
}
requests.get('https://sidep.gouv.fr/', proxies=proxies)
session = requests.Session()
session.proxies.update(proxies)
session.get('https://sidep.gouv.fr/')
```
The two ways to get the data should yield the same result.
## Actual Result
The request works, but not with the session:
```
HTTPSConnectionPool(host='sidep.gouv.fr', port=443): Max retries exceeded with url: / (Caused by SSLError(SSLError(1, '[SSL: WRONG_VERSION_NUMBER] wrong version number (_ssl.c:997)')))
Traceback (most recent call last):
File "C:\Users\Max\AppData\Local\Programs\Python\Python310\lib\site-packages\urllib3\connectionpool.py", line 696, in urlopen
self._prepare_proxy(conn)
File "C:\Users\Max\AppData\Local\Programs\Python\Python310\lib\site-packages\urllib3\connectionpool.py", line 964, in _prepare_proxy
conn.connect()
File "C:\Users\Max\AppData\Local\Programs\Python\Python310\lib\site-packages\urllib3\connection.py", line 364, in connect
conn = self._connect_tls_proxy(hostname, conn)
File "C:\Users\Max\AppData\Local\Programs\Python\Python310\lib\site-packages\urllib3\connection.py", line 501, in _connect_tls_proxy
socket = ssl_wrap_socket(
File "C:\Users\Max\AppData\Local\Programs\Python\Python310\lib\site-packages\urllib3\util\ssl_.py", line 453, in ssl_wrap_socket
ssl_sock = _ssl_wrap_socket_impl(sock, context, tls_in_tls)
File "C:\Users\Max\AppData\Local\Programs\Python\Python310\lib\site-packages\urllib3\util\ssl_.py", line 495, in _ssl_wrap_socket_impl
return ssl_context.wrap_socket(sock)
File "C:\Users\Max\AppData\Local\Programs\Python\Python310\lib\ssl.py", line 512, in wrap_socket
return self.sslsocket_class._create(
File "C:\Users\Max\AppData\Local\Programs\Python\Python310\lib\ssl.py", line 1070, in _create
self.do_handshake()
File "C:\Users\Max\AppData\Local\Programs\Python\Python310\lib\ssl.py", line 1341, in do_handshake
self._sslobj.do_handshake()
ssl.SSLError: [SSL: WRONG_VERSION_NUMBER] wrong version number (_ssl.c:997)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\Max\AppData\Local\Programs\Python\Python310\lib\site-packages\requests\adapters.py", line 439, in send
resp = conn.urlopen(
File "C:\Users\Max\AppData\Local\Programs\Python\Python310\lib\site-packages\urllib3\connectionpool.py", line 755, in urlopen
retries = retries.increment(
File "C:\Users\Max\AppData\Local\Programs\Python\Python310\lib\site-packages\urllib3\util\retry.py", line 574, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='sidep.gouv.fr', port=443): Max retries exceeded with url: / (Caused by SSLError(SSLError(1, '[SSL: WRONG_VERSION_NUMBER] wrong version number (_ssl.c:997)')))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\Max\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "C:\Users\Max\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 86, in _run_code
exec(code, run_globals)
File "c:\Users\Max\.vscode\extensions\ms-python.python-2021.12.1559732655\pythonFiles\lib\python\debugpy\__main__.py", line 45, in <module>
cli.main()
File "c:\Users\Max\.vscode\extensions\ms-python.python-2021.12.1559732655\pythonFiles\lib\python\debugpy/..\debugpy\server\cli.py", line 444, in main
run()
File "c:\Users\Max\.vscode\extensions\ms-python.python-2021.12.1559732655\pythonFiles\lib\python\debugpy/..\debugpy\server\cli.py", line 285, in run_file
runpy.run_path(target_as_str, run_name=compat.force_str("__main__"))
File "C:\Users\Max\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 269, in run_path
return _run_module_code(code, init_globals, run_name,
File "C:\Users\Max\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 96, in _run_module_code
_run_code(code, mod_globals, init_globals,
File "C:\Users\Max\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 86, in _run_code
exec(code, run_globals)
File "c:\Users\Max\testssl.py", line 16, in <module>
raise e
File "c:\Users\Max\testssl.py", line 13, in <module>
session.get('https://sidep.gouv.fr/')
File "C:\Users\Max\AppData\Local\Programs\Python\Python310\lib\site-packages\requests\sessions.py", line 555, in get
return self.request('GET', url, **kwargs)
File "C:\Users\Max\AppData\Local\Programs\Python\Python310\lib\site-packages\requests\sessions.py", line 542, in request
resp = self.send(prep, **send_kwargs)
File "C:\Users\Max\AppData\Local\Programs\Python\Python310\lib\site-packages\requests\sessions.py", line 655, in send
r = adapter.send(request, **kwargs)
File "C:\Users\Max\AppData\Local\Programs\Python\Python310\lib\site-packages\requests\adapters.py", line 514, in send
raise SSLError(e, request=request)
requests.exceptions.SSLError: HTTPSConnectionPool(host='sidep.gouv.fr', port=443): Max retries exceeded with url: / (Caused by SSLError(SSLError(1, '[SSL: WRONG_VERSION_NUMBER] wrong version number (_ssl.c:997)')))
```
When using a TLS 1.3-enabled website (which seems to mean TLS 1.0 and 1.1 are disabled), both versions work, for example:
```python
import requests
proxies = {
'http': 'http://127.0.0.1:8888',
'https': 'http://127.0.0.1:8888',
}
requests.get('https://example.com/', proxies=proxies)
session = requests.Session()
session.proxies.update(proxies)
session.verify = False
session.get('https://example.com/')
```
Without the proxy, it works fine for both websites. I spent a couple of hours trying many websites to figure out the breaking cause, and I believe it is the TLS version.
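One way to test the TLS-version hypothesis outside requests/urllib3 entirely is to handshake with the server using only the stdlib (a diagnostic sketch; `sidep.gouv.fr` is the host from the report):

```python
import socket
import ssl

def negotiated_version(host: str, port: int = 443,
                       minimum: ssl.TLSVersion = ssl.TLSVersion.TLSv1_2) -> str:
    """Handshake with `host` and return the TLS version that was agreed on.

    Bypasses requests/urllib3 completely, so it isolates the server's TLS
    support from any proxy or connection-pooling behaviour.
    """
    ctx = ssl.create_default_context()
    ctx.minimum_version = minimum
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()  # e.g. "TLSv1.2"

# e.g. negotiated_version("sidep.gouv.fr") -- an ssl.SSLError here (but not
# for example.com) would support the TLS-version theory.
```

Separately, `WRONG_VERSION_NUMBER` raised from `_connect_tls_proxy` usually means a TLS ClientHello was answered by something that is not speaking TLS at all, so it may be worth checking how the `http://` proxy scheme is interpreted on the session path as well.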
## System Information
```json
{
"chardet": {
"version": null
},
"charset_normalizer": {
"version": "2.0.9"
},
"cryptography": {
"version": ""
},
"idna": {
"version": "3.3"
},
"implementation": {
"name": "CPython",
"version": "3.10.1"
},
"platform": {
"release": "10",
"system": "Windows"
},
"pyOpenSSL": {
"openssl_version": "",
"version": null
},
"requests": {
"version": "2.26.0"
},
"system_ssl": {
"version": "101010cf"
},
"urllib3": {
"version": "1.26.7"
},
"using_charset_normalizer": true,
"using_pyopenssl": false
}
```
| open | 2021-12-27T11:27:52Z | 2024-05-24T21:23:59Z | https://github.com/psf/requests/issues/6015 | [] | defunes43 | 3 |
openapi-generators/openapi-python-client | fastapi | 408 | async syntax error | I'm using openapi-generator-cli v3.0.x; the code generation process is OK, but when using the API I get the following issue:
```
blorente@drama-laptop:~/Documentos/repos/openapi/test$ python3 test.py
Traceback (most recent call last):
  File "test.py", line 1, in <module>
    import openapi_client
  File "/home/blorente/Documentos/repos/openapi/test/openapi_client/__init__.py", line 18, in <module>
    from openapi_client.api.default_api import DefaultApi
  File "/home/blorente/Documentos/repos/openapi/test/openapi_client/api/__init__.py", line 6, in <module>
    from openapi_client.api.default_api import DefaultApi
  File "/home/blorente/Documentos/repos/openapi/test/openapi_client/api/default_api.py", line 124
    async=params.get('async'),
        ^
SyntaxError: invalid syntax
```
**To Reproduce**
Steps to reproduce the behavior:
1. Install openapi-generator-cli 3.0.0
2. Make a whatever.yaml file using openapi: 3.0.0
3. Run `openapi-generator-cli generate -i whatever.yaml -g python -o test` (also tried with `--additional-properties=library=asyncio`)
4. pip3 install -r requirements.txt
5. pip3 install -e openapi_client
6. Running a custom, simple script that imports the library
7. Run the custom script test.py (not autogenerated)
8. Crashing
```
blorente@drama-laptop:~/Documentos/repos/openapi/test$ python3 test.py
Traceback (most recent call last):
  File "test.py", line 1, in <module>
    import openapi_client
  File "/home/blorente/Documentos/repos/openapi/test/openapi_client/__init__.py", line 18, in <module>
    from openapi_client.api.default_api import DefaultApi
  File "/home/blorente/Documentos/repos/openapi/test/openapi_client/api/__init__.py", line 6, in <module>
    from openapi_client.api.default_api import DefaultApi
  File "/home/blorente/Documentos/repos/openapi/test/openapi_client/api/default_api.py", line 124
    async=params.get('async'),
        ^
SyntaxError: invalid syntax
```
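The root cause is visible in the traceback rather than in the spec: the generated `default_api.py` uses `async` as an argument name, and `async` has been a hard keyword since Python 3.7, so the module cannot even compile. A quick stdlib check, independent of the generator:

```python
import keyword

print(keyword.iskeyword("async"))  # True on Python 3.7+

# compile() reproduces the failure without importing the generated module:
snippet = "def f(**params):\n    g(async=params.get('async'))\n"
try:
    compile(snippet, "<generated>", "exec")
except SyntaxError as exc:
    print("SyntaxError:", exc.msg)
```

So the fix is on the generation side, not in the calling script: use a generator release that targets Python 3 (later releases renamed this parameter, e.g. to `async_req`).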
**Expected behavior**
I expect to be able to test the generated client API.
**OpenAPI Spec File**
https://pastebin.com/D2amPb8P
**Desktop (please complete the following information):**
- OS: Ubuntu 20.04
- Python Version: 3.8
- openapi-generator-cli version 3.0.0
**Additional context**
I'm learning OpenAPI generators by myself, so please be nice :).
| closed | 2021-05-04T18:51:11Z | 2021-05-04T18:53:25Z | https://github.com/openapi-generators/openapi-python-client/issues/408 | [
"🐞bug"
] | brunolorente | 1 |
mlflow/mlflow | machine-learning | 14,915 | [FR] automatically update artifact view in UI | ### Willingness to contribute
No. I cannot contribute this feature at this time.
### Proposal Summary
We often use a `.txt` artifact as a log and append to it over the course of a run. It would be nice if the artifact display were able to refresh the view when the artifact changes, like the plot views do.
Ideally it would be nice if this were automatic, but even a button to refresh would be better than the current setup -- if you refresh the entire page, the currently selected artifact becomes unselected and then you have to reselect it with the mouse to get the updated view.
### Motivation
> #### What is the use case for this feature?
This allows users to easily monitor text logging messages for long training runs.
> #### Why is this use case valuable to support for MLflow users in general?
I think this is a feature that would be generally useful -- it's helpful to be able to monitor text output from training runs in addition to plots, etc.
> #### Why is this use case valuable to support for your project(s) or organization?
See above
> #### Why is it currently difficult to achieve this use case?
I don't think there's any way to currently do this without modifying the UI/front end code.
### Details
_No response_
### What component(s) does this bug affect?
- [ ] `area/artifacts`: Artifact stores and artifact logging
- [ ] `area/build`: Build and test infrastructure for MLflow
- [ ] `area/deployments`: MLflow Deployments client APIs, server, and third-party Deployments integrations
- [ ] `area/docs`: MLflow documentation pages
- [ ] `area/examples`: Example code
- [ ] `area/model-registry`: Model Registry service, APIs, and the fluent client calls for Model Registry
- [ ] `area/models`: MLmodel format, model serialization/deserialization, flavors
- [ ] `area/recipes`: Recipes, Recipe APIs, Recipe configs, Recipe Templates
- [ ] `area/projects`: MLproject format, project running backends
- [ ] `area/scoring`: MLflow Model server, model deployment tools, Spark UDFs
- [ ] `area/server-infra`: MLflow Tracking server backend
- [ ] `area/tracking`: Tracking Service, tracking client APIs, autologging
### What interface(s) does this bug affect?
- [x] `area/uiux`: Front-end, user experience, plotting, JavaScript, JavaScript dev server
- [ ] `area/docker`: Docker use across MLflow's components, such as MLflow Projects and MLflow Models
- [ ] `area/sqlalchemy`: Use of SQLAlchemy in the Tracking Service or Model Registry
- [ ] `area/windows`: Windows support
### What language(s) does this bug affect?
- [ ] `language/r`: R APIs and clients
- [ ] `language/java`: Java APIs and clients
- [ ] `language/new`: Proposals for new client languages
### What integration(s) does this bug affect?
- [ ] `integrations/azure`: Azure and Azure ML integrations
- [ ] `integrations/sagemaker`: SageMaker integrations
- [ ] `integrations/databricks`: Databricks integrations | open | 2025-03-08T18:51:09Z | 2025-03-09T07:47:36Z | https://github.com/mlflow/mlflow/issues/14915 | [
"enhancement",
"area/uiux"
] | mazer-ai | 1 |
ivy-llc/ivy | numpy | 28,136 | Getting the stateful tests up and running | Trying to run the stateful tests throws an error, e.g. running the tests for `ivy_tests/test_ivy/test_stateful/test_activations.py::test_elu` throws an error
```
E ModuleNotFoundError: No module named 'ELU'
```
This is the same across all other stateful tests. The goal of this task is to fix this error so that all stateful tests run without error unless there's a genuine test failure.
"Bounty"
] | vedpatwardhan | 7 |
flairNLP/flair | pytorch | 3,401 | A missing implementation of a method causing training to be stopped | ### Describe the bug
A missing implementation of a method called "to_params" in "flair/embeddings/base.py" causes training to stop in the middle.
### To Reproduce
```python
from flair.data import Corpus, Sentence, Label
from flair.embeddings import WordEmbeddings, FlairEmbeddings, DocumentLSTMEmbeddings
from flair.models import TextClassifier
from flair.trainers import ModelTrainer
# Load embeddings
word_embeddings = [WordEmbeddings('glove'), FlairEmbeddings('news-forward-fast'), FlairEmbeddings('news-backward-fast')]
document_embeddings = DocumentLSTMEmbeddings(word_embeddings, hidden_size=512, reproject_words=True, reproject_words_dimension=256)
# create a corpus as usual ...
classifier = TextClassifier(document_embeddings,
label_dictionary = corpus.make_label_dictionary(label_type='anger'),
label_type='anger',
multi_label=True)
# Create a ModelTrainer and train the model
trainer = ModelTrainer(classifier, corpus)
trainer.train('/kaggle/working/anger',max_epochs=10)
```
### Expected behavior
should continue to train
### Logs and Stack traces
```stacktrace
---------------------------------------------------------------------------
NotImplementedError Traceback (most recent call last)
Cell In[89], line 3
1 # Create a ModelTrainer and train the model
2 trainer = ModelTrainer(classifier, corpus)
----> 3 trainer.train('/kaggle/working/anger',max_epochs=10, save_optimizer_state=False)
File /opt/conda/lib/python3.10/site-packages/flair/trainers/trainer.py:200, in ModelTrainer.train(self, base_path, anneal_factor, patience, min_learning_rate, initial_extra_patience, anneal_with_restarts, learning_rate, decoder_learning_rate, mini_batch_size, eval_batch_size, mini_batch_chunk_size, max_epochs, optimizer, train_with_dev, train_with_test, reduce_transformer_vocab, main_evaluation_metric, monitor_test, monitor_train_sample, use_final_model_for_eval, gold_label_dictionary_for_eval, exclude_labels, sampler, shuffle, shuffle_first_epoch, embeddings_storage_mode, epoch, save_final_model, save_optimizer_state, save_model_each_k_epochs, create_file_logs, create_loss_file, write_weights, plugins, attach_default_scheduler, **kwargs)
189 for var in [
190 "self",
191 "anneal_factor",
(...)
197 "kwargs",
198 ]:
199 local_variables.pop(var)
--> 200 return self.train_custom(**local_variables, **kwargs)
File /opt/conda/lib/python3.10/site-packages/flair/trainers/trainer.py:735, in ModelTrainer.train_custom(self, base_path, learning_rate, decoder_learning_rate, mini_batch_size, eval_batch_size, mini_batch_chunk_size, max_epochs, optimizer, train_with_dev, train_with_test, max_grad_norm, reduce_transformer_vocab, main_evaluation_metric, monitor_test, monitor_train_sample, use_final_model_for_eval, gold_label_dictionary_for_eval, exclude_labels, sampler, shuffle, shuffle_first_epoch, embeddings_storage_mode, epoch, save_final_model, save_optimizer_state, save_model_each_k_epochs, create_file_logs, create_loss_file, write_weights, use_amp, plugins, **kwargs)
733 if save_best_model and current_epoch_has_best_model_so_far:
734 log.info("saving best model")
--> 735 self.model.save(base_path / "best-model.pt", checkpoint=save_optimizer_state)
737 # - SWAPlugin -> restores SGD weights from SWA
738 self.dispatch("after_training_loop")
File /opt/conda/lib/python3.10/site-packages/flair/nn/model.py:119, in Model.save(self, model_file, checkpoint)
112 def save(self, model_file: Union[str, Path], checkpoint: bool = False):
113 """Saves the current model to the provided file.
114
115 Args:
116 model_file: the model file
117 checkpoint: currently unused.
118 """
--> 119 model_state = self._get_state_dict()
121 # write out a "model card" if one is set
122 if self.model_card is not None:
File /opt/conda/lib/python3.10/site-packages/flair/models/text_classification_model.py:65, in TextClassifier._get_state_dict(self)
62 def _get_state_dict(self):
63 model_state = {
64 **super()._get_state_dict(),
---> 65 "document_embeddings": self.embeddings.save_embeddings(use_state_dict=False),
66 "label_dictionary": self.label_dictionary,
67 "label_type": self.label_type,
68 "multi_label": self.multi_label,
69 "multi_label_threshold": self.multi_label_threshold,
70 "weight_dict": self.weight_dict,
71 }
72 return model_state
File /opt/conda/lib/python3.10/site-packages/flair/embeddings/base.py:103, in Embeddings.save_embeddings(self, use_state_dict)
102 def save_embeddings(self, use_state_dict: bool = True):
--> 103 params = self.to_params()
104 if use_state_dict:
105 params["state_dict"] = self.state_dict()
File /opt/conda/lib/python3.10/site-packages/flair/embeddings/base.py:91, in Embeddings.to_params(self)
90 def to_params(self) -> Dict[str, Any]:
---> 91 raise NotImplementedError
NotImplementedError:
```
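For context, the trace ends in the abstract `to_params` on the embeddings base class, i.e. the serialization hook that `save()` relies on. A generic illustration of the `to_params`/`from_params` pair it expects (made-up class and fields, not flair's actual implementation):

```python
# Illustrative stand-in for a flair.embeddings.base.Embeddings subclass;
# the class name and fields are invented for the example.
class MyDocumentEmbeddings:
    def __init__(self, hidden_size: int, reproject_words: bool = True):
        self.hidden_size = hidden_size
        self.reproject_words = reproject_words

    def to_params(self) -> dict:
        # everything needed to rebuild the object from a checkpoint
        return {"hidden_size": self.hidden_size,
                "reproject_words": self.reproject_words}

    @classmethod
    def from_params(cls, params: dict) -> "MyDocumentEmbeddings":
        return cls(**params)

# round-trip: serialize to params, then rebuild an equivalent object
restored = MyDocumentEmbeddings.from_params(
    MyDocumentEmbeddings(512).to_params()
)
```

Any embeddings class missing this pair will hit the `NotImplementedError` in the base class as soon as the trainer tries to save the best model.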
### Screenshots
<img width="594" alt="Screenshot 2024-02-02 144452" src="https://github.com/flairNLP/flair/assets/93437568/90906b31-8d78-46b2-9ea9-a9befe690a6d">
<img width="642" alt="Screenshot 2024-02-02 143347" src="https://github.com/flairNLP/flair/assets/93437568/adc52000-03d0-4f6e-b2da-f18300efb75b">
### Additional Context
_No response_
### Environment
#### Versions:
##### Flair
0.13.1
##### Pytorch
2.1.2
##### Transformers
4.37.0
#### GPU
True | closed | 2024-02-02T09:14:46Z | 2024-02-03T02:01:39Z | https://github.com/flairNLP/flair/issues/3401 | [
"bug"
] | SanjanaVHerur | 3 |
cs230-stanford/cs230-code-examples | computer-vision | 7 | Organization of the blog posts | ### General (common between TensorFlow and PyTorch)
1. Introduction to project starter code
2. Logging + hyperparams
3. AWS setup
4. Train/Dev/Test set
### TensorFlow
1. Getting started
2. Dataset pipeline: `tf.data`
3. Creating the model (`tf.layers`) + training + evaluation
- model
- training ops
- input_fn and model_fn
- evaluation and `tf.metrics`
- initialization
- saving
- tensorboard
- global_step
| closed | 2018-01-31T03:39:22Z | 2018-02-01T09:51:48Z | https://github.com/cs230-stanford/cs230-code-examples/issues/7 | [] | omoindrot | 0 |
jina-ai/serve | machine-learning | 6,119 | Release Note | # Release Note
This release contains 1 bug fix.
## 🐞 Bug Fixes
### Fix dependency on OpenTelemetry Exporter Prometheus (#6118)
We fixed the dependency version with `opentelemetry-exporter-prometheus` to avoid using deprecated versions.
## 🤟 Contributors
We would like to thank all contributors to this release:
- Joan Fontanals (@JoanFM)
| closed | 2023-12-01T08:33:41Z | 2023-12-01T10:47:10Z | https://github.com/jina-ai/serve/issues/6119 | [] | JoanFM | 0 |
JoeanAmier/XHS-Downloader | api | 32 | Can the username and publish time be collected? | As optional configuration items for the file name / folder name, which users could set themselves in the configuration file.
Also, I have to say it is really easy to use, thanks for sharing | open | 2023-12-25T09:18:20Z | 2023-12-25T14:03:35Z | https://github.com/JoeanAmier/XHS-Downloader/issues/32 | [] | lqg5522 | 2
RobertCraigie/prisma-client-py | pydantic | 620 | Investigate memory usage | ## Problem
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
We haven't put any effort towards investigating whether or not there are any memory leaks / improvements that could be made.
## Suggested solution
<!-- A clear and concise description of what you want to happen. -->
TBD
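One possible starting point (a suggestion, not something decided in this issue): compare heap snapshots around a workload with the stdlib `tracemalloc` module. The workload below is a stand-in allocation, not a real Prisma query:

```python
import tracemalloc

def top_allocations(workload, limit=10):
    """Run workload() and return the biggest allocation deltas by source line."""
    tracemalloc.start()
    before = tracemalloc.take_snapshot()
    workload()
    after = tracemalloc.take_snapshot()
    tracemalloc.stop()
    return after.compare_to(before, "lineno")[:limit]

held = []  # keep allocations alive so they show up in the second snapshot
stats = top_allocations(lambda: held.extend(bytearray(1024) for _ in range(1000)))
for stat in stats[:3]:
    print(stat)
```

Running this against a batch of client queries (instead of the stand-in lambda) would show which source lines retain the most memory between snapshots.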
## Additional context
<!-- Add any other context or screenshots about the feature request here. -->
https://discord.com/channels/933860922039099444/933860923117043718/1047108450737459281
| open | 2022-11-29T11:21:30Z | 2024-08-20T15:57:43Z | https://github.com/RobertCraigie/prisma-client-py/issues/620 | [
"kind/improvement",
"topic: internal",
"topic: perf",
"priority/medium",
"level/unknown"
] | RobertCraigie | 1 |
Sanster/IOPaint | pytorch | 109 | SD1.5 : RuntimeError: Input type (float) and bias type (c10::Half) should be the same | I'm getting "RuntimeError: Input type (float) and bias type (c10::Half) should be the same" using SD1.5 with those parameters :
lama-cleaner --model=sd1.5 --device=cpu --port=8181 --sd-run-local --sd-cpu-textencoder
any idea how to fix this ? | closed | 2022-11-03T11:01:16Z | 2023-06-06T21:17:22Z | https://github.com/Sanster/IOPaint/issues/109 | [] | AntoineTurmel | 5 |
pytorch/pytorch | python | 149,556 | dynamo: dont graph break on `torch.jit.isinstance` | It looks like this is a flavor of `isinstance()` that is meant to make torchscript happy. From user empathy day, it looks like `torchaudio` uses this API pretty heavily. We should probably just handle it in dynamo (by mapping it to builtin `isinstance`). Example: https://github.com/pytorch/audio/blob/main/src/torchaudio/functional/functional.py#L233
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames | open | 2025-03-19T21:41:37Z | 2025-03-24T10:41:06Z | https://github.com/pytorch/pytorch/issues/149556 | [
"triaged",
"oncall: pt2",
"module: dynamo"
] | bdhirsh | 0 |
thp/urlwatch | automation | 2 | os.getlogin() - Inappropriate ioctl for device | Hello. In the file `handler.py`, line 173, you use `os.getlogin()`.
According to the `os.getlogin()` doc, it « Returns the user logged in to the controlling terminal of the process. ».
It means that if there is no controlling terminal, because urlwatch is launched by cron, or by a systemd.service for example, it will fails with this error:
`OSError: [Errno 25] Inappropriate ioctl for device`
You can find a "fix" for a similar issue in the gitpython repositery: https://github.com/swallat/GitPython/commit/f362d10fa24395c21b1629923ccd705ba73ae996
Thank you for this program.
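For what it's worth, the usual portable replacement is the stdlib `getpass.getuser()` (my suggestion; I have not checked whether the linked commit does the same), which falls back to environment variables and the password database instead of requiring a controlling terminal:

```python
import getpass

# Works under cron/systemd too: tries the LOGNAME, USER, LNAME and
# USERNAME environment variables, then falls back to the pwd database.
user = getpass.getuser()
print(user)
```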
| closed | 2014-07-14T09:09:47Z | 2020-06-09T22:15:41Z | https://github.com/thp/urlwatch/issues/2 | [] | ghost | 1 |
Lightning-AI/LitServe | rest-api | 352 | How to route /docs path in litserve behind a proxy? | I have hosted litserve as kubernetes(EKS) deployment with a service, now it is further connected to a proxy with Virtual service CRD and gateway.
In eks deployment,
- Model: the url works 0.0.0.0:4000/predict after port forwarding.
- Docs: The url works 0.0.0.0:4000/docs after port forwarding.
In EKS Service, the above url works, mapping 4000:4000, and then port forwarding.
Now, Istio's virtual service has prefix set as "modV1" and I am able to hit the model api as
`domain-name/modV1/predict`
But /docs api doesn't work from virtual service,
`domain-name/modV1/docs`
How can I update or redirect the /docs route in litserve so it works behind a proxy? | closed | 2024-11-04T05:04:09Z | 2024-11-10T20:07:29Z | https://github.com/Lightning-AI/LitServe/issues/352 | [
"bug",
"help wanted"
] | Mayurji | 6 |
tqdm/tqdm | jupyter | 1,503 | Progress bar is not showing while training. | - [ ] I have marked all applicable categories:
+ [ ] exception-raising bug
+ [ ] visual output bug
- [ ] I have visited the [source website], and in particular
read the [known issues]
- [ ] I have searched through the [issue tracker] for duplicates
- [ ] I have mentioned version numbers, operating system and
environment, where applicable:
```python
import os
import sys
from random import shuffle
import numpy as np
import matplotlib.pyplot as plt
import argparse
from pathlib import Path
import logging
import yaml
import mlflow
from tqdm import tqdm
import csv
from sklearn.metrics import (accuracy_score, precision_score,
recall_score, f1_score)
import torch
import torch.nn as nn
from torch.functional import F
import torch.optim as optim
from torch.optim import lr_scheduler
import torch.backends.cudnn as cudnn
from dataset import *
from pretrained_models import get_model
from dataset.data import load_dataset
from mlflow import log_metric, log_param, log_params, log_artifacts
from time import sleep
ROOT = Path(__file__).resolve().parents[0]
if str(ROOT) not in sys.path:
    sys.path.append(str(ROOT))
config_file = "configs/configs.yaml"
logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)
file_handler = logging.FileHandler(filename= "logger.log")
stream_handler = logging.StreamHandler()
formatter = logging.Formatter(fmt= "%(asctime)s: %(message)s", datefmt= '%Y-%m-%d %H:%M:%S')
file_handler.setFormatter(formatter)
stream_handler.setFormatter(formatter)
logger.addHandler(file_handler)
logger.addHandler(stream_handler)
def read_args():
parser = argparse.ArgumentParser()
parser.add_argument("--epochs", type = int, default= 100, help = "number of iterations")
parser.add_argument("--learning_rate", type = float, default= 1e-4, help= "learning rate value")
parser.add_argument("--batch", type = int, default=16, help= "batch size")
parser.add_argument("--weight_decay", type = float, default=1e-5, help="value of weight decay parameter")
parser.add_argument("--save", type = str, help= "path to runs directory to save the results")
parser.add_argument("--workers", type = int, default=8, help= "number of data loader workers")
parser.add_argument('--model', type = str, help= "select model from: resnet18, DenseNet121, vgg16")
parser.add_argument('--colab', action= "store_true", help="colab training option")
parser.add_argument("--subset", action= "store_true", help= "whether to use subset")
opt = parser.parse_args()
return opt
def get_num_correct(preds, labels):
"""
get num of corrects predictions
Parameters
----------
preds: torch.tensor
labels: torch.tensor
"""
return preds.argmax(dim=1).eq(labels).sum().item()
def calculate_metrics(y_pred, y_true, flag = "all"):
"""
calculate metrics for the training logs
Parameters
----------
y_pred: torch.tensor
y_true: torch.tensor
flag: str
"""
if flag == "all":
accuracy = accuracy_score(y_true, y_pred)
precision = precision_score(y_true, y_pred)
recall = recall_score(y_true, y_pred)
        f1 = f1_score(y_true, y_pred)
        return (accuracy, precision, recall, f1)
else:
accuracy = accuracy_score(y_true, y_pred)
return accuracy
def train(model,
optimizer,
criterion,
schedular,
train_loader,
val_loader,
args,
val_every
):
"""
train a model such vgg16, efficientNet, ResNet18
model: torchvision.models
optimizer: torch.optim.Adam
criterion: torch.nn.BCELossWithLogits
schedular: torch.optim.lr_schedular
configs:str
args: argparse.Namespace
"""
logger.info("creating a runs directory to store training runs...")
if args.save:
runs = os.path.join(args.save, 'Runs')
weights = os.path.join(runs, 'weights')
dirs = [runs, weights]
if not os.path.exists(runs) and not os.path.exists(weights):
for dir in dirs:
os.makedirs(dir)
#log training informations
device = "cuda" if torch.cuda.is_available() else "cpu"
logger.info("Learning rate: {}, batch size: {}, epochs: {}, device: {}".format(args.learning_rate, args.batch,
args.epochs, device))
# initialize training & validation variables
print()
valid_loss_min = np.inf
cols = [
'epoch', 'train_loss', 'train_acc', 'valid_loss', 'valid_acc'
]
rows = []
# train and validation set size
train_samples = len(train_loader.dataset)
val_samples = len(val_loader.dataset)
# push model to device
model.to(device)
# starting training.
for epoch in range(args.epochs):
epoch_loss = 0
train_corrects = 0
model.train()
with tqdm(train_loader, unit="batch") as tepoch:
for images, labels in tepoch:
tepoch.set_description(f'Epoch {epoch + 1}')
images, labels = images.to(device), labels.to(device)
predictions = model(images)
loss = criterion(predictions, labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
epoch_loss += loss.item() * labels.size(0)
train_corrects += get_num_correct(predictions, labels)
                #TODO: correct tqdm epoch updates...
tepoch.set_postfix(
loss=loss.item(), acc=train_corrects/train_samples)
sleep(0.01)
# now log epoch performance
train_loss = epoch_loss/train_samples
train_acc = train_corrects/train_samples
schedular.step()
log_metric("Training_accuracy", train_acc)
log_metric("Training_loss", train_loss)
# validate the model
if epoch % val_every == 0:
model.eval()
val_loss = 0
val_corrects = 0
with torch.no_grad():
for (images, labels) in val_loader:
images, labels = images.to(device), labels.to(device)
val_predictions = model(images)
val_iter_loss = criterion(val_predictions, labels)
val_loss += val_iter_loss.item() * labels.size(0)
val_corrects += get_num_correct(predictions, labels)
# average over the epoch
avg_val_loss = val_loss/val_samples
avg_val_acc = val_corrects / val_samples
rows.append([epoch, train_loss, train_acc, avg_val_loss, avg_val_acc])
log_metric("Validation_accuracy", avg_val_acc)
log_metric("Validation_loss", avg_val_loss)
# write loss and acc
tepoch.write(
f'\n\t\tAvg train loss: {train_loss:.6f}', end='\t'
)
tepoch.write(f'Avg valid loss: {avg_val_loss:.6f}\n')
# save model if validation loss has decreased
if avg_val_loss <= valid_loss_min:
tepoch.write('\t\tvalid_loss decreased', end=' ')
tepoch.write(f'({valid_loss_min:.6f} -> {avg_val_loss:.6f})')
tepoch.write('\n\t\tsaving model...\n')
torch.save(
model.state_dict(),
f'lr3e-5_{model_name}_{device}.pth'
)
valid_loss_min = avg_val_loss
# write running results for plots
with open(f'{runs}/{model_name}.csv', 'w') as csv_file:
csv_writer = csv.writer(csv_file)
csv_writer.writerow(cols)
csv_writer.writerows(rows)
if __name__ == "__main__":
logger.info("Initializing..")
# start mlflow tracking
mlflow.start_run()
# open settings from a config file
val_every = 1
with open(config_file, 'r') as file:
cfg = yaml.safe_load(file)
# read commmand line args
args = read_args()
# data loader batch size
if args.batch:
cfg['DataLoader']["batch_size"] = args.batch
else:
batch = cfg["DataLoader"]["batch_size"]
# training epochs
if args.epochs:
epochs = args.epochs
else:
epochs = cfg["Training"]["epochs"]
# optimizer learning rate
if args.learning_rate:
lr = args.learning_rate
else:
lr = cfg["Training"]["learning_rate"]
# data loader workers
if args.workers:
workers = args.workers
else:
workers = cfg["DataLoader"]["workers"]
# optimizer weigth decay
if args.weight_decay:
weight_decay = args.weight_decay
else:
weight_decay = cfg["Training"]["weight_decay"]
# model selection
if args.model:
model_name = args.model
else:
model_name = cfg['Training']["model_name"]
# set paths for colab drive dataset directory
if args.colab:
cfg["general_configs"]["dataset splitted"] = "/gdrive/MyDrive/covid/data/COVID-19_Radiography_Dataset"
cfg["DataLoader"]["num_workers"] = 2
model = get_model(model_name, pretrained= True,
num_classes=cfg["DataLoader"]["num_classes"])
# get an optimizer
optimizer = optim.Adam(model.parameters(), lr= lr)
# get loss function
loss_function = nn.CrossEntropyLoss()
# leanring rate schedular
exp_lr_scheduler = lr_scheduler.StepLR(optimizer=optimizer, step_size=7, gamma=0.1)
# training data loader
training_loader = load_dataset(config_file= cfg, kind="train", subset = args.subset)
#valiation data loader
validation_loader = load_dataset(config_file= cfg, kind = 'val', subset = args.subset)
# list of training configuration to change when needed.
all_params = {"lr": lr,
"workers": workers,
"batch": args.batch if args.batch else None,
"weight_decay": weight_decay,
"epochs": epochs,
"model_name": model_name,
"optimizer": "adam",
"loss": "CrossEntropyLoss",
"num_classes": "4",
"schedular_steps": 7,
"schedular_gamma": 0.1,
"val_every": val_every
}
# log params configs
log_params(all_params)
logger.info("Starting Training")
print()
train(model = model,
optimizer= optimizer,
criterion= loss_function,
schedular= exp_lr_scheduler,
val_every = val_every,
train_loader= training_loader,
val_loader= validation_loader,
args= args)
print()
logger.info("Training finished.")
# end mlflow tracking
mlflow.end_run()
```
[source website]: https://github.com/tqdm/tqdm/
[known issues]: https://github.com/tqdm/tqdm/#faq-and-known-issues
[issue tracker]: https://github.com/tqdm/tqdm/issues?q=
Message
----------
I am trying to train the model, but when I train it no progress bar is shown. I would also like a progress bar for the validation loader. Am I using the tqdm code correctly in this case? Please correct me or suggest some examples where I can leverage the power of tqdm for training and inference.
Here is what I am getting from the tqdm code posted above. I want a progress bar, but it's not showing currently:

| open | 2023-08-26T08:41:32Z | 2023-08-26T08:44:13Z | https://github.com/tqdm/tqdm/issues/1503 | [] | faizan1234567 | 0 |
httpie/http-prompt | rest-api | 56 | Auto suggestion (like the fish shell) | Adding auto-suggestion should be easy with the help of prompt_toolkit.
Reference: http://python-prompt-toolkit.readthedocs.io/en/stable/pages/building_prompts.html#auto-suggestion
| closed | 2016-06-16T06:42:17Z | 2016-06-20T06:14:28Z | https://github.com/httpie/http-prompt/issues/56 | [
"enhancement",
"todo"
] | eliangcs | 1 |
scikit-image/scikit-image | computer-vision | 6,979 | Move testing with nightly wheels to shedule | ### Description:
Could we move our testing with nightly wheels to a regular schedule instead of running on every action? I feel like that would create less noise in PRs for contributors and maintainers alike; contributors might be confused / maintainers have to go digging to make sure it can be ignored (e.g. see https://github.com/scikit-image/scikit-image/pull/6978#pullrequestreview-1455791913).
If we use matplotlib's approach to this (see [this part of their testing workflow](https://github.com/matplotlib/matplotlib/blob/515cce40f14a4fe4eed15ddaa569052badb71229/.github/workflows/tests.yml#L247-L256)) it would also have the added benefit to run for every case of our test matrix. The use [an action to raise issues on failures to get notified](https://github.com/matplotlib/matplotlib/blob/515cce40f14a4fe4eed15ddaa569052badb71229/.github/workflows/tests.yml#L329-L343) which also immediately provides a place to discuss the failure rather then on some random PR.
I think matplotlib's approach could also be adapted to run on every push to `main` which is what we originally planned to do.
I'm happy to give either approach a shot and do a PR. Thoughts? | open | 2023-06-01T16:43:22Z | 2023-11-30T02:26:13Z | https://github.com/scikit-image/scikit-image/issues/6979 | [
":robot: type: Infrastructure",
":pray: Feature request",
":sleeping: Dormant"
] | lagru | 4 |
aminalaee/sqladmin | fastapi | 680 | on_form_prefill functionality | ### Checklist
- [X] There are no similar issues or pull requests for this yet.
### Is your feature related to a problem? Please describe.
I want to be able to have a field depend on another field and have it populate with a certain value when selected.
flask-admin has this for [editing](https://flask-admin.readthedocs.io/en/latest/_modules/flask_admin/model/base/#BaseModelView.on_form_prefil), I would like it for creating.
### Describe the solution you would like.
As an example, I have a custom form:
```
class ToolForm(Form):
name = SelectField("Name", choices=[(tool_name, tool_name) for tool_name in KnownTools.get_all_tool_names()])
config = JSONField("Config", validators=[InputRequired()])
```
When a user selects a name in the admin panel, I want to prefill the config field with the associated tool schema. In the case of a tool to send emails, you can imagine the config template to look something like
```
class SendEmailConfig(BaseModel):
sending_address: str
mail_server: str
mail_server_port: int
mail_username: str
mail_password: str
```
So, when a user selects "send_email" from the name dropdown, I would like the config field to default to
```
{
"sending_address": "str"
"mail_server": "str"
"mail_server_port": int
"mail_username": "str"
"mail_password": "str"
}
```
I think it would be nice to have an on_form_change method like on model_change
Assuming data is the form, I would like to be able to do something like:
```
async def on_form_change(self, data: Form, is_created: bool, request: Request):
if is_created:
default_config_class = KnownTools.get_tool(data.name).config
data.config = json.dumps({name: type_mapping.get(type.__name__, 'any') for name, type in default_config_class.__annotations__.items()})
```
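The template-generation step above can be sketched standalone (a plain annotated class instead of a pydantic model, and a made-up `type_mapping`):

```python
import json

# assumed mapping from annotation names to placeholder strings
type_mapping = {"str": "str", "int": "int", "float": "float", "bool": "bool"}

class SendEmailConfig:
    sending_address: str
    mail_server: str
    mail_server_port: int

def default_config_template(config_cls) -> str:
    # build {"field": "type-name"} from the class annotations
    return json.dumps(
        {name: type_mapping.get(tp.__name__, "any")
         for name, tp in config_cls.__annotations__.items()},
        indent=4,
    )

print(default_config_template(SendEmailConfig))
```

The resulting JSON string is what `data.config` would be prefilled with when the matching name is selected.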
### Describe alternatives you considered
Creating a custom template and using something like jQuery, but I would really rather not.
### Additional context
_No response_ | closed | 2023-12-07T19:15:27Z | 2023-12-12T19:52:49Z | https://github.com/aminalaee/sqladmin/issues/680 | [] | JettScythe | 3 |
wkentaro/labelme | deep-learning | 1,529 | When an image is rotated (EXIF orientation is not equal to 1), the labels generated by labelme_json_to_dataset do not match. | ### Provide environment information
with latest labelme_json_to_dataset.py script and the .exe as well
### What OS are you using?
windows 11
### Describe the Bug
When an image is rotated (EXIF orientation is not equal to 1), the labels generated by labelme_json_to_dataset do not match.

### Expected Behavior
They should be

The above image was obtained by temporarily modifying the code
```
import argparse
import base64
import json
import os
import os.path as osp
import io
import imgviz
import numpy as np
import PIL.Image
import PIL.ImageOps
import PIL.ExifTags
from loguru import logger
from labelme import utils
def apply_exif_orientation(image):
"""
Apply rotation based on the EXIF Orientation label of the image.
"""
try:
exif = image._getexif()
if exif is not None:
# 查找 EXIF Orientation 标签
orientation_tag = next(
(tag for tag, name in PIL.ExifTags.TAGS.items() if name == 'Orientation'), None
)
if orientation_tag is None:
return image # 未找到 Orientation 标签
orientation = exif.get(orientation_tag, 1)
rotate_values = {
3: 180,
6: 270,
8: 90
}
if orientation in rotate_values:
angle = rotate_values[orientation]
logger.info(f"Rotate the image by {angle} degrees according to the EXIF orientation {orientation}.")
return image.rotate(angle, expand=True)
except Exception as e:
logger.warning(f"Failed to apply EXIF orientation: {e}")
return image
def main():
logger.warning(
"DEPRECATED: This script will be removed in the near future. "
"Please use `labelme_export_json` instead."
)
logger.warning(
"NOTE: This script is aimed to demonstrate how to convert a JSON file "
"to a single image dataset. so it won't handle multiple JSON files to "
"generate a real-use dataset."
)
parser = argparse.ArgumentParser()
parser.add_argument("json_file")
parser.add_argument("-o", "--out", default=None)
args = parser.parse_args()
json_file = args.json_file
if args.out is None:
out_dir = osp.basename(json_file).replace(".", "_")
out_dir = osp.join(osp.dirname(json_file), out_dir)
else:
out_dir = args.out
if not osp.exists(out_dir):
os.mkdir(out_dir)
data = json.load(open(json_file))
imageData = data.get("imageData")
if not imageData:
imagePath = os.path.join(os.path.dirname(json_file), data["imagePath"])
with open(imagePath, "rb") as f:
imageData = f.read()
imageData = base64.b64encode(imageData).decode("utf-8")
image_bytes = base64.b64decode(imageData)
image = PIL.Image.open(io.BytesIO(image_bytes))
image = apply_exif_orientation(image)
img = np.array(image)
label_name_to_value = {"_background_": 0}
for shape in sorted(data["shapes"], key=lambda x: x["label"]):
label_name = shape["label"]
if label_name in label_name_to_value:
label_value = label_name_to_value[label_name]
else:
label_value = len(label_name_to_value)
label_name_to_value[label_name] = label_value
lbl, _ = utils.shapes_to_label(img.shape, data["shapes"], label_name_to_value)
label_names = [None] * (max(label_name_to_value.values()) + 1)
for name, value in label_name_to_value.items():
label_names[value] = name
lbl_viz = imgviz.label2rgb(
lbl, imgviz.asgray(img), label_names=label_names, loc="rb"
)
PIL.Image.fromarray(img).save(osp.join(out_dir, "img.png"))
utils.lblsave(osp.join(out_dir, "label.png"), lbl)
PIL.Image.fromarray(lbl_viz).save(osp.join(out_dir, "label_viz.png"))
with open(osp.join(out_dir, "label_names.txt"), "w") as f:
for lbl_name in label_names:
f.write(lbl_name + "\n")
logger.info("Saved to: {}".format(out_dir))
if __name__ == "__main__":
main()
```
### To Reproduce
_No response_ | open | 2025-01-15T08:46:53Z | 2025-01-15T08:46:53Z | https://github.com/wkentaro/labelme/issues/1529 | [] | FuHaoCheng | 0 |
plotly/dash | data-science | 2,890 | No Layout Exception when using string elements when the layout is a list | In all versions of Dash this works:
```python
app.layout = html.Div(["Select City", dcc.Dropdown()])
```
In dash >= 2.17, this will throw an error:
```python
app.layout = ["Select City", dcc.Dropdown()]
```
Here is a full example. When I run the app, I see the following error. After refreshing the screen it works fine.
```python
from dash import Dash, dcc
app = Dash(__name__)
app.layout = ["Select City", dcc.Dropdown()]
if __name__ == '__main__':
app.run(debug=True)
```

| open | 2024-06-17T16:03:07Z | 2024-08-13T19:52:07Z | https://github.com/plotly/dash/issues/2890 | [
"bug",
"P3"
] | AnnMarieW | 0 |
benbusby/whoogle-search | flask | 843 | Night mode: entered text is not visible (Heroku) | **Describe the bug**
A clear and concise description of what the bug is.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
**Deployment Method**
- [ ] Heroku (one-click deploy)
- [ ] Docker
- [ ] `run` executable
- [ ] pip/pipx
- [ ] Other: [describe setup]
**Version of Whoogle Search**
- [ ] Latest build from [source] (i.e. GitHub, Docker Hub, pip, etc)
- [ ] Version [version number]
- [ ] Not sure
**Desktop (please complete the following information):**
- OS: [e.g. iOS]
- Browser [e.g. chrome, safari]
- Version [e.g. 22]
**Smartphone (please complete the following information):**
- Device: [e.g. iPhone6]
- OS: [e.g. iOS8.1]
- Browser [e.g. stock browser, safari]
- Version [e.g. 22]
**Additional context**
Add any other context about the problem here.
| open | 2022-09-10T00:20:13Z | 2022-09-10T00:24:17Z | https://github.com/benbusby/whoogle-search/issues/843 | [
"bug"
] | rrn21833 | 0 |
zappa/Zappa | flask | 630 | [Migrated] Feature: pass response_id to async function enabling update task status | Originally from: https://github.com/Miserlou/Zappa/issues/1600 by [kiarashm](https://github.com/kiarashm)
## Description
<!-- Please describe the changes included in this PR -->
I was using the asynchronous functionality of zappa and wanted to be able to update the status of my executing 'task' in the dynamodb table created corresponding to the capture_response flag being set to true in the '@task' decorator. Currently, the status is just set to 'in progress', and is set to 'completed' when the task finishes executing. I had a long-running task, and wanted to be able to get more accurate feedback on the progress of my task (what stage it was on). In order to do so, I needed to be able to retrieve the response_id of the lambda function in which the task was being executed, so I could use it inside the function to update its progress.
To solve my issue, I have passed in 'response_id' as a parameter to **kwargs when I call the lambda function corresponding to my asynchronous task. This value requires the developer to add a **kwargs argument in the async task being defined AND add in a flag which enables this feature (allow_update_status) when declaring the task decorator:
ex)
```python
from zappa.async import task
@task(capture_response=True, allow_update_status=True)
def _asynch_task(foo, bar, **kwargs):
print("TRYING TO GET THE INSTANCE ID")
if 'response_id' in kwargs:
response_id = kwargs['response_id']
print(response_id)
```
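To make the kwargs injection concrete, here is a rough framework-agnostic sketch of the dispatcher side (hypothetical names, not Zappa's actual code): the captured response id is merged into kwargs only when the task opted in via the flag.

```python
# Hypothetical sketch of the injection described above: before invoking the
# task body, the dispatcher adds the captured response id to kwargs when the
# task opted in via allow_update_status.
def dispatch(task_fn, args, kwargs, response_id, allow_update_status):
    if allow_update_status:
        kwargs = {**kwargs, "response_id": response_id}
    return task_fn(*args, **kwargs)

def my_task(foo, **kwargs):
    # mirrors the pattern in the example above: read the id if present
    return kwargs.get("response_id")

print(dispatch(my_task, ("x",), {}, "abc-123", True))   # abc-123
print(dispatch(my_task, ("x",), {}, "abc-123", False))  # None
```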
For convenience, I also added a method to async.py which allows the developer to update the dynamodb table used to track async task status. The method simply takes in the response_id and the string to update the 'async_status' value to.
ex)
```python
from zappa.async import task, update_async_response
@task(capture_response=True, allow_update_status=True)
def _asynch_task(foo, bar, **kwargs):
print("TRYING TO GET THE INSTANCE ID")
if 'response_id' in kwargs:
response_id = kwargs['response_id']
update_async_response(response_id, "UPDATED STATUS")
```
Note(*) I currently have print statements in the update_async_response method for logging purposes. These can be removed or made optional if I include a flag in the method.
Would love your feedback on this! Have tested for my own use cases and seems to do the trick! Thanks!
| closed | 2021-02-20T12:26:53Z | 2024-04-13T17:36:00Z | https://github.com/zappa/Zappa/issues/630 | [
"no-activity",
"auto-closed"
] | jneves | 2 |
ivy-llc/ivy | pytorch | 28,311 | ivy.conj | **Why should this be implemented?**
- 3+ of the native frameworks have this function
- it's needed for a complex/long frontend function implementation
**Links to native framework implementations**
- [Jax](https://jax.readthedocs.io/en/latest/_autosummary/jax.lax.conj.html)
- [PyTorch](https://pytorch.org/docs/stable/generated/torch.conj.html)
- [TensorFlow](https://www.tensorflow.org/api_docs/python/tf/math/conj)
- [NumPy](https://numpy.org/doc/stable/reference/generated/numpy.conjugate.html)
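For a quick illustration of the behavior a unified `ivy.conj` would expose, here is the NumPy variant (chosen only because it is the lightest of the linked backends); the other frameworks' functions behave the same way:

```python
import numpy as np

# np.conjugate is one of the native functions a unified ivy.conj would
# dispatch to: it negates the imaginary part of each element.
z = np.array([1 + 2j, 3 - 4j])
print(np.conjugate(z))  # [1.-2.j 3.+4.j]
```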
| closed | 2024-02-17T17:12:36Z | 2024-03-20T03:56:41Z | https://github.com/ivy-llc/ivy/issues/28311 | [
"Next Release",
"Suggestion",
"Ivy API Experimental",
"Useful Issue"
] | ZenithFlux | 3 |
pallets-eco/flask-sqlalchemy | flask | 741 | comparing adjacent rows in R | Hi there,
In my dataframe, I have a column "dates" and I would like for R to walk through each row of dates in a loop to see if any date before or after it is within a 3-14 day range; if not, its index is added to a list of rows to be removed at the end of the loop.
for example:
my_dates <- c("1/4/2019", "1/18/2019", "4/3/2019", "2/20/2019", "4/5/2019")
I would want to remove the entire row containing 2/20/2019 because there is no other date that is within 3-14 days of that date.
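For what it's worth, here is the filtering rule sketched in Python (hypothetical, since the original data.frame isn't shown; an R translation with `as.Date` would follow the same shape). Note that taken literally, the 3-14 day rule would also drop 4/3/2019 and 4/5/2019, since they are only 2 days apart:

```python
from datetime import datetime

dates = ["1/4/2019", "1/18/2019", "4/3/2019", "2/20/2019", "4/5/2019"]
parsed = [datetime.strptime(d, "%m/%d/%Y") for d in dates]

def has_neighbor(i):
    # keep a row only if some OTHER date lies 3-14 days away from it
    return any(3 <= abs((parsed[j] - parsed[i]).days) <= 14
               for j in range(len(parsed)) if j != i)

kept = [d for i, d in enumerate(dates) if has_neighbor(i)]
print(kept)  # ['1/4/2019', '1/18/2019']
```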
Any help would be greatly appreciated!
| closed | 2019-05-22T21:44:47Z | 2020-12-05T20:21:50Z | https://github.com/pallets-eco/flask-sqlalchemy/issues/741 | [] | StephZank | 0 |
explosion/spaCy | deep-learning | 11,992 | spacy-clausie problem | ### Discussed in https://github.com/explosion/spaCy/discussions/11991
<div type='discussions-op-text'>
<sup>Originally posted by **Abelcanc3rhack3r** December 19, 2022</sup>
Hi,
I have a problem running spacy-clausie:
https://spacy.io/universe/project/spacy-clausie
I installed spacy-clausie by running: `python -m pip install git+https://github.com/mmxgn/spacy-clausie.git`
Then I ran the code in the example:
```
import spacy
import claucy
nlp = spacy.load("en_core_web_sm")
claucy.add_to_pipe(nlp)
doc = nlp("AE died in Princeton in 1955.")
print(doc._.clauses)
```
I got the error:
Traceback (most recent call last):
File "/home/yichen/PycharmProjects/openai_codex/NLP_labelling/dependency_rel/claucy.py", line 2, in <module>
import claucy
File "/home/yichen/PycharmProjects/openai_codex/NLP_labelling/dependency_rel/claucy.py", line 5, in <module>
claucy.add_to_pipe(nlp)
AttributeError: partially initialized module 'claucy' has no attribute 'add_to_pipe' (most likely due to a circular import)
How do I solve this problem?
Thanks
</div> | closed | 2022-12-19T04:12:47Z | 2022-12-19T04:13:51Z | https://github.com/explosion/spaCy/issues/11992 | [] | Abelcanc3rhack3r | 1 |
matplotlib/mplfinance | matplotlib | 486 | How to avoid 2 y axes? | I used the following code to generate the plot. But I'm having a very weird issue: for some series data, there end up being 2 y axes.
This is the broken one: "BOLLINGER_HBAND" and "BOLLINGER_LBAND" are not using the same y axis.
<img src="https://github.com/banhao/CoinBasePro-Trading-Simulator/blob/main/screenshot/XRPUSDT_1d.jpg">
But this one came out correct, and the code is exactly the same.
<img src="https://github.com/banhao/CoinBasePro-Trading-Simulator/blob/main/screenshot/XRPUSDT_4h.jpg">
```
mc = mpf.make_marketcolors(up='#5ac390',down='#fd6a6c',volume='in',edge='None',)
s = mpf.make_mpf_style(base_mpl_style='fivethirtyeight',marketcolors=mc)
mpf.plot( serial_data, type='candle', style=s, addplot=TA_plot, title=item+' '+interval, volume=True, panel_ratios=(4,1), ylabel='Price', ylabel_lower='Volume', returnfig=True, savefig=dict(fname=variable.plot_path+'/'+item+'_'+interval+'.png',dpi=300) )
```
And another question: how do I keep the font size the same as in the plot shown on the screen? When I show the plots on screen they are beautiful, but when I save them as .png files the font size changes and becomes ugly.
| closed | 2021-12-27T07:00:49Z | 2021-12-28T15:10:33Z | https://github.com/matplotlib/mplfinance/issues/486 | [
"question"
] | banhao | 6 |
aiortc/aiortc | asyncio | 758 | Can't hear client-side Audio after first burst of audio | I'm sending audio back to the client using this setup.
```python
import asyncio
import time

from aiortc import MediaStreamTrack
from av import AudioFrame, AudioResampler

class AudioStreamSendTrack(MediaStreamTrack):
kind = "audio"
def __init__(self, audio_send_queue: asyncio.Queue):
super().__init__()
self.audio_send_queue = audio_send_queue
self.resampler = AudioResampler(format='s16', layout='mono', rate=22050)
async def recv(self):
frame: AudioFrame = await self.audio_send_queue.get()
frame = self.resampler.resample(frame)[0] # Opus requires s16
        time.sleep(0.015)  # 15ms (note: this blocks the event loop, unlike await asyncio.sleep)
return frame
```
As I add the first batch of packets to the queue, the queue is cleared and the audio is heard on the client device (I have both iOS and JavaScript client-side code). Upon any subsequent adding of batches of packets to the queue, the queue is also cleared and I can see similar logs in debug mode in relation to the packets being sent, however I get no sound on the client-side anymore.
I can see that the MediaStreamTrack is remaining enabled and not muted on the client-side, so am unsure of what is causing this behaviour. Any advice on how I should debug this further or what could be the cause? | closed | 2022-08-16T12:58:43Z | 2023-05-25T11:49:43Z | https://github.com/aiortc/aiortc/issues/758 | [] | tand22 | 2 |
drivendataorg/cookiecutter-data-science | data-science | 137 | The need for a data/queries? | Throughout my DS career I've always worked with DB connections and structured parameterized SQL queries. I also like to place them in a dedicated folder, and have been placing them in the `data/raw` directory (which I think is wrong given the cookiecutter philosophy).
Also, in my work environment I keep a dedicated place for connections (either ENV vars, a directory, or a text file). But it would be nice to have somewhere to place those connection configs in a project-specific manner.
Does it make sense to have such a structure? To prompt the user in the same fashion as the AWS S3 bucket and profile? If so, what is the best way to do it?
guohongze/adminset | django | 93 | Error when connecting with a private key in webssh | The webssh program reports an error:
File "/usr/lib/python2.7/site-packages/webssh-0.8.0-py2.7.egg/webssh/handler.py", line 323, in ssh_connect_wrapped
worker = self.ssh_connect()
File "/usr/lib/python2.7/site-packages/webssh-0.8.0-py2.7.egg/webssh/handler.py", line 297, in ssh_connect
args = self.get_args()
File "/usr/lib/python2.7/site-packages/webssh-0.8.0-py2.7.egg/webssh/handler.py", line 274, in get_args
privatekey, password, self.privatekey_filename
File "/usr/lib/python2.7/site-packages/webssh-0.8.0-py2.7.egg/webssh/handler.py", line 225, in get_pkey_obj
or cls.get_specific_pkey(paramiko.Ed25519Key, privatekey, bpass)
AttributeError: 'module' object has no attribute 'Ed25519Key'
[E 190213 10:04:13 web:1670] Uncaught exception POST / (10.11.0.200)
HTTPServerRequest(protocol='http', host='10.10.18.230:8888', method='POST', uri='/', version='HTTP/1.1', remote_ip='10.11.0.200')
Traceback (most recent call last):
File "/usr/lib64/python2.7/site-packages/tornado/web.py", line 1592, in _execute
result = yield result
File "/usr/lib64/python2.7/site-packages/tornado/gen.py", line 1133, in run
value = future.result()
File "/usr/lib64/python2.7/site-packages/tornado/concurrent.py", line 261, in result
raise_exc_info(self._exc_info)
File "/usr/lib64/python2.7/site-packages/tornado/gen.py", line 1141, in run
yielded = self.gen.throw(*exc_info)
File "/usr/lib/python2.7/site-packages/webssh-0.8.0-py2.7.egg/webssh/handler.py", line 353, in post
worker = yield future
File "/usr/lib64/python2.7/site-packages/tornado/gen.py", line 1133, in run
value = future.result()
File "/usr/lib/python2.7/site-packages/concurrent/futures/_base.py", line 455, in result
return self.__get_result()
File "/usr/lib/python2.7/site-packages/concurrent/futures/_base.py", line 414, in __get_result
raise exception_type, self._exception, self._traceback
AttributeError: 'module' object has no attribute 'Ed25519Key'
Contents of the authentication info data table:
mysql> select * from appconf_authinfo\G;
*************************** 1. row ***************************
id: 1
dis_name: test
username: root
password:
private_key: -----BEGIN RSA PRIVATE KEY-----MIIEowIBAAKCAQEA2HrVEKtdyzbPEn8hGc0r0IpY+LVAcgcZVr0t6bqMYe9bI0UbUJqmsXkMElvJL97pIssz0bCxoltvMMhzj8yz3QdeJzeRnrImR2PGtFbBfMGQBIsxdN+pgIIu8xpzF/oM0or4/XRRtR9KMdBSOGhr7epFUCJUcd8Tv7X7vgxHC72kbxXDzaGyp9Rfs/TIPmjf8pvUeo5IcHFmLPYaUnmAJylCQX8iCUgpTHoJRziNRi5WRKEzx0cVDF1ge8owRHYsQNe+oxLdf5OrlKHupFb2wjpLmMeMsIJKIiPlnuVqz/eR16mO2azMRqm9zt3pO1RWg7YNvHMsNI7P/tQDpYkDSQIBIwKCAQEAlHF8KLAFzSzlw4pf1y2aYx0Jzx0zgWP0HjiUgwOTdlr8qniHwj4pKT0Pl49lYqd7Sw7+9jAExxogW/cqq7+RRxr+u85VOZ67KaOA8LCErVGHU5Klkfh0Otzs/niJb4bkOJnPTrYpZkFXcp16NU7q7EjfEmCu7v9d/80+6Lf2M+59Peg1syMROxFH71HcjQT6fvk1vEdRUE84bHjwWXlOs15b6rmFlbyrbRYMBWh2GKrXDZo4Jt3nBxFcLcjtIjhQK03wrc+etKUujhrSpFauVvyfXMQ/ZrKxX3aqr/+FO+/vS3hj3aKM6h38hHwrmEwFLcm+DVNpGlSxbOdwBa9pqwKBgQD/2NJGQKumWN4/YjpVCB3HhEDxmz4SX5m8F6spu9H+eNlpt3js4yQG9gBX190N7gD/+0q52BhDJWjcnCtD3kzKO4Nxs1P/ucTgdSSPPWeUhW4Ww6gcROgpgeNRwtiWNhz7Hbi3PgmLg+e/aSclJmH5Ad1B2oZwKxqFbXPsBipguwKBgQDYm/uEkmLog3zsv2MRk06AdS90ghLfe7l3zG/naYFEPhDn07enq8ZTAg7f53h9yYmHVYA9YSs2Ufsyx4SSlEHtG4KjR5rA/l3nOuuW4fD5U0Z3GIE+oIAzSl4wRg2HonIEa+FF+GuImADuJIGvLC4rSb1BY9MYGGNyfTk+7Gl9ywKBgQCvcBso2+NqwJhl+jagtRu8Auq0TTHgtpVNxxZIgMqCnALMJgnHAidVOvjrxzh+lJL4rFB/b5ubweGBVST7VpsOVLHnkOkkYiCZ6e0vBYja3ybrCdJchwWYzhg4EJSEQl0D9x+TmEEPNeC9xHKdIaJEWQy/cUY+SXFFjOHGaqC3WwKBgQDSa6Pf3qk5pE14ROPlMAMds6pxLeZysrQrO5/oHVkAds6YD381KoYkwCu073xc76orsiTN6V0tdDXZjp4Ku+hFay5yupZVFFs4ZR9fXySabicx3UpaGEIF8HjBLhvFln1jYXvAUGh2EADnVqndXh44rghOJnVKm1lKpYgRPW3KpwKBgEsgr6PnTHX9x+pIrP9HEohyJyh/NepEA3Zmw5lP6uLPFAGOVpOG5+XC3VNwOajzpW4-----END RSA PRIVATE KEY-----
memo:
deploy_port: 22 | open | 2019-02-13T02:14:34Z | 2020-06-11T10:33:38Z | https://github.com/guohongze/adminset/issues/93 | [] | wenbc | 2 |
ray-project/ray | tensorflow | 51,632 | [Serve] Ray Serve Autoscaling supports the configuration of custom-metrics and policy | ### Description
Currently, Ray Serve Autoscaling only supports scaling based on the ongoing HTTP request metric via the built-in policy, and doesn't support custom-defined metrics. This often proves inflexible in practical scenarios. For example, if an application wants to autoscale based on the recent average CPU and memory utilization of the nodes where the deployment replicas are located, that is not supported. Issue #31540 describes the same scenario and requirements.
To solve this problem and support custom-defined metrics and policy in Ray Serve Autoscaling, we proposed a design idea in this document and implemented and verified it in our internal version.
### Use case
__At the usage level__, we can add the `custom_metrics` and `policy` option by extending the `autoscaling_config` configuration to support custom-defined scaling metrics and policy. For example:
```python
@serve.deployment(
max_ongoing_requests=10,
autoscaling_config=dict(
min_replicas=1,
initial_replicas=1,
max_replicas=10,
custom_metrics=[
"ray_node_cpu_utilization",
"ray_node_mem_used"
],
policy="autoscale_policy:custom_autoscaling_policy"
)
)
```
Here is an implementation example of a simple custom policy `autoscale_policy:custom_autoscaling_policy` as follows:
```python
def cal_decision_num_replicas_by_custom_metrics(
curr_target_num_replicas: int,
total_num_requests: int,
num_running_replicas: int,
config: Optional[AutoscalingConfig],
capacity_adjusted_min_replicas: int,
capacity_adjusted_max_replicas: int,
policy_state: Dict[str, Any],
# Pass the custom metrics to the custom policy
custom_metrics: Dict[ReplicaID, Dict[str, float]],
) -> int:
"""
    Read the values of ray_node_cpu_utilization and ray_node_mem_used from custom_metrics:
    - If the average CPU utilization rate of a certain node in the recent period is greater
    than 90%, scale up by one replica.
    - If the average CPU utilization rate of a certain node in the recent period is less
    than 10%, scale down by one replica.
    - If the average memory utilization rate of a certain node in the recent period is
    greater than 80%, scale up by one replica.
    - If the average memory utilization rate of a certain node in the recent period is
    less than 10%, scale down by one replica.
"""
if any(metrics['ray_node_cpu_utilization'] > 90.0 for _, metrics in custom_metrics.items()):
decision_num_replicas = num_running_replicas + 1
elif any(metrics['ray_node_cpu_utilization'] < 10.0 for _, metrics in custom_metrics.items()):
decision_num_replicas = num_running_replicas - 1
elif any(metrics['ray_node_mem_used'] > 80.0 for _, metrics in custom_metrics.items()):
decision_num_replicas = num_running_replicas + 1
    elif any(metrics['ray_node_mem_used'] < 10.0 for _, metrics in custom_metrics.items()):
        decision_num_replicas = num_running_replicas - 1
else:
decision_num_replicas = curr_target_num_replicas
return decision_num_replicas
custom_autoscaling_policy = cal_decision_num_replicas_by_custom_metrics
```
Since it's necessary to enable the replica reporting metrics policy, it is required to set `RAY_SERVE_COLLECT_AUTOSCALING_METRICS_ON_HANDLE=0`.
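As an aside, the `policy` string above follows a `module:attribute` convention; a minimal resolver for such strings could look like this (an illustrative sketch, not Ray's actual loader):

```python
import importlib

def load_policy(path: str):
    # "pkg.module:attr" -> the attribute object, e.g. a policy function
    module_name, _, attr = path.partition(":")
    return getattr(importlib.import_module(module_name), attr)

print(load_policy("math:sqrt")(16))  # 4.0
```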
__At the design and implementation level__, as shown in the following figure, considering that Ray itself already supports reporting metrics through the Prometheus Metrics Exporter, we continue the idea of having each Deployment Replica report the metrics expected by the user in implementation:

The core execution process can be described as follows:
1. The Deployment Replica requests the local Prometheus Metrics Exporter to obtain the metrics periodically, and reports the metrics that users are interested in to the ServeController for aggregation according to the `custom_metrics` configuration.
2. The ServeController periodically checks and updates the status of each Deployment. During this period, it will pass the custom metrics to the custom scaling policy to calculate the desired number of Deployment replicas in the current cluster.
3. When the desired number of replicas does not match the currently actually running number of replicas, the DeploymentStateManager will execute the replica scaling operation. | open | 2025-03-24T03:37:52Z | 2025-03-24T03:37:52Z | https://github.com/ray-project/ray/issues/51632 | [
"enhancement",
"triage"
] | plotor | 0 |
scikit-learn/scikit-learn | machine-learning | 30,257 | Estimator creating `_more_tags` and inheriting from `BaseEstimator` will not warn about old tag infrastructure | While making the code of `skrub` compatible with scikit-learn 1.6, I found that the following is really surprising:
```python
# %%
import numpy as np
from sklearn.base import BaseEstimator, RegressorMixin
class MyRegressor(RegressorMixin, BaseEstimator):
def __init__(self, seed=None):
self.seed = seed
def fit(self, X, y):
self.rng_ = np.random.default_rng(self.seed)
return self
def predict(self, X):
return self.rng_.normal(size=X.shape[0])
def _more_tags(self):
return {
"multioutput": True
}
# %%
from sklearn.datasets import make_regression
X, y = make_regression(n_samples=10, n_features=5, random_state=42)
regressor = MyRegressor(seed=42).fit(X, y)
regressor.predict(X)
# %%
from sklearn.utils import get_tags
tags = get_tags(regressor) # does not warn because we inherit from BaseEstimator
tags.target_tags.multi_output # no longer uses _more_tags, so this value is wrong
```
In the code above, because we inherit from `BaseEstimator` and `RegressorMixin`, we have the default tags set with the methods `__sklearn_tags__`.
However, the previous code that we had was using `_more_tags`.
Currently, `get_tags` will not warn that something is going wrong because we will fall back on the default tags from the base class and mixins.
I think that we should:
- use the values defined in `_more_tags` and warn for the future change
- in the future we should error if we have both `_more_tags` and `__sklearn_tags__` to be sure that people stop using `_more_tags` | closed | 2024-11-09T19:27:10Z | 2024-11-23T03:54:44Z | https://github.com/scikit-learn/scikit-learn/issues/30257 | [
"Blocker"
] | glemaitre | 4 |
stitchfix/hamilton | numpy | 297 | Restructure docs like https://diataxis.fr/ suggests. | # What
The structure of our docs could be better thought out. https://diataxis.fr/ is a good model - we should emulate what it prescribes.
# Why
Good docs are the foundation of any open source project. Having a clear structure and thus content that maps appropriately will help with that.
# Task
What needs to be done:
1. Assess current content.
2. Design where it should live.
3. Move everything as needed. | closed | 2023-01-31T00:51:31Z | 2023-02-26T17:22:51Z | https://github.com/stitchfix/hamilton/issues/297 | [
"documentation"
] | skrawcz | 1 |
sunscrapers/djoser | rest-api | 97 | Registration: Won't allow plus signs in email or username | Trying the following:
`curl -X POST http://127.0.0.1:8000/auth/register/ --data 'username=max+djoser@domain.com&password=djoser'`
I get:
`{"username":["Enter a valid username. This value may contain only letters, numbers and @/./+/-/_ characters."]}`
As you can see, the message even states explicitly that all my special characters are allowed (`+`, `@`, `.`).
But even in emails this seems to be forbidden. I like signing up with `+comment` added to my emails, since this makes testing and debugging easier. This works fine with all Django modules. However, djoser gives me this:
`curl -X POST http://127.0.0.1:8000/auth/register/ --data 'username=max-djoser&password=djoser&email=max+djoser@domain.com'`
`{"email":["Enter a valid email address."]}`
This should definitely work, shouldn't it?
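One thing worth ruling out (a guess, sketched below): in `application/x-www-form-urlencoded` bodies, an unencoded `+` decodes to a space, so the server may never see the plus sign at all:

```python
from urllib.parse import parse_qs, quote_plus

# '+' in form-encoded data means a space unless it is percent-encoded
raw = "email=max+djoser@domain.com"
print(parse_qs(raw))                        # {'email': ['max djoser@domain.com']}
print(quote_plus("max+djoser@domain.com"))  # max%2Bdjoser%40domain.com
```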
| closed | 2015-11-16T19:56:33Z | 2015-11-16T21:11:22Z | https://github.com/sunscrapers/djoser/issues/97 | [] | cpury | 5 |
shibing624/text2vec | nlp | 56 | Is model acceleration supported? | May I ask whether an accelerated model deployment pipeline is currently supported? Or can the Hugging Face API be used for ONNX deployment?
| closed | 2023-03-05T16:48:30Z | 2023-04-14T00:35:34Z | https://github.com/shibing624/text2vec/issues/56 | [
"question"
] | flydsc | 4 |
microsoft/hummingbird | scikit-learn | 463 | Add support for 'tpot.builtins.stacking_estimator.StackingEstimator'. | I am using TPOT for AutoML and am unable to convert the model into PyTorch; I get the following error.
Unable to find converter for model type <class '**tpot.builtins.stacking_estimator.StackingEstimator**'>.
It usually means the pipeline being converted contains a
transformer or a predictor with no corresponding converter implemented.
Please fill an issue at https://github.com/microsoft/hummingbird.
Traceback (most recent call last):
File "/anaconda/envs/numtra_env/lib/python3.6/site-packages/NumtraBackendHB-0.3-py3.6.egg/automl/ModelPrediction.py", line 85, in getPrediction
model_torch = convert(sklearn_model, 'pytorch', extra_config={"n_features":col_len})
File "/anaconda/envs/numtra_env/lib/python3.6/site-packages/hummingbird/ml/convert.py", line 431, in convert
return _convert_common(model, backend, test_input, device, extra_config)
File "/anaconda/envs/numtra_env/lib/python3.6/site-packages/hummingbird/ml/convert.py", line 392, in _convert_common
return _convert_sklearn(model, backend, test_input, device, extra_config)
File "/anaconda/envs/numtra_env/lib/python3.6/site-packages/hummingbird/ml/convert.py", line 97, in _convert_sklearn
topology = parse_sklearn_api_model(model, extra_config)
File "/anaconda/envs/numtra_env/lib/python3.6/site-packages/hummingbird/ml/_parse.py", line 60, in parse_sklearn_api_model
outputs = _parse_sklearn_api(scope, model, inputs)
File "/anaconda/envs/numtra_env/lib/python3.6/site-packages/hummingbird/ml/_parse.py", line 232, in _parse_sklearn_api
outputs = sklearn_api_parsers_map[tmodel](scope, model, inputs)
File "/anaconda/envs/numtra_env/lib/python3.6/site-packages/hummingbird/ml/_parse.py", line 278, in _parse_sklearn_pipeline
inputs = _parse_sklearn_api(scope, step[1], inputs)
File "/anaconda/envs/numtra_env/lib/python3.6/site-packages/hummingbird/ml/_parse.py", line 234, in _parse_sklearn_api
outputs = _parse_sklearn_single_model(scope, model, inputs)
File "/anaconda/envs/numtra_env/lib/python3.6/site-packages/hummingbird/ml/_parse.py", line 254, in _parse_sklearn_single_model
alias = get_sklearn_api_operator_name(type(model))
File "/anaconda/envs/numtra_env/lib/python3.6/site-packages/hummingbird/ml/supported.py", line 385, in get_sklearn_api_operator_name
raise MissingConverter("Unable to find converter for model type {}.".format(model_type))
hummingbird.ml.exceptions.MissingConverter: Unable to find converter for model type <class 'tpot.builtins.stacking_estimator.StackingEstimator'>.
**The complete pipeline that tpot returns is**
Pipeline(steps=[('stackingestimator',
StackingEstimator(estimator=DecisionTreeClassifier(max_depth=9,
min_samples_leaf=16,
min_samples_split=16))),
('gaussiannb', GaussianNB())])
I am converting it like this
**model_torch = convert(sklearn_model, 'pytorch', extra_config={"n_features":col_len})**
| open | 2021-03-11T06:40:41Z | 2021-03-11T16:52:49Z | https://github.com/microsoft/hummingbird/issues/463 | [
"enhancement"
] | muhammad49 | 1 |
aleju/imgaug | machine-learning | 165 | WithColorspace doesn't support HLS | Hi,
I was trying to change the brightness of an image, and when I used WithColorspace with the target space HLS it gave me the error: KeyError: 'HLS2RGB'.
This code works fine:
`image = cv2.imread(imagePath)`
`lighter = iaa.WithColorspace(to_colorspace="HSV", from_colorspace="RGB", children=iaa.WithChannels(1, iaa.Multiply((1.5))))`
`img = lighter.augment_image(image)`
And this code gives the error:
`image = cv2.imread(imagePath)`
`lighter = iaa.WithColorspace(to_colorspace="HLS", from_colorspace="RGB", children=iaa.WithChannels(1, iaa.Multiply((1.5))))`
`img = lighter.augment_image(image)`
You can see that the only difference is the "HLS" colorspace!
And this is the stack trace :
Traceback (most recent call last):
line 22, in <module>
img = lighter.augment_image(image)
line 323, in augment_image
return self.augment_images([image], hooks=hooks)[0]
line 431, in augment_images
hooks=hooks
line 100, in _augment_images
).augment_images(images=result)
line 431, in augment_images
hooks=hooks
line 320, in _augment_images
from_to_var = ChangeColorspace.CV_VARS[from_to_var_name]
KeyError: 'HLS2RGB'
Just to let you know!
Keep up the good work! | open | 2018-08-16T21:26:13Z | 2018-08-19T03:16:13Z | https://github.com/aleju/imgaug/issues/165 | [] | robert405 | 2 |
MilesCranmer/PySR | scikit-learn | 323 | PySR paper is out! | This is long-overdue but I finally finished a methods paper describing the algorithm in PySR and SymbolicRegression.jl. You can find it here: https://github.com/MilesCranmer/pysr_paper and the arXiv here: https://arxiv.org/abs/2305.01582.
I consider this paper to be a "v1," based on an older version of the codebase. I would like to write additional papers in the future describing major updates, and I plan to invite any significant open-source contributors to be co-authors! | open | 2023-05-05T15:51:02Z | 2024-07-04T18:31:51Z | https://github.com/MilesCranmer/PySR/issues/323 | [
"documentation"
] | MilesCranmer | 2 |
cle-b/httpdbg | pytest | 141 | Feature Request: Collapsible Initiator Groups | It would be great if the UI supported expanding/collapsing requests per initiator group like so:
## All initiator groups expanded

## `test_product_connection` collapsed

Alternatively, a filter for initiator groups could be added to achieve similar ends.
| closed | 2024-09-23T21:41:15Z | 2024-09-24T20:59:02Z | https://github.com/cle-b/httpdbg/issues/141 | [] | machadocs | 2 |
seleniumbase/SeleniumBase | pytest | 3,399 | multiplie warning messages, Chrome and X11 related | Running Ubuntu 22.04 x86/amd64, default install with X11/Gnome.
Latest version of Chrome installed using pacstall/google-chrome-deb.
Install using: https://seleniumbase.io/help_docs/virtualenv_instructions/
Running example from https://seleniumbase.io/examples/cdp_mode/ReadMe/#cdp-mode:
```
from seleniumbase import SB
with SB(uc=True, test=True, locale_code="en") as sb:
url = "https://gitlab.com/users/sign_in"
sb.activate_cdp_mode(url)
sb.uc_gui_click_captcha()
sb.sleep(2)
```
Chrome gives an error message:
`You are using an unsupported command-line flag: --disable-setuid-sandbox. Stability and security will suffer.`
python source code spits out:
`X11 display failed! Will use regular xvfb!`
The Chrome message is the most concerning one; should that be fixed?
| closed | 2025-01-07T20:14:30Z | 2025-01-07T20:54:39Z | https://github.com/seleniumbase/SeleniumBase/issues/3399 | [
"question",
"invalid usage",
"UC Mode / CDP Mode"
] | vladandersb | 1 |
vllm-project/vllm | pytorch | 14487 | [Bug]: ModuleNotFoundError: No module named 'pyarrow' in main branch | ### Your current environment
# image info
The latest pull request in the repository is "[V1] Prompt logprobs + APC compatibility; prompt logprobs reqs cannot fill APC (#13949)".

# client start shell
sudo python3 ./bench_serving.py --backend vllm --dataset-name random --model deepseek-r1 --tokenizer ./tokenizer --dataset-path ./ShareGPT_V3_unfiltered_cleaned_split.json --random-input-len 6000 --random-output-len 1000 --random-range-ratio 1 --request-rate 16 --max-concurrency 16 --num-prompts 80 --base-url $BASE_URL --host 0.0.0.0 --port 8000 --profile
# server start shell
VLLM_USE_V1=1 VLLM_TORCH_PROFILER_DIR=/disc vllm serve /root/.cache/huggingface --tensor-parallel-size 16 --trust-remote-code --gpu-memory-utilization 0.9 --max-model-len 32768 --enforce-eager --enable-reasoning --reasoning-parser deepseek_r1 --served-model-name deepseek-r1
## error info

### 🐛 Describe the bug
in the first block
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | open | 2025-03-08T10:08:49Z | 2025-03-10T02:33:32Z | https://github.com/vllm-project/vllm/issues/14487 | [
"bug"
] | Oneal65 | 4 |
Nemo2011/bilibili-api | api | 213 | [Request] Create a Vote class... | Creation and updating of votes have been added... it's hard to handle without a Vote class | closed | 2023-02-23T14:23:34Z | 2023-02-23T16:07:14Z | https://github.com/Nemo2011/bilibili-api/issues/213 | [
"need"
] | z0z0r4 | 3 |
httpie/cli | python | 652 | Array in GET request | How do I add an array of tags to a GET request? I have tried the following, to no avail:
`› http :3000/api/caters tags:='[\"vegan\"]' `
I get back an error
```
usage: http [--json] [--form] [--pretty {all,colors,format,none}]
[--style STYLE] [--print WHAT] [--headers] [--body] [--verbose]
[--all] [--history-print WHAT] [--stream] [--output FILE]
[--download] [--continue]
[--session SESSION_NAME_OR_PATH | --session-read-only SESSION_NAME_OR_PATH]
[--auth USER[:PASS]] [--auth-type {basic,digest}]
[--proxy PROTOCOL:PROXY_URL] [--follow]
[--max-redirects MAX_REDIRECTS] [--timeout SECONDS]
[--check-status] [--verify VERIFY]
[--ssl {ssl2.3,ssl3,tls1,tls1.1,tls1.2}] [--cert CERT]
[--cert-key CERT_KEY] [--ignore-stdin] [--help] [--version]
[--traceback] [--default-scheme DEFAULT_SCHEME] [--debug]
[METHOD] URL [REQUEST_ITEM [REQUEST_ITEM ...]]
http: error: "tags:=[\"vegan\"]": Expecting value: line 1 column 2 (char 1)
``` | closed | 2018-02-16T11:24:35Z | 2018-02-16T11:29:33Z | https://github.com/httpie/cli/issues/652 | [] | hahmed | 1 |
marcomusy/vedo | numpy | 1,092 | typing.Self is not compatible with python3.10 | Currently getting an ImportError when using vedo with python 3.10.
As per [this comment](https://stackoverflow.com/a/77247460), using typing.Self with versions of python prior to 3.11 requires the use of typing_extensions.
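A common compatibility shim for this (a sketch of the fix the linked answer suggests; the final fallback branch is only there so the snippet imports even without the backport installed):

```python
import sys
from typing import TypeVar

if sys.version_info >= (3, 11):
    from typing import Self  # stdlib from Python 3.11 onward
else:
    try:
        from typing_extensions import Self  # backport for Python <= 3.10
    except ImportError:
        Self = TypeVar("Self")  # placeholder so older environments still import

print(Self)
```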
| closed | 2024-04-12T01:40:58Z | 2024-06-13T18:40:41Z | https://github.com/marcomusy/vedo/issues/1092 | [] | Linus-Foley | 1 |
Kanaries/pygwalker | pandas | 282 | Readme Privacy Policy code does not work | The readme [Privacy Policy section](https://github.com/Kanaries/pygwalker#privacy-policy) says the following:
```python
import pygwalker as pyg, pygwalker.utils_config as pyg_conf
pyg_conf.set_config( { 'privacy': 'meta' }, save=True)
```
However, it seems `utils_config` has been separated out from the rest of pygwalker, so I had to use this instead:
```python
import pygwalker as pyg
import pygwalker_utils as pyg_utils
import pygwalker_utils.config as pyg_conf
pyg_conf.set_config( { 'privacy': 'meta' }, save=True)
``` | closed | 2023-10-24T19:29:07Z | 2023-11-03T10:55:46Z | https://github.com/Kanaries/pygwalker/issues/282 | [
"bug",
"P1"
] | EricPostMaster | 1 |
ivy-llc/ivy | numpy | 28,437 | Fix Frontend Failing Test: paddle - tensor.torch.Tensor.masked_fill | To-do list: https://github.com/unifyai/ivy/issues/27500 | closed | 2024-02-27T11:25:45Z | 2024-04-30T15:38:55Z | https://github.com/ivy-llc/ivy/issues/28437 | [
"Sub Task"
] | StefanSan26 | 0 |
jadore801120/attention-is-all-you-need-pytorch | nlp | 182 | Attention values are strange | **When I train the transformer, I find the attention values are almost the same**
**Encoder**:
[0.10000075, 0.10000038, 0.09999962, 0.10000114, 0.09999923, 0.09999923, 0.1, 0.09999847, 0.10000038, 0.10000075, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
**Decoder**:
[0.19999756 0.2000006 0.20000137 0.2000006 0.19999985 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. ]
[0.1666655 0.16666678 0.16666868 0.16666678 0.1666674 0.16666487
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. ]
[0.14285614 0.14285722 0.14285886 0.14285722 0.14285776 0.14285503
0.14285776 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. ]
[0.12499904 0.125 0.12500095 0.12499953 0.125 0.12499809
0.125 0.12500238 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. ]
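For reference, the decoder rows above are exactly uniform (1/5, 1/6, 1/7, 1/8 over the unmasked positions), which is what softmax yields when all attention scores are identical; a quick check:

```python
import math

def softmax(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# identical (e.g. zero) scores over n positions -> exactly 1/n each,
# matching the reported rows 0.2, 0.1666..., 0.1428..., 0.125
for n in (5, 6, 7, 8):
    print(round(softmax([0.0] * n)[0], 8))
```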
| open | 2021-07-31T13:40:46Z | 2023-03-20T14:17:24Z | https://github.com/jadore801120/attention-is-all-you-need-pytorch/issues/182 | [] | YPatrickW | 1 |
d2l-ai/d2l-en | tensorflow | 2,523 | pip install d2l==1.0.0b0 Fails to Install on Linux Mint/Ubuntu 22.04 | Error Message:
```
Collecting d2l==1.0.0b0
Using cached d2l-1.0.0b0-py3-none-any.whl (141 kB)
Collecting jupyter (from d2l==1.0.0b0)
Using cached jupyter-1.0.0-py2.py3-none-any.whl (2.7 kB)
Requirement already satisfied: numpy in /home/remote/miniconda3/envs/pt/lib/python3.10/site-packages (from d2l==1.0.0b0) (1.24.3)
Requirement already satisfied: matplotlib in /home/remote/miniconda3/envs/pt/lib/python3.10/site-packages (from d2l==1.0.0b0) (3.7.1)
Requirement already satisfied: matplotlib-inline in /home/remote/miniconda3/envs/pt/lib/python3.10/site-packages (from d2l==1.0.0b0) (0.1.6)
Requirement already satisfied: requests in /home/remote/miniconda3/envs/pt/lib/python3.10/site-packages (from d2l==1.0.0b0) (2.31.0)
Requirement already satisfied: pandas in /home/remote/miniconda3/envs/pt/lib/python3.10/site-packages (from d2l==1.0.0b0) (1.5.3)
Collecting gym==0.21.0 (from d2l==1.0.0b0)
Using cached gym-0.21.0.tar.gz (1.5 MB)
Preparing metadata (setup.py) ... error
error: subprocess-exited-with-error

× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [1 lines of output]
error in gym setup command: 'extras_require' must be a dictionary whose values are strings or lists of strings containing valid project/version requirement specifiers.
[end of output]

note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed

× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
```
Thank you! | closed | 2023-07-01T17:56:41Z | 2023-07-01T18:12:09Z | https://github.com/d2l-ai/d2l-en/issues/2523 | [] | k7e7n7t | 1 |
google-research/bert | nlp | 897 | export_saved_model output file does not exist | I couldn't find the output file in the expected directory. Probably, I'm making a simple mistake. Can you help me solve this problem?
<img width="1094" alt="Screen Shot 2019-11-01 at 3 24 16 PM" src="https://user-images.githubusercontent.com/5379104/68031386-d8eceb00-fcbb-11e9-97b4-3f2b98d0a86a.png">
<img width="1045" alt="Screen Shot 2019-11-01 at 3 24 33 PM" src="https://user-images.githubusercontent.com/5379104/68031395-dc807200-fcbb-11e9-8fac-6967c46f9b9b.png">
| open | 2019-11-01T14:27:08Z | 2019-11-01T16:49:19Z | https://github.com/google-research/bert/issues/897 | [] | emrecalisir | 0 |
sinaptik-ai/pandas-ai | data-visualization | 1,120 | PANDAS API KEY needed (and used!!!) if agent.train is utilized | ### System Info
pandasai: v2.0.33
azure openai with gpt-4, api version 2024-02-01
### 🐛 Describe the bug
I defined the PandasAI API key like this, because it seems there is a bug that requires it in combination with the Azure OpenAI API (otherwise I get `pandasai.exceptions.MissingVectorStoreError: No vector store provided. Please provide a vector store to train the agent.`):

```python
os.environ["PANDASAI_API_KEY"] = "xxx"
```
I refer to the llm, which is azure open ai:
```
agent = Agent(df, config={"llm": llm})
```
When I train, it writes my training data back to the pandabi SaaS service!
```
agent.train(docs=query.instructions)
```
It also sends every request (every agent chat question) to the pandabi service for some reason. This is really dangerous. The LLM is clearly defined as:
```
llm = AzureOpenAI(
api_token=os.getenv("API_TOKEN"),
azure_endpoint=os.getenv("ENDPOINT"),
api_version="2024-02-01",
deployment_name="gpt-4",
)
``` | closed | 2024-04-17T18:35:54Z | 2024-04-19T09:34:59Z | https://github.com/sinaptik-ai/pandas-ai/issues/1120 | [] | flashtheman | 3 |
davidsandberg/facenet | computer-vision | 958 | AttributeError: module 'facenet' has no attribute 'write_arguments_to_file' | closed | 2019-01-23T10:03:08Z | 2019-01-23T10:03:22Z | https://github.com/davidsandberg/facenet/issues/958 | [] | wanggoudanscd | 0 | |
CorentinJ/Real-Time-Voice-Cloning | pytorch | 695 | No module named pathlib | > matteo@MBP-di-matteo Real-Time-Voice-Cloning-master % python demo_cli.py
> Traceback (most recent call last):
> File "demo_cli.py", line 2, in <module>
> from utils.argutils import print_args
> File "/Users/matteo/Real-Time-Voice-Cloning-master/utils/argutils.py", line 22
> def print_args(args: argparse.Namespace, parser=None):
> ^
> SyntaxError: invalid syntax
> matteo@MBP-di-matteo Real-Time-Voice-Cloning-master % python demo_toolbox.py
> Traceback (most recent call last):
> File "demo_toolbox.py", line 1, in <module>
> from pathlib import Path
> ImportError: No module named pathlib
> matteo@MBP-di-matteo Real-Time-Voice-Cloning-master % sudo python demo_toolbox.py
> Password:
> Traceback (most recent call last):
> File "demo_toolbox.py", line 1, in <module>
> from pathlib import Path
> ImportError: No module named pathlib
What should I do? | closed | 2021-03-07T10:40:27Z | 2021-03-08T21:24:20Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/695 | [] | matteopuppis | 3 |
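Both tracebacks point at the scripts being run with Python 2 (`pathlib` and the annotated-argument syntax in `argutils.py` are Python 3 features). A quick sanity check, assuming a `python3` interpreter is installed:

```shell
python3 --version                # needs to be 3.x (this repo targets 3.6+)
python3 -c "import pathlib"      # pathlib ships with the Python 3 standard library
# then run the demos with python3 explicitly:
# python3 demo_cli.py
# python3 demo_toolbox.py
```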
donnemartin/data-science-ipython-notebooks | pandas | 49 | Some of the links are giving 404 error | I tried rnn-lstm in Keras, the link seems to be expired.

There are other many links too showing 404 error. Please fix them. | closed | 2017-06-23T03:26:56Z | 2021-04-24T16:07:39Z | https://github.com/donnemartin/data-science-ipython-notebooks/issues/49 | [
"bug"
] | Lplenka | 2 |
Gozargah/Marzban | api | 1,027 | Please disable Mux in sing-box | In the recent update on Marzban dev, with or without a sing-box template, whenever Mux is enabled in the panel the config no longer works in sing-box (because Mux gets enabled in sing-box, which it should not be). Before this it was not like that, and Mux was not enabled in sing-box. | closed | 2024-06-02T05:42:30Z | 2024-06-02T05:59:14Z | https://github.com/Gozargah/Marzban/issues/1027 | [
"Duplicate"
] | plasticgholam | 1 |
pydantic/pydantic | pydantic | 10,851 | AliasPath support for Models | ### Initial Checks
- [X] I have searched Google & GitHub for similar requests and couldn't find anything
- [X] I have read and followed [the docs](https://docs.pydantic.dev) and still think this feature is missing
### Description
Given what I could test/research about the `AliasPath` feature, it seems to only support grabbing nested values from dictionaries. While this is very convenient, it feels like a missed opportunity. Take the following example:
```python
from pydantic import BaseModel, AliasPath, Field
class ElegantClass(BaseModel):
regular_ol_val: int
could_be_nested: int = Field(..., validation_alias=AliasPath("nested", "could_be_nested"))
ElegantClass.model_validate({"regular_ol_val": 1, "nested": {"could_be_nested": 2}})
# Works: ElegantClass(regular_ol_val=1, could_be_nested=2)
```
But suppose we aren't passing a `nested` dictionary and instead are passing another model (or some other class really)
```python
class ElegantNestedClass(BaseModel):
    could_be_nested: int
ElegantClass.model_validate({"regular_ol_val": 1, "nested": ElegantNestedClass(could_be_nested=2)})
# Not working: 1 validation error for ElegantClass nested.could_be_nested
# Instead we could do something like
ElegantClass.model_validate({"regular_ol_val": 1, "nested": ElegantNestedClass(could_be_nested=2).model_dump()})
# Works: ElegantClass(regular_ol_val=1, could_be_nested=2)
```
Alternatively, we could (as I have done) create a model validator that does some magic for us:
```python
@model_validator(mode="before")
@classmethod
def nested_vals_as_dict(cls, data: Any) -> Any:
if isinstance(data, dict):
            nested_fields = ["nested"]
for field in nested_fields:
if field in data and isinstance(data[field], BaseModel):
data_field: BaseModel = data[field]
data[field] = data_field.model_dump()
return data
```
However, this seems like an easy win here where there can be some additional support for `AliasPath` since it's such a convenient feature.
Thanks!
### Affected Components
- [ ] [Compatibility between releases](https://docs.pydantic.dev/changelog/)
- [X] [Data validation/parsing](https://docs.pydantic.dev/concepts/models/#basic-model-usage)
- [ ] [Data serialization](https://docs.pydantic.dev/concepts/serialization/) - `.model_dump()` and `.model_dump_json()`
- [ ] [JSON Schema](https://docs.pydantic.dev/concepts/json_schema/)
- [ ] [Dataclasses](https://docs.pydantic.dev/concepts/dataclasses/)
- [ ] [Model Config](https://docs.pydantic.dev/concepts/config/)
- [ ] [Field Types](https://docs.pydantic.dev/api/types/) - adding or changing a particular data type
- [ ] [Function validation decorator](https://docs.pydantic.dev/concepts/validation_decorator/)
- [ ] [Generic Models](https://docs.pydantic.dev/concepts/models/#generic-models)
- [ ] [Other Model behaviour](https://docs.pydantic.dev/concepts/models/) - `model_construct()`, pickling, private attributes, ORM mode
- [ ] [Plugins](https://docs.pydantic.dev/) and integration with other tools - mypy, FastAPI, python-devtools, Hypothesis, VS Code, PyCharm, etc. | open | 2024-11-15T02:51:34Z | 2025-03-20T18:24:54Z | https://github.com/pydantic/pydantic/issues/10851 | [
"feature request",
"help wanted"
] | TheCaffinatedDeveloper | 6 |
QuivrHQ/quivr | api | 3,260 | Create knowledge support URL | Create knowledge should add url files | closed | 2024-09-25T16:02:00Z | 2024-09-25T16:08:31Z | https://github.com/QuivrHQ/quivr/issues/3260 | [] | linear[bot] | 1 |
recommenders-team/recommenders | data-science | 1,718 | [ASK] How can I save SASRec model for re-training and prediction? | I have tried to save trained SASRec model.
pickle, tf.saved_model.save, model.save(), and surprise.dump are not working.
While saving, I got a warning saying 'Found untraced functions',
and while loading, 'AttributeError: 'SASREC' object has no attribute 'seq_max_len''.
Please let me know how to save and load a SASRec model! | open | 2022-05-13T18:23:23Z | 2023-08-30T14:03:13Z | https://github.com/recommenders-team/recommenders/issues/1718 | [
"help wanted"
] | beomso0 | 2 |
jschneier/django-storages | django | 959 | AWS_S3_FILE_OVERWRITE should be False by default | All Django's builtin storages do not overwrite files by default -- they append a number when there's collision.
I've been using `S3Boto3Storage` for quite some time, and suddenly found that many of my files were mixed up -- models seems to have a reference to the wrong on-disk file. After some research, it turns out this particular storage overwrites files by default.
This is very undesirable behaviour -- it's the opposite of the default used by Django, and can (and has) easily result in data loss.
Issues with the current default:
- It's the opposite of what Django does.
- It's a "delete user data by default", which is as bad as it sounds.
- It's too easy to screw up, since there's no clue that this storage behaves differently -- as I said, I only found out after researching some data loss. | open | 2020-11-22T20:36:55Z | 2023-06-22T16:46:41Z | https://github.com/jschneier/django-storages/issues/959 | [] | WhyNotHugo | 1 |
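For anyone landing here, the opt-out is a one-line setting (name per django-storages' S3 backend; `settings.py` placement is the usual Django convention):

```python
# settings.py -- opt out of silent overwrites; name collisions get a suffixed
# filename instead, matching the behaviour of Django's built-in storages
AWS_S3_FILE_OVERWRITE = False
```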
mkhorasani/Streamlit-Authenticator | streamlit | 176 | no cookie is written - help would be great ;) | I've tried a lot but I can't get it to work - any help would be much appreciated
Problem is that no cookie is written. So the reauthentication is not working.
here are the versions used:
```
Package Version
-------------------------- ---------
extra-streamlit-components 0.1.71
streamlit 1.36.0
streamlit-authenticator 0.3.2
```
part of the yaml:
```
cookie:
expiry_days: 30
key: secretkey
name: dashboard
credentials:
usernames:
test:
email: test.test@test.de
failed_login_attempts: 0
group: default
logged_in: false
name: test
password: $2b$12$KbRDEDBX12ju9IbKB1Pg4eeX9bN8oTnuM1Oj.8TGXvGa/UAvdPPzG
pre-authorized:
emails:
- test@t.com
```
and my main app.py which handles the login and redirects to the next page if login ist succsessful
```
"""
app.py handles the initial authentication (login).
"""
import os
import sys
import streamlit as st
import streamlit_authenticator as stauth
import yaml
from yaml.loader import SafeLoader
from dotenv import load_dotenv
from loguru import logger
from helper.paths import (
get_userdata_file_path,
get_favicon,
get_root_dir,
get_logfile_path,
)
from helper.site_elements import footer, hide_st
from menu import menu
userfile = get_userdata_file_path()
root_dir = get_root_dir()
favicon = get_favicon()
st.set_page_config(layout="wide", page_icon=favicon, page_title="net.D Dashboard")
def load_user_config():
"""Laden der Userkonfig"""
with open(userfile, encoding="utf-8") as file:
user_data = yaml.load(file, Loader=SafeLoader)
logger.debug("Userconfig geladen")
return user_data
def update_userconf():
"""Funktion zum speichern der Userdaten"""
with open(userfile, "w", encoding="utf-8") as file:
yaml.dump(config, file, default_flow_style=False)
logger.debug("userfile updated")
load_dotenv()
@st.cache_resource
def configure_logging():
"""konfiguriert den Logger - in funktion wegen Streamlit"""
loglevel = os.environ.get("LOGLEVEL")
logger.remove()
logger.add(
get_logfile_path(),
rotation="3 days",
retention=3,
colorize=True,
level=loglevel,
)
logger.add(sys.stderr)
configure_logging()
config = load_user_config()
authenticator = stauth.Authenticate(
config["credentials"],
config["cookie"]["name"],
config["cookie"]["key"],
config["cookie"]["expiry_days"],
config["pre-authorized"]
)
if "group" not in st.session_state:
st.session_state.group = None
st.title("Login Page")
st.divider()
authenticator.login(clear_on_submit=True)
if st.session_state["authentication_status"]:
st.session_state.group = config["credentials"]["usernames"][
st.session_state.username
]["group"]
st.switch_page("pages/dashboard.py")
# st.write("hier ist der switch zum dashboard")
elif st.session_state["authentication_status"] is False:
st.session_state.group = None
st.error("Username / Passwort falsch.")
elif st.session_state["authentication_status"] is None:
st.session_state.group = None
st.warning("Bitte Username und Passwort eingeben.")
update_userconf()
menu() # Render the dynamic menu!
footer()
hide_st()
```
| closed | 2024-07-10T07:56:20Z | 2024-07-10T08:40:13Z | https://github.com/mkhorasani/Streamlit-Authenticator/issues/176 | [
"help wanted"
] | Volker-H | 1 |
joeyespo/grip | flask | 213 | Fresh install of Grip (4.3.2) installing components in /usr/local/lib/python2.7 instead of python3 | Everything works, but I want to use Python 3 instead of 2.7 (philosophical reasons + OCD).
Here's what I see when I upgrade:
```
$ pip install --upgrade grip
Requirement already up-to-date: grip in /usr/local/lib/python2.7/site-packages
Requirement already up-to-date: docopt>=0.6.2 in /usr/local/lib/python2.7/site-packages (from grip)
Requirement already up-to-date: Markdown>=2.5.1 in /usr/local/lib/python2.7/site-packages (from grip)
Requirement already up-to-date: Pygments>=1.6 in /usr/local/lib/python2.7/site-packages (from grip)
Requirement already up-to-date: path-and-address>=2.0.1 in /usr/local/lib/python2.7/site-packages (from grip)
Requirement already up-to-date: requests>=2.4.1 in /usr/local/lib/python2.7/site-packages (from grip)
Requirement already up-to-date: Flask>=0.10.1 in /usr/local/lib/python2.7/site-packages (from grip)
Requirement already up-to-date: click>=2.0 in /usr/local/lib/python2.7/site-packages (from Flask>=0.10.1->grip)
Requirement already up-to-date: Werkzeug>=0.7 in /usr/local/lib/python2.7/site-packages (from Flask>=0.10.1->grip)
Requirement already up-to-date: Jinja2>=2.4 in /usr/local/lib/python2.7/site-packages (from Flask>=0.10.1->grip)
Requirement already up-to-date: itsdangerous>=0.21 in /usr/local/lib/python2.7/site-packages (from Flask>=0.10.1->grip)
Requirement already up-to-date: MarkupSafe in /usr/local/lib/python2.7/site-packages (from Jinja2>=2.4->Flask>=0.10.1->grip)
```
But I want to see Python 3 there instead (which is installed and symlinked via Homebrew)
```
$ which python3
/usr/local/bin/python3
```
Any tips? Sorry if amateurish, and thanks for input.
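One way to steer pip without touching the symlinks is to invoke it through the interpreter you want, so packages land in that interpreter's site-packages (a sketch, assuming Homebrew's `python3`):

```shell
python3 --version                # confirm the Python 3 interpreter Homebrew linked
python3 -m pip --version         # note the "(python 3.x)" suffix: this pip targets python3
# then install into python3's site-packages explicitly:
# python3 -m pip install --upgrade grip
```

Bare `pip` here belongs to the 2.7 install, which is why everything lands under `python2.7/site-packages`.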
| closed | 2016-09-30T18:26:49Z | 2016-09-30T21:50:58Z | https://github.com/joeyespo/grip/issues/213 | [
"not-a-bug"
] | erikr | 2 |
vanna-ai/vanna | data-visualization | 335 | Multiple rounds of conversations | Does Vanna support multiple rounds of dialogue?
That is, asking a follow-up question based on the answer to the previous one. | closed | 2024-04-03T04:19:39Z | 2024-04-04T02:03:20Z | https://github.com/vanna-ai/vanna/issues/335 | [] | tzh5477 | 0 |
allenai/allennlp | nlp | 5,734 | New version with upper bounds on dependencies removed | The upper bound on the version of spaCy allowed was removed in #5733. When can we expect a new release of AllenNLP with this change?
Thanks!
| closed | 2022-11-22T19:54:41Z | 2022-12-07T16:20:21Z | https://github.com/allenai/allennlp/issues/5734 | [
"Feature request",
"stale"
] | Frost45 | 2 |
scikit-hep/awkward | numpy | 3,403 | Question on performance | ### Version of Awkward Array
2.7.4
### Description and code to reproduce
numpy 1.26.4
pyarrow 19.0.0
The origin of the data I will use here is not really important, but for reference, it is:
[1.9GB of points](https://github.com/geoarrow/geoarrow-data/releases/download/v0.1.0/microsoft-buildings-point.arrow) in feather2 format.
```
import pyarrow.feather

table = pyarrow.feather.read_table("microsoft-buildings-point.arrow")
```
130M points. The "geometry" column has x, y fields, both float64.
Issue 1
=====
(the lesser issue)
Depending on how I convert the data, I get different layouts:
```python
>>> ak.from_arrow(table)["geometry", "x"].layout
<IndexedOptionArray len='129735970'>
<index><Index dtype='int64' len='129735970'>
[ 0 1 2 ... 129735967 129735968 129735969]
</Index></index>
<content><NumpyArray dtype='float64' len='129735970'>
[ -84.95972352 -84.95973298 -84.9599375 ... -111.04598275
-111.047405 -111.0478207 ]
</NumpyArray></content>
</IndexedOptionArray>
>>> ak.from_arrow(table["geometry"])["x"].layout
<UnmaskedArray len='129735970'>
<content><NumpyArray dtype='float64' len='129735970'>
[ -84.95972352 -84.95973298 -84.9599375 ... -111.04598275
-111.047405 -111.0478207 ]
</NumpyArray></content>
</UnmaskedArray>
```
Here, the second variant is what you should get - we know there are no NULLs. If you don't select "x", you see `UnmaskedArray`s even for the first route.
Issue 2
======
Doing some timings:
```python
>>> x = ak.from_arrow(table["geometry"])["x"] # the unmasked variant
>>> np.max(x)
656ms
>>> ak.max(x)
666ms, OK, so dispatch does what we expect
>>> %timeit np.max(x.layout.content.data)
18ms, well that is just a bit faster
>>> %timeit np.nanmax(x.layout.content.data)
20ms, in case of nan (since we should have no NULLs)
>>> np.nanmax(np.where(True, x.layout.content.data, np.nan))
176ms, maybe this is what awkward actually does?
```
And with a handwritten simple numba kernel:
```python
@numba.njit(nogil=True, cache=True)
def mymax(x):
max = -np.inf
for v in x:
if np.isfinite(v) and v > max:
max = v
    return max
```
we get
```
>>> mymax(x)
40.3ms
>>> mymax(x.layout.content.data)
20.2ms
```
So, my question is: how can we avoid the >600ms for this operation while maintaining the awkward API? Am I seeing some kind of weird caching from the many original chunks of the arrow data? | open | 2025-02-21T23:26:44Z | 2025-02-27T15:53:16Z | https://github.com/scikit-hep/awkward/issues/3403 | [
"performance"
] | martindurant | 17 |
marimo-team/marimo | data-visualization | 3,348 | Group By Transform in `mo.ui.dataframe(df)` does not return valid Polars code | ### Describe the bug
Group By Transform in `mo.ui.dataframe(df)` does not return valid Polars code.
Details in example below. It is pretty easy to see what is going wrong.
### Environment
<details>
In WASM, so unsure how to run `marimo env`
Instead, here are the package versions I am using
```
marimo.__version__: "0.10.8-dev3"
polars.__version__: "1.18.0"
```
</details>
### Code to reproduce
```python
import marimo as mo
import polars as pl
df = pl.DataFrame({"group": ["a", "a", "b"], "age": [10, 11, 12]})
mo.ui.dataframe(df)
```
This Transform

Produces this Polars code
```python
df_next = df
df_next = df_next.group_by(["group"], maintain_order=True).agg([pl.col("age_max").max().alias("age_max_max")])
```
which raises `polars.exceptions.ColumnNotFoundError: age_max`
The correct Polars code (with no other stylistic adjustments) would be
```python
df_next = df
df_next = df_next.group_by(["group"], maintain_order=True).agg([pl.col("age").max().alias("age_max")])
``` | closed | 2025-01-06T12:58:02Z | 2025-01-08T15:48:39Z | https://github.com/marimo-team/marimo/issues/3348 | [
"bug"
] | henryharbeck | 1 |
dinoperovic/django-salesman | rest-api | 48 | Modify order | Often an order that is placed needs to be changed before shipping - i.e. customer calls in and meant to add 7 more widgets to the order.
It would be awesome if there was a method or workflow to facilitate the modification of an order. I have also considered "replacing" an order so that order history is preserved - but it might be confusing for the user if an order reference number changes.
| open | 2024-08-06T15:02:08Z | 2024-08-06T15:02:08Z | https://github.com/dinoperovic/django-salesman/issues/48 | [] | thenewguy | 0 |
amdegroot/ssd.pytorch | computer-vision | 391 | one problem | In `forward`:
```
loss_c[pos] = 0  # filter out pos boxes for now
IndexError: The shape of the mask [1, 8732] at index 0 does not match the shape of the indexed tensor [8732, 1] at index 0
```
How can I deal with it? Please help me. | open | 2019-07-29T08:48:19Z | 2019-09-12T09:52:54Z | https://github.com/amdegroot/ssd.pytorch/issues/391 | [] | OscarYoungDepend | 3 |
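For what it's worth, the commonly reported fix for this error (a guess from the shapes in the message; adapt to the repo's `multibox_loss.py`) is to reshape `loss_c` to `(num, -1)` *before* indexing with the `pos` mask, so the mask and tensor shapes line up. A toy NumPy illustration with shrunken shapes:

```python
import numpy as np

num, num_priors = 2, 4                      # stand-ins for the batch size and the 8732 priors
loss_c = np.arange(num * num_priors, dtype=float).reshape(num * num_priors, 1)  # shape (8, 1)
pos = np.array([[True, False, True, False],
                [False, True, False, False]])                                   # shape (2, 4)

# loss_c[pos] = 0 at this point would fail: mask (2, 4) vs tensor (8, 1)
loss_c = loss_c.reshape(num, -1)            # reshape FIRST ...
loss_c[pos] = 0                             # ... then zero out the positive boxes
print(loss_c)                               # [[0. 1. 0. 3.]
                                            #  [4. 0. 6. 7.]]
```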
google-research/bert | nlp | 690 | Why not use a more powerful tokenizer here | https://github.com/google-research/bert/blob/0fce551b55caabcfba52c61e18f34b541aef186a/run_squad.py#L239-L245
A word fused with punctuation cannot be separated here. Is the `improve answer span` function used to recover from this error? | open | 2019-06-11T01:57:36Z | 2019-06-11T01:57:57Z | https://github.com/google-research/bert/issues/690 | [] | lixinsu | 0 |
miguelgrinberg/Flask-SocketIO | flask | 889 | client not receiving emit from socketio.emit at a certain part of code | 
```python
socketio.emit('my_response',
              {'message': 'First emit'},
              namespace='/test')  # saw 'SENDING' in the log, and received by the client

'''
CODE FOR SOME LONG RUNNING PROCESS (> 1 min)
'''

socketio.emit('my_response',
              {'message': 'Second emit'},
              namespace='/test')  # saw 'SENDING' in the log, but NOT received by the client
```
Hi! I've come across this kind of weird error where the message from socketio.emit within my background process (using Thread) can be received at one part of the code (before the process) but not the other (after the process) as shown above. I saw in the log that the server side tried to send the message. (I need to send some response regarding the finished process to trigger some action on a client side)
Any idea how to fix or any more information?
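A pattern that often helps here (hedged: `start_background_task` and `sleep` are Flask-SocketIO's documented helpers, but the class below is a stand-in stub so the sketch runs anywhere) is to launch the job with `socketio.start_background_task` and call `socketio.sleep(0)` periodically inside the long-running section, giving the server a chance to flush queued packets instead of only after the work finishes:

```python
# Stand-in stub; in the real app `socketio` is your flask_socketio.SocketIO instance.
class FakeSocketIO:
    def emit(self, event, data, namespace=None):
        print(f"emit {event} {data!r} on {namespace}")

    def sleep(self, seconds):
        pass  # the real call yields control so queued packets get flushed

    def start_background_task(self, target, *args):
        target(*args)  # the real call runs `target` on a server-friendly worker

socketio = FakeSocketIO()

def long_job():
    socketio.emit('my_response', {'message': 'First emit'}, namespace='/test')
    for _ in range(3):        # stand-in for the long-running (> 1 min) process, split into steps
        socketio.sleep(0)     # yield periodically so the server can actually send packets
    socketio.emit('my_response', {'message': 'Second emit'}, namespace='/test')

socketio.start_background_task(long_job)
```

Whether the second emit is delivered can also depend on the client still being connected once the long job ends.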
Thanks! | closed | 2019-01-29T20:31:52Z | 2019-05-19T07:36:57Z | https://github.com/miguelgrinberg/Flask-SocketIO/issues/889 | [
"question"
] | witchapong | 1 |
strawberry-graphql/strawberry | django | 3,759 | `all_fields=True` causes incompatibility with redis-om package in pydantic v2 | <!-- Provide a general summary of the bug in the title above. -->
I was able to narrow down a compatibility bug to adding `all_fields=True` in redis-om's custom pydantic models
namely: `HashModel`, `JsonModel`, `EmbeddedJsonModel`
<!--- This template is entirely optional and can be removed, but is here to help both you and us. -->
<!--- Anything on lines wrapped in comments like these will not show up in the final text. -->
## Describe the Bug
`all_fields=True` in the experimental pydantic decorator causes:
```
Traceback (most recent call last):
File "/root/strawberry/redis-strawberry-pydantic-issue.py", line 50, in <module>
schema = strawberry.Schema(query=Query, mutation=Mutation)
File "/root/strawberry/.env/lib/python3.10/site-packages/strawberry/schema/schema.py", line 212, in __init__
raise error.__cause__ from None
File "/root/strawberry/.env/lib/python3.10/site-packages/graphql/type/definition.py", line 1472, in fields
fields = resolve_thunk(self._fields)
File "/root/strawberry/.env/lib/python3.10/site-packages/graphql/type/definition.py", line 300, in resolve_thunk
return thunk() if callable(thunk) else thunk
File "/root/strawberry/.env/lib/python3.10/site-packages/strawberry/schema/schema_converter.py", line 494, in <lambda>
fields=lambda: self.get_graphql_input_fields(type_definition),
File "/root/strawberry/.env/lib/python3.10/site-packages/strawberry/schema/schema_converter.py", line 451, in get_graphql_input_fields
return _get_thunk_mapping(
File "/root/strawberry/.env/lib/python3.10/site-packages/strawberry/schema/schema_converter.py", line 138, in _get_thunk_mapping
thunk_mapping[name_converter(field)] = field_converter(
File "/root/strawberry/.env/lib/python3.10/site-packages/strawberry/schema/schema_converter.py", line 417, in from_input_field
self.from_maybe_optional(
File "/root/strawberry/.env/lib/python3.10/site-packages/strawberry/schema/schema_converter.py", line 817, in from_maybe_optional
return self.from_type(type_.of_type)
File "/root/strawberry/.env/lib/python3.10/site-packages/strawberry/schema/schema_converter.py", line 843, in from_type
return self.from_union(type_)
File "/root/strawberry/.env/lib/python3.10/site-packages/strawberry/schema/schema_converter.py", line 861, in from_union
raise InvalidUnionTypeError(union_name, type_, union_definition=union)
strawberry.exceptions.invalid_union_type.InvalidUnionTypeError: Type `str` cannot be used in a GraphQL Union
```
## System Information
- Operating system: N/A
- Strawberry version (if applicable): Long-time bug
## Additional Context
Removing `all_fields=True` and declaring all class attributes with `strawberry.auto` instead works (at least for a simple [example](https://gist.github.com/XChikuX/50e0aa816e725859adb2ee65ca690087)).
| open | 2025-01-30T21:10:48Z | 2025-02-15T05:45:02Z | https://github.com/strawberry-graphql/strawberry/issues/3759 | [
"bug"
] | XChikuX | 2 |
davidsandberg/facenet | tensorflow | 1,191 | Batch Size for Online Triplet Mining | Hi,
I read through the official paper of FaceNet and there it is stated, that a batch size of 1800 is used for online triplet mining. This number seems to be quite high. I have acces to an IBM Power Instance with a 32GB Nvidia Tesla V100 GPU but having a batch size that large with images from the LFW is infeasible.
Is the triplet mining performed on CPU? I tried to create an embedding of one batch (with size 1800) on aformentioned IBM instance. However, my jupyternotebook crashes - I assume that the batch size is still too large.
The triplet mining on my side performs Batch Hard Mining. How should I determine a good batch size?
| open | 2021-01-18T16:18:16Z | 2021-03-05T12:08:27Z | https://github.com/davidsandberg/facenet/issues/1191 | [] | Neihtq | 1 |
collerek/ormar | pydantic | 746 | Unable to use `.json` on pydantic Model containing ormar Model with ForeignKey | **Describe the bug**
Using `.json()` on a pydantic `Model` that has ormar `Model` with a `ForeignKey` in its fields results in
`AttributeError: 'Model' object has no attribute '_orm'`.
**To Reproduce**
```py
import asyncio
import databases
import ormar
import pydantic
import sqlalchemy
DATABASE_URL = "sqlite:///db.sqlite"
database = databases.Database(DATABASE_URL)
metadata = sqlalchemy.MetaData()
class OrmarModelA(ormar.Model):
class Meta:
database = database
metadata = metadata
id: int = ormar.Integer(primary_key=True)
class OrmarModelB(ormar.Model):
class Meta:
database = database
metadata = metadata
id: int = ormar.Integer(primary_key=True)
a: OrmarModelA = ormar.ForeignKey(OrmarModelA)
engine = sqlalchemy.create_engine(DATABASE_URL)
metadata.drop_all(engine)
metadata.create_all(engine)
class PydanticModel(pydantic.BaseModel):
ormar_b: OrmarModelB
async def main():
await database.connect()
ormar_a = await OrmarModelA.objects.create()
ormar_b = await OrmarModelB.objects.create(a=ormar_a)
pydantic_object = PydanticModel(ormar_b=ormar_b)
json = pydantic_object.json()
print(json)
await database.disconnect()
asyncio.run(main())
```
**Traceback**
```py
Traceback (most recent call last):
File "/home/shmoo/projects/kormipravilno/telegram-bot/main.py", line 55, in <module>
asyncio.run(main())
File "/usr/lib/python3.10/asyncio/runners.py", line 44, in run
return loop.run_until_complete(main)
File "/usr/lib/python3.10/asyncio/base_events.py", line 641, in run_until_complete
return future.result()
File "/home/shmoo/projects/kormipravilno/telegram-bot/main.py", line 49, in main
json = pydantic_object.json()
File "pydantic/main.py", line 487, in pydantic.main.BaseModel.json
File "pydantic/main.py", line 843, in _iter
File "pydantic/main.py", line 718, in pydantic.main.BaseModel._get_value
File "/home/shmoo/.cache/pypoetry/virtualenvs/telegram-bot-nW6pW6aK-py3.10/lib/python3.10/site-packages/ormar/models/newbasemodel.py", line 767, in dict
dict_instance = self._extract_nested_models(
File "/home/shmoo/.cache/pypoetry/virtualenvs/telegram-bot-nW6pW6aK-py3.10/lib/python3.10/site-packages/ormar/models/newbasemodel.py", line 659, in _extract_nested_models
nested_model = getattr(self, field)
File "/home/shmoo/.cache/pypoetry/virtualenvs/telegram-bot-nW6pW6aK-py3.10/lib/python3.10/site-packages/ormar/models/newbasemodel.py", line 193, in __getattr__
return super().__getattribute__(item)
File "/home/shmoo/.cache/pypoetry/virtualenvs/telegram-bot-nW6pW6aK-py3.10/lib/python3.10/site-packages/ormar/models/descriptors/descriptors.py", line 105, in __get__
if self.name in instance._orm:
File "/home/shmoo/.cache/pypoetry/virtualenvs/telegram-bot-nW6pW6aK-py3.10/lib/python3.10/site-packages/ormar/models/newbasemodel.py", line 193, in __getattr__
return super().__getattribute__(item)
AttributeError: 'OrmarModelB' object has no attribute '_orm'
```
**Expected behavior**
Using `.json()` on a pydantic `Model` that has ormar `Model` with a `ForeignKey` in its fields should result in a JSON representation of said pydantic `Model`.
**Versions (please complete the following information):**
- Database backend used **sqlite**
- Python version **3.10.2**
- `ormar` version **0.11.2**
- `pydantic` version **1.9.1**
**Additional context**
Using `.json` on a pydantic Model that has ormar `Model` with **no** `ForeignKey` **doesn't** result in an exception.
As of creating the issue, I haven't traced the full pipeline of `.json`; the error really originates in `.dict`, but that distinction doesn't matter much here.
| closed | 2022-07-17T04:51:39Z | 2022-07-19T15:11:18Z | https://github.com/collerek/ormar/issues/746 | [
"bug"
] | Shmookoff | 1 |
autogluon/autogluon | data-science | 4,971 | GPU Acceleration Feature Request | ## Description
This feature request proposes adding GPU acceleration capabilities through RAPIDS integration across all modules (`multimodal`, `tabular`, `timeseries`). The goal is to provide significant performance improvements for data processing and model training by leveraging GPU acceleration instead of CPU-only operations.
Key aspects of the proposal:
- Add feature flags to enable GPU-accelerated operations when available
- Provide a Docker container with pre-installed RAPIDS ecosystem
- Replace CPU-bound operations with GPU equivalents:
- cuDF instead of pandas
- cuPy instead of numpy
- cuML for accelerated ML algorithms
Example API usage with the proposed feature:
```python
from library import TabularClassifier
# Enable GPU acceleration through feature flag
classifier = TabularClassifier(use_gpu=True)
# Or through environment variable
# LIBRARY_USE_GPU=1 python script.py
```
This enhancement has been manually tested by me and other contributors by replacing the standard CPU libraries with their RAPIDS counterparts, resulting in significant performance improvements in classification tasks.
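As a rough illustration of the proposed feature flag (the module and environment-variable names below are hypothetical, mirroring the example API above rather than any existing AutoGluon code), the backend selection could look like:

```python
import os

def get_dataframe_backend(use_gpu: bool = False):
    """Return cuDF when GPU acceleration is requested and available, else pandas."""
    if use_gpu or os.environ.get("LIBRARY_USE_GPU") == "1":
        try:
            import cudf  # RAPIDS drop-in replacement for pandas
            return cudf
        except ImportError:
            pass  # RAPIDS not installed: silently fall back to the CPU path
    import pandas
    return pandas
```

The same try-import-with-fallback pattern would apply to the cuPy/numpy and cuML/scikit-learn pairs.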
## References
- [RAPIDS Homepage](https://rapids.ai/) - Main resource for GPU-accelerated data science
- [cuDF Documentation](https://docs.rapids.ai/api/cudf/stable/) - Drop-in replacement for pandas
- [cuML Documentation](https://docs.rapids.ai/api/cuml/stable/) - GPU-accelerated ML algorithms
- [Performance Benchmarks](https://rapids.ai/rapids-benchmarks.html) - Showcasing potential speed improvements
Implementation examples:
- [DeepLearning4J GPU Support](https://github.com/deeplearning4j/deeplearning4j)
- [Dask-CUDA](https://github.com/rapidsai/dask-cuda) - For distributed GPU computing | open | 2025-03-10T18:47:13Z | 2025-03-18T13:44:18Z | https://github.com/autogluon/autogluon/issues/4971 | [
"enhancement",
"module: tabular",
"module: timeseries",
"module: core"
] | raphasamymarcura | 0 |
ultralytics/ultralytics | pytorch | 19,471 | The matrix multiplication in the post-processing stage of YOLOSEG is quite time-consuming when performed on the CPU of edge devices. Why not include this operation in the model during export and utilize the GPU for inference? | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
The matrix multiplication in the post-processing stage of YOLOSEG is quite time-consuming when performed on the CPU of edge devices. Why not include this operation in the model during export and utilize the GPU for inference?
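For reference, the operation in question is essentially a coefficient-by-prototype matrix multiplication followed by a sigmoid. A rough NumPy sketch (the shapes are illustrative assumptions, not the exact Ultralytics implementation):

```python
import numpy as np

n, k, h, w = 10, 32, 160, 160  # detections, prototype channels, proto map size
coeffs = np.random.randn(n, k).astype(np.float32)     # per-detection mask coefficients
protos = np.random.randn(k, h, w).astype(np.float32)  # prototype masks from the head

# The post-processing matmul: (n, k) @ (k, h*w) -> (n, h*w)
masks = coeffs @ protos.reshape(k, h * w)
masks = 1.0 / (1.0 + np.exp(-masks))  # sigmoid
masks = masks.reshape(n, h, w)
```

Folding this matmul into the exported graph would let the inference runtime schedule it on the GPU rather than running it in NumPy on the CPU.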
### Additional
_No response_ | open | 2025-02-28T03:38:07Z | 2025-03-04T08:44:15Z | https://github.com/ultralytics/ultralytics/issues/19471 | [
"question",
"segment",
"embedded",
"exports"
] | luoshiyong | 8 |
pytest-dev/pytest-xdist | pytest | 861 | "Pin" certain parameters to a process? | I have a bunch of tests (all in one project) following pretty much the same pattern:
```python
import pytest
@pytest.mark.parametrize("foo", ["a", "b"]) # pin those permutations to one process?
@pytest.mark.parametrize("bar", ["c", "d"])
def test_something(foo: str, bar: str):
pass
```
I'd like to parallelize them - but, if possible, I'd like "to pin" all permutations associated with one specific parameter to one process. In the above example, let's say `foo` is pinned, then one process could work through `('a', 'c')` and `('a', 'd')` while the other process could work through `('b', 'c')` and `('b', 'd')`. All variations with `foo == "a"` happen in one process, all variations with `foo == "b"` can potentially happen in another (single) process.
This is where I was hoping I could pin a parameter to a process. Is something like this possible or conceivable in some way, shape or form?
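One possibly relevant mechanism (assuming pytest-xdist ≥ 2.3) is the `xdist_group` mark combined with `--dist loadgroup`, which keeps every test in a named group on a single worker. A sketch of how it might map onto the pattern above:

```python
import pytest

# Run with: pytest -n auto --dist loadgroup
# All tests sharing an xdist_group name execute in the same worker process.
@pytest.mark.parametrize(
    "foo",
    [
        pytest.param("a", marks=pytest.mark.xdist_group("foo-a")),
        pytest.param("b", marks=pytest.mark.xdist_group("foo-b")),
    ],
)
@pytest.mark.parametrize("bar", ["c", "d"])
def test_something(foo: str, bar: str):
    pass
```

Whether this scales nicely to many pinned values is an open question.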
---
For context, my tests are heavily built on top of `ctypes` which is, for better or for worse, not entirely stateless as I have recently discovered. I.e. if I do something stupid with it, a completely unrelated test (many tests later) might crash. The exact behavior depends on the version of CPython (and on 32 bit Wine & Windows on a DLL's calling convention), but all from at least 3.7 to 3.11 have those hidden states of some form. The only good news is that this behavior can be reproduced if all tests run in the exact same order within a single process.
I am working on [zugbruecke](https://github.com/pleiszenburg/zugbruecke), a `ctypes` drop-in replacement that allows to call Windows DLLs from Unix-like systems or, in other words, a fancy RPC layer between a Unix process and a Wine process. The test suite can be found [here](https://github.com/pleiszenburg/zugbruecke/tree/master/tests). An original test looks as follows:
```python
@pytest.mark.parametrize("arch,conv,ctypes,dll_handle", get_context(__file__))
def test_int_with_size_without_pointer(arch, conv, ctypes, dll_handle):
"""
Test simple int passing with size
"""
sqrt_int = dll_handle.sqrt_int
sqrt_int.argtypes = (ctypes.c_int16,)
sqrt_int.restype = ctypes.c_int16
assert 3 == sqrt_int(9)
```
`arch` can either be `win32` or `win64` (for 32 bit and 64 bit DLLs). `conv` can be `cdll` or `windll` (only relevant for 32 bit DLLs). `ctypes` represents my drop-in-replacement backed by different versions of CPython on top of Wine. `dll_handle` is just a `ctypes`-like handle to a DLL. The `ctypes` parameter would need to be pinned.
The test suite currently has 1.6k tests running anywhere from 10 to 40 minutes (single process), depending on the hardware underneath. | closed | 2022-12-31T15:27:27Z | 2023-01-09T12:22:23Z | https://github.com/pytest-dev/pytest-xdist/issues/861 | [] | s-m-e | 1 |
pnkraemer/tueplots | matplotlib | 53 | Updates to the beamer styles | ### Updates to the beamer styles:
* The 0.8 in beamer should be replaced by rel_width, which should default to 0.8. (do we want to default rel_height=0.9 and rel_width=0.6?)
* The font-weights of the beamer_moml() setting should be set to "light", akin to
```python
plt.rcParams["font.weight"] = "light"
plt.rcParams["axes.labelweight"] = "light"
plt.rcParams["axes.titleweight"] = "light"
```
* The figure size could use a reference. At the moment it seems a bit like black magic where the figure sizes stem from. (Is it \textwidth? Is it \linewidth? Is it the slide size? That is not clear.) | closed | 2022-01-12T18:01:10Z | 2022-01-13T06:41:29Z | https://github.com/pnkraemer/tueplots/issues/53 | [] | pnkraemer | 0 |
dask/dask | scikit-learn | 11,230 | Roundtripping timezone-aware DataFrame through parquet doesn't preserve timestamp resolution | While diagnosing some of the failures we're seeing over in https://github.com/coiled/dask-bigquery/pull/81, I stumbled across an issue with roundtripping timezone-aware timeseries data through parquet with Dask. Here's a minimal reproducer:
```python
import random
import pandas as pd
import dask.dataframe as dd
# Generate some random synthetic data
records = [
{
"number": random.randint(0, 100),
"timestamp": pd.Timestamp.utcnow(),
"idx": i,
}
for i in range(10)
]
df = pd.DataFrame(records)
# Change timestamp resolution to us (this is important)
df["timestamp"] = df["timestamp"].astype("datetime64[us, UTC]")
# Roundtrip through parquet with Dask
ddf = dd.from_pandas(df, npartitions=2)
outdir = "test.parquet"
ddf.to_parquet(outdir)
ddf2 = dd.read_parquet(outdir)
dd.utils.assert_eq(ddf, ddf2, check_divisions=False)
```
which raises this error:
```
Traceback (most recent call last):
File "/Users/james/projects/dask/dask/test.py", line 24, in <module>
dd.utils.assert_eq(ddf, ddf2, check_divisions=False)
File "/Users/james/projects/dask/dask/dask/dataframe/utils.py", line 603, in assert_eq
tm.assert_frame_equal(
File "/Users/james/mambaforge/envs/dask-py312/lib/python3.12/site-packages/pandas/_testing/asserters.py", line 1279, in assert_frame_equal
assert_series_equal(
File "/Users/james/mambaforge/envs/dask-py312/lib/python3.12/site-packages/pandas/_testing/asserters.py", line 975, in assert_series_equal
assert_attr_equal("dtype", left, right, obj=f"Attributes of {obj}")
File "/Users/james/mambaforge/envs/dask-py312/lib/python3.12/site-packages/pandas/_testing/asserters.py", line 421, in assert_attr_equal
raise_assert_detail(obj, msg, left_attr, right_attr)
File "/Users/james/mambaforge/envs/dask-py312/lib/python3.12/site-packages/pandas/_testing/asserters.py", line 614, in raise_assert_detail
raise AssertionError(msg)
AssertionError: Attributes of DataFrame.iloc[:, 1] (column name="timestamp") are different
Attribute "dtype" are different
[left]: datetime64[us, UTC]
[right]: datetime64[ns, UTC]
```
Note the initial `ddf` DataFrame has `us` resolution, but after roundtripping through parquet, the `ddf2` DataFrame has `ns` resolution.
A couple of additional observations:
1. The equivalent `pandas` code (i.e. removing `dd.from_pandas`) doesn't raise an error.
2. If I remove timezone information altogether (e.g. use `pd.Timestamp.now()` instead of `pd.Timestamp.utcnow()`), then this also doesn't raise an error.
cc @phofl @fjetter | closed | 2024-07-16T21:30:09Z | 2024-07-17T16:25:48Z | https://github.com/dask/dask/issues/11230 | [
"dataframe"
] | jrbourbeau | 0 |
plotly/dash | dash | 3,044 | html.Script not rendering the javascript code | Hi,
I'm trying to run JavaScript code wrapped in `html.Script`, but the JS code is not being rendered/executed.
```
recharts_js = """
const { BarChart, Bar, XAxis, YAxis, Tooltip, Legend, CartesianGrid, ResponsiveContainer } = Recharts;
const data = [
{ dmu: "dmu1", "Efficiency score": 100, Status: "Efficient" },
{ dmu: "dmu2", "Efficiency score": 78, Status: "InEfficient" },
{ dmu: "dmu3", "Efficiency score": 100, Status: "Efficient" },
{ dmu: "dmu4", "Efficiency score": 100, Status: "Efficient" },
{ dmu: "dmu5", "Efficiency score": 89, Status: "InEfficient" },
{ dmu: "dmu6", "Efficiency score": 95, Status: "InEfficient" },
];
class CustomBarChart extends React.Component {
render() {
return (
<Recharts.BarChart
width={600}
height={400}
data={data}
margin={{ top: 20, right: 30, left: 20, bottom: 5 }}
>
<Recharts.CartesianGrid strokeDasharray="3 3" />
<Recharts.XAxis dataKey="dmu" />
<Recharts.YAxis />
<Recharts.Tooltip />
<Recharts.Legend
payload={[
{ value: "Efficient", type: "square", id: "ID01", color: "#FFA500" }, // Orange for Efficient
{ value: "InEfficient", type: "square", id: "ID02", color: "#32CD32" }, // Green for InEfficient
]}
/>
<Recharts.Bar dataKey="Efficiency score">
{data.map((entry, index) => (
<Recharts.Cell
key={`cell-${index}`}
fill={entry.Status === "Efficient" ? "#FFA500" : "#32CD32"}
/>
))}
</Recharts.Bar>
</Recharts.BarChart>
);
}
}
ReactDOM.render(<CustomBarChart />, document.getElementById('recharts-container'));
"""
html.Div(
[
html.Script(children=recharts_js),
],
)
``` | closed | 2024-10-16T18:13:35Z | 2024-10-16T19:39:13Z | https://github.com/plotly/dash/issues/3044 | [] | namakshenas | 1 |
waditu/tushare | pandas | 796 | New-stock data interface: listing date not yet available, suggest returning null | As shown in the image below, for new stocks that have not listed yet, please return null instead of "nan".
Reward points would be appreciated.
Email: max_lzd@163.com

| open | 2018-11-02T16:18:48Z | 2018-11-02T16:18:48Z | https://github.com/waditu/tushare/issues/796 | [] | LeoZeda | 0 |
ultralytics/ultralytics | python | 19,648 | cannot run TensorRT, error: module 'tensorrt' has no attribute '__version__' | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar bug report.
### Ultralytics YOLO Component
Install
### Bug
I downloaded the correct CUDA, cuDNN, torch, and torchvision versions, and then downloaded TensorRT 8.5 GA on my Windows machine.
When I run this demo code:
```
from ultralytics import YOLO
if __name__ == '__main__':
model = YOLO('./yolo11n.pt')
model.export(
format='engine',
imgsz=640,
keras=False,
optimize=False,
half=False,
int8=False,
dynamic=False,
simplify=True,
opset=None,
workspace=5.0,
nms=False,
batch=1,
device='0',
)
```
and it fails with the error below:
```
(yolo11) E:\yolo11>python tensorrt.py
Ultralytics 8.3.87 🚀 Python-3.9.21 torch-2.2.2+cu118 CUDA:0 (NVIDIA GeForce RTX 3060 Laptop GPU, 6144MiB)
ONNX: starting export with onnx 1.17.0 opset 17...
ONNX: slimming with onnxslim 0.1.48...
ONNX: export success ✅ 3.5s, saved as 'yolo11n.onnx' (10.2 MB)
TensorRT: export failure ❌ 3.5s: module 'tensorrt' has no attribute '__version__'
Traceback (most recent call last):
File "E:\yolo11\tensorrt.py", line 9, in <module>
model.export(
File "E:\yolo11\ultralytics\engine\model.py", line 742, in export
return Exporter(overrides=args, _callbacks=self.callbacks)(model=self.model)
File "E:\yolo11\ultralytics\engine\exporter.py", line 429, in __call__
f[1], _ = self.export_engine(dla=dla)
File "E:\yolo11\ultralytics\engine\exporter.py", line 182, in outer_func
raise e
File "E:\yolo11\ultralytics\engine\exporter.py", line 177, in outer_func
f, model = inner_func(*args, **kwargs)
File "E:\yolo11\ultralytics\engine\exporter.py", line 855, in export_engine
check_version(trt.__version__, ">=7.0.0", hard=True)
AttributeError: module 'tensorrt' has no attribute '__version__'
```
### Environment
Python 3.9.21
torch 2.2 with matching torchvision
TensorRT 8.5
CUDA 11.8 with cuDNN for 11.8
Windows 10
```
(yolo11) E:\yolo11>pip list
Package Version
------------------- ------------
certifi 2025.1.31
charset-normalizer 3.4.1
colorama 0.4.6
coloredlogs 15.0.1
contourpy 1.3.0
cycler 0.12.1
filelock 3.17.0
flatbuffers 25.2.10
fonttools 4.56.0
fsspec 2025.3.0
humanfriendly 10.0
idna 3.10
importlib_resources 6.5.2
Jinja2 3.1.6
kiwisolver 1.4.7
MarkupSafe 3.0.2
matplotlib 3.9.4
mpmath 1.3.0
networkx 3.2.1
numpy 1.24.0
onnx 1.17.0
onnxruntime-gpu 1.19.2
onnxslim 0.1.48
opencv-python 4.11.0.86
packaging 24.2
pandas 2.2.3
pillow 11.1.0
pip 25.0
protobuf 6.30.0
psutil 7.0.0
py-cpuinfo 9.0.0
pyparsing 3.2.1
pyreadline3 3.5.4
python-dateutil 2.9.0.post0
pytz 2025.1
PyYAML 6.0.2
requests 2.32.3
scipy 1.13.1
seaborn 0.13.2
setuptools 75.8.0
six 1.17.0
sympy 1.13.1
tensorrt 8.5.1.7
torch 2.2.2+cu118
torchaudio 2.2.2+cu118
torchvision 0.17.2+cu118
tqdm 4.67.1
typing_extensions 4.12.2
tzdata 2025.1
ultralytics 8.3.87
ultralytics-thop 2.0.14
urllib3 2.3.0
wheel 0.45.1
zipp 3.21.0
```
### Minimal Reproducible Example
```
from ultralytics import YOLO
if __name__ == '__main__':
model = YOLO('./yolo11n.pt')
model.export(
format='engine',
imgsz=640,
keras=False,
optimize=False,
half=False,
int8=False,
dynamic=False,
simplify=True,
opset=None,
workspace=5.0,
nms=False,
batch=1,
device='0',
)
```
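Worth noting (an observation, not a confirmed diagnosis): the traceback shows the failing script is itself named `E:\yolo11\tensorrt.py`, and a local file with the same name as a package shadows that package on import. The mechanism can be demonstrated with a throwaway module name:

```python
import importlib
import os
import sys
import tempfile

# Create a local file that shares its name with a module we want to import
tmp = tempfile.mkdtemp()
with open(os.path.join(tmp, "shadow_demo_mod.py"), "w") as f:
    f.write("value = 1\n")

sys.path.insert(0, tmp)  # mimics the script's own directory on sys.path
importlib.invalidate_caches()
mod = importlib.import_module("shadow_demo_mod")

# The local file wins the import -- just as a script named tensorrt.py
# would be imported instead of the real tensorrt package.
assert mod.__file__.startswith(tmp)
```

If that is the cause here, renaming the script (e.g. to `export_engine.py`) and deleting any stale `__pycache__` usually resolves it.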
### Additional
_No response_
### Are you willing to submit a PR?
- [x] Yes I'd like to help by submitting a PR! | closed | 2025-03-11T18:20:51Z | 2025-03-12T07:18:29Z | https://github.com/ultralytics/ultralytics/issues/19648 | [
"dependencies",
"exports"
] | Hitchliff | 4 |
simple-login/app | flask | 1,973 | Your domains got blocked in disposable list | Hi!
Your domains are listed on the https://github.com/mstfknn/email-providers/ disposable-email list!
Can you please take a look?
Right now pm.me, proton.me, protonmail.com, protonmail.ch, and slmail.me are all listed as disposable!
@nguyenkims @acasajus @cquintana92 | open | 2023-12-16T09:52:59Z | 2024-02-17T14:43:56Z | https://github.com/simple-login/app/issues/1973 | [] | Jasi1997 | 1 |
Farama-Foundation/Gymnasium | api | 796 | [Bug Report] Environment not resetting at termination. | ### Describe the bug
The environment does not reset when the termination condition is True.
### Code example
```python
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3.common.env_checker import check_env
ARRAY = np.linspace(0, 10)
TOTAL_DAYS = len(ARRAY)
N_DISCRETE_ACTIONS = 1
class NewEnv(gym.Env):
metadata = {"render_modes": ["human"], "render_fps": 30}
def __init__(self):
super().__init__()
# Define action and observation space
# They must be gym.spaces objects
# Example when using discrete actions:
self.action_space = spaces.Discrete(N_DISCRETE_ACTIONS)
# Example for using image as input (channel-first; channel-last also works):
self.observation_space = spaces.Box(low=0, high=1,
shape=(1,), dtype=np.float64)
def step(self, action):
if action == 1:
pass
observation = [ARRAY[self.ith_day]]
observation = np.array(observation)
self.ith_day += 1
if self.ith_day >= TOTAL_DAYS - 1:
self.terminated = True
print("\n\nTERMINATION REACHED: ", self.ith_day)
info = {}
self.reward = 1
return observation, self.reward, self.terminated, self.truncated, info
def reset(self, seed=None, options=None):
super().reset(seed=seed, options=options)
self.ith_day = 0
self.truncated = False
self.terminated = False
self.reward = 0
observation = [1]
observation = np.array(observation)
info = {}
return observation, info
env = NewEnv()
check_env(env)
env = NewEnv()
episodes = 10
for episode in range(1, episodes+1):
state = env.reset()
done = False
score = 0
while not done:
action = env.action_space.sample()
n_state, reward, terminated, truncated, info = env.step(action)
print(n_state[0], end='\t')
score += reward
print(f'Episode: {episode}, Score: {score}')
env.render()
```
Error log:
```bash
TERMINATION REACHED: 49
9.795918367346939
TERMINATION REACHED: 50
10.0 Traceback (most recent call last):
File "/home/vanilla_skies/projects/sbp/sir_submission/clean_code/04_environment_issue.py", line 63, in <module>
n_state, reward, terminated, truncated, info = env.step(action)
File "/home/vanilla_skies/projects/sbp/sir_submission/clean_code/04_environment_issue.py", line 26, in step
observation = [ARRAY[self.ith_day]]
IndexError: index 50 is out of bounds for axis 0 with size 50
```
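One thing worth checking in the driver loop (an observation, not a confirmed root cause): `done` is initialized to `False` and never updated from the values returned by `env.step`, so the loop keeps stepping past termination until the `IndexError` above. In the 0.26+ step API, `done` has to be recomputed from `terminated`/`truncated` on every iteration; a self-contained sketch with a toy stand-in environment:

```python
# Self-contained sketch of the standard Gymnasium-style episode loop
# (0.26+ step API), using a trivial stand-in environment for illustration.
class ToyEnv:
    def reset(self):
        self.t = 0
        return 0, {}

    def step(self, action):
        self.t += 1
        terminated = self.t >= 5  # episode ends after 5 steps
        return self.t, 1.0, terminated, False, {}

env = ToyEnv()
obs, info = env.reset()
done = False
steps = 0
while not done:
    obs, reward, terminated, truncated, info = env.step(None)
    done = terminated or truncated  # must be recomputed every step
    steps += 1
```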
### System info
Gymnasium was installed using: pip
Version of Gymnasium: 0.29.1
OS: Ubuntu 20.04.5 LTS on WSL2
Python version: Python 3.9.7
### Additional context
_No response_
### Checklist
- [X] I have checked that there is no similar [issue](https://github.com/Farama-Foundation/Gymnasium/issues) in the repo
| closed | 2023-11-27T05:58:36Z | 2023-11-27T09:33:39Z | https://github.com/Farama-Foundation/Gymnasium/issues/796 | [
"bug"
] | psymbio | 2 |
huggingface/transformers | python | 36,579 | AutoModel failed with empty tensor error | ### System Info
- `transformers` version: 4.50.0.dev0
- Platform: Linux-4.18.0-553.16.1.el8_10.x86_64-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.28.1
- Safetensors version: 0.5.2
- Accelerate version: 1.4.0.dev0
- Accelerate config: - compute_environment: LOCAL_MACHINE
- distributed_type: MULTI_CPU
- mixed_precision: bf16
- use_cpu: True
- debug: False
- num_processes: 4
- machine_rank: 0
- num_machines: 4
- main_process_ip: 127.0.0.1
- main_process_port: 29500
- rdzv_backend: static
- same_network: True
- main_training_function: main
- enable_cpu_affinity: False
- ipex_config: {'ipex': False}
- mpirun_config: {'mpirun_ccl': '1', 'mpirun_hostfile': '/home/jiqingfe/jiqing_hf/HuggingFace/tests/workloads/fine-tune/hostfile'}
- downcast_bf16: no
- tpu_use_cluster: False
- tpu_use_sudo: False
- tpu_env: []
- DeepSpeed version: not installed
- PyTorch version (GPU?): 2.6.0+cpu (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@SunMarc @ArthurZucker @Rocketknight1
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Run the following codes:
```python
from transformers import AutoModel
model = AutoModel.from_pretrained("meta-llama/Llama-3.1-8B-Instruct", device_map="auto")
```
Error:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/jiqingfe/transformers/src/transformers/models/auto/auto_factory.py", line 564, in from_pretrained
return model_class.from_pretrained(
File "/home/jiqingfe/transformers/src/transformers/modeling_utils.py", line 271, in _wrapper
return func(*args, **kwargs)
File "/home/jiqingfe/transformers/src/transformers/modeling_utils.py", line 4535, in from_pretrained
dispatch_model(model, **device_map_kwargs)
File "/home/jiqingfe/accelerate/src/accelerate/big_modeling.py", line 496, in dispatch_model
model.to(device)
File "/home/jiqingfe/transformers/src/transformers/modeling_utils.py", line 3262, in to
return super().to(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1343, in to
return self._apply(convert)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 903, in _apply
module._apply(fn)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 930, in _apply
param_applied = fn(param)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1336, in convert
raise NotImplementedError(
NotImplementedError: Cannot copy out of meta tensor; no data! Please use torch.nn.Module.to_empty() instead of torch.nn.Module.to() when moving module from meta to a different device.
```
### Expected behavior
Expected to get a base model. | closed | 2025-03-06T07:57:25Z | 2025-03-13T17:18:16Z | https://github.com/huggingface/transformers/issues/36579 | [
"bug"
] | jiqing-feng | 1 |