Dataset Preview
The full dataset viewer is not available. Only a preview of the rows is shown below.
The dataset generation failed
Error code:   DatasetGenerationError
Exception:    CastError
Message:      Couldn't cast
source: string
prompt: string
instance_id: string
repo: string
reward: double
task_name: string
model: string
agent: string
version: string
to
{'source': Value('string'), 'prompt': Value('string'), 'instance_id': Value('string'), 'repo': Value('string'), 'version': Value('string'), 'reward': Value('int64')}
because column names don't match
Traceback:    Traceback (most recent call last):
                File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 1779, in _prepare_split_single
                  for key, table in generator:
                                    ^^^^^^^^^
                File "/usr/local/lib/python3.12/site-packages/datasets/packaged_modules/json/json.py", line 295, in _generate_tables
                  self._cast_table(pa_table, json_field_paths=json_field_paths),
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
                File "/usr/local/lib/python3.12/site-packages/datasets/packaged_modules/json/json.py", line 128, in _cast_table
                  pa_table = table_cast(pa_table, self.info.features.arrow_schema)
                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
                File "/usr/local/lib/python3.12/site-packages/datasets/table.py", line 2321, in table_cast
                  return cast_table_to_schema(table, schema)
                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
                File "/usr/local/lib/python3.12/site-packages/datasets/table.py", line 2249, in cast_table_to_schema
                  raise CastError(
              datasets.table.CastError: Couldn't cast
              source: string
              prompt: string
              instance_id: string
              repo: string
              reward: double
              task_name: string
              model: string
              agent: string
              version: string
              to
              {'source': Value('string'), 'prompt': Value('string'), 'instance_id': Value('string'), 'repo': Value('string'), 'version': Value('string'), 'reward': Value('int64')}
              because column names don't match
              
              The above exception was the direct cause of the following exception:
              
              Traceback (most recent call last):
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1347, in compute_config_parquet_and_info_response
                  parquet_operations = convert_to_parquet(builder)
                                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 980, in convert_to_parquet
                  builder.download_and_prepare(
                File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 882, in download_and_prepare
                  self._download_and_prepare(
                File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 943, in _download_and_prepare
                  self._prepare_split(split_generator, **prepare_split_kwargs)
                File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 1646, in _prepare_split
                  for job_id, done, content in self._prepare_split_single(
                                               ^^^^^^^^^^^^^^^^^^^^^^^^^^^
                File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 1832, in _prepare_split_single
                  raise DatasetGenerationError("An error occurred while generating the dataset") from e
              datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset
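The CastError above comes down to a schema mismatch: the data files contain nine columns (including task_name, model, and agent) with reward stored as a double, while the schema declared for the dataset expects only six columns with reward as int64. A minimal sketch of that comparison, with column names and types copied from the error message; the diff logic here is illustrative, not the datasets library's actual code:

```python
# Schema actually found in the data files (from the CastError message).
actual = {
    "source": "string", "prompt": "string", "instance_id": "string",
    "repo": "string", "reward": "double", "task_name": "string",
    "model": "string", "agent": "string", "version": "string",
}

# Schema declared for the dataset (the cast target in the error message).
expected = {
    "source": "string", "prompt": "string", "instance_id": "string",
    "repo": "string", "version": "string", "reward": "int64",
}

# Columns present in the data but absent from the declared schema.
extra = sorted(set(actual) - set(expected))

# Columns present in both but with conflicting types.
mismatched = sorted(
    col for col in set(actual) & set(expected) if actual[col] != expected[col]
)

print("extra columns:", extra)         # ['agent', 'model', 'task_name']
print("type mismatches:", mismatched)  # ['reward'] (double vs int64)
```

Either difference alone is enough to make the cast, and therefore dataset generation, fail.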

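A common way to resolve this kind of mismatch is to update the `dataset_info` features block in the dataset card's YAML metadata so it lists every column with the types actually present in the data files (or to delete the block entirely so the schema is re-inferred). A hedged sketch, assuming the nine columns reported in the error message; the exact metadata layout for this dataset may differ:

```yaml
dataset_info:
  features:
    - name: source
      dtype: string
    - name: prompt
      dtype: string
    - name: instance_id
      dtype: string
    - name: repo
      dtype: string
    - name: reward
      dtype: float64
    - name: task_name
      dtype: string
    - name: model
      dtype: string
    - name: agent
      dtype: string
    - name: version
      dtype: string
```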

Columns in the preview:

source: string
prompt: string
instance_id: string
repo: string
version: string
reward: int64
swe_bench_ml
"RuntimeError: Address already in use" when running multiple multi-gpu training (DDP). **Describe the bug** I see "RuntimeError: Address already in use" error message if I try to run two multi-gpu training session (using ddp) at the same time. **To Reproduce** Run two multi-gpu training session at the same time. ...
Lightning-AI__lightning-1010
Lightning-AI/lightning
null
swe_bench_ml
logging basic configuration level: INFO vs. WARNING (usability with W&B) Thanks for the amazing package! I am having a great time using it. # Issue Recently, I have been playing around with the weights and biases (W&B) logging functionality and I noticed that I was getting a lot of logging messages in my jupyter n...
Lightning-AI__lightning-1015
Lightning-AI/lightning
null
swe_bench_ml
Checkpoint naming broken ## 🐛 Bug I would like to be able to save checkpoints with custom names that include the value of my `val_loss`, ie. `path/epoch_2-val_loss_0.2.hdf5 `. The [documentation](https://pytorch-lightning.readthedocs.io/en/latest/pytorch_lightning.callbacks.html#pytorch_lightning.callbacks.ModelChe...
Lightning-AI__lightning-1016
Lightning-AI/lightning
null
swe_bench_ml
Disable automatic checkpoint loading **Is your feature request related to a problem? Please describe.** The last checkpoint is being automatically restored when a checkpoint exists. This is an issue for me when the model has previously been trained with different settings or I want to train a network from scratch. ...
Lightning-AI__lightning-1017
Lightning-AI/lightning
null
swe_bench_ml
Precision=16 with TPUs bug ## 🐛 Bug Setting precision=16 when training with a TPU throws an error ### To Reproduce see colab: https://colab.research.google.com/drive/1s-ZDIqzgKQ1Byf-Lw58RZ8LGgmdB6qjB Relavent stack trace: ``` Exception in device=TPU:0: str expected, not int Traceback (most recent call...
Lightning-AI__lightning-1018
Lightning-AI/lightning
null
swe_bench_ml
Support storing hparams as a dict ## 🚀 Feature Right now, we assume `model.hparams` is an `argparse.Namespace`. We've had a number of requests to support `hparams` as a simple `dict`. Let's do it.
Lightning-AI__lightning-1029
Lightning-AI/lightning
null
swe_bench_ml
Update CHANGELOG for 0.7.x ## 🐛 Bug <!-- A clear and concise description of what the bug is. --> Updated CHANGELOG according to the reset changes (about last two weeks) especially deprecated items like `data_loader` or `xxxxx_end` ### Additional context <!-- Add any other context about the problem here. --...
Lightning-AI__lightning-1091
Lightning-AI/lightning
null
swe_bench_ml
make metric-comparison in ModelCheckpoint robust to NaN ## 🐛 Bug When the metric that is used in `ModelCheckpoint` reaches `NaN` in one epoch and then returns to a number in the following epoch, the model will not be saved as comparisons to `NaN` always return `False`. Here a screenshot from my training: ![...
Lightning-AI__lightning-1097
Lightning-AI/lightning
null
swe_bench_ml
Lower default progress_bar_refresh_rate ## 🚀 Feature From a conversation in Slack. In v0.7.1, the default value of `progress_bar_refresh_rate` is 50, which I think too high. It is a measure to prevent notebook freezing, however, I believe most users run PL on CLI. In addition, the high default value confuses the ...
Lightning-AI__lightning-1100
Lightning-AI/lightning
null
swe_bench_ml
Support IterableDatasets for validation and test, not just train set [blocked by #953] ## 🚀 Feature Currently Lightning supports `IterableDatasets` only in the training set (see [code](https://github.com/PyTorchLightning/pytorch-lightning/blob/master/pytorch_lightning/trainer/data_loading.py#L177)). This makes them s...
Lightning-AI__lightning-1104
Lightning-AI/lightning
null
swe_bench_ml
enable progress bar = update freq 1 i think we can collapse two args. - remove enable progress bar arg. - disable the progress bar when update freq = 0
Lightning-AI__lightning-1108
Lightning-AI/lightning
null
swe_bench_ml
Checkpoint fails in single node multi-GPU mode using DDP ## 🐛 Bug Checkpoint fails in single node multi-GPU mode using DDP. ### To Reproduce ```bash python pl_examples/basic_examples/gpu_template.py --distributed_backend ddp --gpus 2 ``` ```bash Epoch 2: : 700it [00:28, 42.69it/s, l/home/xz/anaconda3/e...
Lightning-AI__lightning-1125
Lightning-AI/lightning
null
swe_bench_ml
ReduceLROnPlateau scheduler type check ## 🐛 Bug Incorrect type check for scheduler of class ReduceLROnPlateau. https://github.com/PyTorchLightning/pytorch-lightning/blob/bc01b9ac1c0ddf503f83755c00d5541a2679e5b4/pytorch_lightning/trainer/trainer.py#L713 I believe, this check: `isinstance(scheduler, optim.lr_sch...
Lightning-AI__lightning-1126
Lightning-AI/lightning
null
swe_bench_ml
`use_amp` is broken in 0.7.0 I see that `use_amp` is deprecated but since the Trainer still accepts it as an argument, I believe this is still a bug. The issue is that the [hook that deals with this](https://github.com/PyTorchLightning/pytorch-lightning/blob/3c2fd560aa4d31b4f48ee225b83361deec53d9c7/pytorch_lightnin...
Lightning-AI__lightning-1145
Lightning-AI/lightning
null
swe_bench_ml
Add support for hierarchical dict ## 🚀 Feature ### Motivation Since v0.7.0, LightningModule accepts dict hparams, however, still TensorBoardLogger raises an error with hierarchical dict. Considering the compatibility of the other package, especially Hydra #807, hierarchical dict should be accepted by any loggers. ...
Lightning-AI__lightning-1152
Lightning-AI/lightning
null
swe_bench_ml
clarify 3 things in docs **clarify 3 things in docs** 1. DP, DDP don't matter with TPUs. TPUs work in DDP by default (at least rn). 2. Explain how the different steps are called (i thought we did). (here: https://pytorch-lightning.readthedocs.io/en/0.7.1/lightning-module.html#training-loop-structure) ``` # how...
Lightning-AI__lightning-1164
Lightning-AI/lightning
null
swe_bench_ml
Pretty test result with pprint ## 🚀 Feature ### Motivation Test results are shown at the end of test loops since v0.7.0. The content of the indication is `prog_bar_metrics`, which is basically inputted as a `dict`. By default, this part uses built-in `print` function: ![image](https://user-images.githubusercontent....
Lightning-AI__lightning-1176
Lightning-AI/lightning
null
swe_bench_ml
Learning Rate Schedulers' default dictionary parameters should be set via the Trainer ## 🚀 Feature The default Learning Rate Schedulers (LRS) dictionary parameters should be settable from the Trainer constructor. ### Motivation The documentation doesn't seem to be clear that the LRS have the following additional...
Lightning-AI__lightning-1177
Lightning-AI/lightning
null
swe_bench_ml
CI: Force docs compilation warnings to be raised as errors ## 📚 Documentation We frequently have the issue that a part of the code is changed but the documentation gets broken. Since most doc breaking changes only raise warnings, they are often overlooked. Let's fix all the warnings and force the build to raise...
Lightning-AI__lightning-1191
Lightning-AI/lightning
null
swe_bench_ml
multi-gpu ddp calls validation and testing loops too many times When using ddp with multiple gpus, each validation and test loop is called with the entire validation dataset for each gpu. Expected behavior is that the dataset is divided appropriately across the gpus. I am using current master (cloned Mar 14), Ubu...
Lightning-AI__lightning-1192
Lightning-AI/lightning
null
swe_bench_ml
Wandb logger doesn't upload saved model checkpoint for final epoch ## 🐛 Bug When training a model on the TPU and using the wandb logger, the checkpoint for the last epoch trained doesn't get uploaded to wandb. ### To Reproduce Colab notebook: https://colab.research.google.com/drive/1oPaRWGZcz6YEol012xFADN42LV...
Lightning-AI__lightning-1193
Lightning-AI/lightning
null
swe_bench_ml
Additional dataloader created and discarded when training with reload_dataloaders_every_epoch ## 🐛 Bug I am training with reload_dataloaders_every_epoch and I've noticed it instantiates an extra DataLoader before training for which nothing is run. This is an issue for me as I am training with chunks that get loaded...
Lightning-AI__lightning-1196
Lightning-AI/lightning
null
swe_bench_ml
update docs to recommend __call__ for forward passes ## 📚 Documentation We should update the docs to recommend usage of `self(x)` for calculating the forward pass rather than `self.forward(x)`. Calling `forward()` directly can cause issues when you're using PyTorch model hooks (eg. see the additional logic in [`nn....
Lightning-AI__lightning-1211
Lightning-AI/lightning
null
swe_bench_ml
Support for non-static data for reinforcement learning What would be the best approach for reinforcement learning problems where you would need to interact with the environment for data? Maybe DataLoader is restricting?
Lightning-AI__lightning-1232
Lightning-AI/lightning
null
swe_bench_ml
Early stopping not working on 0.7.1 <!-- ### Common bugs: 1. Tensorboard not showing in Jupyter-notebook see [issue 79](https://github.com/PyTorchLightning/pytorch-lightning/issues/79). 2. PyTorch 1.1.0 vs 1.2.0 support [see FAQ](https://github.com/PyTorchLightning/pytorch-lightning#faq) --> ## 🐛 Bug ...
Lightning-AI__lightning-1235
Lightning-AI/lightning
null
swe_bench_ml
Disabling validation with val_percent_check=0.0 not working <!-- ### Common bugs: 1. Tensorboard not showing in Jupyter-notebook see [issue 79](https://github.com/PyTorchLightning/pytorch-lightning/issues/79). 2. PyTorch 1.1.0 vs 1.2.0 support [see FAQ](https://github.com/PyTorchLightning/pytorch-lightning#faq)...
Lightning-AI__lightning-1251
Lightning-AI/lightning
null
swe_bench_ml
Train loss vs loss on progress bar ## ❓ Questions and Help <!-- If you still can't find what you need: --> #### What is your question? I don't understand why `train_loss` is different than `loss` even though I assign the same value. Perhaps one loss is calculated over the whole dataset and the other one is only ...
Lightning-AI__lightning-1253
Lightning-AI/lightning
null
swe_bench_ml
Restoring training session docs ## 📚 Documentation ["Restoring training session"](https://pytorch-lightning.readthedocs.io/en/latest/pytorch_lightning.trainer.training_io.html#restoring-training-session) documentation shows example, which does not work correctly - [Google Colab](https://colab.research.google.com/dr...
Lightning-AI__lightning-1265
Lightning-AI/lightning
null
swe_bench_ml
AdvancedProfiler error Hi, as others have pointed out, the Profiler doesn't seem to work (it prints nothing), and trying out the AdvancedProfiler as in https://pytorch-lightning.readthedocs.io/en/latest/profiler.html like: ``` from pytorch_lightning.profiler import AdvancedProfiler profiler = AdvancedProfiler(ou...
Lightning-AI__lightning-1267
Lightning-AI/lightning
null
swe_bench_ml
GAN example: Only one backward() call? In the PyTorch GAN tutorial [https://pytorch.org/tutorials/beginner/dcgan_faces_tutorial.html](https://pytorch.org/tutorials/beginner/dcgan_faces_tutorial.html) there are two backward() calls for the discriminator. How do you ensure this with your structure, where backward() gets ...
Lightning-AI__lightning-1269
Lightning-AI/lightning
null
swe_bench_ml
Add support for loading flattened meta_tags.csv ## 🚀 Feature ### Motivation PL+TensorBoard can log hierarchical dict after #1152, however, `meta_tags.csv` has been disabled by the change. ### Pitch - Make `meta_tags.csv` back to a hierarchical dict based on their delimiter. ### Alternatives 62de7948634b1cd...
Lightning-AI__lightning-1271
Lightning-AI/lightning
null
swe_bench_ml
Multiple undesired checkpoints created during single epoch ## 🐛 Bug Thanks for the great project! When I sent custom `ModelCheckpoint` to `Trainer` and hoping to get one checkpoint each epoch, the Trainer eventually produced a lot of versioned checkpoints within a single epoch, wasting lots of disk space and were c...
Lightning-AI__lightning-1272
Lightning-AI/lightning
null
swe_bench_ml
change Checkpoint callback's `save_best_only` to `save_top_k` **Is your feature request related to a problem? Please describe.** `save_best_only` is a special case of `save_top_k`. However, `save_tok_k` checkpoints can be used to create ensemble model during the test time. **Describe the solution you'd like** keep...
Lightning-AI__lightning-128
Lightning-AI/lightning
null
swe_bench_ml
Validation every epoch with non-finite dataloader ## 🚀 Feature <!-- A clear and concise description of the feature proposal --> Providing a way to do validation every epoch with non-finite (`__len__` not implemented) dataloaders. ### Motivation <!-- Please outline the motivation for the proposal. Is your fea...
Lightning-AI__lightning-1283
Lightning-AI/lightning
null
swe_bench_ml
Metrics: Base Metric ## 🚀 Feature Add a base class for proper metric implementation
Lightning-AI__lightning-1326
Lightning-AI/lightning
null
swe_bench_ml
Metrics: Confusion Matrix ## 🚀 Feature Implement Confusion Matrix
Lightning-AI__lightning-1327
Lightning-AI/lightning
null
swe_bench_ml
Process runs on more GPUs than specified I have a single 8-GPU machine with a faulty GPU0. I'm running imagenet_example.py on 7 GPUs on this machine by specifying `gpus=[1,2,3,4,5,6,7]` in the Trainer i.e. I do not want to use GPU0 However, when i run `nvidia-smi`, I see the Trainer's pid shows on all 8 GPUs, just...
Lightning-AI__lightning-1349
Lightning-AI/lightning
null
swe_bench_ml
incorrect run on the test set with overwritten validation_end and test_epoch_end ## 🐛 Bug If I override validation_end and test_epoch_end, TrainerEvaluationLoopMixin.evaluate works incorrectly on the test set Suppose we override `validation_epoch_end` and `test_end`, but not `validation_end` and `test_epoch_end`...
Lightning-AI__lightning-1353
Lightning-AI/lightning
null
swe_bench_ml
Log training metrics for each epoch Currently, I am able to log training metrics to Tensorboard using: ``` import pytorch_lightning as pl from pytorch_lightning.loggers import TensorBoardLogger logger = TensorBoardLogger(save_dir=save_dir, name="my_model") [...] trainer = pl.Trainer(logger=logger) ``` ...
Lightning-AI__lightning-1357
Lightning-AI/lightning
null
swe_bench_ml
WandbLogger cannot be used with 'ddp' <!-- ### Common bugs: 1. Tensorboard not showing in Jupyter-notebook see [issue 79](https://github.com/PyTorchLightning/pytorch-lightning/issues/79). 2. PyTorch 1.1.0 vs 1.2.0 support [see FAQ](https://github.com/PyTorchLightning/pytorch-lightning#faq) --> ## 🐛 Bug...
Lightning-AI__lightning-1360
Lightning-AI/lightning
null
swe_bench_ml
Tensorboard logger error: lightning_logs directory not exists in multi-node DDP on nodes with rank != 0 ## 🐛 Bug In multi-node DDP train mode on all nodes except rank 0 errors appears at the start of the training caused by accessing lightning_logs directory in tensorboard logger which is not exist at the moment. ...
Lightning-AI__lightning-1377
Lightning-AI/lightning
null
swe_bench_ml
Training loop temporarily hangs after every 4 steps I am porting some of my code to pytorch lightning, and everything seems to work fine. However, for some reason after every 4 training steps I see some temporary hanging (~1 second), which is severely slowing down my overall training time. Am I missing some obvious con...
Lightning-AI__lightning-1378
Lightning-AI/lightning
null
swe_bench_ml
Trainer DDP should invoke load_spawn_weights() only in proc_rank == 0 ## 🐛 Bug Trainer DDP load_spawn_weights should happen only in proc_rank == 0 since only in this process (node) `save_spawn_weights` actually saves checkpoint ### To Reproduce Steps to reproduce the behavior: 1. setup two-node cluster. ...
Lightning-AI__lightning-1385
Lightning-AI/lightning
null
swe_bench_ml
Make Pytorch-Lightning DDP work without SLURM ## 🚀 Feature Allow pytorch-lightning DDP mode to work everywhere ordinary pytorch DDP can work. Basically if every node in a cluster defines the following environment variables it should work: - `MASTER_PORT`: A free port on the machine that will host the process wi...
Lightning-AI__lightning-1387
Lightning-AI/lightning
null
swe_bench_ml
RuntimeError: Unimplemented backend XLA on TPU ## 🐛 Bug `RuntimeError: Unimplemented backend XLA` raised for `self.batch_loss_value.append(loss)` line in `trainer/training_loop.py` file when running MNIST on TPU. I think it was introduced in 31b7148. ### To Reproduce Steps to reproduce the behavior: 1. Go ...
Lightning-AI__lightning-1396
Lightning-AI/lightning
null
swe_bench_ml
ModelCheckpoint tries to remove already removed checkpoint in DDP mode ## 🐛 Bug When training in DDP mode with ModelCheckpoint callback, the train process fails, when ModelCheckpoint callback tries to remove previous checkpoint. I assume that it was already deleted by another process. ### To Reproduce Steps ...
Lightning-AI__lightning-1408
Lightning-AI/lightning
null
swe_bench_ml
Use isinstance() instead of type() in trainer.distrib_parts.check_gpus_data_type <!-- ### Common bugs: 1. Tensorboard not showing in Jupyter-notebook see [issue 79](https://github.com/PyTorchLightning/pytorch-lightning/issues/79). 2. PyTorch 1.1.0 vs 1.2.0 support [see FAQ](https://github.com/PyTorchLightning/p...
Lightning-AI__lightning-1423
Lightning-AI/lightning
null
swe_bench_ml
Not auto add DistributedSampler for DDP training ## 🐛 Bug <!-- A clear and concise description of what the bug is. --> in 0.72, even if we don't set sampler, pytorch_lightning will not add DistributedSampler for us. ### To Reproduce the reason is in pytorch, if we don't set sampler, pytorch will add a sampler ...
Lightning-AI__lightning-1425
Lightning-AI/lightning
null
swe_bench_ml
Automatically pick available GPU Thanks for this great library! ## 🚀 Feature I would like to change the behavior of this code: ``` python trainer = pl.Trainer( ... snip ..., gpus=1, ) ``` Currently, when setting `gpus` to an integer `n`, the first `n` GPUs are automatically used. ...
Lightning-AI__lightning-1426
Lightning-AI/lightning
null
swe_bench_ml
run_training_batch breaks on None batch or -1 response from on_batch_start (in new 0.7.2 release) ## 🐛 Bug run_training_batch now is supposed to return a 4-tuple in 0.7.2 however, there are two places where it still returns a 3-tuple, which will cause the program to crash, saying "ValueError: not enough values ...
Lightning-AI__lightning-1431
Lightning-AI/lightning
null
swe_bench_ml
Add dataloader arg to Trainer.test() ## 🚀 Feature <!-- A clear and concise description of the feature proposal --> It would be nice if you could use a model for inference using: `Trainer.test(model, test_dataloaders=test_loader)` ### Motivation This will match the calling structure for `Trainer.fit()` and allow...
Lightning-AI__lightning-1434
Lightning-AI/lightning
null
swe_bench_ml
Test metrics is not being reported to TensorBoard since 0.7.2 <!-- ### Common bugs: 1. Tensorboard not showing in Jupyter-notebook see [issue 79](https://github.com/PyTorchLightning/pytorch-lightning/issues/79). 2. PyTorch 1.1.0 vs 1.2.0 support [see FAQ](https://github.com/PyTorchLightning/pytorch-lightning#fa...
Lightning-AI__lightning-1441
Lightning-AI/lightning
null
swe_bench_ml
Propose to save new model before deleting previous ones in ModelCheckpointing ## 🚀 Feature <!-- A clear and concise description of the feature proposal --> In an edge case, the trainer deleted previous model and then was killed because of system error before successfully saving new model. Thus all the models were lo...
Lightning-AI__lightning-1453
Lightning-AI/lightning
null
swe_bench_ml
Early stopping conditioned on metric `val_loss` isn't recognised when setting the val_check_interval **Describe the bug** Training stops when setting `val_check_interval`<1.0 in the Trainer class as it doesn't recognise `val_loss`. I get the following warning at the end of the 3rd epoch: ``` Early stopping condition...
Lightning-AI__lightning-1458
Lightning-AI/lightning
null
swe_bench_ml
Test results not logged to tensorboard, since 0.7.3, this worked in 0.7.1 ## 🐛 Bug Test results are not logged to TensorBoard. With the exact same code, version `0.7.1` logged them flawlessly. Also, with the exact same code, validation and train results are logged. So I assumed the issue is with the test. ### T...
Lightning-AI__lightning-1459
Lightning-AI/lightning
null
swe_bench_ml
Add an option to disable Trainer.detect_nan_tensors ## 🚀 Feature Add an option to disable `Trainer.detect_nan_tensors` ### Motivation This function tends to be pretty slow when your network has got a lot of parameters, especially in small tensors. For example in my case it took ~0.5s per training iteration. ...
Lightning-AI__lightning-1475
Lightning-AI/lightning
null
swe_bench_ml
Learning rate scheduler should step after each optimizer step ## 🐛 Bug I'm not sure that this is a bug or if it is a deliberate design decision, but right now the learning rate schedule gets updated at every "step" which actually corresponds to every forward pass. I think a more standard implementation would have t...
Lightning-AI__lightning-1477
Lightning-AI/lightning
null
swe_bench_ml
Metrics: Confusion Matrix ## 🚀 Feature Implement Confusion Matrix
Lightning-AI__lightning-1488
Lightning-AI/lightning
null
swe_bench_ml
wandb logger 'global_step' affects other logger ## 🐛 Bug The wandb logger adds a 'global_step' to the metric dict which appears in all other loggers (e.g. Tensorboard). Only the wandb logger is adding 'global_step' to metric and I think it is not necessary. Another side effect of that is, that 'global_step' is also...
Lightning-AI__lightning-1492
Lightning-AI/lightning
null
swe_bench_ml
on_before_zero_grad hook ## 📚 Documentation The documentation report the method `on_before_zero_grad`. Strangely, this method is not shown in the lifecycle for hooks documentation. Moreover, when it is defined in a lightning module it is not called. Hence the question : is it a discontinued hook ? If so we could...
Lightning-AI__lightning-1493
Lightning-AI/lightning
null
swe_bench_ml
Incorrect MisconfigurationException for models without dataloaders. <!-- ### Common bugs: 1. Tensorboard not showing in Jupyter-notebook see [issue 79](https://github.com/PyTorchLightning/pytorch-lightning/issues/79). 2. PyTorch 1.1.0 vs 1.2.0 support [see FAQ](https://github.com/PyTorchLightning/pytorch-lightn...
Lightning-AI__lightning-1495
Lightning-AI/lightning
null
swe_bench_ml
Mixing hparams and arguments in LightningModule.__init__() crashes load_from_checkpoint() <!-- ### Common bugs: 1. Tensorboard not showing in Jupyter-notebook see [issue 79](https://github.com/PyTorchLightning/pytorch-lightning/issues/79). 2. PyTorch 1.1.0 vs 1.2.0 support [see FAQ](https://github.com/PyTorchLi...
Lightning-AI__lightning-1505
Lightning-AI/lightning
null
swe_bench_ml
bug(logger): wandb fails on sweep ## 🐛 Bug When using `wandb` sweeps for hyperparameters search, I get this error: > wandb: ERROR Attempted to change value of key "dropout_std" from 0.030424838979365657 to 0.030424838979365654 The reason is I ran: ```python wandb_logger.log_hyperparams(params) ``` Which I...
Lightning-AI__lightning-1512
Lightning-AI/lightning
null
swe_bench_ml
0.7.3 breaks reusable dataloaders in DDP ## 🐛 Bug 0.7.3 breaks reusable dataloaders in DDP ``` Traceback (most recent call last): File "/opt/conda/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 19, in _wrap fn(i, *args) File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning...
Lightning-AI__lightning-1513
Lightning-AI/lightning
null
swe_bench_ml
Performance drop when activating gradient clipping Hello all, I experienced a substantial drop in computation time when activating gradient clipping (by passing a non-zero value to the keyword argument `gradient_clip_val` when initializing the Trainer). I noticed that in the current implementation of the `clippin...
Lightning-AI__lightning-1523
Lightning-AI/lightning
null
swe_bench_ml
Memory (CPU and GPU) leaks during the 1st epoch ## 🐛 Bug Hello. This memory leak occurs during the first epoch. If one has a large epoch time (I had > 10 days), the OOM error will come. It's interesting, that in precision=16 mode, it leaks out on the GPU and the CPU both. If we switch amp optimization off (precision...
Lightning-AI__lightning-1528
Lightning-AI/lightning
null
swe_bench_ml
Add support for Horovod as a distributed backend ## 🚀 Feature [Horovod](http://horovod.ai/) is a framework for performing data-parallel distributed training for PyTorch (in addition to other frameworks like TensorFlow and MXNet). It uses the allreduce technique to synchronously aggregate gradients across workers, si...
Lightning-AI__lightning-1529
Lightning-AI/lightning
null
swe_bench_ml
DDP on GPUs invalid ordinal <!-- ### Common bugs: 1. Tensorboard not showing in Jupyter-notebook see [issue 79](https://github.com/PyTorchLightning/pytorch-lightning/issues/79). 2. PyTorch 1.1.0 vs 1.2.0 support [see FAQ](https://github.com/PyTorchLightning/pytorch-lightning#faq) --> ## 🐛 Bug On lat...
Lightning-AI__lightning-1541
Lightning-AI/lightning
null
swe_bench_ml
transfer_to_batch_gpu returns null when input has primitives **Describe the bug** when passing a batch such as: ```python batch = list(tensor, tensor, [0,1,2]) ``` the list of ints won't be returned correctly **Additional context** Fix should add a return of the item it no condition matches
Lightning-AI__lightning-155
Lightning-AI/lightning
null
swe_bench_ml
Native Amp Support Native automatic mixed precision support (`torch.cuda.amp`) is finally merged: https://pytorch.org/docs/master/amp.html https://pytorch.org/docs/master/notes/amp_examples.html Apex Amp has many known pain points (extension builds, forward/backward compatibilty, DataParallel support, flaky checkpoi...
Lightning-AI__lightning-1561
Lightning-AI/lightning
null
swe_bench_ml

ML SWE Prompts

Unified collection of ML/training-related software engineering prompts for OPD distillation training. All prompts are in English.

Filtered to core ML repos: huggingface (1,058), numpy (937), Lightning-AI (377), ray-project (342). Excludes pandas-dev, qiskit, open-mmlab, scipy, tensorflow, spaCy.
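The org-level filter described above can be sketched as a simple allowlist on the `org/name` part of each row's `repo` field. This is an illustrative reconstruction, not the dataset's actual build code; the function and variable names are hypothetical:

```python
# Hypothetical sketch of the repo filter described above: keep only rows
# whose GitHub organization is one of the core ML orgs. Excluded orgs
# (pandas-dev, qiskit, open-mmlab, scipy, tensorflow, spaCy) fall out
# automatically because they are not on the allowlist.
CORE_ORGS = {"huggingface", "numpy", "Lightning-AI", "ray-project"}

def is_core_ml_repo(repo: str) -> bool:
    """Return True if `repo` ("org/name", e.g. "Lightning-AI/lightning") is a core ML org."""
    org = repo.split("/", 1)[0]
    return org in CORE_ORGS

rows = [
    {"repo": "Lightning-AI/lightning"},
    {"repo": "pandas-dev/pandas"},
]
kept = [r for r in rows if is_core_ml_repo(r["repo"])]
# → keeps only the Lightning-AI row
```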

Splits

Config                        Source                      Rows   Description
all                           Combined                    6,220  All prompts combined
swe_bench_ml                  SWE-bench train             2,714  Problem statements from core ML repos (HF, numpy, Lightning, Ray) + keyword-matched from SWE-Dev
swe_dev_sft                   SWE-Dev-train               3,054  ML-related agent conversations (SFT format)
swe_dev_rft                   SWE-Dev-train                 345  ML-related agent conversations (RFT format, with rewards)
terminal_bench_verified       Terminal-Bench 2 Verified      89  Task instructions from TB2 verified tasks
terminal_bench_trajectories   TB2 Leaderboard                18  Unique ML task prompts from agent trajectories (deduplicated)
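The per-config row counts are internally consistent: the five source configs sum exactly to the combined `all` split. A quick sanity check:

```python
# Row counts from the Splits table above.
split_rows = {
    "swe_bench_ml": 2714,
    "swe_dev_sft": 3054,
    "swe_dev_rft": 345,
    "terminal_bench_verified": 89,
    "terminal_bench_trajectories": 18,
}

total = sum(split_rows.values())
print(total)  # → 6220, matching the "all" config
```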

Schema

All rows have at minimum source and prompt. Additional fields vary by source:

  • swe_bench_ml: instance_id, repo, version
  • swe_dev_sft/rft: instance_id, repo, reward
  • terminal_bench_verified: task_name
  • terminal_bench_trajectories: task_name, model, agent, reward
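Because the optional fields differ per config, code iterating over the combined split should read them defensively rather than assume every key exists. A minimal sketch, assuming the field names from the schema above (the `normalize` helper is hypothetical; the sample values are taken from a swe_bench_ml row):

```python
# Normalize rows from any config to one record shape.
# Every row is guaranteed `source` and `prompt`; all other fields are optional.
OPTIONAL_FIELDS = ("instance_id", "repo", "version", "reward", "task_name", "model", "agent")

def normalize(row: dict) -> dict:
    record = {"source": row["source"], "prompt": row["prompt"]}
    for field in OPTIONAL_FIELDS:
        record[field] = row.get(field)  # None when this config lacks the field
    return record

# Sample row in the shape of the swe_bench_ml config.
sample = {
    "source": "swe_bench_ml",
    "prompt": "Trainer.add_argparse_args bool type ...",
    "instance_id": "Lightning-AI__lightning-1571",
    "repo": "Lightning-AI/lightning",
    "version": None,
}
rec = normalize(sample)
# rec carries all optional fields, with None for those absent from this config
```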

Citation

@article{merrill2026terminal,
  title={Terminal-bench: Benchmarking agents on hard, realistic tasks in command line interfaces},
  author={Merrill, Mike A and Shaw, Alexander G and Carlini, Nicholas and others},
  journal={arXiv preprint arXiv:2601.11868},
  year={2026}
}

@inproceedings{jimenez2024swebench,
  title={SWE-bench: Can Language Models Resolve Real-World GitHub Issues?},
  author={Jimenez, Carlos E and Yang, John and Wettig, Alexander and Yao, Shunyu and Pei, Kexin and Press, Ofir and Narasimhan, Karthik},
  booktitle={International Conference on Learning Representations (ICLR)},
  year={2024}
}
