---
library_name: transformers
license: mit
tags:
  - generated_from_trainer
model-index:
  - name: gpt2-medium-vericava-posts-v3
    results: []
language:
  - ja
pipeline_tag: text-generation
---

# gpt2-medium-vericava-posts-v3

This model was trained from scratch on a dataset of my posts on the Internet, reusing the architecture of gpt2-medium (its configuration only, not its pretrained weights).
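
A minimal sketch of constructing such a randomly initialised model with `transformers` (an illustration, not necessarily the exact script used):

```python
from transformers import AutoConfig, AutoModelForCausalLM

# Load only the gpt2-medium configuration (24 layers, 1024 hidden size,
# 16 attention heads, ~355M parameters), not its pretrained weights.
config = AutoConfig.from_pretrained("gpt2-medium")

# from_config returns a randomly initialised model of that shape.
model = AutoModelForCausalLM.from_config(config)
```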

It achieves the following results on the evaluation set:

- Loss: 6.4732
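
For reference, assuming this is the standard causal-LM cross-entropy loss in nats, it corresponds to an evaluation perplexity of exp(6.4732) ≈ 648.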

## Model description

It generates text resembling what I post on the Internet.
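
A minimal way to try it, assuming the repository id is `vericava/gpt2-medium-vericava-posts-v3` (taken from the model name on this card; adjust if it is hosted under a different id):

```python
from transformers import pipeline

# Repository id assumed from the model name on this card.
generator = pipeline(
    "text-generation",
    model="vericava/gpt2-medium-vericava-posts-v3",
)

# Sample a short continuation of a Japanese prompt.
result = generator("こんにちは、", max_new_tokens=40, do_sample=True)
print(result[0]["generated_text"])
```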

## Intended uses & limitations

CAUTION: It may produce things I would never actually say. I impose no restrictions on the use of this model.

## Training and evaluation data

The training data consists of my posts on Twitter/X: https://x.com/vericava
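
The card does not describe how the posts were exported or cleaned. As a rough sketch only, a flat text file of posts (one per line, here hypothetically named `posts.txt`) could be tokenized for causal-LM training like this:

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# "posts.txt" is a hypothetical export of the posts, one per line;
# the card does not document the actual preprocessing pipeline.
raw = load_dataset("text", data_files={"train": "posts.txt"})

# gpt2-medium's byte-level BPE tokenizer, matching the architecture used.
tokenizer = AutoTokenizer.from_pretrained("gpt2-medium")

def tokenize(batch):
    # GPT-2's context window is 1024 tokens.
    return tokenizer(batch["text"], truncation=True, max_length=1024)

tokenized = raw.map(tokenize, batched=True, remove_columns=["text"])
```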

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a code sketch follows the list):

- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 1024
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 100
- mixed_precision_training: Native AMP
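
A `TrainingArguments` sketch reproducing the list above (`output_dir` and anything not listed are assumptions, not taken from the card):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="gpt2-medium-vericava-posts-v3",  # assumed name
    learning_rate=5e-4,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    gradient_accumulation_steps=8,   # 128 * 8 = 1024 effective batch size
    lr_scheduler_type="cosine",
    warmup_steps=1000,
    num_train_epochs=100,
    fp16=True,                       # "Native AMP" mixed precision
)
```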

### Training results

| Training Loss | Epoch   | Step | Validation Loss |
|:-------------:|:-------:|:----:|:---------------:|
| 2.1117        | 11.1176 | 100  | 6.5645          |
| 1.461         | 22.2353 | 200  | 5.6692          |
| 1.3005        | 33.3529 | 300  | 5.3980          |
| 1.1776        | 44.4706 | 400  | 5.2793          |
| 1.0325        | 55.5882 | 500  | 5.3445          |
| 0.8629        | 66.7059 | 600  | 5.5766          |
| 0.6811        | 77.8235 | 700  | 5.8608          |
| 0.4943        | 88.9412 | 800  | 6.1404          |
| 0.3243        | 100.0   | 900  | 6.4732          |
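
Validation loss reaches its minimum (about 5.28) around step 400 and rises afterwards while training loss keeps falling, i.e. the model overfits the dataset; the reported 6.4732 is the final epoch-100 value. If the best-validation checkpoint were preferred instead, `transformers` can retain it automatically (a sketch, not what was done here):

```python
from transformers import TrainingArguments

# Keep the checkpoint with the lowest validation loss instead of the last one.
# Evaluation and saving must run on the same schedule for this to work.
args = TrainingArguments(
    output_dir="gpt2-medium-vericava-posts-v3",  # assumed name
    eval_strategy="steps",
    eval_steps=100,
    save_strategy="steps",
    save_steps=100,
    load_best_model_at_end=True,
    metric_for_best_model="eval_loss",
    greater_is_better=False,
)
```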

### Framework versions

- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1