---
license: mit
datasets:
- lmsys/lmsys-chat-1m
language:
- en
base_model:
- AGofficial/AgGPT-15
---
# 🤖 AgGPT-21

A powerful and lightweight GPT-style language model built with PyTorch, featuring word-level tokenization and a GRU-based architecture.
## ✨ Features

- 🧠 **Intelligent Architecture**: GRU-based neural network with embedding layers
- 📚 **Multi-File Training**: Trains on multiple corpus files automatically
- ⚡ **Optimized Performance**: Supports GPU (CUDA), Apple Silicon (MPS), and CPU
- 🎛️ **Flexible Generation**: Configurable temperature, top-k, and top-p sampling
- 💬 **Interactive Chat**: Beautiful command-line chat interface
- 🛑 **Early Stopping**: Prevents overfitting with validation-based early stopping
- 📊 **Progress Tracking**: Real-time training progress with tqdm
## 🚀 Quick Start

### Prerequisites

```bash
pip install torch tqdm
```

### Training the Model

- **Prepare your training data**: place your text files in the `training_corpora/` folder
- **Start training**:

```bash
python AgGPT21.py
```
The model will automatically:

- Load all `.txt` files from `training_corpora/` (sketched below)
- Build the vocabulary from your data
- Train with a validation split and early stopping
- Save the trained model as `AgGPT21.pt`
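The loading step is simple enough to sketch: glob the folder, read each file, and concatenate. A minimal illustration (variable names are illustrative; the actual internals of `AgGPT21.py` may differ):

```python
from pathlib import Path

# Read every .txt file in training_corpora/ and concatenate the contents.
corpus = ""
files = sorted(Path("training_corpora").glob("*.txt"))
print(f"Found {len(files)} training files")
for path in files:
    print(f"Reading {path.name}...")
    corpus += path.read_text(encoding="utf-8") + "\n"
```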
### Interactive Chat

Once trained, start chatting with your model:

```bash
python chat.py
```
## 📁 Project Structure

```
AgGPT-21-2/
├── banner.png            # Project banner image
├── AgGPT21.py            # Main training script
├── chat.py               # Interactive chat interface
├── README.md             # This file
├── AgGPT21.pt            # Trained model (generated after training)
└── training_corpora/     # Training data folder
    ├── corpora_000.txt   # Training file 1
    ├── corpora_001.txt   # Training file 2
    ├── ...               # More training files
    └── corpora_041.txt   # Training file N
```
## ⚙️ Configuration

### Model Hyperparameters

| Parameter | Default | Description |
|---|---|---|
| `SEQ_LEN` | 64 | Sequence length for training |
| `EMBED_SIZE` | 128 | Embedding dimension |
| `HIDDEN_SIZE` | 128 | GRU hidden dimension |
| `NUM_LAYERS` | 1 | Number of GRU layers |
| `DROPOUT` | 0.2 | Dropout rate |
### Training Parameters

| Parameter | Default | Description |
|---|---|---|
| `BATCH_SIZE` | 8 | Training batch size |
| `EPOCHS` | 6 | Maximum training epochs |
| `LR` | 2e-3 | Learning rate |
| `WEIGHT_DECAY` | 1e-4 | L2 regularization strength |
| `CLIP_NORM` | 1.0 | Gradient-clipping norm |
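To make the roles of these parameters concrete, here is a hypothetical sketch of the training step they control. The Adam optimizer, the `patience` value, and the `model`/`train_loader`/`evaluate` stand-ins are assumptions, not code taken from the script:

```python
import torch

def train(model, train_loader, evaluate, epochs=6, patience=2):
    """Sketch of a training loop with clipping and early stopping.
    `evaluate` returns the current validation loss."""
    optimizer = torch.optim.Adam(model.parameters(), lr=2e-3,  # LR
                                 weight_decay=1e-4)            # WEIGHT_DECAY
    criterion = torch.nn.CrossEntropyLoss()
    best_val, bad_epochs = float("inf"), 0
    for _ in range(epochs):                                    # EPOCHS
        model.train()
        for x, y in train_loader:  # batches of BATCH_SIZE sequences of SEQ_LEN
            optimizer.zero_grad()
            logits, _ = model(x)
            loss = criterion(logits.view(-1, logits.size(-1)), y.view(-1))
            loss.backward()
            # CLIP_NORM: rescale gradients so their global norm stays <= 1.0
            torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)
            optimizer.step()
        # Early stopping: halt once validation loss stops improving
        val = evaluate(model)
        if val < best_val:
            best_val, bad_epochs = val, 0
        else:
            bad_epochs += 1
            if bad_epochs >= patience:
                break
```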
### Generation Settings

| Parameter | Default | Description |
|---|---|---|
| `TEMPERATURE` | 0.9 | Sampling temperature (0.1-2.0) |
| `TOP_K` | 50 | Top-k sampling limit |
| `TOP_P` | 0.9 | Nucleus (top-p) sampling threshold |
| `GENERATE_LENGTH` | 200 | Default generation length (tokens) |
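These three settings compose in the standard order: scale the logits by the temperature, truncate to the top-k candidates, then keep the nucleus whose cumulative probability reaches top-p. A generic sketch of that pipeline, not necessarily the exact code in `AgGPT21.py`:

```python
import torch

def sample_next(logits, temperature=0.9, top_k=50, top_p=0.9):
    """Sample one token id from a 1-D logits tensor."""
    logits = logits / temperature
    # Top-k: keep only the k highest-scoring tokens
    if top_k > 0:
        kth = torch.topk(logits, top_k).values[-1]
        logits = logits.masked_fill(logits < kth, float("-inf"))
    # Top-p (nucleus): keep the smallest set of tokens whose
    # cumulative probability exceeds top_p
    probs = torch.softmax(logits, dim=-1)
    sorted_probs, sorted_idx = torch.sort(probs, descending=True)
    cumulative = torch.cumsum(sorted_probs, dim=-1)
    cutoff = cumulative > top_p
    cutoff[..., 1:] = cutoff[..., :-1].clone()  # always keep the top token
    cutoff[..., 0] = False
    sorted_probs[cutoff] = 0.0
    sorted_probs /= sorted_probs.sum()
    choice = torch.multinomial(sorted_probs, 1)
    return sorted_idx[choice].item()
```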
## 🎮 Chat Commands

In the interactive chat mode, you can use these commands; a sketch of the dispatch loop follows the list.

- **Basic chat**: just type your message
- `quit` / `exit` / `bye`: end the conversation
- `help`: show available commands
- `clear`: clear the screen
- `model`: display model information
- `temp X`: set temperature (e.g., `temp 0.8`)
- `length X`: set response length (e.g., `length 150`)
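These commands map naturally onto a small dispatch loop. A hypothetical sketch, with `generate` standing in for the model's text-generation function (the real `chat.py` may differ):

```python
def chat_loop(generate):
    """Read a line, handle commands, otherwise generate a reply."""
    temperature, gen_len = 0.9, 200
    while True:
        msg = input("👤 You: ").strip()
        cmd = msg.lower()
        if cmd in {"quit", "exit", "bye"}:
            break
        elif cmd == "clear":
            print("\033c", end="")  # ANSI full reset clears the screen
        elif cmd.startswith("temp "):
            temperature = float(msg.split()[1])
        elif cmd.startswith("length "):
            gen_len = int(msg.split()[1])
        elif cmd in {"help", "model"}:
            print("(print the command list / model info here)")
        else:
            print("🤖 AgGPT-21:", generate(msg, temperature, gen_len))
```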
## 🧪 Example Usage

### Training Example

```bash
# Train the model (automatic multi-file loading)
python AgGPT21.py
```

Output:

```
Found 42 training files
Reading corpora_000.txt...
Reading corpora_001.txt...
...
Total words loaded: 2,847,392
Vocabulary size: 30,000
Tokens used: 1,000,000 | device=mps
Model params: 4,099,200
Train batches per epoch: 1,562 | Val batches: 79
Epochs: 100%|████████████| 6/6 [05:23<00:00, 53.92s/it, train=2.1847, val=2.3456]
Saved AgGPT21.pt
```
### Chat Example

```
👤 You: Tell me about artificial intelligence

🤖 AgGPT-21: Artificial intelligence is a fascinating field that focuses on creating systems capable of performing tasks that typically require human intelligence. These systems can learn from data, recognize patterns, make decisions, and solve complex problems. AI has applications in many areas including natural language processing, computer vision, robotics, and machine learning...
```
## 🔧 Advanced Usage

### Custom Vocabulary Size

```python
MAX_VOCAB = 50000  # Increase vocabulary size
```

### Training on a Subset of the Data

```python
DATA_PERCENT = 0.5    # Use only 50% of available data
MAX_TOKENS = 500_000  # Limit to 500k tokens
```
### Hardware Acceleration

```python
# The model automatically detects and uses available accelerators:
# - CUDA (NVIDIA GPUs)
# - MPS (Apple Silicon)
# - CPU (fallback)
```
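For reference, a minimal version of that detection logic, using standard PyTorch calls (the script's own code may be worded differently):

```python
import torch

# Pick the best available accelerator: CUDA > MPS > CPU
if torch.cuda.is_available():
    device = torch.device("cuda")
elif torch.backends.mps.is_available():
    device = torch.device("mps")
else:
    device = torch.device("cpu")
print(f"device={device}")
```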
## 📈 Model Architecture

```
Input → Embedding → Dropout → GRU → Dropout → [Projection] → Linear → Output
  ↓         ↓                  ↓                                        ↓
Token     Vector             Hidden                                  Logits
 IDs    (128-dim)            States                             (Vocab-size)
```
Key Features:

- **Weight Tying**: the output layer shares weights with the embedding layer (illustrated below)
- **Gradient Clipping**: prevents exploding gradients
- **Mixed Precision**: automatic FP16 on supported devices
- **Early Stopping**: validation-based training termination
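A compact PyTorch sketch of this architecture; the class and attribute names are assumptions, not the script's own. With `HIDDEN_SIZE == EMBED_SIZE` (both 128 by default) the optional projection is a no-op, so the GRU output feeds the tied linear layer directly:

```python
import torch.nn as nn

class AgGPT(nn.Module):
    """Illustrative sketch: Embedding -> Dropout -> GRU -> Dropout -> Linear."""
    def __init__(self, vocab_size, embed_size=128, hidden_size=128,
                 num_layers=1, dropout=0.2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_size)
        self.drop = nn.Dropout(dropout)
        self.gru = nn.GRU(embed_size, hidden_size, num_layers,
                          batch_first=True)
        self.out = nn.Linear(embed_size, vocab_size)
        self.out.weight = self.embed.weight  # weight tying

    def forward(self, x, hidden=None):
        emb = self.drop(self.embed(x))       # (batch, seq, embed)
        out, hidden = self.gru(emb, hidden)  # (batch, seq, hidden)
        return self.out(self.drop(out)), hidden
```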
## 🎯 Performance Tips

- **GPU acceleration**: use CUDA or MPS for faster training
- **Batch size**: increase it if you have more memory
- **Sequence length**: longer sequences capture more context
- **Vocabulary**: a smaller vocab trains faster; a larger vocab gives better coverage
- **Data quality**: clean, relevant training data improves results
## 🐛 Troubleshooting

### Common Issues

**"No .txt files found"**

- Ensure your training files are in `training_corpora/` and have a `.txt` extension

**"CUDA out of memory"**

- Reduce `BATCH_SIZE` or `SEQ_LEN`
- Use `DATA_PERCENT < 1.0` to train on less data

**"Model file not found"**

- Train the model first with `python AgGPT21.py`
- Ensure `AgGPT21.pt` exists in the project directory
## 📝 Training Data Format

Your training files should be plain text. The model will automatically:

- Convert text to lowercase
- Split on whitespace
- Handle special tokens like `<pad>`, `<eos>`, etc.
- Build the vocabulary from all files combined

Example format:

```
user: how are you today
<pad>
ai: I'm doing well, thank you for asking! How are you?
<eos>
```
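That preprocessing fits in a few lines. A sketch, where the exact special-token set (the "etc." above, including `<unk>`) is assumed rather than taken from the script:

```python
from collections import Counter

def build_vocab(text, max_vocab=30_000):
    """Lowercase, whitespace-split, reserve special tokens,
    keep the most frequent words, and map words to ids."""
    words = text.lower().split()
    specials = ["<pad>", "<unk>", "<eos>"]  # assumed special-token set
    counts = Counter(w for w in words if w not in specials)
    keep = [w for w, _ in counts.most_common(max_vocab - len(specials))]
    word2id = {w: i for i, w in enumerate(specials + keep)}
    # Out-of-vocabulary words fall back to <unk>
    ids = [word2id.get(w, word2id["<unk>"]) for w in words]
    return ids, word2id
```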
## 🤝 Contributing
- Fork the repository
- Create a feature branch
- Make your improvements
- Test thoroughly
- Submit a pull request
## 📄 License

This project is open source under the MIT license. Feel free to use, modify, and distribute as needed.
## 🙋‍♂️ Support

If you encounter issues or have questions:

- Check the troubleshooting section
- Review your training data format
- Ensure all dependencies are installed
- Verify that your PyTorch installation supports your hardware
Made with ❤️ for the AI community

*AgGPT-21 - Where conversation meets intelligence.*
