LoGPT: GPT-2 Fine-Tuned for Log Analysis

LoGPT is a GPT-2 (124M) causal language model fine-tuned for structured system log analysis.
It generates:

  • Log summaries
  • Root cause analysis
  • Actionable recommendations

Runs fully offline on local hardware.


Model Details

  • Base model: GPT-2 (124M)
  • Architecture: Decoder-only Transformer
  • Context length: 1,024 tokens
  • Training samples: 5,055
  • Fine-tuning steps: 2,000

Evaluation (Test Set)

  • Perplexity: 2.05
  • ROUGE-L: 0.324
  • BERTScore F1: 0.898
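
Perplexity is the exponential of the mean token-level cross-entropy loss, so the reported 2.05 corresponds to roughly 0.72 nats per token. A minimal sketch of the conversion (the helper names are illustrative, not from the model card):

```python
import math

def perplexity(mean_nll: float) -> float:
    """Convert mean negative log-likelihood (nats/token) to perplexity."""
    return math.exp(mean_nll)

def mean_nll_from_ppl(ppl: float) -> float:
    """Invert: recover the mean loss implied by a reported perplexity."""
    return math.log(ppl)

loss = mean_nll_from_ppl(2.05)      # ≈ 0.718 nats per token
print(round(loss, 3))               # 0.718
print(round(perplexity(loss), 2))   # 2.05, round-trips the reported value
```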

Intended Use

Designed for structured system/server logs (Apache, Linux, HDFS, Spark, OpenSSH, etc.).

Not intended for:

  • General chat
  • Medical / legal advice
  • Safety-critical automation

Prompt Format

<|log_start|> log content <|log_end|>
<|query_start|> Summarize these logs <|query_end|>
<|response_start|>

The model generates until it emits <|response_end|>.
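
Using the special tokens above, a prompt can be assembled and the reply extracted with plain string handling. A minimal sketch; the helper names below are illustrative, not part of the model card:

```python
LOG_START, LOG_END = "<|log_start|>", "<|log_end|>"
QUERY_START, QUERY_END = "<|query_start|>", "<|query_end|>"
RESP_START, RESP_END = "<|response_start|>", "<|response_end|>"

def build_prompt(log_text: str, query: str) -> str:
    """Assemble a prompt in the format LoGPT was fine-tuned on."""
    return (f"{LOG_START} {log_text} {LOG_END} "
            f"{QUERY_START} {query} {QUERY_END} {RESP_START}")

def extract_response(generated: str) -> str:
    """Keep the text after <|response_start|>, stopping at <|response_end|>."""
    tail = generated.split(RESP_START, 1)[-1]
    return tail.split(RESP_END, 1)[0].strip()

prompt = build_prompt("kernel: Out of memory: Kill process 1234",
                      "Summarize these logs")
# Feed `prompt` to the model and stop decoding at <|response_end|>.
```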


Limitations

  • Trained on ~5k synthetic samples
  • May hallucinate plausible root causes
  • Performance may drop on unseen log formats

Human verification recommended.

