---
license: apache-2.0
tags:
  - llama
  - llama-3
  - causal-lm
  - lora
  - fine-tuned
  - peft
  - syslog
  - network packet
base_model: meta-llama/Meta-Llama-3-8B
model_type: llama
---

# LlamaTrace (Merged LoRA + Base)

## Model Information

- **Base Model:** meta-llama/Meta-Llama-3-8B
- **Fine-tuning Method:** LoRA (Low-Rank Adaptation), merged back into the base weights
- **Training Objective:** Network traffic analysis, anomaly detection, and syslog/pcap summarization
- **Tokenizer:** inherited from the base model

## How to use

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the merged model and its tokenizer from the Hub
model = AutoModelForCausalLM.from_pretrained("choihyuunmin/LLaMa-PcapLog")
tokenizer = AutoTokenizer.from_pretrained("choihyuunmin/LLaMa-PcapLog")

# Prompt the model with a packet (or syslog line) to analyze
input_text = "Analyze the network packet below:\n"
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
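In practice the prompt above would be followed by the actual packet summary or log line. A minimal sketch of how such prompts might be assembled is below; the helper names and the prompt wording are illustrative assumptions, not a format defined by the model card.

```python
# Hypothetical helpers for building analysis prompts. The instruction
# wording and field layout are assumptions for illustration only.

def build_packet_prompt(packet_summary: str) -> str:
    """Wrap a one-line packet summary in the analysis instruction."""
    return f"Analyze the network packet below:\n{packet_summary}"


def build_syslog_prompt(log_line: str) -> str:
    """Wrap a syslog line in a summarization instruction."""
    return f"Summarize the syslog entry below:\n{log_line}"


# Example packet summary (as might be produced from a pcap with tshark/scapy)
packet = "192.168.0.10:443 -> 10.0.0.5:51432 TCP SYN-ACK len=60"
print(build_packet_prompt(packet))
```

The resulting string can be passed to `tokenizer(...)` exactly as `input_text` is in the snippet above.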