---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-3B-Instruct
---

# Base-AMAN (AMAN stands for Automated Monitoring and Anomaly Notifier; it also means "safety" in Arabic 🔒)

This is a fine-tuned version of Qwen/Qwen2.5-3B-Instruct for log understanding, log analysis, and cybersecurity tasks.

## Model Details

- **Base Model**: Qwen/Qwen2.5-3B-Instruct
- **Fine-tuning Method**: LoRA (Low-Rank Adaptation)
- **Task**: Causal Language Modeling for Log Analysis

## Usage

You can load and use this model directly like any other Hugging Face model:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Load the merged model and tokenizer
model = AutoModelForCausalLM.from_pretrained(
    "chYassine/AMAN-merged",
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(
    "chYassine/AMAN-merged",
    trust_remote_code=True,
)

# Run a simple completion
prompt = "Analyze this log session:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=512,  # cap generated tokens rather than total sequence length
    do_sample=True,      # sampling must be enabled for temperature to take effect
    temperature=0.7,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

## Training Details

This model was fine-tuned using LoRA adapters that have been merged into the base model. The adapter was trained on log analysis and cybersecurity datasets.

## Limitations

- This model is specialized for log analysis tasks
- Performance may vary on general language tasks
- Always review outputs for accuracy in security-critical applications
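## Chat-Style Prompting

Because the base model is instruction-tuned, wrapping the prompt in the tokenizer's chat template may yield better-structured answers than a raw completion. Below is a minimal sketch, reusing the `model` and `tokenizer` objects from the Usage section; the system message and the example log line are illustrative assumptions, not values used in training.

```python
# Build a chat-formatted prompt; the system message is an illustrative
# assumption, not the one used during fine-tuning.
messages = [
    {"role": "system", "content": "You are a log analysis assistant."},
    {
        "role": "user",
        "content": "Analyze this log session:\n"
        "Jan 12 03:14:07 host sshd[1042]: Failed password for root from 203.0.113.5",
    },
]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,  # append the assistant turn marker
    return_tensors="pt",
).to(model.device)
outputs = model.generate(input_ids, max_new_tokens=512, do_sample=True, temperature=0.7)

# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```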
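## Merging LoRA Adapters (for reference)

The adapters were merged before upload, so you do not need to do this yourself. For reference, merging LoRA weights into a base model with `peft` generally looks like the sketch below; the adapter repository name is a hypothetical placeholder, not the actual adapter used for this model.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-3B-Instruct")

# "your-username/aman-lora-adapter" is a hypothetical adapter repo,
# shown only to illustrate the merge step.
merged = PeftModel.from_pretrained(base, "your-username/aman-lora-adapter").merge_and_unload()

# Save a standalone merged checkpoint alongside the tokenizer
merged.save_pretrained("AMAN-merged")
AutoTokenizer.from_pretrained("Qwen/Qwen2.5-3B-Instruct").save_pretrained("AMAN-merged")
```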