---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-3B-Instruct
---

# Base-AMAN (AMAN stands for Automated Monitoring and Anomaly Notifier; it also means safety in Arabic 🔒)

This is a fine-tuned version of Qwen/Qwen2.5-3B-Instruct for log understanding, log analysis, and cybersecurity tasks.

## Model Details

- **Base Model**: Qwen/Qwen2.5-3B-Instruct
- **Fine-tuning Method**: LoRA (Low-Rank Adaptation); see the illustrative sketch below
- **Task**: Causal Language Modeling for Log Analysis
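
For context, LoRA fine-tuning of a causal LM is usually set up with the `peft` library. The sketch below is illustrative only: the actual rank, alpha, and target modules used to train AMAN are not published in this card, so the values shown are assumptions.

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Hypothetical LoRA setup; the real AMAN hyperparameters are not documented here
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-3B-Instruct")
lora_config = LoraConfig(
    r=16,            # assumed adapter rank
    lora_alpha=32,   # assumed scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # typical attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the low-rank adapter weights are trained
```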

## Usage

You can load and use this model directly like any other Hugging Face model:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Load model and tokenizer
model = AutoModelForCausalLM.from_pretrained(
    "chYassine/AMAN-merged",
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True
)

tokenizer = AutoTokenizer.from_pretrained(
    "chYassine/AMAN-merged",
    trust_remote_code=True
)

# Build the prompt and move the inputs onto the model's device
prompt = "Analyze this log session:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Sampling must be enabled for temperature to take effect
outputs = model.generate(
    **inputs,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
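
Because the base model is an instruct variant, chat-formatted prompts may work better than raw text. A minimal sketch, reusing the `model` and `tokenizer` loaded above and assuming the tokenizer ships the standard Qwen2.5 chat template:

```python
# Chat-style prompting; the system prompt wording here is an example, not part of the model card
messages = [
    {"role": "system", "content": "You are a log analysis and security assistant."},
    {"role": "user", "content": "Analyze this log session:\n<paste log lines here>"},
]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(input_ids, max_new_tokens=512, do_sample=True, temperature=0.7)
# Strip the prompt tokens so only the generated answer is printed
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```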

## Training Details

This model was fine-tuned using LoRA adapters that have been merged into the base model.
The adapter was trained on log analysis and cybersecurity datasets.
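
For reference, merging LoRA adapters into a base model is commonly done with `peft`'s `merge_and_unload`. This is a sketch of the general procedure, not the exact pipeline used for AMAN; the adapter path shown is a placeholder.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Load the base model, attach the trained adapter, then fold it into the weights
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-3B-Instruct")
adapted = PeftModel.from_pretrained(base, "path/to/aman-lora-adapter")  # placeholder path
merged = adapted.merge_and_unload()  # bakes the low-rank updates into the base weights
merged.save_pretrained("AMAN-merged")
```

Merging removes the runtime dependency on `peft`, which is why the model can be loaded directly with `AutoModelForCausalLM` as shown in the Usage section.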

## Limitations

- This model is specialized for log analysis tasks
- Performance may vary on general language tasks
- Always review outputs for accuracy in security-critical applications