---
language: vi
license: apache-2.0
tags:
- text-classification
- clickbait-detection
- vietnamese
- llama
- fine-tuned
datasets:
- clickbait-dataset
metrics:
- accuracy
- f1
pipeline_tag: text-classification
---

# Vietnamese Clickbait Detection Model

This model is a fine-tuned version of meta-llama/Llama-3.1-8B-Instruct for detecting clickbait in Vietnamese text.

## Model Description

- **Model type:** Causal Language Model (Fine-tuned for Classification)
- **Language:** Vietnamese
- **Base model:** meta-llama/Llama-3.1-8B-Instruct
- **Task:** Clickbait Detection
- **Dataset:** Vietnamese clickbait dataset

## Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

# Load model and tokenizer
model_name = "PhaaNe/clickbait_KLTN"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,
    device_map="auto"
)

# Example headline (Vietnamese: "You won't believe what happened!")
text = "Bạn sẽ không tin được điều này xảy ra!"

# Move the tokenized inputs to the model's device before generating
inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=10)
result = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(result)
```
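
Because the checkpoint is a causal LM rather than a sequence-classification head, the prediction is carried by the generated text. The snippet below is a minimal sketch of how one might wrap the model in a classification helper; the prompt wording and label vocabulary are assumptions and may not match the format used during fine-tuning.

```python
def classify(headline: str) -> str:
    # Hypothetical prompt -- the actual prompt template used during
    # fine-tuning is not documented in this card.
    prompt = (
        "Classify the following Vietnamese headline as clickbait or "
        f"non-clickbait.\nHeadline: {headline}\nLabel:"
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=5, do_sample=False)
    # Decode only the newly generated tokens, not the prompt
    completion = tokenizer.decode(
        outputs[0][inputs["input_ids"].shape[1]:],
        skip_special_tokens=True,
    ).strip().lower()
    return "non-clickbait" if completion.startswith("non") else "clickbait"

print(classify("Bạn sẽ không tin được điều này xảy ra!"))
```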

## Training Details

- Fine-tuned using LoRA (Low-Rank Adaptation); a configuration sketch is shown below
- Training framework: Transformers + PEFT
- Hardware: GPU-enabled server
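
The exact LoRA hyperparameters are not published in this card; the snippet below is a minimal PEFT sketch of the setup described above, with illustrative values for rank, alpha, dropout, and target modules that are assumptions rather than the settings actually used.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model
import torch

base_model = "meta-llama/Llama-3.1-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(
    base_model,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Illustrative LoRA settings -- the rank, alpha, dropout, and target modules
# actually used for this checkpoint are not documented here.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the LoRA adapters are trained
```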

## Performance

The model is evaluated on Vietnamese clickbait detection using accuracy and F1 as metrics.

## Citation

If you use this model, please cite:

```bibtex
@misc{clickbait_kltn_2025,
  title={Vietnamese Clickbait Detection using Fine-tuned Llama},
  author={PhaaNe},
  year={2025},
  url={https://huggingface.co/PhaaNe/clickbait_KLTN}
}
```