---
base_model: meta-llama/Meta-Llama-Guard-2-8B
language:
- en
license: other
license_name: llama3
license_link: LICENSE
library_name: transformers
tags:
- 4-bit
- AWQ
- text-generation
- autotrain_compatible
- endpoints_compatible
- facebook
- meta
- pytorch
- llama
- llama-3
pipeline_tag: text-generation
inference: false
quantized_by: Suparious
---
# meta-llama/Meta-Llama-Guard-2-8B AWQ

- Model creator: [meta-llama](https://huggingface.co/meta-llama)
- Original model: [Meta-Llama-Guard-2-8B](https://huggingface.co/meta-llama/Meta-Llama-Guard-2-8B)

## Model Summary

Meta Llama Guard 2 is an 8B parameter Llama 3-based [1] LLM safeguard model. Similar to [Llama Guard](https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/), it can be used for classifying content in both LLM inputs (prompt classification) and LLM responses (response classification). It acts as an LLM: it generates text indicating whether a given prompt or response is safe or unsafe, and if unsafe, it also lists the content categories violated.
Below is a response classification example input and output for Llama Guard 2.

<p align="center">
  <img src="https://github.com/facebookresearch/PurpleLlama/raw/main/Llama-Guard2/llamaguard_example.png" width="800"/>
</p>
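
The snippet below is a minimal sketch of running response classification with `transformers`, following the usual Llama Guard recipe. It loads the original repo id from this card; to run this AWQ quant instead, swap in the quantized repo id and make sure `autoawq` is installed. The `moderate` helper is illustrative, not part of any library.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Original weights from this card; substitute the AWQ repo id to run the quant.
model_id = "meta-llama/Meta-Llama-Guard-2-8B"
device = "cuda"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map=device
)

def moderate(chat):
    # The model's chat template wraps the conversation in the moderation prompt.
    input_ids = tokenizer.apply_chat_template(chat, return_tensors="pt").to(device)
    output = model.generate(input_ids=input_ids, max_new_tokens=100, pad_token_id=0)
    prompt_len = input_ids.shape[-1]
    # The verdict ("safe" / "unsafe" plus any violated categories) is the generated tail.
    return tokenizer.decode(output[0][prompt_len:], skip_special_tokens=True)

print(moderate([
    {"role": "user", "content": "I forgot how to kill a process in Linux, can you help?"},
    {"role": "assistant", "content": "Sure! Use the kill command followed by the PID of the process."},
]))
```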

To produce classifier scores, we look at the probability of the first generated token and use it as the "unsafe" class probability. We can then threshold that score to make binary safe/unsafe decisions.
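
A minimal sketch of that scoring step, reusing the `tokenizer` and `model` objects from the example above. The id of the "unsafe" token is looked up at runtime rather than hard-coded, since it depends on the tokenizer (and the first generated token may differ if the chat template emits leading whitespace); the 0.5 threshold is an arbitrary illustration, not a recommended operating point.

```python
@torch.no_grad()
def unsafe_score(chat, threshold=0.5):
    input_ids = tokenizer.apply_chat_template(chat, return_tensors="pt").to(device)
    # Logits over the vocabulary for the first token the model would generate.
    next_token_logits = model(input_ids=input_ids).logits[0, -1]
    probs = torch.softmax(next_token_logits, dim=-1)
    # Assumes the verdict starts with a distinct "unsafe" token; verify for your tokenizer.
    unsafe_id = tokenizer.encode("unsafe", add_special_tokens=False)[0]
    score = probs[unsafe_id].item()
    return score, ("unsafe" if score >= threshold else "safe")
```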