Spaces:
Running
Running
Update README.md
#5
by
reach-vb
- opened
README.md
CHANGED
|
@@ -16,12 +16,14 @@ In order to access models here, please visit a repo of one of the three families
|
|
| 16 |
In this organization, you can find models in both the original Meta format as well as the Hugging Face transformers format. You can find:
|
| 17 |
|
| 18 |
Current:
|
|
|
|
|
|
|
|
|
|
|
|
|
| 19 |
* **Llama 3.1:** a collection of pretrained and fine-tuned text models with sizes ranging from 8 billion to 405 billion parameters pre-trained on ~15 trillion tokens.
|
| 20 |
* **Llama 3.1 Evals:** a collection that provides detailed information on how we derived the reported benchmark metrics for the Llama 3.1 models, including the configurations, prompts and model responses used to generate evaluation results.
|
| 21 |
* **Llama Guard 3:** a Llama-3.1-8B pretrained model, aligned to safeguard against the MLCommons standardized hazards taxonomy and designed to support Llama 3.1 capabilities.
|
| 22 |
* **Prompt Guard:** a mDeBERTa-v3-base (86M backbone parameters and 192M word embedding parameters) fine-tuned multi-label model that categorizes input strings into 3 categories - benign, injection, and jailbreak. It is suitable to run as a filter prior to each call to an LLM in an application.
|
| 23 |
-
|
| 24 |
-
History:
|
| 25 |
* **Llama 2:** a collection of pretrained and fine-tuned text models ranging in scale from 7 billion to 70 billion parameters.
|
| 26 |
* **Code Llama:** a collection of code-specialized versions of Llama 2 in three flavors (base model, Python specialist, and instruct tuned).
|
| 27 |
* **Llama Guard:** a 8B Llama 3 safeguard model for classifying LLM inputs and responses.
|
|
|
|
| 16 |
In this organization, you can find models in both the original Meta format as well as the Hugging Face transformers format. You can find:
|
| 17 |
|
| 18 |
Current:
|
| 19 |
+
* **Llama 3.2:** The Meta Llama 3.2 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out).
|
| 20 |
+
* **Llama 3.2 Vision:** The Llama 3.2-Vision collection of multimodal large language models (LLMs) is a collection of pretrained and instruction-tuned image reasoning generative models in 11B and 90B sizes (text + images in / text out)
|
| 21 |
+
|
| 22 |
+
History:
|
| 23 |
* **Llama 3.1:** a collection of pretrained and fine-tuned text models with sizes ranging from 8 billion to 405 billion parameters pre-trained on ~15 trillion tokens.
|
| 24 |
* **Llama 3.1 Evals:** a collection that provides detailed information on how we derived the reported benchmark metrics for the Llama 3.1 models, including the configurations, prompts and model responses used to generate evaluation results.
|
| 25 |
* **Llama Guard 3:** a Llama-3.1-8B pretrained model, aligned to safeguard against the MLCommons standardized hazards taxonomy and designed to support Llama 3.1 capabilities.
|
| 26 |
* **Prompt Guard:** a mDeBERTa-v3-base (86M backbone parameters and 192M word embedding parameters) fine-tuned multi-label model that categorizes input strings into 3 categories - benign, injection, and jailbreak. It is suitable to run as a filter prior to each call to an LLM in an application.
|
|
|
|
|
|
|
| 27 |
* **Llama 2:** a collection of pretrained and fine-tuned text models ranging in scale from 7 billion to 70 billion parameters.
|
| 28 |
* **Code Llama:** a collection of code-specialized versions of Llama 2 in three flavors (base model, Python specialist, and instruct tuned).
|
| 29 |
* **Llama Guard:** a 8B Llama 3 safeguard model for classifying LLM inputs and responses.
|