Upload README.md with huggingface_hub
README.md CHANGED
@@ -1,21 +1,12 @@
---
-language:
-- en
-- de
-- fr
-- it
-- pt
-- hi
-- es
-- th
-library_name: transformers
-pipeline_tag: text-generation
tags:
-- facebook
-- meta
-- pytorch
-- llama
-- llama-3
license: llama3.2
extra_gated_prompt: >-
### LLAMA 3.2 COMMUNITY LICENSE AGREEMENT
@@ -163,7 +154,7 @@ extra_gated_prompt: >-
4. Collect, process, disclose, generate, or infer private or sensitive information about individuals, including information about individuals’ identity, health, or demographic information, unless you have obtained the right to do so in accordance with applicable law
5. Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama Materials
6. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system
-7. Engage in any action, or facilitate any action, to intentionally circumvent or remove usage restrictions or other safety measures, or to enable functionality disabled by Meta
2. Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Llama 3.2 related to the following:
8. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State or to the U.S. Biological Weapons Anti-Terrorism Act of 1989 or the Chemical Weapons Convention Implementation Act of 1997
9. Guns and illegal weapons (including weapon development)
@@ -177,7 +168,7 @@ extra_gated_prompt: >-
16. Generating, promoting, or further distributing spam
17. Impersonating another individual without consent, authorization, or legal right
18. Representing that the use of Llama 3.2 or outputs are human-generated
-19. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement
4. Fail to appropriately disclose to end users any known dangers of your AI system
5. Interact with third party tools, models, or software designed to generate unlawful content or engage in unlawful or harmful conduct and/or represent that the outputs of such tools, models, or software are associated with Meta or Llama 3.2
@@ -219,219 +210,137 @@ extra_gated_description: >-
extra_gated_button_content: Submit
---

-The Llama 3.2 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out). The Llama 3.2 instruction-tuned text only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks. They outperform many of the available open source and closed chat models on common industry benchmarks.
-
-**Model Developer:** Meta
-
-**Model Architecture:** Llama 3.2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
-
-| | Training Data | Params | Input modalities | Output modalities | Context Length | GQA | Shared Embeddings | Token count | Knowledge cutoff |
-| :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- |
-| Llama 3.2 (text only) | A new mix of publicly available online data. | 1B (1.23B) | Multilingual Text | Multilingual Text and code | 128k | Yes | Yes | Up to 9T tokens | December 2023 |
-| | | 3B (3.21B) | Multilingual Text | Multilingual Text and code | | | | | |
-| Llama 3.2 Quantized (text only) | A new mix of publicly available online data. | 1B (1.23B) | Multilingual Text | Multilingual Text and code | 8k | Yes | Yes | Up to 9T tokens | December 2023 |
-| | | 3B (3.21B) | Multilingual Text | Multilingual Text and code | | | | | |
-
-**Supported Languages:** English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai are officially supported. Llama 3.2 has been trained on a broader collection of languages than these 8 supported languages. Developers may fine-tune Llama 3.2 models for languages beyond these supported languages, provided they comply with the Llama 3.2 Community License and the Acceptable Use Policy. Developers are always expected to ensure that their deployments, including those that involve additional languages, are completed safely and responsibly.
-
-**Llama 3.2 Model Family:** Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability.
-
-**Model Release Date:** Sept 25, 2024
-
-**Status:** This is a static model trained on an offline dataset. Future versions may be released that improve model capabilities and safety.
-
-**License:** Use of Llama 3.2 is governed by the [Llama 3.2 Community License](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/LICENSE) (a custom, commercial license agreement).
-
-**Feedback:** Instructions on how to provide feedback or comments on the model can be found in the Llama Models [README](https://github.com/meta-llama/llama-models/blob/main/README.md). For more technical information about generation parameters and recipes for how to use Llama 3.2 in applications, please go [here](https://github.com/meta-llama/llama-recipes).
-
-## Intended Use
-
-**Intended Use Cases:** Llama 3.2 is intended for commercial and research use in multiple languages. Instruction tuned text only models are intended for assistant-like chat and agentic applications like knowledge retrieval and summarization, mobile AI powered writing assistants and query and prompt rewriting. Pretrained models can be adapted for a variety of additional natural language generation tasks. Similarly, quantized models can be adapted for a variety of on-device use-cases with limited compute resources.
-
-**Out of Scope:** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3.2 Community License. Use in languages beyond those explicitly referenced as supported in this model card.
-
-## Hardware and Software (Original Model)
-
-**Training Factors:** We used custom training libraries, Meta's custom built GPU cluster, and production infrastructure for pretraining. Fine-tuning, quantization, annotation, and evaluation were also performed on production infrastructure.
-
-**Training Energy Use:** Training utilized a cumulative of **916k** GPU hours of computation on H100-80GB (TDP of 700W) type hardware, per the table below. Training time is the total GPU time required for training each model and power consumption is the peak power capacity per GPU device used, adjusted for power usage efficiency.
-
-**Training Greenhouse Gas Emissions:** Estimated total location-based greenhouse gas emissions were **240** tons CO2eq for training. Since 2020, Meta has maintained net zero greenhouse gas emissions in its global operations and matched 100% of its electricity use with renewable energy; therefore, the total market-based greenhouse gas emissions for training were 0 tons CO2eq.
-
-| | Training Time (GPU hours) | Logit Generation Time (GPU Hours) | Training Power Consumption (W) | Training Location-Based Greenhouse Gas Emissions (tons CO2eq) | Training Market-Based Greenhouse Gas Emissions (tons CO2eq) |
-| :---- | :---: | ----- | :---: | :---: | :---: |
-| Llama 3.2 1B | 370k | \- | 700 | 107 | 0 |
-| Llama 3.2 3B | 460k | \- | 700 | 133 | 0 |
-| Llama 3.2 1B SpinQuant | 1.7 | 0 | 700 | *Negligible*\*\* | 0 |
-| Llama 3.2 3B SpinQuant | 2.4 | 0 | 700 | *Negligible*\*\* | 0 |
-| Llama 3.2 1B QLora | 1.3k | 0 | 700 | 0.381 | 0 |
-| Llama 3.2 3B QLora | 1.6k | 0 | 700 | 0.461 | 0 |
-| Total | 833k | 86k | | 240 | 0 |
-
-\*\* The location-based CO2e emissions of Llama 3.2 1B SpinQuant and Llama 3.2 3B SpinQuant are less than 0.001 metric tonnes each. This is due to the minimal training GPU hours that are required.
-
-The methodology used to determine training energy use and greenhouse gas emissions can be found [here](https://arxiv.org/pdf/2204.05149). Since Meta is openly releasing these models, the training energy use and greenhouse gas emissions will not be incurred by others.
-
-## Training Data
-
-**Overview:** Llama 3.2 was pretrained on up to 9 trillion tokens of data from publicly available sources. For the 1B and 3B Llama 3.2 models, we incorporated logits from the Llama 3.1 8B and 70B models into the pretraining stage of the model development, where outputs (logits) from these larger models were used as token-level targets. Knowledge distillation was used after pruning to recover performance. In post-training we used a similar recipe as Llama 3.1 and produced final chat models by doing several rounds of alignment on top of the pre-trained model. Each round involved Supervised Fine-Tuning (SFT), Rejection Sampling (RS), and Direct Preference Optimization (DPO).
-
-**Data Freshness:** The pretraining data has a cutoff of December 2023\.
-
-## Quantization (Original Model)
-
-### Quantization Scheme (Original Model)
-
-We designed the current quantization scheme with the [PyTorch’s ExecuTorch](https://github.com/pytorch/executorch) inference framework and Arm CPU backend in mind, taking into account metrics including model quality, prefill/decoding speed, and memory footprint. Our quantization scheme involves three parts:
-- All linear layers in all transformer blocks are quantized to a 4-bit groupwise scheme (with a group size of 32) for weights and 8-bit per-token dynamic quantization for activations.
-- The classification layer is quantized to 8-bit per-channel for weight and 8-bit per token dynamic quantization for activation.
-- Similar to classification layer, an 8-bit per channel quantization is used for embedding layer.
-
-### Quantization-Aware Training and LoRA (Original Model)
-
-The quantization-aware training (QAT) with low-rank adaptation (LoRA) models went through only post-training stages, using the same data as the full precision models. To initialize QAT, we utilize BF16 Llama 3.2 model checkpoints obtained after supervised fine-tuning (SFT) and perform an additional full round of SFT training with QAT. We then freeze the backbone of the QAT model and perform another round of SFT with LoRA adaptors applied to all layers within the transformer block. Meanwhile, the LoRA adaptors' weights and activations are maintained in BF16. Because our approach is similar to QLoRA of Dettmers et al., (2023) (i.e., quantization followed by LoRA adapters), we refer this method as QLoRA. Finally, we fine-tune the resulting model (both backbone and LoRA adaptors) using direct preference optimization (DPO).
-
-### SpinQuant (Original Model)
-
-[SpinQuant](https://arxiv.org/abs/2405.16406) was applied, together with generative post-training quantization (GPTQ). For the SpinQuant rotation matrix fine-tuning, we optimized for 100 iterations, using 800 samples with sequence-length 2048 from the WikiText 2 dataset. For GPTQ, we used 128 samples from the same dataset with the same sequence-length.
-
-## Benchmarks \- English Text (Original Model)
-
-In this section, we report the results for Llama 3.2 models on standard automatic benchmarks. For all these evaluations, we used our internal evaluations library.
-
-### Base Pretrained Models (Original Model)
-
-| Category | Benchmark | \# Shots | Metric | Llama 3.2 1B | Llama 3.2 3B | Llama 3.1 8B |
-| ----- | ----- | :---: | :---: | :---: | :---: | :---: |
-| General | MMLU | 5 | macro\_avg/acc\_char | 32.2 | 58 | 66.7 |
-| | AGIEval English | 3-5 | average/acc\_char | 23.3 | 39.2 | 47.8 |
-| | ARC-Challenge | 25 | acc\_char | 32.8 | 69.1 | 79.7 |
-| Reading comprehension | SQuAD | 1 | em | 49.2 | 67.7 | 77 |
-| | QuAC (F1) | 1 | f1 | 37.9 | 42.9 | 44.9 |
-| | DROP (F1) | 3 | f1 | 28.0 | 45.2 | 59.5 |
-| Long Context | Needle in Haystack | 0 | em | 96.8 | 1 | 1 |

-| :---: | ----- | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
-| General | | MMLU | 5 | macro\_avg/acc | 49.3 | 43.3 | 47.3 | 49.0 | 63.4 | 60.5 | 62 | 62.4 | 69.4 |
-| Re-writing | | Open-rewrite eval | 0 | micro\_avg/rougeL | 41.6 | 39.2 | 40.9 | 41.2 | 40.1 | 40.3 | 40.8 | 40.7 | 40.9 |
-| Summarization | | TLDR9+ (test) | 1 | rougeL | 16.8 | 14.9 | 16.7 | 16.8 | 19.0 | 19.1 | 19.2 | 19.1 | 17.2 |
-| Instruction following | | IFEval | 0 | Avg(Prompt/Instruction acc Loose/Strict) | 59.5 | 51.5 | 58.4 | 55.6 | 77.4 | 73.9 | 73.5 | 75.9 | 80.4 |
-| Math | | GSM8K (CoT) | 8 | em\_maj1@1 | 44.4 | 33.1 | 40.6 | 46.5 | 77.7 | 72.9 | 75.7 | 77.9 | 84.5 |
-| | | MATH (CoT) | 0 | final\_em | 30.6 | 20.5 | 25.3 | 31.0 | 48.0 | 44.2 | 45.3 | 49.2 | 51.9 |
-| Reasoning | | ARC-C | 0 | acc | 59.4 | 54.3 | 57 | 60.7 | 78.6 | 75.6 | 77.6 | 77.6 | 83.4 |
-| | | GPQA | 0 | acc | 27.2 | 25.9 | 26.3 | 25.9 | 32.8 | 32.8 | 31.7 | 33.9 | 32.8 |
-| | | Hellaswag | 0 | acc | 41.2 | 38.1 | 41.3 | 41.5 | 69.8 | 66.3 | 68 | 66.3 | 78.7 |
-| Tool Use | | BFCL V2 | 0 | acc | 25.7 | 14.3 | 15.9 | 23.7 | 67.0 | 53.4 | 60.1 | 63.5 | 67.1 |
-| | | Nexus | 0 | macro\_avg/acc | 13.5 | 5.2 | 9.6 | 12.5 | 34.3 | 32.4 | 31.5 | 30.1 | 38.5 |
-| Long Context | | InfiniteBench/En.QA | 0 | longbook\_qa/f1 | 20.3 | N/A | N/A | N/A | 19.8 | N/A | N/A | N/A | 27.3 |
-| | | InfiniteBench/En.MC | 0 | longbook\_choice/acc | 38.0 | N/A | N/A | N/A | 63.3 | N/A | N/A | N/A | 72.2 |
-| | | NIH/Multi-needle | 0 | recall | 75.0 | N/A | N/A | N/A | 84.7 | N/A | N/A | N/A | 98.8 |
-| Multilingual | | MGSM (CoT) | 0 | em | 24.5 | 13.7 | 18.2 | 24.4 | 58.2 | 48.9 | 54.3 | 56.8 | 68.9 |

-| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
-| General | MMLU (5-shot, macro_avg/acc) | Portuguese | 39.8 | 34.9 | 38.9 | 40.2 | 54.5 | 50.9 | 53.3 | 53.4 | 62.1 |
-| | | Spanish | 41.5 | 36.0 | 39.8 | 41.8 | 55.1 | 51.9 | 53.6 | 53.6 | 62.5 |
-| | | Italian | 39.8 | 34.9 | 38.1 | 40.6 | 53.8 | 49.9 | 52.1 | 51.7 | 61.6 |
-| | | German | 39.2 | 34.9 | 37.5 | 39.6 | 53.3 | 50.0 | 52.2 | 51.3 | 60.6 |
-| | | French | 40.5 | 34.8 | 39.2 | 40.8 | 54.6 | 51.2 | 53.3 | 53.3 | 62.3 |
-| | | Hindi | 33.5 | 30.0 | 32.1 | 34.0 | 43.3 | 40.4 | 42.0 | 42.1 | 50.9 |
-| | | Thai | 34.7 | 31.2 | 32.4 | 34.9 | 44.5 | 41.3 | 44.0 | 42.2 | 50.3 |

-## Inference time (Original Model)
-
-| :---- | ----- | ----- | ----- | ----- | ----- |
-| 1B BF16 (baseline) | 19.2 | 1.0 | 60.3 | 2358 | 3,185 |
-| 1B SpinQuant | 50.2 (2.6x) | 0.3 (-76.9%) | 260.5 (4.3x) | 1083 (-54.1%) | 1,921 (-39.7%) |
-| 1B QLoRA | 45.8 (2.4x) | 0.3 (-76.0%) | 252.0 (4.2x) | 1127 (-52.2%) | 2,255 (-29.2%) |
-| 3B BF16 (baseline) | 7.6 | 3.0 | 21.2 | 6129 | 7,419 |
-| 3B SpinQuant | 19.7 (2.6x) | 0.7 (-76.4%) | 89.7 (4.2x) | 2435 (-60.3%) | 3,726 (-49.8%) |
-| 3B QLoRA | 18.5 (2.4x) | 0.7 (-76.1%) | 88.8 (4.2x) | 2529 (-58.7%) | 4,060 (-45.3%) |
-
-- *Time-to-first-token (TTFT for shorthand) is for how fast it generates the first token for a given prompt. Lower is better.*
-- *Prefill is the inverse of TTFT (aka 1/TTFT) in tokens/second. Higher is better*
-- *Model size \- how big is the model, measured by, PTE file, a binary file format for ExecuTorch*
-- *RSS size \- Memory usage in resident set size (RSS)*
-
-**Grants:** We also set up the [Llama Impact Grants](https://llama.meta.com/llama-impact-grants/) program to identify and support the most compelling applications of Meta’s Llama model for societal benefit across three categories: education, climate and open innovation. The 20 finalists from the hundreds of applications can be found [here](https://llama.meta.com/llama-impact-grants/#finalists).
---
+library_name: llima
tags:
+- llm
+- generative_ai
+- embedded
+- sima
+pipeline_tag: text-generation
+base_model: meta-llama/Llama-3.2-3B-Instruct
license: llama3.2
extra_gated_prompt: >-
### LLAMA 3.2 COMMUNITY LICENSE AGREEMENT

4. Collect, process, disclose, generate, or infer private or sensitive information about individuals, including information about individuals’ identity, health, or demographic information, unless you have obtained the right to do so in accordance with applicable law
5. Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama Materials
6. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system
+7. Engage in any action, or facilitate any action, to intentionally circumvent or remove usage restrictions or other safety measures, or to enable functionality disabled by Meta
2. Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Llama 3.2 related to the following:
8. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State or to the U.S. Biological Weapons Anti-Terrorism Act of 1989 or the Chemical Weapons Convention Implementation Act of 1997
9. Guns and illegal weapons (including weapon development)

16. Generating, promoting, or further distributing spam
17. Impersonating another individual without consent, authorization, or legal right
18. Representing that the use of Llama 3.2 or outputs are human-generated
+19. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement
4. Fail to appropriately disclose to end users any known dangers of your AI system
5. Interact with third party tools, models, or software designed to generate unlawful content or engage in unlawful or harmful conduct and/or represent that the outputs of such tools, models, or software are associated with Meta or Llama 3.2

extra_gated_button_content: Submit
---

+# Llama-3.2-3B-Instruct: Optimized for SiMa.ai Modalix

+## Overview

+This repository contains the **Llama-3.2-3B-Instruct** model, optimized and compiled for the **SiMa.ai Modalix** platform for **text-only** inference.

+- **Model Architecture:** Llama 3.2 is an auto-regressive language model that uses an optimized transformer architecture (3.21B parameters).
+- **Quantization:** Hybrid
+  - **Prompt Processing:** A16W8 (16-bit activations, 8-bit weights)
+  - **Token Generation:** A16W4 (16-bit activations, 4-bit weights)
+- **Maximum context length:** 2048 tokens
+- **Source Model:** [meta-llama/Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct)

+## Performance

+The following performance metrics were measured with an input sequence length of 128 tokens.

+| Model | Precision | Device | Response Rate (tokens/sec) | Time To First Token (sec) |
+|:---:|:---:|:---:|:---:|:---:|
+| Llama-3.2-3B-Instruct | A16W8/A16W4 | Modalix | 19.2 | 0.12 |

+## Prerequisites

+To run this model, you need:

+1. **SiMa.ai Modalix Device**
+2. **SiMa.ai CLI**: [Installed](https://docs.sima.ai/pages/sima_cli/main.html#installation) on your Modalix device.
+3. **Hugging Face CLI**: For downloading the model (install sketch below).
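If the Hugging Face CLI is not already set up, something along these lines should work (a sketch; the `hf` command ships with recent `huggingface_hub` releases, and a login is only needed for gated repositories such as the Meta source model):

```bash
# Install the Hugging Face CLI (provides the `hf` command used below)
pip install -U "huggingface_hub[cli]"

# Authenticate if the repository you are downloading is gated
hf auth login
```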

+## Installation & Deployment

+Follow these steps to deploy the model to your Modalix device.

+### 1. Install LLiMa Demo Application

+> **Note:** This is a **one-time setup**. If you have already installed the LLiMa demo application (e.g., for another model), you can skip this step and continue with the model download.

+On your Modalix device, install the LLiMa demo application using `sima-cli`:

+```bash
+# Create a directory for LLiMa
+cd /media/nvme
+mkdir llima
+cd llima
+
+# Install the LLiMa runtime code
+sima-cli install -v 2.0.0 samples/llima -t select
+```

+> **Note:** To only download the LLiMa runtime code, select **🚫 Skip** when prompted.

+### 2. Download the Model

+Download the compiled model assets from this repository directly to your device.

+```bash
+# Download the model to a local directory
+cd /media/nvme/llima
+hf download meta-llama/Llama-3.2-3B-Instruct --local-dir Llama-3.2-3B-Instruct-a16w4
+```

+Alternatively, you can download the compiled model to a host machine and copy it to the Modalix device:

+```bash
+hf download meta-llama/Llama-3.2-3B-Instruct --local-dir Llama-3.2-3B-Instruct-a16w4
+scp -r Llama-3.2-3B-Instruct-a16w4 sima@<modalix-ip>:/media/nvme/llima/
+```

+*Replace \<modalix-ip\> with the IP address of your Modalix device.*

+**Expected Directory Structure:**

+```text
+/media/nvme/llima/
+├── simaai-genai-demo/               # The demo app
+└── Llama-3.2-3B-Instruct-a16w4/     # Your downloaded model
+```

+## Usage

+### Run the Application

+Navigate to the demo directory and start the application:

+```bash
+cd /media/nvme/llima/simaai-genai-demo
+./run.sh
+```

+The script will detect the installed model(s) and prompt you to select one.

+Once the application is running, open a browser and navigate to:

+```text
+https://<modalix-ip>:5000/
+```

+*Replace \<modalix-ip\> with the IP address of your Modalix device.*

+### API Usage

+To use the OpenAI-compatible API, run the model in API mode:

+```bash
+cd /media/nvme/llima/simaai-genai-demo
+./run.sh --httponly --api-only
+```

+You can interact with it using `curl` or Python.

+**Example: Chat Completion**

+```bash
+curl -N -k -X POST "https://<modalix-ip>:5000/v1/chat/completions" \
+  -H "Content-Type: application/json" \
+  -d '{
+    "messages": [
+      { "role": "user", "content": "Why is the sky blue?" }
+    ],
+    "stream": true
+  }'
+```

+*Replace \<modalix-ip\> with the IP address of your Modalix device.*
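For Python, a minimal streaming client might look like the sketch below. It assumes the endpoint streams OpenAI-style server-sent events and uses `verify=False` as the equivalent of `curl -k` for the device's self-signed certificate; replace `<modalix-ip>` as above.

```python
import json

import requests
import urllib3

# Suppress the warning that verify=False would otherwise print for the
# device's self-signed certificate.
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

url = "https://<modalix-ip>:5000/v1/chat/completions"  # replace <modalix-ip>
payload = {
    "messages": [{"role": "user", "content": "Why is the sky blue?"}],
    "stream": True,
}

with requests.post(url, json=payload, stream=True, verify=False) as resp:
    resp.raise_for_status()
    for raw_line in resp.iter_lines():
        if not raw_line:
            continue
        line = raw_line.decode("utf-8")
        if not line.startswith("data:"):
            continue
        data = line[len("data:"):].strip()
        if data == "[DONE]":
            break
        chunk = json.loads(data)
        # OpenAI-style streaming chunks carry incremental text in choices[0].delta
        delta = chunk["choices"][0].get("delta", {})
        print(delta.get("content", ""), end="", flush=True)
print()
```

With `"stream": false`, the same endpoint should return a single JSON body that can be read with `resp.json()` instead.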

+## Limitations

+- **Quantization**: This model is quantized (A16W4/A16W8) for optimal performance on embedded devices. While this maintains high accuracy, minor deviations from the full-precision model may occur.

+## Troubleshooting

+- **`sima-cli` not found**: Ensure that `sima-cli` is installed on your Modalix device.
+- **Model can't be run**: Verify the model directory sits directly inside `/media/nvme/llima/` and is not nested one level deeper (e.g., `/media/nvme/llima/Llama-3.2-3B-Instruct-a16w4/Llama-3.2-3B-Instruct-a16w4`); see the quick check below.
+- **Permission Denied**: Ensure you have read/write permissions for the `/media/nvme` directory.
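As a quick check for the last two items (paths assumed from the layout shown earlier), the following confirms the model folder sits at the top level of `/media/nvme/llima/` and that the mount is writable:

```bash
# The model directory should appear directly under /media/nvme/llima/
ls -la /media/nvme/llima/

# Confirm the current user can write to the NVMe mount
touch /media/nvme/llima/.write-test && rm /media/nvme/llima/.write-test
```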

+## Resources

+- [SiMa.ai Documentation](https://docs.sima.ai)
+- [SiMa.ai Hugging Face Organization](https://huggingface.co/simaai)