---
library_name: pytorch
license: other
tags:
- llm
- generative_ai
- android
pipeline_tag: text-generation
---

![](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/llama_v3_2_3b_instruct/web-assets/model_demo.png)

# Llama-v3.2-3B-Instruct: Optimized for Qualcomm Devices

Llama 3 is a family of large language models (LLMs). The model is quantized to w4a16 (4-bit weights and 16-bit activations), with part of the model quantized to w8a16 (8-bit weights and 16-bit activations), making it suitable for on-device deployment.

This is based on the implementation of Llama-v3.2-3B-Instruct found [here](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct/). The model has been optimized for Qualcomm® devices; you can use the [Qualcomm® AI Hub Models](https://github.com/quic/ai-hub-models/blob/main/qai_hub_models/models/llama_v3_2_3b_instruct) library to export it with custom configurations. More details on model performance across various devices can be found [here](#performance-summary).

Qualcomm AI Hub Models uses [Qualcomm AI Hub Workbench](https://workbench.aihub.qualcomm.com) to compile, profile, and evaluate this model. [Sign up](https://myaccount.qualcomm.com/signup) to run these models on a hosted Qualcomm® device.

## Deploying Llama 3.2 3B on-device

Please follow the [LLM on-device deployment](https://github.com/quic/ai-hub-apps/tree/main/tutorials/llm_on_genie) tutorial.

## Getting Started

Due to licensing restrictions, we cannot distribute pre-exported model assets for this model. Use the [Qualcomm® AI Hub Models](https://github.com/quic/ai-hub-models/blob/main/qai_hub_models/models/llama_v3_2_3b_instruct) Python library to compile and export the model with your own:

- Custom weights (e.g., fine-tuned checkpoints)
- Custom input shapes
- Target device and runtime configurations

See the [Llama-v3.2-3B-Instruct repository on GitHub](https://github.com/quic/ai-hub-models/blob/main/qai_hub_models/models/llama_v3_2_3b_instruct) for usage instructions. A minimal Python sketch of this export flow is shown after the Model Details section below.

## Model Details

**Model Type:** Text generation

**Model Stats:**

- Input sequence length for Prompt Processor: 128
- Maximum context length: 4096
- Quantization Type: two schemes are supported: w4 + w8 (a few layers) with fp16 activations, and w4a16 + w8a16 (a few layers)
- Supported languages: English
- TTFT: Time To First Token is the time it takes to generate the first response token. This is expressed as a range because it varies based on the length of the prompt. The lower bound is for a short prompt (up to 128 tokens, i.e., one iteration of the prompt processor) and the upper bound is for a prompt using the full context length (4096 tokens).
- Response Rate: Rate of response generation after the first response token.
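The sketch below illustrates the export flow referenced in Getting Started. It is a minimal, non-authoritative example: the module path follows the `qai_hub_models.models.<model_id>.export` pattern used across the AI Hub Models repository, and the `--device` flag and device name shown are assumptions that should be replaced with your actual target; see the GitHub repository linked above for the supported options.

```python
# Minimal sketch (not an official recipe): invoking the export entry point
# that ships with Qualcomm AI Hub Models for this model. This is equivalent
# to running `python -m ...` from a shell. The "--device" flag and device
# name are assumed examples; replace them with your actual target.
import subprocess
import sys

subprocess.run(
    [
        sys.executable,
        "-m",
        "qai_hub_models.models.llama_v3_2_3b_instruct.export",
        "--device",
        "Snapdragon 8 Elite QRD",  # assumed device name; choose your target
    ],
    check=True,  # raise CalledProcessError if the export script fails
)
```

Running the export requires the `qai-hub-models` package, a configured Qualcomm AI Hub account (see the sign-up link above), and access to the source checkpoint, since no pre-exported assets are distributed here; the resulting artifacts can then be deployed by following the Genie-based tutorial linked in the deployment section.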
## Performance Summary

| Model | Runtime | Precision | Chipset | Context Length | Response Rate (tokens per second) | Time To First Token (range, seconds) |
|---|---|---|---|---|---|---|
| Llama-v3.2-3B-Instruct | GENIE | w4 | Snapdragon® 8 Elite Mobile | 4096 | 13.83 | 0.088195 - 2.82225 |
| Llama-v3.2-3B-Instruct | GENIE | w4 | Qualcomm® SA8295P | 1024 | 3.523 | 0.373117 - 2.984936 |
| Llama-v3.2-3B-Instruct | GENIE | w4 | Snapdragon® 8 Elite Gen 5 Mobile | 4096 | 18.00883 | 0.131546 - 4.209475 |
| Llama-v3.2-3B-Instruct | GENIE | w4a16 | Snapdragon® 8 Elite Mobile | 4096 | 28.03 | 0.082049 - 2.625568 |
| Llama-v3.2-3B-Instruct | GENIE | w4a16 | Snapdragon® X Elite | 4096 | 11.87 | 0.116884 - 3.740288 |
| Llama-v3.2-3B-Instruct | GENIE | w4a16 | Qualcomm® SA8775P | 4096 | 17.47 | 0.109614 - 3.507648 |
| Llama-v3.2-3B-Instruct | GENIE | w4a16 | Snapdragon® 8 Elite Gen 5 Mobile | 4096 | 32.65 | 0.068954 - 2.206528 |
| Llama-v3.2-3B-Instruct | GENIE | w4a16 | Snapdragon® X2 Elite | 4096 | 42.77 | 0.075045 - 2.40144 |

## License

* The license for the original implementation of Llama-v3.2-3B-Instruct can be found [here](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct/blob/main/LICENSE.txt).

## References

* [LLaMA: Open and Efficient Foundation Language Models](https://www.llama.com/docs/model-cards-and-prompt-formats/llama3_2/)
* [Source Model Implementation](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct/)

## Community

* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
* For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).

## Usage and Limitations

This model may not be used for or in connection with any of the following applications:

- Accessing essential private and public services and benefits;
- Administration of justice and democratic processes;
- Assessing or recognizing the emotional state of a person;
- Biometric and biometrics-based systems, including categorization of persons based on sensitive characteristics;
- Education and vocational training;
- Employment and workers management;
- Exploitation of the vulnerabilities of persons resulting in harmful behavior;
- General purpose social scoring;
- Law enforcement;
- Management and operation of critical infrastructure;
- Migration, asylum and border control management;
- Predictive policing;
- Real-time remote biometric identification in public spaces;
- Recommender systems of social media platforms;
- Scraping of facial images (from the internet or otherwise); and/or
- Subliminal manipulation