---
base_model: Replete-AI/Llama-3-13B
license: other
license_name: llama-3
license_link: https://llama.meta.com/llama3/license/
thumbnail: "https://cdn-uploads.huggingface.co/production/uploads/642cc1c253e76b4c2286c58e/aJJxKus1wP5N-euvHEUq7.png"
library_name: transformers
tags:
- 4-bit
- AWQ
- text-generation
- autotrain_compatible
- endpoints_compatible
pipeline_tag: text-generation
inference: false
quantized_by: Suparious
---

# Replete-AI/Llama-3-13B AWQ

- Model creator: [Replete-AI](https://huggingface.co/Replete-AI)
- Original model: [Llama-3-13B](https://huggingface.co/Replete-AI/Llama-3-13B)

## Model Summary

This is the first version of the Llama-3 upscale. Version 2 is now available and does not have the issues present in this version. Please use version 2 instead, linked below:

- https://huggingface.co/Replete-AI/Llama-3-11.5B-v2

__________________________________________________________________

Llama-3-13B

Thank you to Meta for the Meta-Llama-3-8B weights.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/642cc1c253e76b4c2286c58e/aJJxKus1wP5N-euvHEUq7.png)

This is an upscaling of the Llama-3-8B model using techniques created for Mistral-Evolved-11b-v0.1. The model has been upscaled from 8B parameters to 13B parameters without any continued pretraining or fine-tuning. In testing, the model functions well at fp16, but it has some issues at 4-bit quantization using bitsandbytes.

The model used to create this one is linked below:

https://huggingface.co/meta-llama/Meta-Llama-3-8B
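Depth upscaling of this kind is commonly expressed as a mergekit passthrough merge that duplicates a range of the base model's transformer layers. The sketch below illustrates the general idea only: the layer ranges are illustrative assumptions, not the actual recipe used for this model, which is not published in this card.

```yaml
# Hypothetical mergekit passthrough config illustrating depth upscaling.
# The layer ranges below are assumptions for illustration; the actual
# recipe used to build Llama-3-13B may differ.
slices:
  - sources:
      - model: meta-llama/Meta-Llama-3-8B
        layer_range: [0, 24]     # keep the first 24 layers once
  - sources:
      - model: meta-llama/Meta-Llama-3-8B
        layer_range: [8, 32]     # repeat a middle-to-end span to add depth
merge_method: passthrough        # copy weights through with no averaging
dtype: bfloat16
```

With mergekit installed, a config like this would be run with `mergekit-yaml config.yml ./merged-model`. Because passthrough merging copies weights without any retraining, the repeated layers are the usual source of the quantization sensitivity noted above.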