EVALUATION-ONLY ACCESS (30-DAY TESTING)
This is a private evaluation version of LlamaOra-6.4B-Instruct.
By agreeing, you accept:
- 30-day internal testing only
- No commercial use, redistribution, or reverse-engineering
- Deletion of all files after evaluation
- Full terms in the LICENSE file
Access is granted only to approved licensees.
LlamaOra-6.4B-Instruct
This repository contains the LlamaOra-6.4B-Instruct model developed by Ora Computing. This is a compressed and fine-tuned derivative of the Llama 3.1‑8B‑Instruct model (Built with Llama).
Model Overview
Model name: LlamaOra-6.4B-Instruct
Base model: Llama 3.1‑8B‑Instruct
Derived size: ~6.4 billion parameters (compressed from the base model’s ~8.0 billion)
Purpose: Evaluation/test‑use only; optimized for internal benchmarking and non‑production integration.
License: See LICENSE (Custom Model License Agreement)
Intended Use & Restrictions
Permitted use
- Internal testing, benchmarking and evaluation of the model by the named Licensee.
- Exploration of model behaviours, prompt engineering, and non‑production prototypes.
Prohibited use
- Deployment in a production or commercial service, a public-facing API, or any resale or redistribution.
- Fine‑tuning or creating derivative models for production use without separate agreement.
- Disclosure or sharing of the model (or its weights) to third parties beyond the named Licensee.
Out‑of‑scope use
- Any use that triggers the “Additional Commercial Terms” of the Llama 3.1 Community License (e.g., >700 million monthly active users).
- Use of the model in regulated or safety‑critical contexts (unless separately permitted).
Accuracy
| Benchmark | Llama-3.1-8B-Instruct (base model) | LlamaOra-6.4B-Instruct (this model) | Recovery |
|---|---|---|---|
| HumanEval (0-shot) | 69.5 | 68.3 | 98.27% |
| IFEval (0-shot) | 73.6 | 73.8 | 100.27% |
| GSM8K (8-shot) | 81.6 | 80.4 | 98.53% |
| Average | 74.90 | 74.17 | 99.02% |
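The “Recovery” column is simply the compressed model’s score divided by the base model’s score. A minimal sketch that reproduces the table’s recovery figures and averages (scores copied from the table above):

```python
# Benchmark scores copied from the table above:
# (base Llama-3.1-8B-Instruct, compressed LlamaOra-6.4B-Instruct)
scores = {
    "HumanEval (0-shot)": (69.5, 68.3),
    "IFEval (0-shot)": (73.6, 73.8),
    "GSM8K (8-shot)": (81.6, 80.4),
}

# Recovery = compressed score / base score, expressed as a percentage.
for name, (base, compressed) in scores.items():
    print(f"{name}: recovery = {100 * compressed / base:.2f}%")

base_avg = sum(b for b, _ in scores.values()) / len(scores)
comp_avg = sum(c for _, c in scores.values()) / len(scores)
print(f"Average: {base_avg:.2f} vs {comp_avg:.2f}, "
      f"recovery = {100 * comp_avg / base_avg:.2f}%")
```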
Compression
- Parameter count compression: The parameter count was reduced from ~8.0 billion to ~6.4 billion by compressing the MLP layers.
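The card does not state exactly how the MLP layers were compressed. As a rough sanity check only, assuming the base model matches the published Llama 3.1 8B shape (32 layers, hidden size 4096, MLP intermediate size 14336, with gate/up/down projections), the following sketch estimates how large the MLP blocks are and what a ~1.6 billion-parameter reduction would imply if taken entirely from them:

```python
# Assumption: published Llama 3.1 8B architecture values; the card does
# not specify how the compression was performed.
num_layers = 32
hidden = 4096
intermediate = 14336

# Each MLP block has gate and up projections (hidden -> intermediate)
# plus a down projection (intermediate -> hidden): 3 * hidden * intermediate.
mlp_params_per_layer = 3 * hidden * intermediate
total_mlp_params = num_layers * mlp_params_per_layer
print(f"MLP parameters in the base model: {total_mlp_params / 1e9:.2f}B")

# A ~8.0B -> ~6.4B reduction removes roughly 1.6B parameters. If taken
# entirely from the MLPs, the intermediate size would shrink to about:
removed = 1.6e9
new_intermediate = intermediate - removed / (3 * hidden * num_layers)
print(f"Implied intermediate size after compression: ~{new_intermediate:.0f}")
```

This shows the MLP blocks hold well over half of the base model’s parameters, so a ~20% overall reduction concentrated there is plausible; the actual method used by Ora Computing may differ.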
Limitations & Risks
- Compressed models may not replicate the full behaviour of the base model under all prompt categories, particularly domain‑specific or rare inputs.
- The model is provided as‑is for testing only and is not certified for production use.
- Users should validate outputs carefully and monitor for bias or unintended behaviours.
Upstream Attribution
This model is derived from the Llama 3.1 model family released by Meta Platforms, Inc. under the Llama 3.1 Community License.
“Llama 3.1 is licensed under the Llama 3.1 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved.”
For full terms, see: https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/LICENSE
Contact & Support
For licensing inquiries or to request extended evaluation rights, please contact:
info@oracomputing.com
Repository and model access are regulated. Do not redistribute or share without explicit written permission from Ora Computing.