---
inference: false
license: cc
datasets:
- VMware/open-instruct-v1-oasst-dolly-hhrlhf
language:
- en
library_name: transformers
pipeline_tag: text-generation
---

# blackmount8/open-llama-7B-open-instruct-ct2-float16

Float16 version of [VMware/open-llama-7b-open-instruct](https://huggingface.co/VMware/open-llama-7b-open-instruct), quantized using CTranslate2.
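
For reference, conversions like this one are produced with the `ct2-transformers-converter` command line tool or the equivalent Python converter API. A minimal sketch of such a float16 conversion (the exact options used for this repository are an assumption):

```
from ctranslate2.converters import TransformersConverter

# Convert the original Transformers checkpoint to CTranslate2 format,
# storing the weights in float16.
converter = TransformersConverter("VMware/open-llama-7b-open-instruct")
converter.convert("open-llama-7b-open-instruct-ct2-float16", quantization="float16")
```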

## VMware/open-llama-7B-open-instruct

Instruction-tuned version of the fully trained Open LLaMA 7B model. The model is open for **COMMERCIAL USE**.

**NOTE**: The model was trained using the Alpaca prompt template (see the sketch below).
**NOTE**: The fast tokenizer produces incorrect encodings; set `use_fast=False` when instantiating the tokenizer.
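
Since generation quality depends on matching the training format, prompts should be wrapped in the Alpaca template. A sketch using the standard Alpaca format string (assumed here to match the training setup):

```
# Standard Alpaca prompt template (assumption: matches the training format).
prompt_template = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:"
)

prompt = prompt_template.format(instruction="What is the meaning of stonehenge?")
```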

## License

- **Commercially viable**
- The instruction dataset, [VMware/open-instruct-v1-oasst-dolly-hhrlhf](https://huggingface.co/datasets/VMware/open-instruct-v1-oasst-dolly-hhrlhf), is licensed under cc-by-sa-3.0
- The language model, [openlm-research/open_llama_7b](https://huggingface.co/openlm-research/open_llama_7b), is licensed under apache-2.0

## Nomenclature

- Model: Open-llama
- Model size: 7B parameters
- Dataset: Open-instruct-v1 (oasst, dolly, hhrlhf)

## Use in CTranslate2

```
import ctranslate2
from transformers import AutoTokenizer

model_name = "blackmount8/open-llama-7b-open-instruct-ct2-float16"

# Use the slow tokenizer (use_fast=False): the fast tokenizer produces
# incorrect encodings for this model. Left padding and truncation keep
# the prompts aligned for batched generation.
tokenizer = AutoTokenizer.from_pretrained(
    model_name, use_fast=False, padding_side="left", truncation_side="left"
)

# CTranslate2 loads the converted model from a local directory; if the
# files are not already on disk, download them first (see below).
model = ctranslate2.Generator(model_name, device="auto", compute_type="float16")

input_text = ["What is the meaning of stonehenge?", "Hello mate!"]

# CTranslate2 consumes token strings rather than token ids.
input_ids = tokenizer(input_text, return_tensors="pt", padding=True, truncation=True).input_ids
input_tokens = [tokenizer.convert_ids_to_tokens(ele) for ele in input_ids]

outputs = model.generate_batch(input_tokens, max_length=128)

# Each result holds the generated token ids for one prompt.
output_tokens = [ele.sequences_ids[0] for ele in outputs]

output = tokenizer.batch_decode(output_tokens)
print(output)
```
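
CTranslate2 does not pull models from the Hugging Face Hub by itself, so the model files need to be on disk before `ctranslate2.Generator` can load them. A minimal sketch using `huggingface_hub` (an extra dependency, only needed if the model directory does not already exist locally):

```
import ctranslate2
from huggingface_hub import snapshot_download

# Download the converted model to the local cache and get its path.
model_path = snapshot_download("blackmount8/open-llama-7b-open-instruct-ct2-float16")

# Point the generator at the local directory instead of the repo id.
model = ctranslate2.Generator(model_path, device="auto", compute_type="float16")
```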