---
base_model: cognitivecomputations/dolphin-2.8-experiment26-7b
language:
- en
license: apache-2.0
datasets:
- ehartford/dolphin
- jondurbin/airoboros-2.2.1
- ehartford/dolphin-coder
- teknium/openhermes
- m-a-p/Code-Feedback
tags:
- quantized
- 4-bit
- AWQ
- transformers
- pytorch
- mistral
- text-generation
- conversational
- license:apache-2.0
- autotrain_compatible
- endpoints_compatible
- text-gen
library_name: transformers
model_creator: cognitivecomputations
model_name: dolphin-2.8-experiment26-7b
model_type: mistral
pipeline_tag: text-generation
inference: false
prompt_template: '<|im_start|>system

{system_message}<|im_end|>

<|im_start|>user

{prompt}<|im_end|>

<|im_start|>assistant

'
quantized_by: Suparious
---
# cognitivecomputations/dolphin-2.8-experiment26-7b AWQ

- Model creator: [cognitivecomputations](https://huggingface.co/cognitivecomputations)
- Original model: [dolphin-2.8-experiment26-7b](https://huggingface.co/cognitivecomputations/dolphin-2.8-experiment26-7b)

<img src="https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/ldkN1J0WIDQwU4vutGYiD.png" width="600" />

## Model Summary

Sponsored by [MassedCompute](https://massedcompute.com/)

Discord: https://discord.gg/cognitivecomputations

This model is based on [Experiment-26 by Yam Peleg](https://huggingface.co/yam-peleg/Experiment26-7B).

The base model has a 16k context window.

This Dolphin is *really good* at coding; @ehartford trained it with a lot of coding data.

Training took 3 days for 3 epochs on 7x A6000s, using QLoRA on Axolotl.

## How to use

### Install the necessary packages

```bash
pip install --upgrade autoawq autoawq-kernels
```

### Example Python code

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer, TextStreamer

model_path = "solidrust/dolphin-2.8-experiment26-7b-AWQ"
system_message = "You are Dolphin, a helpful AI assistant."

# Load model
model = AutoAWQForCausalLM.from_quantized(model_path,
                                          fuse_layers=True)
tokenizer = AutoTokenizer.from_pretrained(model_path,
                                          trust_remote_code=True)
streamer = TextStreamer(tokenizer,
                        skip_prompt=True,
                        skip_special_tokens=True)

# Convert prompt to tokens
prompt_template = """\
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"""

prompt = "You're standing on the surface of the Earth. "\
         "You walk one mile south, one mile west and one mile north. "\
         "You end up exactly where you started. Where are you?"

tokens = tokenizer(prompt_template.format(system_message=system_message,
                                          prompt=prompt),
                   return_tensors='pt').input_ids.cuda()

# Generate output
generation_output = model.generate(tokens,
                                   streamer=streamer,
                                   max_new_tokens=512)
```

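The `TextStreamer` prints tokens to stdout as they are generated. `generate` also returns the full token sequence (prompt plus completion), so if you want the text as a plain string afterwards, a minimal follow-up might look like this (assuming the returned ids tensor behaves as in stock Transformers):

```python
# Decode the returned ids to a string after generation finishes.
# Note: generation_output[0] includes the prompt tokens as well.
output_text = tokenizer.decode(generation_output[0], skip_special_tokens=True)
print(output_text)
```
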
### About AWQ

AWQ is an efficient, accurate, and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. It offers faster Transformers-based inference than GPTQ, with quality equivalent to or better than the most commonly used GPTQ settings.

AWQ models are currently supported on Linux and Windows, with NVIDIA GPUs only. macOS users: please use GGUF models instead.

It is supported by:

- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later, with support for all model types (see the sketch below)
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code

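As one illustration of the list above, a minimal offline-inference sketch with vLLM might look like the following. The `quantization="awq"` flag and the sampling values are assumptions for illustration, not taken from this card:

```python
# A minimal vLLM sketch (assumes vllm >= 0.2.2 installed with AWQ support).
from vllm import LLM, SamplingParams

llm = LLM(model="solidrust/dolphin-2.8-experiment26-7b-AWQ",
          quantization="awq")

params = SamplingParams(temperature=0.7, max_tokens=256)

# ChatML-formatted prompt, matching the template documented below.
prompt = ("<|im_start|>system\nYou are Dolphin, a helpful AI assistant.<|im_end|>\n"
          "<|im_start|>user\nWhy is the sky blue?<|im_end|>\n"
          "<|im_start|>assistant\n")

outputs = llm.generate([prompt], params)
print(outputs[0].outputs[0].text)
```
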
## Prompt template: ChatML

```plaintext
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```

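If the bundled tokenizer ships a ChatML chat template (Dolphin releases typically do), you can build this prompt programmatically instead of hand-formatting the string. A minimal sketch, assuming the template in the model's `tokenizer_config.json` matches the one above:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("solidrust/dolphin-2.8-experiment26-7b-AWQ")

messages = [
    {"role": "system", "content": "You are Dolphin, a helpful AI assistant."},
    {"role": "user", "content": "Why is the sky blue?"},
]

# add_generation_prompt=True appends the trailing "<|im_start|>assistant" turn
prompt = tokenizer.apply_chat_template(messages,
                                       tokenize=False,
                                       add_generation_prompt=True)
print(prompt)
```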