Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)

dolly-v2-7b - bnb 4bits
- Model creator: https://huggingface.co/databricks/
- Original model: https://huggingface.co/databricks/dolly-v2-7b/
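
These 4-bit weights can be loaded directly with `transformers` and `bitsandbytes`. A minimal sketch, assuming the checkpoint was saved with its bitsandbytes quantization config (typical for bnb 4-bit exports); the repo id below is a placeholder, substitute this repository's id:

```python
# A minimal sketch, not an official loader: assumes the checkpoint carries its
# bitsandbytes quantization_config, and that `bitsandbytes`, `accelerate`, and
# a CUDA GPU are available.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "<this-repo-id>"  # hypothetical placeholder; substitute this repository's id

tokenizer = AutoTokenizer.from_pretrained(repo_id, padding_side="left")
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")
```
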
Original model description:
---
license: mit
language:
- en
library_name: transformers
inference: false
datasets:
- databricks/databricks-dolly-15k
---
# dolly-v2-7b Model Card
## Summary

Databricks' `dolly-v2-7b`, an instruction-following large language model trained on the Databricks machine learning platform,
is licensed for commercial use. Based on `pythia-6.9b`, Dolly is trained on ~15k instruction/response fine-tuning records
[`databricks-dolly-15k`](https://github.com/databrickslabs/dolly/tree/master/data) generated
by Databricks employees in capability domains from the InstructGPT paper, including brainstorming, classification, closed QA, generation,
information extraction, open QA and summarization. `dolly-v2-7b` is not a state-of-the-art model, but it does exhibit surprisingly
high-quality instruction-following behavior not characteristic of the foundation model on which it is based.
Dolly v2 is also available in these other model sizes:

* [dolly-v2-12b](https://huggingface.co/databricks/dolly-v2-12b), a 12 billion parameter model based on `pythia-12b`
* [dolly-v2-3b](https://huggingface.co/databricks/dolly-v2-3b), a 2.8 billion parameter model based on `pythia-2.8b`

Please refer to the [dolly GitHub repo](https://github.com/databrickslabs/dolly#getting-started-with-response-generation) for tips on
running inference for various GPU configurations.
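
For example, on GPUs with limited memory, one option is to load the weights in 8-bit. This is a hedged sketch rather than the repo's exact recipe; it assumes `bitsandbytes` is installed and that your `transformers` version still accepts `load_in_8bit` via `model_kwargs`:

```python
# Sketch: load the checkpoint with 8-bit weight quantization to reduce GPU
# memory use. Assumes `bitsandbytes` and `accelerate` are installed and a
# CUDA GPU is available.
from transformers import pipeline

generate_text = pipeline(
    model="databricks/dolly-v2-7b",
    trust_remote_code=True,
    device_map="auto",
    model_kwargs={"load_in_8bit": True},  # forwarded to from_pretrained
)
```
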
**Owner**: Databricks, Inc.

## Model Overview
`dolly-v2-7b` is a 6.9 billion parameter causal language model created by [Databricks](https://databricks.com/) that is derived from
[EleutherAI's](https://www.eleuther.ai/) [Pythia-6.9b](https://huggingface.co/EleutherAI/pythia-6.9b) and fine-tuned
on a [~15K record instruction corpus](https://github.com/databrickslabs/dolly/tree/master/data) generated by Databricks employees and released under a permissive license (CC-BY-SA).
## Usage

To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers` and `accelerate` libraries installed.
In a Databricks notebook you could run:

```python
%pip install "accelerate>=0.16.0,<1" "transformers[torch]>=4.28.1,<5" "torch>=1.13.1,<2"
```
The instruction-following pipeline can be loaded using the `pipeline` function as shown below. This loads a custom `InstructionTextGenerationPipeline`
found in the model repo [here](https://huggingface.co/databricks/dolly-v2-3b/blob/main/instruct_pipeline.py), which is why `trust_remote_code=True` is required.
Including `torch_dtype=torch.bfloat16` is generally recommended when the dtype is supported, as it reduces memory usage without any apparent impact on output quality.
It is also fine to remove it if there is sufficient memory.
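
If you are unsure whether your GPU supports `bfloat16`, a small runtime check lets you pick the dtype automatically. A minimal sketch using standard PyTorch helpers:

```python
# Sketch: fall back to the model's default dtype when bfloat16 (or a GPU)
# is unavailable. torch.cuda.is_bf16_supported() returns True on Ampere
# and newer GPUs such as the A10 and A100.
import torch

use_bf16 = torch.cuda.is_available() and torch.cuda.is_bf16_supported()
dtype = torch.bfloat16 if use_bf16 else None  # None keeps the default dtype
```

You can then pass `dtype` as the `torch_dtype` argument instead of hardcoding `torch.bfloat16` in the call below.
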
```python
import torch
from transformers import pipeline

generate_text = pipeline(model="databricks/dolly-v2-7b", torch_dtype=torch.bfloat16, trust_remote_code=True, device_map="auto")
```
You can then use the pipeline to answer instructions:

```python
res = generate_text("Explain to me the difference between nuclear fission and fusion.")
print(res[0]["generated_text"])
```
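
The pipeline forwards keyword arguments through to the underlying `generate` call, so sampling behavior can be tuned per call. A sketch; the argument names below are standard `transformers` generation parameters, not values documented in this card:

```python
# Sketch: tune generation per call. These are standard transformers
# generation kwargs, passed through to model.generate().
res = generate_text(
    "Explain to me the difference between nuclear fission and fusion.",
    max_new_tokens=256,   # cap the response length
    do_sample=True,       # sample instead of greedy decoding
    temperature=0.7,      # lower = more deterministic
)
print(res[0]["generated_text"])
```
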
Alternatively, if you prefer not to use `trust_remote_code=True` you can download [instruct_pipeline.py](https://huggingface.co/databricks/dolly-v2-3b/blob/main/instruct_pipeline.py),
store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer:

```python
import torch
from instruct_pipeline import InstructionTextGenerationPipeline
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("databricks/dolly-v2-7b", padding_side="left")
model = AutoModelForCausalLM.from_pretrained("databricks/dolly-v2-7b", device_map="auto", torch_dtype=torch.bfloat16)

generate_text = InstructionTextGenerationPipeline(model=model, tokenizer=tokenizer)
```
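
The resulting object is used exactly like the `pipeline`-constructed version above:

```python
# Same call pattern as before: the locally constructed pipeline returns a
# list of dicts with a "generated_text" key.
res = generate_text("Explain to me the difference between nuclear fission and fusion.")
print(res[0]["generated_text"])
```
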
### LangChain Usage

To use the pipeline with LangChain, you must set `return_full_text=True`, as LangChain expects the full text to be returned
and the default for the pipeline is to return only the new text.

```python
import torch
from transformers import pipeline

generate_text = pipeline(model="databricks/dolly-v2-7b", torch_dtype=torch.bfloat16,
                         trust_remote_code=True, device_map="auto", return_full_text=True)
```
You can create a prompt that either has only an instruction or has an instruction with context:

```python
from langchain import PromptTemplate, LLMChain
from langchain.llms import HuggingFacePipeline

# template for an instruction with no input
prompt = PromptTemplate(
    input_variables=["instruction"],
    template="{instruction}")

# template for an instruction with input
prompt_with_context = PromptTemplate(
    input_variables=["instruction", "context"],
    template="{instruction}\n\nInput:\n{context}")

hf_pipeline = HuggingFacePipeline(pipeline=generate_text)

llm_chain = LLMChain(llm=hf_pipeline, prompt=prompt)
llm_context_chain = LLMChain(llm=hf_pipeline, prompt=prompt_with_context)
```
Example predicting using a simple instruction:

```python
print(llm_chain.predict(instruction="Explain to me the difference between nuclear fission and fusion.").lstrip())
```

Example predicting using an instruction with context:

```python
context = """George Washington (February 22, 1732 - December 14, 1799) was an American military officer, statesman,
and Founding Father who served as the first president of the United States from 1789 to 1797."""

print(llm_context_chain.predict(instruction="When was George Washington president?", context=context).lstrip())
```
## Known Limitations

### Performance Limitations
**`dolly-v2-7b` is not a state-of-the-art generative language model** and, though quantitative benchmarking is ongoing, is not designed to perform
competitively with more modern model architectures or models subject to larger pretraining corpora.

The Dolly model family is under active development, and so any list of shortcomings is unlikely to be exhaustive, but we include known limitations and misfires here as a means to document and share our preliminary findings with the community.

In particular, `dolly-v2-7b` struggles with: syntactically complex prompts, programming problems, mathematical operations, factual errors,
dates and times, open-ended question answering, hallucination, enumerating lists of specific length, stylistic mimicry, having a sense of humor, etc.
Moreover, we find that `dolly-v2-7b` lacks some capabilities, such as well-formatted letter writing, that are present in the original model.
### Dataset Limitations
Like all language models, `dolly-v2-7b` reflects the content and limitations of its training corpora.

- **The Pile**: Pythia's pre-training corpus contains content mostly collected from the public internet, and like most web-scale datasets,
it contains content many users would find objectionable. As such, the model is likely to reflect these shortcomings, potentially overtly
in the case it is explicitly asked to produce objectionable content, and sometimes subtly, as in the case of biased or harmful implicit
associations.

- **`databricks-dolly-15k`**: The training data on which `dolly-v2-7b` is instruction tuned represents natural language instructions generated
by Databricks employees during a period spanning March and April 2023 and includes passages from Wikipedia as reference passages
for instruction categories like closed QA and summarization. To our knowledge it does not contain obscenity, intellectual property or
personally identifying information about non-public figures, but it may contain typos and factual errors.
The dataset may also reflect biases found in Wikipedia. Finally, the dataset likely reflects
the interests and semantic choices of Databricks employees, a demographic which is not representative of the global population at large.

Databricks is committed to ongoing research and development efforts to develop helpful, honest and harmless AI technologies that
maximize the potential of all individuals and organizations.
### Benchmark Metrics

Below you'll find various models' benchmark performance on the [EleutherAI LLM Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness);
model results are sorted by geometric mean to produce an intelligible ordering. As outlined above, these results demonstrate that `dolly-v2-7b` is not state of the art,
and in fact underperforms `dolly-v1-6b` on some evaluation benchmarks. We believe this owes to the composition and size of the underlying fine-tuning datasets,
but a robust statement as to the sources of these variations requires further study.
| model                   | openbookqa | arc_easy | winogrande | hellaswag | arc_challenge | piqa     | boolq    | gmean    |
| ----------------------- | ---------- | -------- | ---------- | --------- | ------------- | -------- | -------- | -------- |
| EleutherAI/pythia-2.8b  | 0.348      | 0.585859 | 0.589582   | 0.591217  | 0.323379      | 0.73395  | 0.638226 | 0.523431 |
| EleutherAI/pythia-6.9b  | 0.368      | 0.604798 | 0.608524   | 0.631548  | 0.343857      | 0.761153 | 0.6263   | 0.543567 |
| databricks/dolly-v2-3b  | 0.384      | 0.611532 | 0.589582   | 0.650767  | 0.370307      | 0.742655 | 0.575535 | 0.544886 |
| EleutherAI/pythia-12b   | 0.364      | 0.627104 | 0.636148   | 0.668094  | 0.346416      | 0.760065 | 0.673394 | 0.559676 |
| EleutherAI/gpt-j-6B     | 0.382      | 0.621633 | 0.651144   | 0.662617  | 0.363481      | 0.761153 | 0.655963 | 0.565936 |
| databricks/dolly-v2-12b | 0.408      | 0.63931  | 0.616417   | 0.707927  | 0.388225      | 0.757889 | 0.568196 | 0.56781  |
| databricks/dolly-v2-7b  | 0.392      | 0.633838 | 0.607735   | 0.686517  | 0.406997      | 0.750816 | 0.644037 | 0.573487 |
| databricks/dolly-v1-6b  | 0.41       | 0.62963  | 0.643252   | 0.676758  | 0.384812      | 0.773667 | 0.687768 | 0.583431 |
| EleutherAI/gpt-neox-20b | 0.402      | 0.683923 | 0.656669   | 0.7142    | 0.408703      | 0.784004 | 0.695413 | 0.602236 |
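
For reference, the `gmean` column is just the geometric mean of the seven per-task accuracies in each row. A minimal sketch reproducing it, using the `dolly-v2-7b` row above:

```python
# Sketch: reproduce the gmean column as the geometric mean of the seven
# per-task accuracies. Values are the dolly-v2-7b row from the table above.
import math

scores = [0.392, 0.633838, 0.607735, 0.686517, 0.406997, 0.750816, 0.644037]
gmean = math.exp(sum(math.log(s) for s in scores) / len(scores))
print(round(gmean, 6))  # ~0.573487
```
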
# Citation

```
@online{DatabricksBlog2023DollyV2,
    author = {Mike Conover and Matt Hayes and Ankit Mathur and Jianwei Xie and Jun Wan and Sam Shah and Ali Ghodsi and Patrick Wendell and Matei Zaharia and Reynold Xin},
    title = {Free Dolly: Introducing the World's First Truly Open Instruction-Tuned LLM},
    year = {2023},
    url = {https://www.databricks.com/blog/2023/04/12/dolly-first-open-commercially-viable-instruction-tuned-llm},
    urldate = {2023-06-30}
}
```

# Happy Hacking!