Delete .ipynb_checkpoints
.ipynb_checkpoints/README-checkpoint.md
DELETED
@@ -1,173 +0,0 @@
---
extra_gated_heading: You need to share contact information with Databricks to access this model
extra_gated_prompt: >-
  ### DBRX Terms of Use

  Use of DBRX is governed by the [Databricks Open Model License](https://www.databricks.com/legal/open-model-license) and the [Databricks Open Model Acceptable Use Policy](https://www.databricks.com/legal/acceptable-use-policy-open-model).
extra_gated_fields:
  First Name: text
  Last Name: text
  Organization: text
  Purpose for Base Model Access: text
  By clicking 'Submit' below, I accept the terms of the license and acknowledge that the information I provide will be collected, stored, processed, and shared in accordance with Databricks' Privacy Notice and I understand I can update my preferences at any time: checkbox
extra_gated_description: >-
  The information you provide will be collected, stored, processed, and shared in accordance with Databricks [Privacy Notice](https://www.databricks.com/legal/privacynotice).
extra_gated_button_content: Submit
inference: false
license: other
license_name: databricks-open-model-license
license_link: https://www.databricks.com/legal/open-model-license
---

# Re-upload because the original repo is gated

Please don't gate open models. Open weights should mean open weights.

# DBRX Base

* DBRX Base is a mixture-of-experts (MoE) large language model trained from scratch by Databricks.
* We are releasing both DBRX Base, a pretrained base model, and DBRX Instruct, a fine-tuned version for few-turn interactions, under [an open license](https://www.databricks.com/legal/open-model-license).
* This is the repository for DBRX Base. DBRX Instruct can be found [here](https://huggingface.co/databricks/dbrx-instruct).
* For full details on the DBRX models, please read our [technical blog post](https://www.databricks.com/blog/introducing-dbrx-new-state-art-open-llm).


## Model Overview
DBRX is a [transformer-based](https://www.isattentionallyouneed.com/) decoder-only large language model (LLM) that was trained using next-token prediction.
It uses a *fine-grained* mixture-of-experts (MoE) architecture with 132B total parameters, of which 36B are active on any input.
It was pre-trained on 12T tokens of text and code data.
Compared to other open MoE models like Mixtral-8x7B and Grok-1, DBRX is fine-grained, meaning it uses a larger number of smaller experts. DBRX has 16 experts and chooses 4, while Mixtral-8x7B and Grok-1 have 8 experts and choose 2.
This provides 65x more possible combinations of experts, and we found that this improves model quality.
DBRX uses rotary position encodings (RoPE), gated linear units (GLU), and grouped query attention (GQA).
It uses the GPT-4 tokenizer as provided in the [tiktoken](https://github.com/openai/tiktoken) repository.
We made these choices based on exhaustive evaluation and scaling experiments.
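
As a quick check of the 65x figure, we can count the distinct expert subsets each router can choose from (plain combinatorics, not DBRX-specific code):

```python
from math import comb

# Number of distinct ways to pick the active experts for a token.
dbrx_choices = comb(16, 4)     # 16 experts, 4 active -> 1820
mixtral_choices = comb(8, 2)   # 8 experts, 2 active  -> 28

print(dbrx_choices / mixtral_choices)  # 65.0
```
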
DBRX was pretrained on 12T tokens of carefully curated data with a maximum context length of 32K tokens.
We estimate that this data is at least 2x better token-for-token than the data we used to pretrain the MPT family of models.
This new dataset was developed using the full suite of Databricks tools, including Apache Spark™ and Databricks notebooks for data processing, and Unity Catalog for data management and governance.
We used curriculum learning for pretraining, changing the data mix during training in ways we found to substantially improve model quality.

* **Inputs:** DBRX accepts only text-based inputs, with a context length of up to 32,768 tokens.
* **Outputs:** DBRX produces only text-based outputs.
* **Model Architecture:** More detailed information about DBRX Instruct and DBRX Base can be found in our [technical blog post](https://www.databricks.com/blog/introducing-dbrx-new-state-art-open-llm).
* **License:** [Databricks Open Model License](https://www.databricks.com/legal/open-model-license)
* **Acceptable Use Policy:** [Databricks Open Model Acceptable Use Policy](https://www.databricks.com/legal/acceptable-use-policy-open-model)
* **Version:** 1.0
* **Owner:** Databricks, Inc.


## Usage
There are several general ways to use the DBRX models:
* DBRX Base and DBRX Instruct are available for download on Hugging Face (see our Quickstart guide below). This is the HF repository for DBRX Base; DBRX Instruct can be found [here](https://huggingface.co/databricks/dbrx-instruct).
* The DBRX model repository can be found on GitHub [here](https://github.com/databricks/dbrx).
* DBRX Base and DBRX Instruct are available with [Databricks Foundation Model APIs](https://docs.databricks.com/en/machine-learning/foundation-models/index.html) via both *Pay-per-token* and *Provisioned Throughput* endpoints. These are enterprise-ready deployments.
* For more information on how to fine-tune using LLM-Foundry, please take a look at our LLM pretraining and fine-tuning [documentation](https://github.com/mosaicml/llm-foundry/blob/main/scripts/train/README.md).


## Quickstart Guide
**NOTE: This is DBRX Base, which has not been instruction finetuned. It has not been trained for interactive chat and is only a completion model.**
If you are looking for the finetuned model, please use [DBRX Instruct](https://huggingface.co/databricks/dbrx-instruct).

Getting started with DBRX models is easy with the `transformers` library. The model requires ~264GB of RAM and the following packages:

```bash
pip install transformers tiktoken
```

If you'd like to speed up download time, you can use the `hf_transfer` package as described by Hugging Face [here](https://huggingface.co/docs/huggingface_hub/en/guides/download#faster-downloads).
```bash
pip install hf_transfer
export HF_HUB_ENABLE_HF_TRANSFER=1
```

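If you prefer to fetch the weights ahead of time, a minimal sketch using `snapshot_download` from `huggingface_hub` (installed alongside `transformers`); the local directory name here is arbitrary:

```python
from huggingface_hub import snapshot_download

# Pre-downloads all repo files (~264GB); honors HF_HUB_ENABLE_HF_TRANSFER if set.
snapshot_download("databricks/dbrx-base", local_dir="dbrx-base")
```
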
### Run the model on a CPU:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

tokenizer = AutoTokenizer.from_pretrained("databricks/dbrx-base", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("databricks/dbrx-base", device_map="cpu", torch_dtype=torch.bfloat16, trust_remote_code=True)

input_text = "Databricks was founded in "
input_ids = tokenizer(input_text, return_tensors="pt")

outputs = model.generate(**input_ids, max_new_tokens=100)
print(tokenizer.decode(outputs[0]))
```

### Run the model on multiple GPUs:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

tokenizer = AutoTokenizer.from_pretrained("databricks/dbrx-base", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("databricks/dbrx-base", device_map="auto", torch_dtype=torch.bfloat16, trust_remote_code=True)

input_text = "Databricks was founded in "
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

outputs = model.generate(**input_ids, max_new_tokens=100)
print(tokenizer.decode(outputs[0]))
```
If your GPU system supports [FlashAttention2](https://huggingface.co/docs/transformers/perf_infer_gpu_one#flashattention-2), you can add `attn_implementation="flash_attention_2"` as a keyword to `AutoModelForCausalLM.from_pretrained()` to achieve faster inference.
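
For example (a sketch of the same multi-GPU load as above; requires the `flash-attn` package to be installed):

```python
model = AutoModelForCausalLM.from_pretrained(
    "databricks/dbrx-base",
    device_map="auto",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    attn_implementation="flash_attention_2",  # needs flash-attn and a supported GPU
)
```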


## Limitations and Ethical Considerations
### Training Dataset Limitations
The DBRX models were trained on 12T tokens of text, with a knowledge cutoff date of December 2023.

The training mix used for DBRX contains both natural-language and code examples. The vast majority of our training data is in the English language. We did not test DBRX for non-English proficiency. Therefore, DBRX should be considered a generalist model for text-based use in the English language.

DBRX does not have multimodal capabilities.

### Associated Risks and Recommendations
All foundation models are novel technologies that carry various risks and may output information that is inaccurate, incomplete, biased, or offensive.
Users should exercise judgment and evaluate such output for accuracy and appropriateness for their desired use case before using or sharing it.
Databricks recommends [using retrieval augmented generation (RAG)](https://www.databricks.com/glossary/retrieval-augmented-generation-rag) in scenarios where accuracy and fidelity are important.
We also recommend that anyone using or fine-tuning either DBRX Base or DBRX Instruct perform additional testing around safety in the context of their particular application and domain.


## Intended Uses
### Intended Use Cases
The DBRX models are open, general-purpose LLMs intended and licensed for both commercial and research applications.
They can be further fine-tuned for various domain-specific natural language and coding tasks.
DBRX Base can be used as an off-the-shelf model for text completion for general English-language and coding tasks.

Please review the Associated Risks section above, as well as the [Databricks Open Model License](https://www.databricks.com/legal/open-model-license) and [Databricks Open Model Acceptable Use Policy](https://www.databricks.com/legal/acceptable-use-policy-open-model) for further information about permissible uses of DBRX Base and its derivatives.

### Out-of-Scope Use Cases
DBRX models are not intended to be used out-of-the-box in non-English languages and do not support native code execution or other forms of function calling.
DBRX models should not be used in any manner that violates applicable laws or regulations or in any other way that is prohibited by the [Databricks Open Model License](https://www.databricks.com/legal/open-model-license) and [Databricks Open Model Acceptable Use Policy](https://www.databricks.com/legal/acceptable-use-policy-open-model).


## Training Stack
MoE models are complicated to train, and the training of DBRX Base and DBRX Instruct was heavily supported by Databricks’ infrastructure for data processing and large-scale LLM training (e.g., [Composer](https://github.com/mosaicml/composer), [Streaming](https://github.com/mosaicml/streaming), [Megablocks](https://github.com/stanford-futuredata/megablocks), and [LLM Foundry](https://github.com/mosaicml/llm-foundry)).

Composer is our core library for large-scale training.
It provides an optimized training loop, easy [checkpointing](https://docs.mosaicml.com/projects/composer/en/latest/trainer/checkpointing.html) and [logging](https://docs.mosaicml.com/projects/composer/en/latest/trainer/logging.html#wood-logging),
[FSDP](https://pytorch.org/docs/stable/fsdp.html)-based [model sharding](https://docs.mosaicml.com/projects/composer/en/latest/notes/distributed_training.html#fullyshardeddataparallel-fsdp),
convenient [abstractions](https://docs.mosaicml.com/projects/composer/en/latest/trainer/time.html), extreme customizability via [callbacks](https://docs.mosaicml.com/projects/composer/en/latest/trainer/callbacks.html), and more.
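
To give a flavor of the Composer API, here is a minimal, self-contained sketch (a toy classifier, not the DBRX training setup):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from composer import Trainer
from composer.models import ComposerClassifier

# Toy stand-in network; the real DBRX run used LLM Foundry on top of Composer.
net = torch.nn.Sequential(torch.nn.Linear(16, 2))
model = ComposerClassifier(net, num_classes=2)

data = TensorDataset(torch.randn(64, 16), torch.randint(0, 2, (64,)))
trainer = Trainer(
    model=model,
    train_dataloader=DataLoader(data, batch_size=8),
    max_duration="1ep",         # Composer time units: epochs ("ep"), batches ("ba"), tokens ("tok")
    save_folder="checkpoints",  # enables easy checkpointing
)
trainer.fit()
```
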
Streaming enables fast, low cost, and scalable training on large datasets from cloud storage. It handles a variety of challenges around deterministic resumption as node counts change, avoiding redundant downloads across devices, high-quality shuffling at scale, sample-level random access, and speed.
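
For example, a minimal sketch of loading a dataset with Streaming (the remote and local paths here are hypothetical, and the data must already be in Streaming's shard format):

```python
from torch.utils.data import DataLoader
from streaming import StreamingDataset

# Streams shards from cloud storage, caching them locally.
dataset = StreamingDataset(remote="s3://my-bucket/my-dataset", local="/tmp/dataset-cache", shuffle=True)
loader = DataLoader(dataset, batch_size=8)
```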
Megablocks is a lightweight library for MoE training. Crucially, it supports “dropless MoE,” which avoids inefficient padding and is intended to provide deterministic outputs for a given sequence no matter what other sequences are in the batch.
LLM Foundry ties all of these libraries together to create a simple LLM pretraining, fine-tuning, and inference experience.

DBRX was trained using proprietary optimized versions of the above open source libraries, along with our [LLM training platform](https://www.databricks.com/product/machine-learning/mosaic-ai-training).


## Evaluation
We find that DBRX outperforms established open-source and open-weight base models on the [Databricks Model Gauntlet](https://www.databricks.com/blog/llm-evaluation-for-icl), the [Hugging Face Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard), and HumanEval.
The Databricks Model Gauntlet measures performance on more than 30 tasks across six categories: world knowledge, common sense reasoning, language understanding, reading comprehension, symbolic problem solving, and programming.
The Hugging Face Open LLM Leaderboard measures the average of ARC-Challenge, HellaSwag, MMLU, TruthfulQA, Winogrande and GSM8k.
HumanEval measures coding ability.

Full evaluation details can be found in our [technical blog post](https://www.databricks.com/blog/introducing-dbrx-new-state-art-open-llm).


## Acknowledgements
The DBRX models were made possible thanks in large part to the open-source community, especially:
* The [MegaBlocks](https://arxiv.org/abs/2211.15841) library, which established a foundation for our MoE implementation.
* [PyTorch FSDP](https://arxiv.org/abs/2304.11277), which we built on for distributed training.