base_model: meta-llama/Llama-2-70b-hf
---

# Llama-2-70B-Instruct-v0.1 🦙🐬

This instruction model was built via parameter-efficient QLoRA finetuning of [llama-2-70b](https://huggingface.co/meta-llama/Llama-2-70b-hf) on the first 25k rows of [ehartford/dolphin](https://huggingface.co/datasets/ehartford/dolphin) (an open-source implementation of [Microsoft's Orca](https://www.microsoft.com/en-us/research/publication/orca-progressive-learning-from-complex-explanation-traces-of-gpt-4/)). Finetuning was executed on a single H100 (80 GB PCIe) for roughly 17 hours on the [Lambda Labs](https://cloud.lambdalabs.com/instances) platform.
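
As a quick back-of-the-envelope check, the throughput implied by those figures (25k rows in roughly 17 hours) works out as follows; the inputs below come straight from the paragraph above:

```python
# Training throughput implied by the card's figures:
# 25,000 rows processed in roughly 17 hours on a single H100.
rows = 25_000
hours = 17

rows_per_hour = rows / hours           # ~1470 rows/hour
seconds_per_row = hours * 3600 / rows  # ~2.4 s/row

print(f"{rows_per_hour:.0f} rows/hour, {seconds_per_row:.2f} s/row")
```

This is a rough average only; it says nothing about per-step batch sizes or sequence lengths.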

*Note*: This model was ranked 6th on 🤗's Open LLM Leaderboard in Aug 2023.

## Benchmark metrics

| Metric | Value |

* Model license: Llama 2 Community License Agreement
* Basic usage: [notebook](assets/basic_inference_llama_2_dolphin.ipynb)
* Finetuning code: [script](https://github.com/daniel-furman/sft-demos/blob/main/src/sft/one_gpu/llama-2/dolphin/sft-Llama-2-70B-Instruct-v0.1-peft.py)
* Loss curves: [plot](https://huggingface.co/dfurman/Llama-2-70B-Instruct-v0.1-peft#finetuning-description)
* Runtime stats: [table](https://huggingface.co/dfurman/Llama-2-70B-Instruct-v0.1-peft#runtime-tests)
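
The basic-usage notebook linked above is the reference; as a rough, illustrative sketch only, the adapter can be attached to a 4-bit nf4 base model via `transformers` + `peft`. The repo ids and quantization settings below are assumptions inferred from this card's links and runtime table, not an official recipe:

```python
from typing import Any, Dict


def nf4_quant_config() -> Dict[str, Any]:
    # 4-bit nf4 settings mirroring the card's runtime table (assumed values)
    return {"load_in_4bit": True, "bnb_4bit_quant_type": "nf4"}


def load_instruct_model(
    base_id: str = "meta-llama/Llama-2-70b-hf",
    adapter_id: str = "dfurman/Llama-2-70B-Instruct-v0.1-peft",
):
    # Heavy imports stay local: this path needs transformers, peft,
    # bitsandbytes, a large GPU, and a multi-GB weight download.
    import torch
    from peft import PeftModel
    from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

    bnb = BitsAndBytesConfig(
        bnb_4bit_compute_dtype=torch.bfloat16, **nf4_quant_config()
    )
    tokenizer = AutoTokenizer.from_pretrained(base_id)
    base = AutoModelForCausalLM.from_pretrained(
        base_id, quantization_config=bnb, device_map="auto"
    )
    # Attach the QLoRA adapter on top of the quantized base model
    return tokenizer, PeftModel.from_pretrained(base, adapter_id)
```

Generation then follows the usual `tokenizer(...)` / `model.generate(...)` pattern shown in the notebook.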

## Loss curve

![loss curves](https://raw.githubusercontent.com/daniel-furman/sft-demos/main/assets/jul_24_23_1_14_00_log_loss_curves_llama-2-70b-dolphin.png)

The above loss curve was generated from the run's private wandb.ai log.

**User**:
> You are a helpful assistant. Write me a numbered list of things to do in New York City.\n

**Llama-2-70B-Instruct-v0.1-peft**:
> 1. Visit the Statue of Liberty: Take a ferry to Liberty Island and see the iconic symbol of freedom and democracy.
> 2. Explore Central Park: Enjoy the lush greenery, take a stroll, or rent a bike to explore the park's many attractions, such as the Central Park Zoo and Strawberry Fields.
> 3. Visit the Empire State Building: Take an elevator to the observation deck for breathtaking views of the city.

> You are a helpful assistant. Write a short email inviting my friends to a dinner party on Friday. Respond succinctly.\n

**Llama-2-70B-Instruct-v0.1-peft**:

> Subject: Dinner Party Invitation - Friday, 7 PM
>

> You are a helpful assistant. Tell me a recipe for vegan banana bread.\n

**Llama-2-70B-Instruct-v0.1-peft**:

> Sure! Here's a delicious and easy vegan banana bread recipe:
>

|:-----------------------------:|:----------------------:|:---------------------:|:-------------:|:-----------------------:|
| 4.50 | 1x H100 (80 GB PCIe) | torch | nf4 | 39 |

The above runtime stats were generated from this [notebook](https://github.com/daniel-furman/sft-demos/blob/main/src/sft/one_gpu/llama-2/dolphin/postprocessing-Llama-2-70B-Instruct-v0.1-peft.ipynb).
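
For intuition on why nf4 quantization lets a 70B model run on a single 80 GB H100, here is a rough weight-only memory estimate (ignoring the KV cache, activations, and quantization overhead; the 70B parameter count is approximate):

```python
params = 70e9          # Llama-2-70B parameter count (approximate)
bytes_per_param = 0.5  # nf4 stores weights in 4 bits = half a byte

weight_gb = params * bytes_per_param / 1e9  # quantized weights: ~35 GB
fp16_gb = params * 2 / 1e9                  # fp16 weights: ~140 GB, too big for one H100

print(f"nf4: ~{weight_gb:.0f} GB vs fp16: ~{fp16_gb:.0f} GB (H100: 80 GB)")
```

If the table's final column is peak VRAM in GB (an assumption), the ~35 GB weight estimate is broadly consistent with the reported 39, the difference plausibly covering cache and runtime overhead.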

## Acknowledgements