Afjalru committed on
Commit
3bee635
·
verified ·
1 Parent(s): 364c756

Update README.md

Files changed (1)
  1. README.md +20 -11
README.md CHANGED
@@ -1,13 +1,22 @@
 ---
-license: openrail
-task_categories:
-- text-classification
-language:
-- en
-tags:
-- llama2
-pretty_name: llama2 DataSet
-size_categories:
-- n<1K
+dataset_info:
+  features:
+  - name: text
+    dtype: string
+  splits:
+  - name: train
+    num_bytes: 1654448
+    num_examples: 1000
+  download_size: 966693
+  dataset_size: 1654448
+configs:
+- config_name: default
+  data_files:
+  - split: train
+    path: data/train-*
 ---
-Data Set For Loan Prediction
+# Guanaco-1k: Lazy Llama 2 Formatting
+
+This is a subset (1000 samples) of the excellent [`timdettmers/openassistant-guanaco`](https://huggingface.co/datasets/timdettmers/openassistant-guanaco) dataset, processed to match Llama 2's prompt format as described [in this article](https://huggingface.co/blog/llama2#how-to-prompt-llama-2). It was created using the following [colab notebook](https://colab.research.google.com/drive/1Ad7a9zMmkxuXTOh1Z7-rNSICA4dybpM2?usp=sharing).
+
+Useful if you don't want to reformat it yourself (e.g., using a script). It was designed for [this article](https://mlabonne.github.io/blog/posts/Fine_Tune_Your_Own_Llama_2_Model_in_a_Colab_Notebook.html) about fine-tuning a Llama 2 (chat) model in a Google Colab.
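The reformatting the commit describes can be sketched as follows. This is a minimal, hypothetical single-turn version (the actual processing lives in the linked notebook, which may differ): it assumes each `openassistant-guanaco` row stores its conversation as `### Human: …### Assistant: …` in a `text` field, and rewraps it into Llama 2's `[INST]` chat template.

```python
import re

def guanaco_to_llama2(example):
    """Hypothetical helper: rewrap a guanaco-style row into the
    Llama 2 chat template (single-turn only, for brevity)."""
    text = example["text"]
    # Guanaco rows look like "### Human: <prompt>### Assistant: <answer>"
    match = re.search(r"### Human: (.*?)### Assistant: (.*)", text, re.DOTALL)
    if match is None:
        return example  # leave rows we cannot parse unchanged
    prompt = match.group(1).strip()
    answer = match.group(2).strip()
    # Llama 2's expected format: <s>[INST] {prompt} [/INST] {answer} </s>
    example["text"] = f"<s>[INST] {prompt} [/INST] {answer} </s>"
    return example
```

With 🤗 Datasets, a function like this would typically be applied via `dataset.map(guanaco_to_llama2)` before fine-tuning.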