mrm8488 committed edc2efd (1 parent: a265cc2)

Create README.md

Files changed (1): README.md +63 -0
---
license: bigscience-bloom-rail-1.0
pipeline_tag: text-generation
library_name: transformers
tags:
- dolly
- bloomz
- Spanish
- French
- German
datasets:
- argilla/databricks-dolly-15k-multilingual
inference: false
widget:
  - text: >-
      Below is an instruction that describes a task, paired with an input that
      provides further context.

      Write a response that appropriately completes the request.

      ### Instruction:

      Tell me about alpacas
language:
- es
- fr
- de
---

<div style="text-align:center;width:250px;height:250px;">
<img src="https://huggingface.co/mrm8488/dolloom/resolve/main/dolloom_logo.png" alt="DOLLCERBEROOM logo">
</div>

# DOLLCERBEROOM: 3 x Dolly 🐑 + BLOOMz 💮

## Adapter Description
This adapter was created with the [PEFT](https://github.com/huggingface/peft) library by fine-tuning the base model **BigScience/BLOOMz 7B1** on **Dolly's dataset (translated into Spanish, French, and German by Argilla)** using the **LoRA** method.
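The idea behind LoRA can be illustrated with a tiny NumPy sketch (illustrative only, not the actual training code): the frozen base weight `W` is augmented with a low-rank update `B @ A`, so only `r * (d_in + d_out)` parameters are trained instead of `d_in * d_out`, and `B` starts at zero so the adapted model initially matches the base model.

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r = 8, 16, 2

W = rng.normal(size=(d_out, d_in))  # frozen base weight (never updated)
A = rng.normal(size=(r, d_in))      # trainable low-rank factor, rank r
B = np.zeros((d_out, r))            # trainable factor, initialised to zero
x = rng.normal(size=d_in)

# Forward pass with the adapter: W x + B (A x)
y = W @ x + B @ (A @ x)

# With B at its zero init, the adapted output equals the base output.
assert np.allclose(y, W @ x)
```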

## Model Description
Instruction-tuned version of BLOOM (BigScience Large Open-science Open-access Multilingual Language Model):

[BLOOMz 7B1 MT](https://huggingface.co/bigscience/bloomz-7b1-mt)

## Training data

TBA

### Supported Tasks and Leaderboards

TBA

### Training procedure

TBA

## How to use

TBA
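Until official instructions are added, a minimal loading sketch with PEFT is shown below. The base model id is stated on this card; the adapter repo id is a hypothetical placeholder, since this card does not state it.

```python
# Hedged sketch: BASE_MODEL_ID comes from this card; ADAPTER_ID is a
# HYPOTHETICAL placeholder -- replace it with the actual adapter repo id.
BASE_MODEL_ID = "bigscience/bloomz-7b1-mt"
ADAPTER_ID = "mrm8488/dollcerberoom"  # hypothetical, not confirmed by the card


def load_model(base_id=BASE_MODEL_ID, adapter_id=ADAPTER_ID):
    """Download the base model and attach the LoRA adapter (heavy: ~7B params)."""
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    tokenizer = AutoTokenizer.from_pretrained(base_id)
    base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
    model = PeftModel.from_pretrained(base, adapter_id)
    return tokenizer, model
```

Calling `load_model()` downloads the ~7B-parameter base model, so it is best run on a machine with a GPU and `accelerate` installed (for `device_map="auto"`).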

## Citation