juliehunter committed (verified) · Commit 31a4ddd · Parent: 82117bf

Update README.md

Files changed (1): README.md (+3 −3)
README.md CHANGED

```diff
@@ -36,11 +36,11 @@ pipeline_tag: text-generation
 
 ## Model Description
 
-Lucie-7B-Instruct-human-data is a fine-tuned version of [Lucie-7B](), an open-source, multilingual causal language model created by OpenLLM-France.
+Lucie-7B-Instruct-human-data is a fine-tuned version of [Lucie-7B](https://huggingface.co/OpenLLM-France/Lucie-7B), an open-source, multilingual causal language model created by OpenLLM-France.
 
-Lucie-7B-Instruct-human-data is fine-tuned on human-produced instructions collected either from open annotation campaigns or by applying templates to extant datasets. The performance of Lucie-7B-Instruct-human-data falls below that of [Lucie-7B-Instruct](https://huggingface.co/OpenLLM-France/Lucie-7B-Instruct); the interest of the model is to show what can be done to fine-tune LLMs to follow instructions without appealing to third party LLMs.
+Lucie-7B-Instruct-human-data is fine-tuned on human-produced instructions collected either from open annotation campaigns or by applying templates to extant datasets. The performance of Lucie-7B-Instruct-human-data falls below that of [Lucie-7B-Instruct-v1.1](https://huggingface.co/OpenLLM-France/Lucie-7B-Instruct-v1.1); the interest of the model is to show what can be done to fine-tune LLMs to follow instructions without appealing to third party LLMs.
 
-Note that both Lucie-7B-Instruct-human-data and Lucie-7B-Instruct are optimized for generation of French text. They have not been trained for code generation or optimized for math. Such capacities can be improved through further fine-tuning and alignment with methods such as DPO, RLHF, etc.
+Note that Lucie-7B-Instruct-human-data is optimized for the generation of French text. It has not been trained for code generation or optimized for math. Such capacities can be improved through further fine-tuning and alignment with methods such as DPO, RLHF, etc.
 
 While Lucie-7B-Instruct-human-data is trained on sequences of 4096 tokens, its base model, Lucie-7B has a context size of 32K tokens. Based on Needle-in-a-haystack evaluations, Lucie-7B-Instruct-human-data maintains the capacity of the base model to handle 32K-size context windows.
 
```
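The README text above distinguishes the 4096-token fine-tuning sequence length from the 32K-token context window inherited from the base model. A minimal sketch of how a caller might budget prompt length against that window before generating (the whitespace token estimate is an illustrative assumption; a real check would use the model's own tokenizer):

```python
# Sketch: guard prompt length against Lucie-7B's 32K context window.
# Assumption: whitespace splitting stands in for real tokenization here;
# in practice, count tokens with the model's tokenizer instead.

CONTEXT_WINDOW = 32_768   # context size of the Lucie-7B base model (32K tokens)
TRAIN_SEQ_LEN = 4_096     # sequence length used during instruction fine-tuning

def fits_context(prompt: str, max_new_tokens: int = 512) -> bool:
    """Return True if the prompt plus the generation budget fits in 32K tokens."""
    approx_tokens = len(prompt.split())  # rough stand-in for a tokenizer count
    return approx_tokens + max_new_tokens <= CONTEXT_WINDOW

if __name__ == "__main__":
    prompt = "Explique la différence entre un modèle de base et un modèle instruit."
    print(fits_context(prompt))
```

Per the Needle-in-a-haystack note above, prompts well beyond the 4096-token fine-tuning length remain usable as long as they stay under the 32K window.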