Text Generation
PEFT
English
nmitchko committed
Commit 9e25f86 · Parent(s): 1769902

Update README.md

Files changed (1):
  1. README.md +1 -1
README.md CHANGED
@@ -56,7 +56,7 @@ Below is an instruction that describes a task. Write a response that appropriate
 `nmitchko/ML1-34b-previews` is a large language model repository of LoRA checkpoints specifically fine-tuned to add text-book synthesized data in the style of Phi 1/1.5.
 It is based on [`codellama-34b-hf`](https://huggingface.co/codellama/CodeLlama-34b-hf) at 34 billion parameters.
 
-The primary goal of this model is to improve research accuracy with the i2b2 tool.
+The primary goal of this model is to test various fine tuning methods around high quality data.
 It was trained using [LoRA](https://arxiv.org/abs/2106.09685), specifically [QLora Multi GPU](https://github.com/ChrisHayduk/qlora-multi-gpu), to reduce memory footprint.
 
 See Training Parameters for more info This Lora supports 4-bit and 8-bit modes.
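The LoRA method cited in the diff adapts a frozen weight matrix by adding a trainable low-rank product, scaled by alpha / r. A minimal numpy sketch of that update (all dimensions and values here are illustrative, not taken from this checkpoint):

```python
import numpy as np

# LoRA sketch: frozen weight W plus a trainable low-rank update B @ A,
# scaled by alpha / r. Dimensions are illustrative only.
d_out, d_in, r, alpha = 8, 8, 2, 16
rng = np.random.default_rng(0)

W = rng.normal(size=(d_out, d_in))   # frozen pretrained weight
A = rng.normal(size=(r, d_in))       # trainable down-projection
B = np.zeros((d_out, r))             # trainable up-projection, zero-initialized

def lora_forward(x):
    # Frozen path plus the scaled low-rank adapter path.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=d_in)
# Because B starts at zero, the adapter is initially an exact no-op.
assert np.allclose(lora_forward(x), W @ x)
```

Only A and B are trained, which is what keeps the memory footprint small relative to full fine-tuning of the 34B base model; QLoRA additionally quantizes the frozen base weights (e.g. to 4-bit) while training the adapter in higher precision.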