Tags: Question Answering · Transformers · Safetensors · English · doge · text-generation · trl · sft · dpo · custom_code
JingzeShi committed (verified) · Commit ef0498e · 1 parent: a012470

Update README.md

Files changed (1): README.md (+0 −1)
README.md CHANGED

```diff
@@ -85,7 +85,6 @@ outputs = model.generate(
 
 We build the Doge-Instruct by first SFT on [SmolTalk](https://huggingface.co/datasets/HuggingFaceTB/smoltalk) and then DPO on [UltraFeedback Binarized](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized).
 
-> TODO: The larger model is under training and will be uploaded soon.
 
 **SFT**:
 | Model | Training Data | Epochs | Content Length | LR | Batch Size | Precision |
```
 
85
 
86
  We build the Doge-Instruct by first SFT on [SmolTalk](https://huggingface.co/datasets/HuggingFaceTB/smoltalk) and then DPO on [UltraFeedback Binarized](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized).
87
 
 
88
 
89
  **SFT**:
90
  | Model | Training Data | Epochs | Content Length | LR | Batch Size | Precision |
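The README's two-stage recipe (SFT on SmolTalk, then DPO on UltraFeedback Binarized) can be sketched with TRL roughly as follows. The dataset names come from the README; the base model id, epoch counts, dataset config/split names, and output directories are illustrative placeholders, not the values used for Doge-Instruct:

```python
# Sketch of an SFT -> DPO pipeline with TRL, assuming placeholder
# hyperparameters; only the two dataset names come from the README.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer, DPOConfig, DPOTrainer

BASE_MODEL = "SmallDoge/Doge-20M"  # placeholder checkpoint id

# Stage 1: supervised fine-tuning on SmolTalk.
sft_dataset = load_dataset("HuggingFaceTB/smoltalk", "all", split="train")
sft_trainer = SFTTrainer(
    model=BASE_MODEL,
    train_dataset=sft_dataset,
    args=SFTConfig(output_dir="doge-sft", num_train_epochs=2),
)
sft_trainer.train()
sft_trainer.save_model("doge-sft")

# Stage 2: direct preference optimization on UltraFeedback Binarized,
# starting from the SFT checkpoint saved above.
dpo_dataset = load_dataset(
    "HuggingFaceH4/ultrafeedback_binarized", split="train_prefs"
)
dpo_trainer = DPOTrainer(
    model="doge-sft",
    args=DPOConfig(output_dir="doge-dpo", num_train_epochs=2),
    train_dataset=dpo_dataset,
)
dpo_trainer.train()
```

Chaining the trainers this way (DPO loading the SFT output directory) mirrors the order stated in the README: preference optimization is applied on top of the instruction-tuned model, not the base checkpoint.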