---
license: apache-2.0
library_name: transformers
base_model:
- nbeerbower/mistral-nemo-kartoffel-PRUNE3
datasets:
- nbeerbower/GreatFirewall-DPO
- nbeerbower/Schule-DPO
- nbeerbower/Purpura-DPO
- nbeerbower/Arkhaios-DPO
- jondurbin/truthy-dpo-v0.1
- antiven0m/physical-reasoning-dpo
- flammenai/Date-DPO-NoAsterisks
- flammenai/Prude-Phi3-DPO
- Atsunori/HelpSteer2-DPO
- jondurbin/gutenberg-dpo-v0.1
- nbeerbower/gutenberg2-dpo
- nbeerbower/gutenberg-moderne-dpo
---

> 🧪 **Experimental**
>
> An attempt to recover intelligence with a quick fine-tune; results are mediocre.

# Dumpling-Mistral-Nemo-8B

[nbeerbower/mistral-nemo-kartoffel-PRUNE3](https://huggingface.co/nbeerbower/mistral-nemo-kartoffel-PRUNE3) finetuned on:

* [nbeerbower/GreatFirewall-DPO](https://huggingface.co/datasets/nbeerbower/GreatFirewall-DPO)
* [nbeerbower/Schule-DPO](https://huggingface.co/datasets/nbeerbower/Schule-DPO)
* [nbeerbower/Purpura-DPO](https://huggingface.co/datasets/nbeerbower/Purpura-DPO)
* [nbeerbower/Arkhaios-DPO](https://huggingface.co/datasets/nbeerbower/Arkhaios-DPO)
* [jondurbin/truthy-dpo-v0.1](https://huggingface.co/datasets/jondurbin/truthy-dpo-v0.1)
* [antiven0m/physical-reasoning-dpo](https://huggingface.co/datasets/antiven0m/physical-reasoning-dpo)
* [flammenai/Date-DPO-NoAsterisks](https://huggingface.co/datasets/flammenai/Date-DPO-NoAsterisks)
* [flammenai/Prude-Phi3-DPO](https://huggingface.co/datasets/flammenai/Prude-Phi3-DPO)
* [Atsunori/HelpSteer2-DPO](https://huggingface.co/datasets/Atsunori/HelpSteer2-DPO) (1,000 samples)
* [jondurbin/gutenberg-dpo-v0.1](https://huggingface.co/datasets/jondurbin/gutenberg-dpo-v0.1)
* [nbeerbower/gutenberg2-dpo](https://huggingface.co/datasets/nbeerbower/gutenberg2-dpo)
* [nbeerbower/gutenberg-moderne-dpo](https://huggingface.co/datasets/nbeerbower/gutenberg-moderne-dpo)

### Method

[QLoRA ORPO tune](https://mlabonne.github.io/blog/posts/2024-04-19_Fine_tune_Llama_3_with_ORPO.html) with 2x RTX 3090 for 2 epochs.
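A minimal sketch of what such a run can look like with TRL's `ORPOTrainer` and 4-bit QLoRA. The dataset choice (one of the sets listed above), hyperparameters, and LoRA targets are illustrative assumptions, not the exact values used to train this model; argument names such as `processing_class` vary slightly across TRL versions.

```python
# Hedged sketch of a QLoRA ORPO fine-tune; values are assumptions, not the
# actual training configuration for Dumpling-Mistral-Nemo-8B.
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from trl import ORPOConfig, ORPOTrainer

base = "nbeerbower/mistral-nemo-kartoffel-PRUNE3"

# 4-bit NF4 quantization (the "Q" in QLoRA) keeps the frozen base model small.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    base, quantization_config=bnb_config, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(base)

# LoRA adapters are the only trainable parameters (rank/targets are assumed).
peft_config = LoraConfig(
    r=64,
    lora_alpha=32,
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

# ORPO consumes preference rows with "prompt", "chosen", and "rejected" columns;
# one example dataset from the list above is used here for illustration.
dataset = load_dataset("nbeerbower/GreatFirewall-DPO", split="train")

training_args = ORPOConfig(
    output_dir="dumpling-mistral-nemo-8b",
    num_train_epochs=2,              # matches the 2 epochs noted above
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    learning_rate=8e-6,              # assumed value
    beta=0.1,                        # weight of the ORPO odds-ratio term
    max_length=2048,
    bf16=True,
)

trainer = ORPOTrainer(
    model=model,
    args=training_args,
    train_dataset=dataset,
    processing_class=tokenizer,      # `tokenizer=` in older TRL releases
    peft_config=peft_config,
)
trainer.train()
```

In practice the listed DPO datasets would be concatenated into a single preference dataset before training, and the multi-GPU setup (2x RTX 3090) would be handled by `accelerate` launch configuration rather than anything in the script itself.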