---
license: apache-2.0
datasets:
- OpenAssistant/oasst1
- rombodawg/LosslessMegaCodeTrainingV2_1m_Evol_Uncensored
---

# falcon-40b-megacode2-oasst

- wandb: stage 1: [run37_megacode_falcon40](https://wandb.ai/open-assistant/epfl-mt-sft/runs/run37_megacode_falcon40), stage 2: [run38_megacode_oasst_falcon40](https://wandb.ai/open-assistant/epfl-mt-sft/runs/run38_megacode_oasst_falcon40)
- sampling report: [2023-08-17_OpenAssistant_falcon-40b-megacode2-oasst_sampling_noprefix2.json](https://open-assistant.github.io/oasst-model-eval/?f=https%3A%2F%2Fraw.githubusercontent.com%2FOpen-Assistant%2Foasst-model-eval%2Fmain%2Fsampling_reports%2Foasst-sft%2F2023-08-17_OpenAssistant_falcon-40b-megacode2-oasst_sampling_noprefix2.json)
- stage 1 model: [andreaskoepf/falcon-40b-megacode2](https://huggingface.co/andreaskoepf/falcon-40b-megacode2)

## Prompt Template

The [chatml](https://github.com/openai/openai-python/blob/main/chatml.md) format is used:

"<|im_start|>user\n{user prompt}<|im_end|>\n<|im_start|>assistant\n{Assistant answer}<|im_end|>\n"

Multi-line:

```
<|im_start|>user
{user prompt}<|im_end|>
<|im_start|>assistant
{Assistant answer}<|im_end|>
```
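
For reference, here is a minimal inference sketch using `transformers` that applies this template. The repository id is an assumption inferred from the sampling report name, and the loading options (`torch_dtype`, `device_map`, `trust_remote_code`) are illustrative, not prescriptive; adjust them to your hardware and library version.

```python
# Minimal sketch, assuming the transformers library and enough GPU memory
# for a 40B-parameter model. The repo id below is an assumption inferred
# from the sampling report name above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "OpenAssistant/falcon-40b-megacode2-oasst"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,  # illustrative; pick a dtype your hardware supports
    device_map="auto",           # shard layers across available GPUs
    trust_remote_code=True,      # may be needed for Falcon custom model code
)

# Build a chatml prompt exactly as in the template above and stop on <|im_end|>.
prompt = (
    "<|im_start|>user\n"
    "Write a Python function that reverses a string.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(
    **inputs,
    max_new_tokens=256,
    # assumes <|im_end|> is in the tokenizer's vocabulary, as the template implies
    eos_token_id=tokenizer.convert_tokens_to_ids("<|im_end|>"),
)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:]))
```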

### Credits & Special Thanks

- Compute was generously sponsored by the EPFL [Machine Learning and Optimization Laboratory](https://www.epfl.ch/labs/mlo/).
- The open-source [epfLLM/Megatron-LLM](https://github.com/epfLLM/Megatron-LLM) trainer was used for fine-tuning.
- [rombodawg](https://huggingface.co/rombodawg) curated and published [LosslessMegaCodeTrainingV2_1m_Evol_Uncensored](https://huggingface.co/datasets/rombodawg/LosslessMegaCodeTrainingV2_1m_Evol_Uncensored).
- [andreaskoepf](https://github.com/andreaskoepf/) prepared & orchestrated the training.