| text | start_time | end_time |
|---|---|---|
| SPIN: self-play fine-tuning that improves LLMs. Tricksy: a form of fast inference involving sparsity. Phi-2 is a model from Microsoft. Lightning Attention 2 is an alternative to Flash Attention. Mixtral-8x7B is a mixture-of-experts model. Solar 10.7B is a Mistral model with some extra layers added in. | 00:00:00.000 | 00:00:29.220 |
| OpenChat is a SPIN fine-tune of the Mistral model. Notux-8x7B-v1 is a fine-tuned version of Mixtral-8x7B. SPIN is Google's best model, or perhaps not as good as Gemini Ultra. Phi-2, I've already mentioned. DeciLM-7B is a high-speed 7B | 00:00:29.220 | 00:00:58.240 |
| model. That's DeciLM-7B. Arena Elo is a means of comparing LLMs, MT-Bench is another metric, and MMLU is also | 00:00:58.240 | 00:01:10.560 |
| GPT-4 Turbo is a fast GPT-4 model by OpenAI. Mistral Medium is a mixture of experts but with larger experts than Mixtral-8x7B. Claude 1, Claude 2 or Claude 2.0 are the | 00:01:11.680 | 00:01:30.060 |
| latest Claude models. Mixtral-8x7B-Instruct-v0.1 is the latest mixture of experts. Yi-34B-Chat is a very strong fine-tune of Llama. Claude-Instant-1 is one of the Claude models. Tulu 2 DPO 70B is a | 00:01:30.060 | 00:01:58.960 |
| DPO fine-tuned model by Allen AI. It's a fine-tune of the Llama-2 model, the 70B. WizardLM-70B-v1.0 is also a fine-tune of the Llama-2 70B model. | 00:01:58.960 | 00:02:16.000 |
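The transcript mentions Arena Elo as a means of comparing LLMs. As a minimal sketch of how such pairwise Elo ratings work after a head-to-head comparison (the K-factor of 32 and the starting ratings of 1000 are illustrative assumptions, not the leaderboard's actual parameters):

```python
def expected_score(rating_a: float, rating_b: float) -> float:
    """Probability that A beats B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

def elo_update(rating_a: float, rating_b: float, score_a: float,
               k: float = 32.0) -> tuple[float, float]:
    """Return updated (rating_a, rating_b).

    score_a is the outcome for A: 1.0 for a win, 0.0 for a loss,
    0.5 for a tie. The winner gains exactly what the loser gives up.
    """
    delta = k * (score_a - expected_score(rating_a, rating_b))
    return rating_a + delta, rating_b - delta

# Example: two evenly rated models, A wins the pairwise comparison.
a, b = elo_update(1000.0, 1000.0, 1.0)  # -> (1016.0, 984.0)
```

Because the update is zero-sum and larger for upsets, repeated comparisons converge toward ratings that reflect win probability rather than a single benchmark score.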