---
license: apache-2.0
---
Experiment Title: MermaidLLama vs Mistral: Testing the Power of 8 Billion Parameters
---
I'm thrilled to share an experiment comparing the capabilities of MermaidLLama, an 8.3-billion-parameter model based on LLaMA-PRO-Instruct, against Mistral, a 7-billion-parameter model. The motivation behind this test is to explore whether "bigger is better" when it comes to language models.
**MermaidLLama: Unleashing the Power of 8 Billion Parameters**

Introducing MermaidLLama, a robust language model designed for Python code understanding and for crafting captivating story flow maps. With 8.3 billion parameters, this model builds on LLaMA-PRO-Instruct's ability to maintain versatility in programming, mathematical reasoning, and general language processing.
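The flow maps such a model produces are Mermaid diagrams, so a caller would typically prompt with the source code and then extract the fenced Mermaid block from the reply. Here is a minimal sketch of that plumbing; the prompt wording, the `extract_mermaid` helper, and the simulated reply are all illustrative assumptions, not the model's documented interface:

```python
import re

# Hypothetical sketch: a prompt template for asking a code-to-flow-map
# model (such as MermaidLLama) to map a Python function, plus a helper
# that pulls the Mermaid diagram out of the reply. The model call itself
# is omitted; `reply` below is a simulated response, not real output.

FENCE = "`" * 3  # code-fence marker, built up to keep this snippet readable

PROMPT_TEMPLATE = (
    "Create a Mermaid flow map for the following Python code:\n\n"
    + FENCE + "python\n{code}\n" + FENCE
)

def extract_mermaid(reply: str) -> str:
    """Return the body of the first Mermaid code block in a model reply."""
    match = re.search(FENCE + r"mermaid\n(.*?)" + FENCE, reply, re.DOTALL)
    if match is None:
        raise ValueError("no Mermaid block found in reply")
    return match.group(1).strip()

# Simulated reply a flow-map model might produce for an abs(x)-style function.
reply = (
    "Here is the flow map:\n"
    + FENCE + "mermaid\n"
    "graph TD;\n"
    "    A[Start] --> B{x > 0?};\n"
    "    B -- yes --> C[return x];\n"
    "    B -- no --> D[return -x];\n"
    + FENCE
)

diagram = extract_mermaid(reply)
print(diagram)
```

The extracted diagram text can then be rendered by any Mermaid-compatible viewer, independent of which model generated it.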