Update README.md
README.md

```diff
@@ -210,6 +210,6 @@ Quantized models like **triangulum-10b-f16.gguf** are optimized for performance
 1. Ensure your system has sufficient VRAM or CPU resources.
 2. Use the `.gguf` model format for compatibility with Ollama.
 
-
+# **Conclusion**
 
 Running the **Triangulum-10B** model with Ollama provides a robust way to leverage open-source LLMs locally for diverse use cases. By following these steps, you can explore the capabilities of other open-source models in the future.
```
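The two requirements in the hunk above (sufficient VRAM or CPU resources, and a `.gguf` model file) come together in an Ollama `Modelfile`. A minimal sketch, assuming the quantized weights sit in the current working directory (the filename is taken from the README; the local path and the model name `triangulum-10b` below are illustrative):

```
# Modelfile — registers the quantized GGUF weights with Ollama.
# The relative path is an assumption for illustration.
FROM ./triangulum-10b-f16.gguf
```

The model can then be built and started with `ollama create triangulum-10b -f Modelfile` followed by `ollama run triangulum-10b`.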