datasets:
- HuggingFaceH4/ultrachat_200k
language:
- en
pipeline_tag: text-generation
---

# Cascade0

My first ever LLM, trained locally on a single RTX 4080 in 1.5-2 weeks.

According to Gemini 2.5 Flash after analyzing the responses, its verdict was:

This project started in May 2025. The training code is AI-generated, but it took a lot of human effort (mostly debugging and prompt engineering) to reach this state: lots of trial and error, switching between AIs (GPT, Gemini, DeepSeek), electricity and time wasted on failed training runs, and plenty of frustration.

It was only recently, when I bought ChatGPT Plus, that I could pull this off, after almost abandoning everything.

But after all, this is my dream, and I just feel good when I see this on my page. <3