Update README.md
**README.md** (changed):
```diff
@@ -1,17 +1,3 @@
----
-base_model: unsloth/Qwen3-0.6B-unsloth-bnb-4bit
-tags:
-- text-generation-inference
-- transformers
-- unsloth
-- qwen3
-- gguf
-license: apache-2.0
-language:
-- en
-datasets:
-- srisree/nextjs_typescript_fim_dataset
----
```
The Colab script will include:

- ✅ Training configuration (LoRA, batch size, sequence length, etc.)
- ✅ Evaluation and inference examples

🚀 **Coming soon...** Stay tuned for the full release!
---

# ⚙️ Setup and Run NanoCoder Locally with Ollama in VS Code

> Step-by-step guide to install, configure, and use **NanoCoder** for intelligent React frontend code completion with the **Continue** VS Code extension.

---

## 🧠 Prerequisites

Before getting started, ensure you have the following installed:

- [VS Code](https://code.visualstudio.com/)
- [Ollama](https://ollama.ai) (latest version)
- [Continue extension](https://marketplace.visualstudio.com/items?itemName=Continue.continue)
- A system with at least **8GB RAM** (recommended for 0.6B models)

---
## 🧩 Step 1: Install Ollama

If you haven’t already, download and install **Ollama**:

- macOS / Linux / Windows: [https://ollama.ai/download](https://ollama.ai/download)

Once installed, open your terminal and verify the installation with `ollama --version`.
## 💾 Step 2: Pull the NanoCoder Model

Download the model from the Ollama registry:

```shell
ollama pull srisree/nanocoder
```
## ⚡ Step 3: Run NanoCoder with Ollama

Once downloaded, you can test NanoCoder directly in the terminal, using the same name you pulled in Step 2:

```shell
ollama run srisree/nanocoder
```
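Besides the interactive CLI, Ollama exposes a local REST API (port 11434 by default) that scripts and editor integrations call under the hood. Below is a minimal Python sketch of a one-shot completion request, assuming the server is running and the model name from Step 2; `build_request` and `complete` are illustrative helpers introduced here, not part of any library:

```python
import json
import urllib.request

# Ollama's default local endpoint; adjust if your server uses another port.
OLLAMA_URL = "http://localhost:11434/api/generate"


def build_request(prompt: str, model: str = "srisree/nanocoder") -> dict:
    """Build the JSON payload for a single, non-streaming completion."""
    return {"model": model, "prompt": prompt, "stream": False}


def complete(prompt: str) -> str:
    """Send the prompt to the local Ollama server and return the completion text."""
    data = json.dumps(build_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


# Example (needs a running Ollama server with the model pulled):
# print(complete("// React hook that debounces a value\n"))
```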
Read more in the [Continue docs](https://docs.continue.dev/customize/deep-dives/autocomplete).
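To use NanoCoder for autocomplete inside VS Code, the Continue extension needs a model entry pointing at the local Ollama server. Below is a minimal sketch of a Continue `config.json` entry, assuming the `tabAutocompleteModel` setting and the model name pulled above; the schema varies between Continue versions, so check the docs linked above for the current format:

```json
{
  "tabAutocompleteModel": {
    "title": "NanoCoder",
    "provider": "ollama",
    "model": "srisree/nanocoder"
  }
}
```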