Update README.md
| File | Size | Description | Upload Status |
|------|------|-------------|---------------|
| `tokenizer.json` | 17.2 MB | Full tokenizer vocabulary and merges. | Uploaded (LFS) |
| `tokenizer_config.json` | 57.5 kB | Tokenizer configuration details. | Uploaded |
---

# **Run with LM Studio**

| Feature | Details |
|---------|---------|
| **Run with LM Studio** | https://lmstudio.ai/ |
| **Demo on LM Studio** | https://drive.google.com/file/d/1CHdfjYrwMnk9ACvS40Abfy3xNXnCubKG/view?usp=sharing |
| **Codepy-Deepthink-3B-GGUF** | https://huggingface.co/prithivMLmods/Codepy-Deepthink-3B-GGUF |
---

# **<span style="color:red;">Sample Deepthink Inference</span>**

>>> Develop a Python program to generate random passwords that consist of 8 characters. Not applicable
To customize the password length, modify the `password_length` variable.

- This implementation uses Python's `random` module, which is suitable for general-purpose randomness. For cryptographically secure passwords, consider using the `secrets` module.
- The character set includes spaces for additional complexity, but you can modify the `characters` string to include other symbols (e.g., `!@#$%^&*`).
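The model's full generated script is not reproduced in this excerpt, but a minimal sketch consistent with the notes above might look like the following. The `characters` and `password_length` names follow the variables the notes mention; the `generate_password` helper is illustrative, not the model's exact output.

```python
import random
import string

# Character pool: letters, digits, and a space for extra complexity.
# Extend this string with symbols such as !@#$%^&* if desired.
characters = string.ascii_letters + string.digits + " "

# Change this value to customize the password length.
password_length = 8

def generate_password(length: int = password_length) -> str:
    """Return a random password of the given length.

    Note: `random` is fine for general-purpose use; swap in
    `secrets.choice` for cryptographically secure passwords.
    """
    return "".join(random.choice(characters) for _ in range(length))

print(generate_password())
```

Replacing `random.choice` with `secrets.choice` (from the standard-library `secrets` module) yields the cryptographically secure variant the first note recommends, with no other changes needed.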
---
# **Model Architecture**
Llama 3.2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.