prithivMLmods committed · verified
Commit 0b1c42a · Parent(s): a10fd3b

Update README.md

---
license: apache-2.0
language:
- en
base_model:
- prithivMLmods/OpenScienceReasoning-Qwen-e10
pipeline_tag: text-generation
library_name: transformers
tags:
- text-generation-inference
- medical
- science
---
# **OpenScienceReasoning-Qwen-e10-GGUF**

> OpenScienceReasoning-Qwen-e10 is a high-efficiency scientific reasoning model fine-tuned from Qwen3-1.7B on the nvidia/OpenScienceReasoning-2 dataset, using 10,000 curated science and math entries that strengthen analytical problem-solving, chain-of-thought exploration, and code reasoning. The model performs structured logic, scientific derivations, and multi-language coding, and can generate outputs in formats such as LaTeX, Markdown, JSON, CSV, and YAML, making it well suited to research, education, and technical documentation on mid-range GPUs and edge clusters. Optimized for STEM applications, it delivers robust performance for tutoring, research assistance, and structured data generation while maintaining a lightweight deployment footprint.

## Model Files

| File Name | Quant Type | File Size |
| --------- | ---------- | --------- |
| OpenScienceReasoning-Qwen-e10.BF16.gguf | BF16 | 3.45 GB |
| OpenScienceReasoning-Qwen-e10.F16.gguf | F16 | 3.45 GB |
| OpenScienceReasoning-Qwen-e10.F32.gguf | F32 | 6.89 GB |
| OpenScienceReasoning-Qwen-e10.Q2_K.gguf | Q2_K | 778 MB |
| OpenScienceReasoning-Qwen-e10.Q3_K_L.gguf | Q3_K_L | 1 GB |
| OpenScienceReasoning-Qwen-e10.Q3_K_M.gguf | Q3_K_M | 940 MB |
| OpenScienceReasoning-Qwen-e10.Q3_K_S.gguf | Q3_K_S | 867 MB |
| OpenScienceReasoning-Qwen-e10.Q4_0.gguf | Q4_0 | 1.05 GB |
| OpenScienceReasoning-Qwen-e10.Q4_1.gguf | Q4_1 | 1.14 GB |
| OpenScienceReasoning-Qwen-e10.Q4_K.gguf | Q4_K | 1.11 GB |
| OpenScienceReasoning-Qwen-e10.Q4_K_M.gguf | Q4_K_M | 1.11 GB |
| OpenScienceReasoning-Qwen-e10.Q4_K_S.gguf | Q4_K_S | 1.06 GB |
| OpenScienceReasoning-Qwen-e10.Q5_0.gguf | Q5_0 | 1.23 GB |
| OpenScienceReasoning-Qwen-e10.Q5_1.gguf | Q5_1 | 1.32 GB |
| OpenScienceReasoning-Qwen-e10.Q5_K.gguf | Q5_K | 1.26 GB |
| OpenScienceReasoning-Qwen-e10.Q5_K_M.gguf | Q5_K_M | 1.26 GB |
| OpenScienceReasoning-Qwen-e10.Q5_K_S.gguf | Q5_K_S | 1.23 GB |
| OpenScienceReasoning-Qwen-e10.Q6_K.gguf | Q6_K | 1.42 GB |
| OpenScienceReasoning-Qwen-e10.Q8_0.gguf | Q8_0 | 1.83 GB |
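Given the file sizes above, a rough way to choose a quant is to pick the largest file that fits your memory budget with headroom for the KV cache and runtime buffers. A minimal sketch — the helper name and the 1.3× overhead factor are illustrative assumptions, not part of this release:

```python
# Quant file sizes in GB, taken from the table above.
QUANT_SIZES_GB = {
    "Q2_K": 0.778, "Q3_K_S": 0.867, "Q3_K_M": 0.940, "Q3_K_L": 1.0,
    "Q4_0": 1.05, "Q4_K_S": 1.06, "Q4_K_M": 1.11, "Q4_1": 1.14,
    "Q5_0": 1.23, "Q5_K_S": 1.23, "Q5_K_M": 1.26, "Q5_1": 1.32,
    "Q6_K": 1.42, "Q8_0": 1.83, "BF16": 3.45, "F16": 3.45, "F32": 6.89,
}

def pick_quant(budget_gb, overhead=1.3):
    """Return the largest quant whose size * overhead fits budget_gb, else None.

    The 1.3x overhead factor is an illustrative guess covering the KV cache
    and runtime buffers; tune it for your context length and backend.
    """
    fitting = [(size, name) for name, size in QUANT_SIZES_GB.items()
               if size * overhead <= budget_gb]
    return max(fitting)[1] if fitting else None
```

With about 2 GB of free memory this suggests Q6_K; with under 1 GB nothing fits and a smaller model or offloading is needed.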
## Quants Usage

(Sorted by size, not necessarily by quality. IQ-quants are often preferable to similarly sized non-IQ quants.)

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)
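To run one of these quants locally, a typical llama.cpp workflow is to download a single GGUF file and point `llama-cli` at it. A sketch, assuming llama.cpp and the `huggingface-cli` tool are installed, and assuming this repo is published as `prithivMLmods/OpenScienceReasoning-Qwen-e10-GGUF` (the prompt is illustrative):

```shell
# Download one quant from the Hub (Q5_K_M is a common quality/size balance).
huggingface-cli download prithivMLmods/OpenScienceReasoning-Qwen-e10-GGUF \
  OpenScienceReasoning-Qwen-e10.Q5_K_M.gguf --local-dir .

# Run a generation with llama.cpp.
llama-cli -m OpenScienceReasoning-Qwen-e10.Q5_K_M.gguf \
  -p "Derive the escape velocity formula step by step." -n 512
```

Any other quant from the table can be substituted by changing the filename; smaller quants trade accuracy for memory.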