GGUF

aashish1904 committed · verified · commit d5ed78f · 1 parent: b51617e

Upload README.md with huggingface_hub

Files changed (1): README.md (+33, −0)
---
license: apache-2.0
---

![](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)

# QuantFactory/TriLM_3.9B_Unpacked-GGUF

This is a quantized version of [SpectraSuite/TriLM_3.9B_Unpacked](https://huggingface.co/SpectraSuite/TriLM_3.9B_Unpacked), created with llama.cpp.

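The GGUF files in this repository can be run directly with llama.cpp. A minimal sketch — the quant filename below is hypothetical, so check the repository's file list for the actual GGUF names:

```shell
# Download a quant from this repo (the filename is hypothetical — list the
# repo's files for the exact names), then run it with llama.cpp's CLI.
huggingface-cli download QuantFactory/TriLM_3.9B_Unpacked-GGUF \
    --include "*.gguf" --local-dir .
./llama-cli -m TriLM_3.9B_Unpacked.Q4_K_M.gguf -p "Once upon a time" -n 128
```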
# Original Model Card

# TriLM 3.9B Unpacked

TriLM (a ternary model) unpacked to FP16 format, compatible with standard FP16 GEMMs. After unpacking, TriLM has the same architecture as LLaMA.

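For intuition, "ternary unpacked to FP16" means each weight takes one of the values {-1, 0, +1} times a scale, but is stored as plain FP16 so standard GEMM kernels apply. A minimal round-to-nearest sketch — the per-tensor scale here is an illustrative assumption, not TriLM's actual training-time quantizer:

```python
import numpy as np

def ternarize(w, scale=None):
    """Round-to-nearest ternary quantization: each weight becomes one of
    {-1, 0, +1} times a scale, then is stored "unpacked" as FP16."""
    if scale is None:
        scale = np.abs(w).mean()  # simple per-tensor scale (assumption)
    q = np.clip(np.round(w / scale), -1, 1)  # ternary codes in {-1, 0, +1}
    return (q * scale).astype(np.float16)    # FP16 storage, GEMM-compatible

w = np.array([0.9, -0.05, -1.2, 0.4])
print(ternarize(w))  # only four distinct values are possible: 0 and +/- scale
```

A packed ternary weight needs under 2 bits; the unpacked FP16 form trades that compactness for compatibility with existing FP16 kernels.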
```python
import torch
import transformers

model_name = "SpectraSuite/TriLM_3.9B_Unpacked"

# Please adjust the temperature, repetition penalty, top_k, top_p and other
# sampling parameters according to your needs.
pipeline = transformers.pipeline(
    "text-generation",
    model=model_name,
    model_kwargs={"torch_dtype": torch.float16},
    device_map="auto",
)

# These are base (pretrained) LLMs that are not instruction- or chat-tuned,
# so you may need to adjust your prompt accordingly.
pipeline("Once upon a time")
```

* License: Apache 2.0
* We use our GitHub repo for communication, including queries about this HF repo. Feel free to open an issue at https://github.com/NolanoOrg/SpectraSuite.