FlameF0X committed (verified)
Commit be61277 · Parent(s): 6c23f82

Update README.md

Files changed (1): README.md (+59 −2)
README.md CHANGED

```diff
@@ -1,5 +1,5 @@
 ---
-license: apache-2.0
+license: mit
 language:
 - en
 base_model:
```
---
license: mit
language:
- en
base_model:
pipeline_tag: text-generation
tags:
- gguf
---

![banner by CroissantWhyNot](banner.png)

*Banner by [Croissant](https://huggingface.co/CroissantWhyNot)*

# N1 - A Chain-of-Thought Language Model

N1 is a small, experimental Chain-of-Thought (CoT) model based on the LLaMA architecture, developed by GoofyLM.

## Model Details

- **Architecture**: LLaMA-based
- **Parameters**: 135M
- **Training Data**: Closed-source dataset
- **Special Features**: Chain-of-Thought reasoning capabilities
- **Note**: The model is prone to erratic, incoherent output

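At 135M parameters, the model is small enough for CPU inference. A rough back-of-envelope estimate of the GGUF file sizes for the two quantizations referenced in this card (the bits-per-weight figures are approximate averages for each scheme, and tokenizer/metadata overhead is ignored):

```python
# Rough size estimate for a 135M-parameter model under two common GGUF
# quantization schemes. Bits-per-weight values are approximations.
PARAMS = 135_000_000

def approx_size_mb(bits_per_weight: float) -> float:
    """Approximate model file size in megabytes (ignores metadata overhead)."""
    return PARAMS * bits_per_weight / 8 / 1e6

q8_0 = approx_size_mb(8.5)     # Q8_0: ~8.5 bits per weight
q4_k_m = approx_size_mb(4.85)  # Q4_K_M: ~4.85 bits per weight
print(f"Q8_0:   ~{q8_0:.0f} MB")
print(f"Q4_K_M: ~{q4_k_m:.0f} MB")
```

Either variant fits in a couple hundred megabytes of RAM before accounting for the KV cache.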
## Intended Use

This model is designed for text generation tasks with a focus on reasoning through problems step-by-step (Chain-of-Thought).

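Because a CoT model emits its reasoning before the final answer, downstream code usually needs to separate the two. A minimal post-processing sketch, assuming the model is prompted to finish with an `Answer:` line (this marker is an illustrative convention, not a documented output format of N1):

```python
def split_cot(generation: str, marker: str = "Answer:") -> tuple[str, str]:
    """Split a generation into (reasoning, answer) at the last marker.

    Falls back to treating the whole text as the answer if no marker is found.
    """
    head, sep, tail = generation.rpartition(marker)
    if not sep:
        return "", generation.strip()
    return head.strip(), tail.strip()

reasoning, answer = split_cot("7 * 8 means eight sevens: 56.\nAnswer: 56")
print(answer)  # 56
```

Splitting on the *last* occurrence matters: small CoT models often mention the marker word inside their reasoning as well.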
## Limitations

- Small parameter count may limit reasoning ability
- May produce unstable or inconsistent outputs
- Not suitable for production use without further testing

---

## Usage

The model can be loaded in any of the following ways:

### Transformers:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("GoofyLM/N1")
tokenizer = AutoTokenizer.from_pretrained("GoofyLM/N1")
```

### llama-cpp-python:

```python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="GoofyLM/N1",
    filename="N1_Q8_0.gguf",
)
```

### Ollama:

```shell
ollama run hf.co/GoofyLM/N1:Q4_K_M
```