Corelyn committed on · Commit 2daf6d6 · verified · 1 Parent(s): eac17a0
Files changed (1): README.md (+72 −72)
README.md CHANGED
---
license: apache-2.0
tags:
- text-generation
- instruction-tuned
- llama
- gguf
- chatbot
library_name: llama.cpp
language: en
datasets:
- custom
model-index:
- name: Corelyn NeoH
  results: []
---

# Corelyn NeoH GGUF Model

## Specifications
- Model Name: Corelyn NeoH
- Base Name: NeoH-3.2
- Type: Instruct / Fine-tuned
- Architecture: LLaMA
- Size: 4B parameters
- Organization: Corelyn

## Model Overview

Corelyn NeoH is a 4-billion-parameter, LLaMA-based, instruction-tuned model designed for general-purpose assistant tasks and knowledge extraction. It is a fine-tuned variant optimized for instruction-following use cases.

- Fine-tuning type: Instruct
- Base architecture: LLaMA
- Parameter count: 4B
- Context length: 131,072 tokens
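
With a window as large as 131,072 tokens it helps to budget prompt and generation tokens explicitly. A minimal illustrative sketch (the helper and the 4,096-token generation reserve are assumptions, not part of the model or of llama.cpp):

```python
# Illustrative sketch: split a fixed context window between the prompt and
# the completion, reserving headroom for the model's generated tokens.
CONTEXT_LENGTH = 131_072  # maximum context reported for this model

def max_prompt_tokens(context_length: int, max_new_tokens: int) -> int:
    """Tokens left for the prompt once the generation budget is reserved."""
    if max_new_tokens >= context_length:
        raise ValueError("generation budget exceeds the context window")
    return context_length - max_new_tokens

# Reserving 4,096 tokens for output leaves the rest for the prompt.
print(max_prompt_tokens(CONTEXT_LENGTH, 4096))
```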

### This model is suitable for applications such as:

- Chatbots and conversational AI
- Knowledge retrieval and Q&A
- Code and text generation
- Instruction-following tasks

## Usage

Download from: [NeoH3.2](https://huggingface.co/CorelynAI/NeoH/resolve/main/NeoH3.2.gguf?download=true)
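
The link above follows the Hub's `resolve/main` direct-download pattern, so the same URL can be built from a repo id and filename. A small sketch (the helper name is hypothetical):

```python
# Hypothetical helper: build a direct-download URL for a file hosted in a
# Hugging Face repo, following the resolve/main pattern used above.
def hub_download_url(repo_id: str, filename: str) -> str:
    return f"https://huggingface.co/{repo_id}/resolve/main/{filename}?download=true"

print(hub_download_url("CorelynAI/NeoH", "NeoH3.2.gguf"))
```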

```python
# pip install llama-cpp-python

from llama_cpp import Llama

# Load the model (update the path to where your .gguf file is)
llm = Llama(model_path="path/to/the/file/NeoH3.2.gguf")

# Create a chat completion
response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Create a Haiku about AI"}]
)

# Print the generated text (create_chat_completion returns a plain dict,
# so index it with keys rather than attributes)
print(response["choices"][0]["message"]["content"])
```
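
`create_chat_completion` accepts an OpenAI-style message list, so multi-turn conversations are handled by replaying earlier turns. A minimal sketch (the `build_messages` helper is hypothetical, not part of llama-cpp-python):

```python
# Hypothetical helper: assemble an OpenAI-style message list from a system
# prompt, prior (user, assistant) turns, and the next user message.
def build_messages(system_prompt, turns, next_user_message):
    messages = [{"role": "system", "content": system_prompt}]
    for user_text, assistant_text in turns:
        messages.append({"role": "user", "content": user_text})
        messages.append({"role": "assistant", "content": assistant_text})
    messages.append({"role": "user", "content": next_user_message})
    return messages

history = [("Create a Haiku about AI", "Silent circuits hum...")]
messages = build_messages("You are a concise assistant.", history,
                          "Now one about the sea")
# The result can be passed directly:
# llm.create_chat_completion(messages=messages)
```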