Corelyn committed (verified) · Commit dd9c896 · 1 Parent(s): d366900

Upload README.md with huggingface_hub

Files changed (1): README.md (+52, -0)

README.md ADDED
@@ -0,0 +1,52 @@
# Corelyn NeoH GGUF Model

## Specifications

- Model Name: Corelyn NeoH
- Base Name: NeoH-3.2
- Type: Instruct / Fine-tuned
- Architecture: LLaMA
- Size: 3B parameters
- Organization: Corelyn

## Model Overview

Corelyn NeoH is a 3-billion-parameter LLaMA-based instruction-tuned model designed for general-purpose assistant tasks and knowledge extraction. It is a fine-tuned variant optimized for instruction-following use cases.

- Fine-tuning type: Instruct
- Base architecture: LLaMA
- Parameter count: 3B
- Context length: 131,072 tokens

### This model is suitable for applications such as:

- Chatbots and conversational AI
- Knowledge retrieval and Q&A
- Code and text generation
- Instruction-following tasks

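Note that llama.cpp-based runtimes do not enable the full 131,072-token context by default; the window must be requested at load time via `n_ctx`. A minimal sketch of load settings for `llama_cpp.Llama` (the path is the placeholder from the Usage section below, and all values are assumptions to tune for your hardware):

```python
# Settings to pass to llama_cpp.Llama(...) at load time.
# All values here are assumptions; adjust for your hardware.
load_kwargs = {
    "model_path": "path/to/the/file/NeoH3.2.gguf",  # placeholder path
    "n_ctx": 16384,      # context window; the model supports up to 131,072 tokens
    "n_threads": 8,      # CPU threads used for inference
    "n_gpu_layers": 0,   # raise to offload transformer layers to a GPU
}

# Usage (requires llama-cpp-python and the GGUF file on disk):
#   from llama_cpp import Llama
#   llm = Llama(**load_kwargs)
```

Larger `n_ctx` values increase memory use roughly linearly, so start small and raise the window only as far as your RAM allows.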
## Usage

```python
# pip install llama-cpp-python

from llama_cpp import Llama

# Load the model (update the path to where your .gguf file is)
llm = Llama(model_path="path/to/the/file/NeoH3.2.gguf")

# Create a chat completion
response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Create a Haiku about AI"}]
)

# Print the generated text; create_chat_completion returns a dict,
# not an object with attributes
print(response["choices"][0]["message"]["content"])
```
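The dict returned by `create_chat_completion` follows the OpenAI chat-completion schema, so the reply text lives under `choices[0].message.content`. A small helper that extracts it (the response shown is a hypothetical example, not real model output):

```python
def extract_reply(response: dict) -> str:
    """Return the assistant's text from an OpenAI-style chat completion dict."""
    return response["choices"][0]["message"]["content"]

# Hypothetical response shape, for illustration only:
example = {
    "choices": [
        {"message": {"role": "assistant", "content": "Silicon minds dream"}}
    ]
}
print(extract_reply(example))  # Silicon minds dream
```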