buley committed · verified · Commit d2abf04 · Parent(s): 845e743

Upload README.md with huggingface_hub
Files changed (1): README.md (added, +51 −0)
---
language:
- en
license: apache-2.0
library_name: llama-cpp
tags:
- gguf
- forkjoin-ai
pipeline_tag: text-generation
---

# Qwen2.5 Coder 7B Instruct

Forkjoin.ai conversion of [qwen2.5-coder-7b-instruct-gguf](https://huggingface.co/qwen2.5-coder-7b-instruct-gguf) to GGUF format for edge deployment.

## Model Details

- **Source Model**: [See upstream](https://huggingface.co/qwen2.5-coder-7b-instruct-gguf)
- **Format**: GGUF
- **Converted by**: [Forkjoin.ai](https://forkjoin.ai)

## Usage

### With llama.cpp

```bash
./llama-cli -m qwen2.5-coder-7b-instruct-gguf.gguf -p "Your prompt here" -n 256
```
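If you launch llama-cli from a script, the flags can be assembled and quoted safely with Python's standard `shlex` module. A minimal sketch — the helper name `llama_cli_command` is hypothetical; `-m`, `-p`, and `-n` are the same llama-cli flags used above:

```python
import shlex

def llama_cli_command(model_path: str, prompt: str, n_predict: int = 256) -> str:
    """Assemble the llama-cli invocation as a single, safely quoted shell string."""
    args = [
        "./llama-cli",
        "-m", model_path,      # path to the GGUF model file
        "-p", prompt,          # prompt text (may contain spaces)
        "-n", str(n_predict),  # number of tokens to generate
    ]
    return shlex.join(args)

print(llama_cli_command("qwen2.5-coder-7b-instruct-gguf.gguf", "Your prompt here"))
# -> ./llama-cli -m qwen2.5-coder-7b-instruct-gguf.gguf -p 'Your prompt here' -n 256
```

Using `shlex.join` avoids shell-injection and quoting bugs when the prompt comes from user input.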

### With Ollama

Create a `Modelfile`:

```
FROM ./qwen2.5-coder-7b-instruct-gguf.gguf
```

Then build and run the model:

```bash
ollama create qwen2.5-coder-7b-instruct-gguf -f Modelfile
ollama run qwen2.5-coder-7b-instruct-gguf
```
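Ollama Modelfiles also accept optional `PARAMETER` directives for tuning inference. A hedged sketch of an extended Modelfile — the values below are illustrative defaults for constrained devices, not settings recommended by the upstream model:

```
FROM ./qwen2.5-coder-7b-instruct-gguf.gguf
# Illustrative: modest context window for memory-constrained edge devices
PARAMETER num_ctx 4096
# Illustrative: lower temperature for more deterministic code generation
PARAMETER temperature 0.2
```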

## About Forkjoin.ai

[Forkjoin.ai](https://forkjoin.ai) runs AI models at the edge -- in-browser, on-device, zero cloud cost. These converted models power real-time inference, speech recognition, and natural language capabilities.

All conversions are optimized for edge deployment within browser and mobile memory constraints.

## License

Apache 2.0 (follows upstream model license)