adamtechguy committed f739bbd · verified · 1 parent: 27ee069

Update README.md

Files changed (1): README.md (+158 -2)
README.md CHANGED
 
@@ -67,6 +67,91 @@ print(response.json()['response'])
 
 Simply select `thatdamai/tinyclaude-1b` from the model dropdown after pulling.
 
+ ### Hugging Face Transformers
+
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ # Load model and tokenizer
+ model_name = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
+ tokenizer = AutoTokenizer.from_pretrained(model_name)
+ model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")
+
+ # Define the TinyClaude system prompt
+ system_prompt = """You are a helpful, harmless, and honest AI assistant..."""
+
+ # Format with chat template
+ messages = [
+     {"role": "system", "content": system_prompt},
+     {"role": "user", "content": "Explain quantum computing simply."}
+ ]
+
+ prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
+ inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
+
+ # Generate response
+ outputs = model.generate(**inputs, max_new_tokens=512, temperature=0.7, do_sample=True)
+ response = tokenizer.decode(outputs[0], skip_special_tokens=True)
+ print(response)
+ ```
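+
+ Note that `tokenizer.decode(outputs[0], ...)` returns the prompt and the completion together. To print only the newly generated text, slice off the prompt tokens first (a minimal sketch continuing from the variables above):
+
+ ```python
+ # Keep only the tokens generated after the prompt
+ new_tokens = outputs[0][inputs["input_ids"].shape[-1]:]
+ print(tokenizer.decode(new_tokens, skip_special_tokens=True))
+ ```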
+
+ ### Hugging Face with llama-cpp-python
+
+ ```python
+ from llama_cpp import Llama
+
+ # Download GGUF from Hugging Face Hub
+ llm = Llama.from_pretrained(
+     repo_id="TheBloke/TinyLlama-1.1B-Chat-v1.0-GGUF",
+     filename="tinyllama-1.1b-chat-v1.0.Q4_K_M.gguf",
+     n_ctx=2048,
+     n_gpu_layers=-1  # Use all GPU layers
+ )
+
+ system_prompt = """You are a helpful, harmless, and honest AI assistant..."""
+
+ output = llm.create_chat_completion(
+     messages=[
+         {"role": "system", "content": system_prompt},
+         {"role": "user", "content": "What is machine learning?"}
+     ],
+     temperature=0.7,
+     max_tokens=512
+ )
+
+ print(output['choices'][0]['message']['content'])
+ ```
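+
+ For interactive use, the same call can stream tokens as they arrive rather than waiting for the full response (a minimal sketch reusing `llm` and `system_prompt` from above):
+
+ ```python
+ # Stream the completion chunk by chunk
+ for chunk in llm.create_chat_completion(
+     messages=[
+         {"role": "system", "content": system_prompt},
+         {"role": "user", "content": "What is machine learning?"}
+     ],
+     stream=True
+ ):
+     delta = chunk["choices"][0]["delta"]
+     print(delta.get("content", ""), end="", flush=True)
+ ```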
+
+ ### Hugging Face CLI
+
+ ```bash
+ # Install huggingface_hub
+ pip install huggingface_hub
+
+ # Download model files
+ huggingface-cli download TinyLlama/TinyLlama-1.1B-Chat-v1.0 --local-dir ./tinyllama
+
+ # Download GGUF quantized version
+ huggingface-cli download TheBloke/TinyLlama-1.1B-Chat-v1.0-GGUF tinyllama-1.1b-chat-v1.0.Q4_K_M.gguf --local-dir ./tinyllama-gguf
+ ```
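+
+ The same downloads can be scripted with the `huggingface_hub` Python API (a minimal sketch using the same repositories as above):
+
+ ```python
+ from huggingface_hub import hf_hub_download, snapshot_download
+
+ # Download the full model repository
+ snapshot_download("TinyLlama/TinyLlama-1.1B-Chat-v1.0", local_dir="./tinyllama")
+
+ # Download a single GGUF file
+ gguf_path = hf_hub_download(
+     repo_id="TheBloke/TinyLlama-1.1B-Chat-v1.0-GGUF",
+     filename="tinyllama-1.1b-chat-v1.0.Q4_K_M.gguf",
+     local_dir="./tinyllama-gguf"
+ )
+ print(gguf_path)
+ ```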
+
+ ### Text Generation Inference (TGI)
+
+ ```bash
+ # Run with Docker
+ docker run --gpus all --shm-size 1g -p 8080:80 \
+     ghcr.io/huggingface/text-generation-inference:latest \
+     --model-id TinyLlama/TinyLlama-1.1B-Chat-v1.0 \
+     --max-input-length 1024 \
+     --max-total-tokens 2048
+
+ # Query the endpoint
+ curl http://localhost:8080/generate \
+     -X POST \
+     -H 'Content-Type: application/json' \
+     -d '{"inputs": "<|system|>\nYou are a helpful assistant.</s>\n<|user|>\nHello!</s>\n<|assistant|>\n", "parameters": {"max_new_tokens": 256}}'
+ ```
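+
+ The endpoint can also be queried from Python with `requests` (a minimal sketch mirroring the `curl` call above; the prompt string follows TinyLlama's chat template):
+
+ ```python
+ import requests
+
+ # Same payload as the curl example, sent to the local TGI server
+ payload = {
+     "inputs": "<|system|>\nYou are a helpful assistant.</s>\n<|user|>\nHello!</s>\n<|assistant|>\n",
+     "parameters": {"max_new_tokens": 256}
+ }
+ response = requests.post("http://localhost:8080/generate", json=payload)
+ print(response.json()["generated_text"])
+ ```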
+
 
 ## Model Details
 
 | Property | Value |
 
@@ -124,6 +209,77 @@ ollama create my-tinyclaude -f Modelfile
 ollama run my-tinyclaude
 ```
 
+ ## Hugging Face Integration
+
+ ### Uploading to Hugging Face Hub
+
+ ```bash
+ # Install required tools
+ pip install huggingface_hub
+
+ # Login to Hugging Face
+ huggingface-cli login
+
+ # Create a new model repository
+ huggingface-cli repo create tinyclaude-1b --type model
+
+ # Upload model files
+ huggingface-cli upload thatdamai/tinyclaude-1b ./model-files --repo-type model
+ ```
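+
+ The same upload can be done programmatically (a minimal sketch; `./model-files` is the local directory from the CLI example above):
+
+ ```python
+ from huggingface_hub import HfApi
+
+ api = HfApi()
+
+ # Create the repo if it does not exist yet, then upload the folder
+ api.create_repo("thatdamai/tinyclaude-1b", repo_type="model", exist_ok=True)
+ api.upload_folder(
+     folder_path="./model-files",
+     repo_id="thatdamai/tinyclaude-1b",
+     repo_type="model"
+ )
+ ```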
+
+ ### Converting Ollama to GGUF for Hugging Face
+
+ ```bash
+ # Find your Ollama model location
+ ollama show thatdamai/tinyclaude-1b --modelfile
+
+ # Models are stored in ~/.ollama/models or /usr/share/ollama/.ollama/models
+ # Copy the blob files and upload to HF
+
+ # Alternative: Use ollama's model export (if available)
+ cp /usr/share/ollama/.ollama/models/blobs/<sha256-hash> ./tinyclaude.gguf
+ ```
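+
+ Finding which blob holds the GGUF weights can be scripted. The sketch below assumes Ollama's current on-disk layout (JSON manifests under `~/.ollama/models/manifests`, blobs named `sha256-<hash>`); this layout is an implementation detail and may change between versions:
+
+ ```python
+ import json
+ from pathlib import Path
+
+ # Assumed manifest path for thatdamai/tinyclaude-1b, tag "latest"
+ models = Path.home() / ".ollama" / "models"
+ manifest = models / "manifests" / "registry.ollama.ai" / "thatdamai" / "tinyclaude-1b" / "latest"
+
+ # The GGUF weights layer carries the "image.model" media type
+ for layer in json.loads(manifest.read_text())["layers"]:
+     if layer["mediaType"] == "application/vnd.ollama.image.model":
+         print(models / "blobs" / layer["digest"].replace(":", "-"))
+ ```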
+
+ ### Creating a Hugging Face Model Card
+
+ Create a `README.md` in your HF repo with YAML frontmatter:
+
+ ```yaml
+ ---
+ license: apache-2.0
+ base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
+ tags:
+ - tinyllama
+ - gguf
+ - ollama
+ - assistant
+ - conversational
+ model_type: llama
+ pipeline_tag: text-generation
+ inference: false
+ ---
+ ```
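+
+ The card can also be pushed from Python via `huggingface_hub` (a minimal sketch; assumes the README above is saved locally):
+
+ ```python
+ from huggingface_hub import ModelCard
+
+ # The file contains the YAML frontmatter above plus the markdown body
+ card = ModelCard(open("README.md").read())
+ card.push_to_hub("thatdamai/tinyclaude-1b")
+ ```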
+
+ ### Downloading from Hugging Face to Ollama
+
+ ```bash
+ # Method 1: Create Modelfile pointing to HF GGUF
+ cat << 'EOF' > Modelfile
+ FROM hf.co/thatdamai/tinyclaude-1b-gguf
+ EOF
+
+ ollama create tinyclaude-local -f Modelfile
+
+ # Method 2: Download GGUF first, then import
+ huggingface-cli download thatdamai/tinyclaude-1b-gguf tinyclaude-1b.Q4_K_M.gguf --local-dir ./
+
+ cat << EOF > Modelfile
+ FROM ./tinyclaude-1b.Q4_K_M.gguf
+ EOF
+
+ ollama create tinyclaude-local -f Modelfile
+ ```
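+
+ Once imported, the model can be smoke-tested through Ollama's local REST API (a minimal sketch; assumes Ollama is serving on its default port 11434):
+
+ ```python
+ import requests
+
+ # One-off, non-streaming completion from the freshly imported model
+ response = requests.post("http://localhost:11434/api/generate", json={
+     "model": "tinyclaude-local",
+     "prompt": "Say hello in one sentence.",
+     "stream": False
+ })
+ print(response.json()["response"])
+ ```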
+
 
 ## Contributing
 
 Suggestions and improvements are welcome. Feel free to:
 
@@ -144,6 +300,6 @@ This model inherits the Apache 2.0 license from TinyLlama. The system prompt and
 
 ---
 
- **Authors**: thatdamai/crystalai35
+ **Author**: thatdamai
 **Model**: thatdamai/tinyclaude-1b
- **Platform**: [Ollama](https://ollama.ai/thatdamai/tinyclaude:1b)
+ **Platform**: [Ollama](https://ollama.ai/thatdamai/tinyclaude-1b)