This is a fine-tune of Qwen 3 4B 2507 Instruct, a lightweight but capable model that can outperform many larger models. We used Unsloth LoRA fine-tuning on an extensive range of high-quality, diverse datasets: Dolphy 1.0 was trained on 1.5M examples throughout its fine-tuning pipeline. As a fine-tuned Qwen model, it still supports the extensive range of languages Qwen provides, but now with more nuanced responses and more native understanding. We also focused on instruction-following and personality datasets to give it a human-like flair.
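LoRA, the fine-tuning method used above, freezes the base weights and trains only a small low-rank update on top of them. A minimal NumPy sketch of the idea (shapes, rank, and scaling here are illustrative, not Dolphy's actual training configuration):

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 8, 2                      # hidden size and LoRA rank (r << d)

W = rng.normal(size=(d, d))      # frozen base weight
A = rng.normal(size=(r, d))      # trainable down-projection
B = np.zeros((d, r))             # trainable up-projection, zero-initialized

# Effective weight: base plus low-rank update scaled by alpha / r
alpha = 16
W_eff = W + (alpha / r) * (B @ A)

# With B zero-initialized, the adapted model starts out identical to the base
assert np.allclose(W_eff, W)
```

Only `A` and `B` (2·d·r values) are updated during training, which is why LoRA fine-tunes fit on modest hardware.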

**Compatibility**
As Dolphy 1.0 and the Qwen3 2507 Instruct models share the same base, Dolphy 1.0 is compatible with Qwen3's extensive tool-use, function-calling, and multilingual capabilities. The tokenizer is unchanged and the model architecture is intact.
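Since the tokenizer is unchanged, prompts follow Qwen3's ChatML-style chat format. As a rough illustration only (in practice `tokenizer.apply_chat_template` applies the authoritative template shipped with the tokenizer), messages are flattened along these lines:

```python
def to_chatml(messages, add_generation_prompt=True):
    """Flatten chat messages into a ChatML-style prompt string.

    Illustrative sketch only: the real template ships with the tokenizer
    and should be applied via tokenizer.apply_chat_template().
    """
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n")
    if add_generation_prompt:
        # Cue the model to answer as the assistant
        parts.append("<|im_start|>assistant\n")
    return "".join(parts)

prompt = to_chatml([{"role": "user", "content": "Hello!"}])
print(prompt)
```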

You can also find this model in upcoming Dolphy AI releases.

**How to run locally**

For local use we recommend our GGUF builds for fast inference, but you can also run the safetensors with Hugging Face Transformers. The model needs no preparation steps and is ready for inference out of the box.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Dolphy-AI/Dolphy-1.0"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto")

print("Type 'exit' to quit.\n")

while True:
    user_input = input("Enter your prompt: ")
    if user_input.lower() == "exit":
        print("Goodbye!")
        break

    # Build the chat-formatted prompt, then tokenize and generate
    messages = [{"role": "user", "content": user_input}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    )
    outputs = model.generate(inputs, max_new_tokens=100)

    # Decode only the newly generated tokens
    response = tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)
    print("\nModel response:\n", response, "\n")
```

## Available Model files:

- `model-00001-of-00002.safetensors`
- `model-00002-of-00002.safetensors`
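Transformers ties these two shards together through an accompanying `model.safetensors.index.json`, which maps each tensor name to the shard file that stores it. A minimal sketch of that resolution step (the index excerpt below is hypothetical, not Dolphy's actual index):

```python
import json

# Hypothetical excerpt of a model.safetensors.index.json
index_json = """
{
  "weight_map": {
    "model.embed_tokens.weight": "model-00001-of-00002.safetensors",
    "lm_head.weight": "model-00002-of-00002.safetensors"
  }
}
"""

weight_map = json.loads(index_json)["weight_map"]

def shard_for(tensor_name: str) -> str:
    """Return the shard file that holds a given tensor."""
    return weight_map[tensor_name]

print(shard_for("lm_head.weight"))  # model-00002-of-00002.safetensors
```

`AutoModelForCausalLM.from_pretrained` performs this lookup automatically, so you never need to load the shards by hand.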