Osaurus-AI committed (verified)
Commit: c75f732 · Parent(s): f4ccebe

Update usage to Osaurus branding

Files changed (1): README.md (+5 −7)
README.md CHANGED

````diff
@@ -65,9 +65,13 @@ mlx-vlm's default quantization predicate automatically keeps MLP gate/up/down pr
 
 ## Usage
 
-### With mlx-vlm
+```bash
+# Requires Osaurus (https://osaurus.ai)
+osaurus serve OsaurusAI/gemma-4-E4B-it-4bit
+```
 
 ```python
+# Python API
 from mlx_vlm import load, generate
 
 model, processor = load("OsaurusAI/gemma-4-E4B-it-4bit")
@@ -79,12 +83,6 @@ output = generate(model, processor, "Explain quantum computing", max_tokens=500)
 output = generate(model, processor, "Describe this image", ["path/to/image.jpg"], max_tokens=500)
 ```
 
-### With vMLX / vllm-mlx
-
-```bash
-vllm-mlx serve OsaurusAI/gemma-4-E4B-it-4bit
-```
-
 ## Conversion Details
 
 | Detail | Value |
````
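
For reference, here is a sketch of how the README's Usage section reads after this commit, assembled from the hunks above. The lines falling between the two hunks are not shown in the diff, so they are marked as elided rather than guessed at:

````markdown
## Usage

```bash
# Requires Osaurus (https://osaurus.ai)
osaurus serve OsaurusAI/gemma-4-E4B-it-4bit
```

```python
# Python API
from mlx_vlm import load, generate

model, processor = load("OsaurusAI/gemma-4-E4B-it-4bit")

# … (lines elided in the diff) …
output = generate(model, processor, "Explain quantum computing", max_tokens=500)
output = generate(model, processor, "Describe this image", ["path/to/image.jpg"], max_tokens=500)
```
````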