awni committed · Commit df6678c · verified · 1 Parent(s): 7a4d28d

Update README.md

Files changed (1): README.md +7 -7
README.md CHANGED
@@ -29,11 +29,11 @@ pip install mlx-lm
 You can use `mlx-lm` from the command line. For example:
 
 ```
-mlx_lm.generate --model mlx-community/Mistral-7B-Instruct-v0.3-4bit --prompt "hello"
+mlx_lm.generate --model mlx-community/Qwen3-4B-Instruct-2507-4bit --prompt "hello"
 ```
 
-This will download a Mistral 7B model from the Hugging Face Hub and generate
-text using the given prompt.
+This will download a Qwen3 4B model from the Hugging Face Hub and generate
+text using the given prompt.
 
 To chat with an LLM use:
 
@@ -55,7 +55,7 @@ mlx_lm.chat --help
 To quantize a model from the command line run:
 
 ```
-mlx_lm.convert --hf-path mistralai/Mistral-7B-Instruct-v0.3 -q
+mlx_lm.convert --model Qwen/Qwen3-4B-Instruct-2507 -q
 ```
 
 For more options run:
@@ -65,14 +65,14 @@ mlx_lm.convert --help
 ```
 
 You can upload new models to Hugging Face by specifying `--upload-repo` to
-`convert`. For example, to upload a quantized Mistral-7B model to the
+`convert`. For example, to upload a quantized Qwen3 4B model to the
 [MLX Hugging Face community](https://huggingface.co/mlx-community) you can do:
 
 ```
 mlx_lm.convert \
-    --hf-path mistralai/Mistral-7B-Instruct-v0.3 \
+    --model Qwen/Qwen3-4B-Instruct-2507 \
     -q \
-    --upload-repo mlx-community/my-4bit-mistral
+    --upload-repo mlx-community/Qwen3-4B-Instruct-2507-4bit
 ```
 
 Models can also be converted and quantized directly in the
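
The updated CLI invocations in this diff can also be assembled from a script. Below is a minimal sketch that builds the two commands with the standard library only; the flags and model names come straight from the diff, but actually executing the commands requires `mlx-lm` installed on Apple silicon, so the sketch only prints the shell form:

```python
import shlex

# Model names taken from the updated README commands in the diff above.
QUANTIZED_REPO = "mlx-community/Qwen3-4B-Instruct-2507-4bit"
SOURCE_REPO = "Qwen/Qwen3-4B-Instruct-2507"

# Equivalent of: mlx_lm.generate --model ... --prompt "hello"
generate_cmd = [
    "mlx_lm.generate",
    "--model", QUANTIZED_REPO,
    "--prompt", "hello",
]

# Equivalent of: mlx_lm.convert --model ... -q --upload-repo ...
convert_cmd = [
    "mlx_lm.convert",
    "--model", SOURCE_REPO,
    "-q",
    "--upload-repo", QUANTIZED_REPO,
]

# Print the shell form of each command; pass either list to
# subprocess.run(..., check=True) to actually execute it (needs mlx-lm).
print(shlex.join(generate_cmd))
print(shlex.join(convert_cmd))
```

Keeping the arguments as a list (rather than a single shell string) avoids quoting issues if the prompt contains spaces or quotes.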