---
license: apache-2.0
base_model: mistralai/Codestral-22B-v0.1
library_name: mlx
pipeline_tag: text-generation
language:
  - en
tags:
  - codestral
  - mlx
  - code
  - mistral
  - apple-silicon
  - FIM
  - Fill-in-the-Middle
  - code-generation
  - 4bit
model-index:
  - name: Codestral-22B-Yui-MLX
    results:
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          type: code
          name: Code
        metrics:
          - type: vram
            value: 12.5 GB
---


# Codestral-22B-Yui-MLX

This model is CyberYui's custom-converted MLX format port of Mistral AI's official mistralai/Codestral-22B-v0.1 model. No modifications, alterations, or fine-tuning of any kind were applied to the original model's weights, architecture, or parameters; this is strictly a format conversion for MLX, optimized exclusively for Apple Silicon (M1/M2/M3/M4) chips.

## 📌 Model Details

  • Base Model: mistralai/Codestral-22B-v0.1
  • Conversion Tool: mlx-lm 0.29.1
  • Quantization: 4-bit (≈12.5GB total size)
  • Framework: MLX (native Apple GPU acceleration)
  • Use Cases: Code completion, code generation, programming assistance, FIM (Fill-in-the-Middle)
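The ≈12.5 GB figure is consistent with 4-bit weights plus per-group quantization metadata. A back-of-the-envelope check (the group size and scale/bias layout below are assumed from MLX's defaults, not stated in this repo):

```python
# Rough size check for the 4-bit quantization. Assumes MLX's default
# settings (group size 64, one 16-bit scale and one 16-bit bias per
# group), which add ~0.5 bits of overhead per weight.
params = 22e9                   # ~22B parameters
bits_per_weight = 4 + 32 / 64   # 4-bit values + per-group scale/bias
size_gb = params * bits_per_weight / 8 / 1e9
print(f"~{size_gb:.1f} GB")     # close to the 12.5 GB on disk
```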

## 🚀 How to Use

### 1. Command Line (mlx-lm)

First, install the required package:

```bash
pip install mlx-lm
```

Then run the model directly:

```bash
mlx_lm.generate --model CyberYui/Codestral-22B-Yui-MLX --prompt "def quicksort(arr):"
```

### 2. Python Code

```python
from mlx_lm import load, generate

# Load this model
model, tokenizer = load("CyberYui/Codestral-22B-Yui-MLX")

# Define your prompt
prompt = "Write a Python function for quicksort with comments"

# Apply the chat template if the tokenizer provides one
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_dict=False
    )

# Generate a response
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
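The use cases above include FIM, which Codestral handles with a suffix-first prompt rather than a chat template. A minimal sketch of building such a prompt (the `[SUFFIX]`/`[PREFIX]` token strings and their order are assumed from Mistral's published FIM format; verify them against the tokenizer shipped in this repo):

```python
# Sketch of a Codestral-style fill-in-the-middle prompt, assuming
# Mistral's suffix-first layout with [SUFFIX] and [PREFIX] control
# tokens. Check the exact token strings against this repo's tokenizer.
def build_fim_prompt(prefix: str, suffix: str) -> str:
    """Ask the model to generate the code between prefix and suffix."""
    return f"[SUFFIX]{suffix}[PREFIX]{prefix}"

prompt = build_fim_prompt(
    prefix="def quicksort(arr):\n    if len(arr) <= 1:\n        return arr\n",
    suffix="\n    return quicksort(left) + mid + quicksort(right)\n",
)
# Pass `prompt` to generate(); the completion is the missing middle.
```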

### 3. LM Studio

  1. Open LM Studio and log in to your Hugging Face account
  2. Go to the Publish tab
  3. Search for this model: CyberYui/Codestral-22B-Yui-MLX
  4. Download and load the model to enjoy native MLX acceleration!

## 📄 License

This model is distributed under the Apache License 2.0, strictly following the original model's open-source license.

