
mlx-community/CodeLlama-70b-Instruct-hf-4bit-MLX

Tags: Text Generation · MLX · code · llama · llama-2 · conversational
Instructions for using mlx-community/CodeLlama-70b-Instruct-hf-4bit-MLX with libraries, inference providers, notebooks, and local apps.

  • Libraries
  • MLX

    How to use mlx-community/CodeLlama-70b-Instruct-hf-4bit-MLX with MLX:

    # Make sure mlx-lm is installed
    # pip install --upgrade mlx-lm
    
    # Generate text with mlx-lm
    from mlx_lm import load, generate
    
    model, tokenizer = load("mlx-community/CodeLlama-70b-Instruct-hf-4bit-MLX")
    
    prompt = "Write a story about Einstein"
    messages = [{"role": "user", "content": prompt}]
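    # With tokenize left at its default, apply_chat_template returns the
    # token ids of the formatted chat prompt; generate() accepts these directly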
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )
    
    text = generate(model, tokenizer, prompt=prompt, verbose=True)
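
    For token-by-token output, mlx-lm also ships a streaming helper. A minimal sketch (the exact yield type has varied across mlx-lm releases; recent versions yield response objects with a .text field):

    # Stream tokens as they are generated
    from mlx_lm import load, stream_generate

    model, tokenizer = load("mlx-community/CodeLlama-70b-Instruct-hf-4bit-MLX")

    messages = [{"role": "user", "content": "Write a story about Einstein"}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

    # Print each chunk as soon as it is produced
    for response in stream_generate(model, tokenizer, prompt, max_tokens=512):
        print(response.text, end="", flush=True)
    print()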
  • Notebooks
  • Google Colab
  • Kaggle
  • Local Apps
  • LM Studio
  • MLX LM

    How to use mlx-community/CodeLlama-70b-Instruct-hf-4bit-MLX with MLX LM:

    Generate or start a chat session
    # Install MLX LM
    uv tool install mlx-lm
    # Interactive chat REPL
    mlx_lm.chat --model "mlx-community/CodeLlama-70b-Instruct-hf-4bit-MLX"
    Run an OpenAI-compatible server
    # Start the server
    mlx_lm.server --model "mlx-community/CodeLlama-70b-Instruct-hf-4bit-MLX"
    # Calling the OpenAI-compatible server with curl
    curl -X POST "http://localhost:8000/v1/chat/completions" \
       -H "Content-Type: application/json" \
       --data '{
         "model": "mlx-community/CodeLlama-70b-Instruct-hf-4bit-MLX",
         "messages": [
           {"role": "user", "content": "Hello"}
         ]
       }'
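
    The same request from Python, using the requests library: a minimal sketch, assuming the server above is running on localhost:8000 and returns the standard OpenAI chat-completions response shape.

    # Call the OpenAI-compatible server from Python
    import requests

    response = requests.post(
        "http://localhost:8000/v1/chat/completions",
        json={
            "model": "mlx-community/CodeLlama-70b-Instruct-hf-4bit-MLX",
            "messages": [{"role": "user", "content": "Hello"}],
        },
    )
    # In the OpenAI response shape, the reply text lives at
    # choices[0].message.content
    print(response.json()["choices"][0]["message"]["content"])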
Community discussions

  • Conversion request to Q5_K_M for MLX (6) · #4 opened about 1 year ago by websprockets
  • "I apologize, but as a responsible AI language model, I cannot provide a code that may potentially violate ethical and legal standards." · #3 opened over 2 years ago by davideuler
  • `std::runtime_error: [Matmul::eval_cpu] Currently only supports float32` (2) · #2 opened over 2 years ago by adhishthite
  • Quantization Error (2) · #1 opened over 2 years ago by ch4rL