Instructions to use lmstudio-community/codegemma-2b-GGUF with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use lmstudio-community/codegemma-2b-GGUF with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="lmstudio-community/codegemma-2b-GGUF")
```

```python
# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained("lmstudio-community/codegemma-2b-GGUF", dtype="auto")
```
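Note that this repository hosts GGUF files rather than standard safetensors weights, so the generic snippet above may not load as-is. A minimal sketch of loading a specific GGUF file via the gguf_file argument (requires the gguf package; the filename is an assumption based on the Q4_K_M quant referenced elsewhere on this page, and GGUF support for this architecture depends on your Transformers version):

```python
# Sketch: load one GGUF file from the repo with Transformers.
# Assumptions: `pip install gguf` is done, the filename below exists in the
# repo, and your Transformers version supports GGUF loading for Gemma.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "lmstudio-community/codegemma-2b-GGUF"
gguf_file = "codegemma-2b-Q4_K_M.gguf"  # assumed filename; check the repo's file list

tokenizer = AutoTokenizer.from_pretrained(repo_id, gguf_file=gguf_file)
model = AutoModelForCausalLM.from_pretrained(repo_id, gguf_file=gguf_file)
```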
- llama-cpp-python
How to use lmstudio-community/codegemma-2b-GGUF with llama-cpp-python:
```python
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="lmstudio-community/codegemma-2b-GGUF",
    filename="codegemma-2b-IQ1_M.gguf",
)

output = llm(
    "Once upon a time,",
    max_tokens=512,
    echo=True
)
print(output)
```
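Because CodeGemma 2B is a completion model (see the Prompt Template section below), a fill-in-the-middle prompt is more representative than free-form text. A minimal sketch reusing the llm object created above; the stop token is an assumption drawn from the examples later on this page:

```python
# Sketch: fill-in-the-middle completion with the `llm` object from above.
fim_prompt = (
    "<|fim_prefix|>def add(a, b):\n    "
    "<|fim_suffix|>\n    return result<|fim_middle|>"
)

output = llm(
    fim_prompt,
    max_tokens=64,
    stop=["<|file_separator|>"],  # assumed stop token, per the FIM examples below
)
print(output["choices"][0]["text"])
```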
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use lmstudio-community/codegemma-2b-GGUF with llama.cpp:
Install from brew
```shell
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf lmstudio-community/codegemma-2b-GGUF:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf lmstudio-community/codegemma-2b-GGUF:Q4_K_M
```
Install from WinGet (Windows)
```shell
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf lmstudio-community/codegemma-2b-GGUF:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf lmstudio-community/codegemma-2b-GGUF:Q4_K_M
```
Use pre-built binary
```shell
# Download pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf lmstudio-community/codegemma-2b-GGUF:Q4_K_M

# Run inference directly in the terminal:
./llama-cli -hf lmstudio-community/codegemma-2b-GGUF:Q4_K_M
```
Build from source code
```shell
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf lmstudio-community/codegemma-2b-GGUF:Q4_K_M

# Run inference directly in the terminal:
./build/bin/llama-cli -hf lmstudio-community/codegemma-2b-GGUF:Q4_K_M
```
Use Docker
```shell
docker model run hf.co/lmstudio-community/codegemma-2b-GGUF:Q4_K_M
```
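Once llama-server is running via any of the methods above, you can query its OpenAI-compatible API directly. A sketch assuming the default port 8080 (use --port to change it):

```shell
# Sketch: query the local llama-server; port 8080 is llama-server's default.
curl -X POST "http://localhost:8080/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "prompt": "<|fim_prefix|>def add(a, b):\n    <|fim_suffix|><|fim_middle|>",
    "max_tokens": 64
  }'
```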
- LM Studio
- Jan
- vLLM
How to use lmstudio-community/codegemma-2b-GGUF with vLLM:
Install from pip and serve model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "lmstudio-community/codegemma-2b-GGUF"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "lmstudio-community/codegemma-2b-GGUF",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
Use Docker
```shell
docker model run hf.co/lmstudio-community/codegemma-2b-GGUF:Q4_K_M
```
- SGLang
How to use lmstudio-community/codegemma-2b-GGUF with SGLang:
Install from pip and serve model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "lmstudio-community/codegemma-2b-GGUF" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "lmstudio-community/codegemma-2b-GGUF",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
Use Docker images
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "lmstudio-community/codegemma-2b-GGUF" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "lmstudio-community/codegemma-2b-GGUF",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
- Ollama
How to use lmstudio-community/codegemma-2b-GGUF with Ollama:
```shell
ollama run hf.co/lmstudio-community/codegemma-2b-GGUF:Q4_K_M
```
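After the model has been pulled, Ollama also exposes a local REST API (default port 11434), which can be called with the same model tag. A sketch:

```shell
# Sketch: call Ollama's generate endpoint; 11434 is Ollama's default port.
curl http://localhost:11434/api/generate -d '{
  "model": "hf.co/lmstudio-community/codegemma-2b-GGUF:Q4_K_M",
  "prompt": "<|fim_prefix|>def add(a, b):\n    <|fim_suffix|><|fim_middle|>",
  "stream": false
}'
```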
- Unsloth Studio
How to use lmstudio-community/codegemma-2b-GGUF with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```shell
curl -fsSL https://unsloth.ai/install.sh | sh

# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for lmstudio-community/codegemma-2b-GGUF to start chatting
```
Install Unsloth Studio (Windows)
```powershell
irm https://unsloth.ai/install.ps1 | iex

# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for lmstudio-community/codegemma-2b-GGUF to start chatting
```
Using HuggingFace Spaces for Unsloth
```shell
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for lmstudio-community/codegemma-2b-GGUF to start chatting
```
- Docker Model Runner
How to use lmstudio-community/codegemma-2b-GGUF with Docker Model Runner:
```shell
docker model run hf.co/lmstudio-community/codegemma-2b-GGUF:Q4_K_M
```
- Lemonade
How to use lmstudio-community/codegemma-2b-GGUF with Lemonade:
Pull the model
```shell
# Download Lemonade from https://lemonade-server.ai/
lemonade pull lmstudio-community/codegemma-2b-GGUF:Q4_K_M
```
Run and chat with the model
```shell
lemonade run user.codegemma-2b-GGUF-Q4_K_M
```
List all available models
```shell
lemonade list
```
💫 Community Model> CodeGemma 2b by Google
👾 LM Studio Community models highlights program. Highlighting new & noteworthy models by the community. Join the conversation on Discord.
Model creator: Google
Original model: google/codegemma-2b
GGUF quantization: provided by bartowski based on llama.cpp release b2589
Model Summary:
CodeGemma 2B is the first in a series of coding models released by Google. It is a code completion model and, as such, cannot be prompted the way a chat or instruct model can.
This model is a great fit for code completion and for copilot-style tools, where its small size makes completions appear almost instantly while still performing well.
This model should not be used as a chat model, and will not answer questions.
Prompt Template:
This model does not support a typical prompt template, but instead uses the following tokens for specifying input parts:
- <|fim_prefix|> precedes the context before the completion we want to run.
- <|fim_suffix|> precedes the suffix. You must put this token exactly where the cursor would be positioned in an editor, as this is the location the model will complete.
- <|fim_middle|> is the token that invites the model to generate the completion.
In addition to these, there's also <|file_separator|>, which is used to provide multi-file contexts.
Select the LM Studio Blank Preset so you can supply this format yourself, as in the examples below.
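As a concrete illustration, here is a minimal sketch of assembling a fill-in-the-middle prompt from these tokens; the helper function and the sample prefix/suffix are illustrative, not part of the official template:

```python
# Sketch: build a CodeGemma fill-in-the-middle prompt.
# `build_fim_prompt` is an illustrative helper, not an official API.
def build_fim_prompt(prefix: str, suffix: str) -> str:
    # <|fim_prefix|> precedes the code before the cursor,
    # <|fim_suffix|> marks the cursor position and precedes the code after it,
    # <|fim_middle|> invites the model to generate the missing span.
    return f"<|fim_prefix|>{prefix}<|fim_suffix|>{suffix}<|fim_middle|>"

prompt = build_fim_prompt(
    prefix="def add(a, b):\n    ",
    suffix="\n    return result",
)
print(prompt)
```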
Use case and examples
This model will excel at code generation and fill-in-the-middle.
Coding 1
```python
<|fim_prefix|>import datetime
def calculate_age(birth_year):
    """Calculates a person's age based on their birth year."""
    current_year = datetime.date.today().year
    <|fim_suffix|>
    return age<|fim_middle|>
age = current_year - birth_year<|file_separator|>
```
Explanation: Here the model was given the fill-in-the-middle prefix and suffix. It is then told to generate the middle with the token <|fim_middle|>, to which it replies with the code that completes the function.
Coding 2
```java
<|fim_prefix|>public class MergeSort {
    public static void mergeSort(int[] arr) {
        int n = arr.length;
        if (n < 2) {
            return;
        }
        <|fim_suffix|>
        mergeSort(left);
        mergeSort(right);
        merge(arr, left, right);
    }
    public static void merge(int[] arr, int[] left, int[] right) {
        int i = 0;
        int j = 0;
        int k = 0;
        while (i < left.length && j < right.length) {
            if (left[i] <= right[j]) {
                arr[k] = left[i];
                i++;
            } else {
                arr[k] = right[j];
                j++;
            }
            k++;
        }
        while (i < left.length) {
            arr[k] = left[i];
            i++;
            k++;
        }
        while (j < right.length) {
            arr[k] = right[j];
            j++;
            k++;
        }
    }
    public static void main(String[] args) {
        int[] arr = {5, 2, 4, 6, 1, 3};
        mergeSort(arr);
        for (int i = 0; i < arr.length; i++) {
            System.out.print(arr[i] + " ");
        }
    }
}
<|fim_middle|>
int mid = n / 2;
int[] left = new int[mid];
int[] right = new int[n - mid];
for (int i = 0; i < mid; i++) {
    left[i] = arr[i];
}
for (int i = mid; i < n; i++) {
    right[i - mid] = arr[i];
}<|file_separator|>
```
Explanation: The model was given the majority of a merge sort implementation in Java with a portion in the middle removed. The model was able to fill in the missing code based on the surrounding details.
Coding 3
```python
<|fim_prefix|>arr = [1, 5, 3, 76, 12, 154, 2, 56]
# Sort the array then print only the even numbers
<|fim_suffix|><|fim_middle|>
arr.sort()
for i in arr:
    if i % 2 == 0:
        print(i)<|file_separator|>
```
Explanation: While this model cannot be prompted directly, it can be hinted in the right direction by preceding the fill-in-the-middle token with a comment explaining what comes next, then using <|fim_suffix|> followed immediately by <|fim_middle|>.
In this example, the comment suggests that what comes next is sorting the array and printing each even number. The model accurately fills in what should appear at <|fim_suffix|>.
Technical Details
CodeGemma 2b is based on the Gemma 2b model, with additional training exclusively on code.
The code used for training is drawn from publicly available code repositories.
The model was trained exclusively for code completion and excels at it.
Additional details can be found in Google's official report PDF here.
Special thanks
🙏 Special thanks to Georgi Gerganov and the whole team working on llama.cpp for making all of this possible.
🙏 Special thanks to Kalomaze for his dataset (linked here) that was used for calculating the imatrix for these quants, which improves the overall quality!
Disclaimers
LM Studio is not the creator, originator, or owner of any Model featured in the Community Model Program. Each Community Model is created and provided by third parties. LM Studio does not endorse, support, represent or guarantee the completeness, truthfulness, accuracy, or reliability of any Community Model. You understand that Community Models can produce content that might be offensive, harmful, inaccurate or otherwise inappropriate, or deceptive. Each Community Model is the sole responsibility of the person or entity who originated such Model. LM Studio may not monitor or control the Community Models and cannot, and does not, take responsibility for any such Model. LM Studio disclaims all warranties or guarantees about the accuracy, reliability or benefits of the Community Models. LM Studio further disclaims any warranty that the Community Model will meet your requirements, be secure, uninterrupted or available at any time or location, or error-free, viruses-free, or that any errors will be corrected, or otherwise. You will be solely responsible for any damage resulting from your use of or access to the Community Models, your downloading of any Community Model, or use of any other Community Model provided by or through LM Studio.