Instructions to use Volko76/Lucie-7B-Instruct with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use Volko76/Lucie-7B-Instruct with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="Volko76/Lucie-7B-Instruct")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Volko76/Lucie-7B-Instruct")
model = AutoModelForCausalLM.from_pretrained("Volko76/Lucie-7B-Instruct")
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
- llama-cpp-python
How to use Volko76/Lucie-7B-Instruct with llama-cpp-python:
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Volko76/Lucie-7B-Instruct",
    filename="Lucie-7B-q4_k_m.gguf",
)

llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "What is the capital of France?"}
    ]
)
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use Volko76/Lucie-7B-Instruct with llama.cpp:
Install from brew
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf Volko76/Lucie-7B-Instruct:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf Volko76/Lucie-7B-Instruct:Q4_K_M
Install from WinGet (Windows)
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf Volko76/Lucie-7B-Instruct:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf Volko76/Lucie-7B-Instruct:Q4_K_M
Use pre-built binary
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf Volko76/Lucie-7B-Instruct:Q4_K_M

# Run inference directly in the terminal:
./llama-cli -hf Volko76/Lucie-7B-Instruct:Q4_K_M
Build from source code
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf Volko76/Lucie-7B-Instruct:Q4_K_M

# Run inference directly in the terminal:
./build/bin/llama-cli -hf Volko76/Lucie-7B-Instruct:Q4_K_M
Use Docker
docker model run hf.co/Volko76/Lucie-7B-Instruct:Q4_K_M
- LM Studio
- Jan
- vLLM
How to use Volko76/Lucie-7B-Instruct with vLLM:
Install from pip and serve model
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "Volko76/Lucie-7B-Instruct"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "Volko76/Lucie-7B-Instruct",
        "messages": [
            {"role": "user", "content": "What is the capital of France?"}
        ]
    }'
Use Docker
docker model run hf.co/Volko76/Lucie-7B-Instruct:Q4_K_M
- SGLang
How to use Volko76/Lucie-7B-Instruct with SGLang:
Install from pip and serve model
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "Volko76/Lucie-7B-Instruct" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "Volko76/Lucie-7B-Instruct",
        "messages": [
            {"role": "user", "content": "What is the capital of France?"}
        ]
    }'
Use Docker images
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
        --model-path "Volko76/Lucie-7B-Instruct" \
        --host 0.0.0.0 \
        --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "Volko76/Lucie-7B-Instruct",
        "messages": [
            {"role": "user", "content": "What is the capital of France?"}
        ]
    }'
- Ollama
How to use Volko76/Lucie-7B-Instruct with Ollama:
ollama run hf.co/Volko76/Lucie-7B-Instruct:Q4_K_M
- Unsloth Studio
How to use Volko76/Lucie-7B-Instruct with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for Volko76/Lucie-7B-Instruct to start chatting
Install Unsloth Studio (Windows)
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for Volko76/Lucie-7B-Instruct to start chatting
Using HuggingFace Spaces for Unsloth
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for Volko76/Lucie-7B-Instruct to start chatting
- Docker Model Runner
How to use Volko76/Lucie-7B-Instruct with Docker Model Runner:
docker model run hf.co/Volko76/Lucie-7B-Instruct:Q4_K_M
- Lemonade
How to use Volko76/Lucie-7B-Instruct with Lemonade:
Pull the model
# Download Lemonade from https://lemonade-server.ai/
lemonade pull Volko76/Lucie-7B-Instruct:Q4_K_M
Run and chat with the model
lemonade run user.Lucie-7B-Instruct-Q4_K_M
List all available models
lemonade list
Model Card for Lucie-7B-Instruct
Model Description
Lucie-7B-Instruct is a fine-tuned version of Lucie-7B, an open-source, multilingual causal language model created by OpenLLM-France.
Lucie-7B-Instruct is fine-tuned on synthetic instructions produced by ChatGPT and Gemma, together with a small set of customized prompts about OpenLLM and Lucie.
Training details
Training data
Lucie-7B-Instruct is trained on the following datasets:
- Alpaca-cleaned (English; 51604 samples)
- Alpaca-cleaned-fr (French; 51655 samples)
- Magpie-Gemma (English; 195167 samples)
- Wildchat (French subset; 26436 samples)
- Hard-coded prompts concerning OpenLLM and Lucie (based on allenai/tulu-3-hard-coded-10x)
- French: openllm_french.jsonl (24x10 samples)
- English: openllm_english.jsonl (24x10 samples)
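The relative weight of each source in the mixture follows directly from the sample counts above; a quick tally (the labels are shorthand for the datasets listed, not repository names):

```python
# Sample counts for the instruction-tuning mixture, taken from the list above.
dataset_sizes = {
    "Alpaca-cleaned (en)": 51604,
    "Alpaca-cleaned-fr (fr)": 51655,
    "Magpie-Gemma (en)": 195167,
    "Wildchat (fr)": 26436,
    "openllm_french (fr)": 24 * 10,
    "openllm_english (en)": 24 * 10,
}

total = sum(dataset_sizes.values())
for name, n in dataset_sizes.items():
    print(f"{name:25s} {n:7d}  ({n / total:5.1%})")
print(f"{'total':25s} {total:7d}")
```

Magpie-Gemma dominates the mixture at roughly 60% of samples, with the hard-coded identity prompts contributing well under 1%.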
Preprocessing
- Filtering by language: Magpie-Gemma and Wildchat were filtered to keep only English and French samples, respectively.
- Filtering by keyword: Examples were removed from the four synthetic datasets if the assistant response contained a keyword from the list filter_strings. This filter is designed to remove examples in which the assistant is presented as a model other than Lucie (e.g., ChatGPT, Gemma, Llama, ...).
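The keyword filter can be sketched as follows. Note that the short keyword list here is an illustrative subset only; the real filter_strings list lives in the training repository, and the chat-format schema (a "messages" list of role/content dicts) is an assumption:

```python
# Illustrative subset of filter_strings -- NOT the actual list used in training.
FILTER_STRINGS = ["ChatGPT", "GPT-4", "Gemma", "Llama"]

def keep_example(example: dict) -> bool:
    """Return False if any assistant turn mentions a filtered model name."""
    for turn in example["messages"]:
        if turn["role"] != "assistant":
            continue
        if any(s.lower() in turn["content"].lower() for s in FILTER_STRINGS):
            return False
    return True

samples = [
    {"messages": [{"role": "assistant", "content": "I am Lucie."}]},
    {"messages": [{"role": "assistant", "content": "As ChatGPT, I cannot..."}]},
]
kept = [s for s in samples if keep_example(s)]  # only the first sample survives
```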
Training procedure
The model architecture and hyperparameters are the same as for Lucie-7B during the annealing phase, with the following exceptions:
- context length: 4096
- batch size: 1024
- max learning rate: 3e-5
- min learning rate: 3e-6
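The learning-rate bounds above imply a decay from 3e-5 down to 3e-6 over fine-tuning. A minimal sketch, assuming a linear schedule purely for illustration (the actual shape follows the Lucie-7B annealing setup):

```python
# Learning-rate bounds from the hyperparameters above.
MAX_LR, MIN_LR = 3e-5, 3e-6

def lr_at(step: int, total_steps: int) -> float:
    """Linearly interpolate from MAX_LR (step 0) to MIN_LR (final step)."""
    frac = step / total_steps
    return MAX_LR + (MIN_LR - MAX_LR) * frac

print(lr_at(0, 1000))    # starts at the max learning rate
print(lr_at(500, 1000))  # midway through the decay
print(lr_at(1000, 1000)) # ends at the min learning rate
```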
Testing the model
Test with ollama
- Download and install Ollama
- Run in a shell:
ollama run hf.co/OpenLLM-France/Lucie-7B-Instruct
- Once ">>>" appears, type your prompt(s) and press Enter.
- Optionally, restart a conversation by typing "/clear".
- End the session by typing "/bye".
Useful for debug:
- How to print input requests and output responses in Ollama server?
- Documentation on Modelfile
- Examples: Ollama model library
- Llama 3 example: https://ollama.com/library/llama3.1
- Add a GUI with Open WebUI: https://docs.openwebui.com/
Test with vLLM
1. Run vLLM Docker Container
Use the following command to deploy the model,
replacing INSERT_YOUR_HF_TOKEN with your Hugging Face Hub token.
docker run --runtime nvidia --gpus=all \
--env "HUGGING_FACE_HUB_TOKEN=INSERT_YOUR_HF_TOKEN" \
-p 8000:8000 \
--ipc=host \
vllm/vllm-openai:latest \
--model OpenLLM-France/Lucie-7B-Instruct
2. Test using OpenAI Client in Python
To test the deployed model, use the OpenAI Python client as follows:
from openai import OpenAI
# Initialize the client
client = OpenAI(base_url='http://localhost:8000/v1', api_key='empty')
# Define the input content
content = "Hello Lucie"
# Generate a response
chat_response = client.chat.completions.create(
model="OpenLLM-France/Lucie-7B-Instruct",
messages=[
{"role": "user", "content": content}
],
)
print(chat_response.choices[0].message.content)
Citation
When using the Lucie-7B-Instruct model, please cite the following paper:
✍ Olivier Gouvert, Julie Hunter, Jérôme Louradour, Evan Dufraisse, Yaya Sy, Pierre-Carl Langlais, Anastasia Stasenko, Laura Rivière, Christophe Cerisara, Jean-Pierre Lorré (2025). The Lucie-7B LLM and the Lucie Training Dataset: open resources for multilingual language generation.
@misc{openllm2023claire,
title={The Lucie-7B LLM and the Lucie Training Dataset:
open resources for multilingual language generation},
author={Olivier Gouvert and Julie Hunter and Jérôme Louradour and Evan Dufraisse and Yaya Sy and Pierre-Carl Langlais and Anastasia Stasenko and Laura Rivière and Christophe Cerisara and Jean-Pierre Lorré},
year={2025},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
Acknowledgements
This work was performed using HPC resources from GENCI–IDRIS (Grant 2024-GC011015444).
Lucie-7B was created by members of LINAGORA and the OpenLLM-France community, including in alphabetical order: Olivier Gouvert (LINAGORA), Ismaïl Harrando (LINAGORA/SciencesPo), Julie Hunter (LINAGORA), Jean-Pierre Lorré (LINAGORA), Jérôme Louradour (LINAGORA), Michel-Marie Maudet (LINAGORA), and Laura Rivière (LINAGORA).
We thank Clément Bénesse (Opsci), Christophe Cerisara (LORIA), Evan Dufraisse (CEA), Guokan Shang (MBZUAI), Joël Gombin (Opsci), Jordan Ricker (Opsci), and Olivier Ferret (CEA) for their helpful input.
Contact