Instructions to use Trelis/falcon-7b-chat-SFT with libraries, inference providers, notebooks, and local apps. Use the sections below to get started.
- Libraries
- Transformers
How to use Trelis/falcon-7b-chat-SFT with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="Trelis/falcon-7b-chat-SFT", trust_remote_code=True)
```
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Trelis/falcon-7b-chat-SFT", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("Trelis/falcon-7b-chat-SFT", trust_remote_code=True)
```
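Building on the load-model snippet above, a minimal generation sketch; the Human/Assistant framing follows the prompt-format section further down this card, and the question and `max_new_tokens` value are illustrative:
```python
# A minimal sketch: tokenize a Falcon-style chat prompt and generate a reply.
prompt = "\nHuman: What is RefinedWeb?\nAssistant:"  # format per this card's prompt-format section
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)  # illustrative generation budget
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```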
- llama-cpp-python
How to use Trelis/falcon-7b-chat-SFT with llama-cpp-python:
```python
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Trelis/falcon-7b-chat-SFT",
    filename="falcon-7b-chat-SFT.Q4_K.gguf",
)

output = llm(
    "Once upon a time,",
    max_tokens=512,
    echo=True
)
print(output)
```
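llama-cpp-python also exposes an OpenAI-style chat helper; a sketch on the same `llm` instance (the message content is illustrative, and this assumes the GGUF file ships a usable chat template):
```python
# A sketch using the chat helper on the Llama instance loaded above.
response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize RefinedWeb in one sentence."}],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```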
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use Trelis/falcon-7b-chat-SFT with llama.cpp:
Install from brew
```bash
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf Trelis/falcon-7b-chat-SFT

# Run inference directly in the terminal:
llama-cli -hf Trelis/falcon-7b-chat-SFT
```
Install from WinGet (Windows)
```powershell
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf Trelis/falcon-7b-chat-SFT

# Run inference directly in the terminal:
llama-cli -hf Trelis/falcon-7b-chat-SFT
```
Use pre-built binary
```bash
# Download pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf Trelis/falcon-7b-chat-SFT

# Run inference directly in the terminal:
./llama-cli -hf Trelis/falcon-7b-chat-SFT
```
Build from source code
```bash
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf Trelis/falcon-7b-chat-SFT

# Run inference directly in the terminal:
./build/bin/llama-cli -hf Trelis/falcon-7b-chat-SFT
```
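Whichever install route you use, once llama-server is running it exposes an OpenAI-compatible endpoint (port 8080 by default); a sketch using the openai Python client, where the API key is a placeholder since the local server does not require one:
```python
from openai import OpenAI

# Point the OpenAI client at the local llama-server endpoint (default port 8080).
client = OpenAI(base_url="http://localhost:8080/v1", api_key="sk-no-key-required")
completion = client.completions.create(
    model="Trelis/falcon-7b-chat-SFT",  # informational for a single-model server
    prompt="Once upon a time,",
    max_tokens=512,
)
print(completion.choices[0].text)
```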
Use Docker
```bash
docker model run hf.co/Trelis/falcon-7b-chat-SFT
```
- LM Studio
- Jan
- vLLM
How to use Trelis/falcon-7b-chat-SFT with vLLM:
Install from pip and serve model
```bash
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "Trelis/falcon-7b-chat-SFT"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Trelis/falcon-7b-chat-SFT",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
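The same server can also be called from Python with the openai client instead of curl; a sketch (the API key is a dummy value, which vLLM accepts by default):
```python
from openai import OpenAI

# vLLM serves an OpenAI-compatible API on port 8000 by default.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
completion = client.completions.create(
    model="Trelis/falcon-7b-chat-SFT",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(completion.choices[0].text)
```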
Use Docker
```bash
docker model run hf.co/Trelis/falcon-7b-chat-SFT
```
- SGLang
How to use Trelis/falcon-7b-chat-SFT with SGLang:
Install from pip and serve model
```bash
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "Trelis/falcon-7b-chat-SFT" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Trelis/falcon-7b-chat-SFT",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
Use Docker images
```bash
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "Trelis/falcon-7b-chat-SFT" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Trelis/falcon-7b-chat-SFT",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
- Ollama
How to use Trelis/falcon-7b-chat-SFT with Ollama:
```bash
ollama run hf.co/Trelis/falcon-7b-chat-SFT
```
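Once the model is pulled, Ollama also exposes a local REST API; a sketch using Python requests (the model name mirrors the pull reference above and may differ from your local tag):
```python
import requests

# Call Ollama's local generate endpoint (default port 11434).
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "hf.co/Trelis/falcon-7b-chat-SFT",  # assumed to match the pulled reference
        "prompt": "Once upon a time,",
        "stream": False,  # return one JSON object instead of a token stream
    },
)
print(resp.json()["response"])
```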
- Unsloth Studio
How to use Trelis/falcon-7b-chat-SFT with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```bash
curl -fsSL https://unsloth.ai/install.sh | sh

# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for Trelis/falcon-7b-chat-SFT to start chatting
```
Install Unsloth Studio (Windows)
```powershell
irm https://unsloth.ai/install.ps1 | iex

# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for Trelis/falcon-7b-chat-SFT to start chatting
```
Using HuggingFace Spaces for Unsloth
```bash
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for Trelis/falcon-7b-chat-SFT to start chatting
```
- Docker Model Runner
How to use Trelis/falcon-7b-chat-SFT with Docker Model Runner:
```bash
docker model run hf.co/Trelis/falcon-7b-chat-SFT
```
- Lemonade
How to use Trelis/falcon-7b-chat-SFT with Lemonade:
Pull the model
```bash
# Download Lemonade from https://lemonade-server.ai/
lemonade pull Trelis/falcon-7b-chat-SFT
```
Run and chat with the model
```bash
lemonade run user.falcon-7b-chat-SFT-{{QUANT_TAG}}
```
List all available models
```bash
lemonade list
```
Access to this repo requires the purchase of a license (see link on model card below)
✨ Falcon-7B-chat-SFT
This is a chat fine-tuned version of Falcon-7B (a matching Falcon-40B fine-tune also exists), trained on OpenAssistant conversations.
Notably:
- The training data is Apache 2.0 licensed and was not generated using AI, so this chat model may be used commercially, which is particularly useful for preparing and generating data to train other models.
- Purchasing access to this model grants the user permission to use it commercially, for inference or for fine-tuning followed by inference.
Prompt format:
```python
# Falcon style
B_INST, E_INST = "\nHuman:", "\nAssistant:"
prompt = f"{B_INST} {user_prompt.strip()}{E_INST}"
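```
As a quick illustration, a sketch that applies this format and generates with the Transformers pipeline from the usage section above (assuming `pipe` was created as shown there; the question is illustrative):
```python
# Build a Falcon-style prompt and generate a reply with the pipeline loaded earlier.
B_INST, E_INST = "\nHuman:", "\nAssistant:"
user_prompt = "What license is your training data under?"  # illustrative question
prompt = f"{B_INST} {user_prompt.strip()}{E_INST}"
output = pipe(prompt, max_new_tokens=128)  # `pipe` from the Transformers snippet above
print(output[0]["generated_text"])
```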
THE ORIGINAL MODEL CARD FOLLOWS BELOW.
🚀 Falcon-7B
Falcon-7B is a 7B parameters causal decoder-only model built by TII and trained on 1,500B tokens of RefinedWeb enhanced with curated corpora. It is made available under the Apache 2.0 license.
Paper coming soon 😊.
🤗 To get started with Falcon (inference, finetuning, quantization, etc.), we recommend reading this great blogpost from HF!
Why use Falcon-7B?
- It outperforms comparable open-source models (e.g., MPT-7B, StableLM, RedPajama), thanks to being trained on 1,500B tokens of RefinedWeb enhanced with curated corpora. See the OpenLLM Leaderboard.
- It features an architecture optimized for inference, with FlashAttention (Dao et al., 2022) and multiquery (Shazeer et al., 2019).
- It is made available under a permissive Apache 2.0 license allowing for commercial use, without any royalties or restrictions.
⚠️ This is a raw, pretrained model, which should be further finetuned for most use cases. If you are looking for a version better suited to taking generic instructions in a chat format, we recommend taking a look at Falcon-7B-Instruct.
💥 Looking for an even more powerful model? Falcon-40B is Falcon-7B's big brother!
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch

model = "tiiuae/falcon-7b"

tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="auto",
)
sequences = pipeline(
    "Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
    max_length=200,
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```
💥 Falcon LLMs require PyTorch 2.0 for use with transformers!
For fast inference with Falcon, check out Text Generation Inference! Read more in this blogpost.
You will need at least 16GB of memory to swiftly run inference with Falcon-7B.
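As a rough check on that figure, a back-of-the-envelope sketch (weights only; activations and the KV cache add overhead on top):
```python
# Weights-only memory estimate for Falcon-7B loaded in bfloat16 (2 bytes per parameter).
n_params = 7e9
bytes_per_param = 2  # bfloat16
print(f"~{n_params * bytes_per_param / 1e9:.0f} GB")  # ~14 GB, consistent with the 16GB guidance
```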
Model Card for Falcon-7B
Model Details
Model Description
- Developed by: https://www.tii.ae;
- Model type: Causal decoder-only;
- Language(s) (NLP): English, German, Spanish, French (and limited capabilities in Italian, Portuguese, Polish, Dutch, Romanian, Czech, Swedish);
- License: Apache 2.0.
Model Source
- Paper: coming soon.
Uses
Direct Use
Research on large language models; as a foundation for further specialization and finetuning for specific use cases (e.g., summarization, text generation, chatbots).
Out-of-Scope Use
Production use without adequate assessment of risks and mitigation; any use cases which may be considered irresponsible or harmful.
Bias, Risks, and Limitations
Falcon-7B is trained on English and French data only, and will not generalize appropriately to other languages. Furthermore, as it is trained on large-scale corpora representative of the web, it will carry the stereotypes and biases commonly encountered online.
Recommendations
We recommend that users of Falcon-7B consider finetuning it for the specific set of tasks of interest, and that guardrails and appropriate precautions be taken for any production use.
How to Get Started with the Model
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch

model = "tiiuae/falcon-7b"

tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="auto",
)
sequences = pipeline(
    "Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
    max_length=200,
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```
Training Details
Training Data
Falcon-7B was trained on 1,500B tokens of RefinedWeb, a high-quality filtered and deduplicated web dataset which we enhanced with curated corpora. Significant components from our curated corpora were inspired by The Pile (Gao et al., 2020).
| Data source | Fraction | Tokens | Sources |
|---|---|---|---|
| RefinedWeb-English | 79% | 1,185B | massive web crawl |
| Books | 7% | 110B | |
| Conversations | 6% | 85B | Reddit, StackOverflow, HackerNews |
| Code | 3% | 45B | |
| RefinedWeb-French | 3% | 45B | massive web crawl |
| Technical | 2% | 30B | arXiv, PubMed, USPTO, etc. |
The data was tokenized with the Falcon-7B/40B tokenizer.
Training Procedure
Falcon-7B was trained on 384 A100 40GB GPUs, using a 2D parallelism strategy (PP=2, DP=192) combined with ZeRO.
Training Hyperparameters
| Hyperparameter | Value | Comment |
|---|---|---|
| Precision | bfloat16 | |
| Optimizer | AdamW | |
| Learning rate | 6e-4 | 4B tokens warm-up, cosine decay to 1.2e-5 |
| Weight decay | 1e-1 | |
| Z-loss | 1e-4 | |
| Batch size | 2304 | 30B tokens ramp-up |
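A sketch of the stated learning-rate schedule: warm-up over the first 4B tokens (the warm-up shape is assumed linear, which the card does not specify) followed by cosine decay to 1.2e-5 over the rest of the 1,500B-token run:
```python
import math

def falcon_lr(tokens_seen, peak=6e-4, final=1.2e-5, warmup=4e9, total=1.5e12):
    # Linear warm-up to the peak LR (shape assumed), then cosine decay to the
    # final LR over the remaining tokens, per the hyperparameter table above.
    if tokens_seen < warmup:
        return peak * tokens_seen / warmup
    progress = (tokens_seen - warmup) / (total - warmup)
    return final + 0.5 * (peak - final) * (1 + math.cos(math.pi * progress))

print(falcon_lr(2e9))    # mid warm-up: ~3e-4
print(falcon_lr(1.5e12)) # end of training: 1.2e-5
```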
Speeds, Sizes, Times
Training happened in early March 2023 and took about two weeks.
Evaluation
Paper coming soon.
See the OpenLLM Leaderboard for early results.
Technical Specifications
Model Architecture and Objective
Falcon-7B is a causal decoder-only model trained on a causal language modeling task (i.e., predict the next token).
The architecture is broadly adapted from the GPT-3 paper (Brown et al., 2020), with the following differences:
- Positional embeddings: rotary (Su et al., 2021);
- Attention: multiquery (Shazeer et al., 2019) and FlashAttention (Dao et al., 2022);
- Decoder-block: parallel attention/MLP with a single layer norm.
| Hyperparameter | Value | Comment |
|---|---|---|
| Layers | 32 | |
| `d_model` | 4544 | Increased to compensate for multiquery |
| `head_dim` | 64 | Reduced to optimise for FlashAttention |
| Vocabulary | 65024 | |
| Sequence length | 2048 | |
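As a quick consistency check on the table, assuming the standard relation n_heads = d_model / head_dim for the query heads:
```python
# Query-head count implied by the table (multiquery shares one key/value head across them).
d_model, head_dim = 4544, 64
n_heads = d_model // head_dim
print(n_heads)  # 71
```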
Compute Infrastructure
Hardware
Falcon-7B was trained on AWS SageMaker, on 384 A100 40GB GPUs in P4d instances.
Software
Falcon-7B was trained with a custom distributed training codebase, Gigatron. It uses a 3D parallelism approach combined with ZeRO and high-performance Triton kernels (FlashAttention, etc.).
Citation
Paper coming soon 😊. In the meantime, you can use the following information to cite:
```bibtex
@article{falcon40b,
  title={{Falcon-40B}: an open large language model with state-of-the-art performance},
  author={Almazrouei, Ebtesam and Alobeidli, Hamza and Alshamsi, Abdulaziz and Cappelli, Alessandro and Cojocaru, Ruxandra and Debbah, Merouane and Goffinet, Etienne and Heslow, Daniel and Launay, Julien and Malartic, Quentin and Noune, Badreddine and Pannier, Baptiste and Penedo, Guilherme},
  year={2023}
}
```
To learn more about the pretraining dataset, see the 📓 RefinedWeb paper.
```bibtex
@article{refinedweb,
  title={The {R}efined{W}eb dataset for {F}alcon {LLM}: outperforming curated corpora with web data, and web data only},
  author={Guilherme Penedo and Quentin Malartic and Daniel Hesslow and Ruxandra Cojocaru and Alessandro Cappelli and Hamza Alobeidli and Baptiste Pannier and Ebtesam Almazrouei and Julien Launay},
  journal={arXiv preprint arXiv:2306.01116},
  eprint={2306.01116},
  eprinttype={arXiv},
  url={https://arxiv.org/abs/2306.01116},
  year={2023}
}
```
License
Falcon-7B is made available under the Apache 2.0 license.
Contact