How to use quantumaikr/falcon-180B-chat-instruct with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("text-generation", model="quantumaikr/falcon-180B-chat-instruct")
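The pipeline object can then be called directly; a minimal sketch (the prompt and token budget are illustrative, not from the model card):
# Generate a short continuation; sampling settings are example values.
print(pipe("Once upon a time,", max_new_tokens=50, do_sample=True)[0]["generated_text"])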
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("quantumaikr/falcon-180B-chat-instruct")
model = AutoModelForCausalLM.from_pretrained("quantumaikr/falcon-180B-chat-instruct")
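With the tokenizer and model loaded as above, a minimal generation call looks like the following sketch (prompt and sampling settings are illustrative):
import torch

# Tokenize a prompt and generate a continuation with the loaded model.
inputs = tokenizer("Once upon a time,", return_tensors="pt").to(model.device)
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=50, do_sample=True, top_k=10)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))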
How to use quantumaikr/falcon-180B-chat-instruct with vLLM:
# Install vLLM from pip:
pip install vllm
# Start the vLLM server:
vllm serve "quantumaikr/falcon-180B-chat-instruct"
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "quantumaikr/falcon-180B-chat-instruct",
"prompt": "Once upon a time,",
"max_tokens": 512,
"temperature": 0.5
}'
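The same server can be called from Python with the official openai client (pip install openai); a sketch, assuming the default vLLM port. The api_key value is a placeholder: vLLM ignores it unless the server was started with --api-key.
from openai import OpenAI

# Point the client at the local vLLM server's OpenAI-compatible API.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
completion = client.completions.create(
    model="quantumaikr/falcon-180B-chat-instruct",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(completion.choices[0].text)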
How to use quantumaikr/falcon-180B-chat-instruct with SGLang:
# Install SGLang from pip:
pip install sglang
# Start the SGLang server:
python3 -m sglang.launch_server \
--model-path "quantumaikr/falcon-180B-chat-instruct" \
--host 0.0.0.0 \
--port 30000
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "quantumaikr/falcon-180B-chat-instruct",
"prompt": "Once upon a time,",
"max_tokens": 512,
"temperature": 0.5
}'
# Or run the SGLang server with Docker instead:
docker run --gpus all \
--shm-size 32g \
-p 30000:30000 \
-v ~/.cache/huggingface:/root/.cache/huggingface \
--env "HF_TOKEN=<secret>" \
--ipc=host \
lmsysorg/sglang:latest \
python3 -m sglang.launch_server \
--model-path "quantumaikr/falcon-180B-chat-instruct" \
--host 0.0.0.0 \
--port 30000
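Whether launched via pip or Docker, the server is called exactly as shown above. Since this checkpoint is chat-tuned, the OpenAI-compatible chat endpoint may be the more natural interface; a sketch, assuming the served model provides a chat template (the message content is illustrative):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "quantumaikr/falcon-180B-chat-instruct",
    "messages": [{"role": "user", "content": "Tell me a story that starts with: Once upon a time,"}],
    "max_tokens": 512,
    "temperature": 0.5
  }'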
How to use quantumaikr/falcon-180B-chat-instruct with Docker Model Runner:
docker model run hf.co/quantumaikr/falcon-180B-chat-instruct
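docker model run opens an interactive chat session by default; the CLI also accepts a one-shot prompt as a trailing argument (a sketch; the prompt text is illustrative):
docker model run hf.co/quantumaikr/falcon-180B-chat-instruct "Once upon a time,"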
quantumaikr/falcon-180B-chat-instruct is a 180B-parameter causal decoder-only model built by quantumaikr on top of Falcon-180B-chat.
To run inference with the model in full bfloat16 precision you need approximately 8x A100 80GB GPUs or equivalent: at 2 bytes per parameter, the weights alone occupy about 360 GB, and the rest of the 640 GB budget is needed for activations and the KV cache.
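A back-of-the-envelope check of that figure (pure arithmetic, no assumptions beyond the parameter count):
# Rough memory estimate for bfloat16 inference.
n_params = 180e9        # 180B parameters
bytes_per_param = 2     # bfloat16 stores 2 bytes per value
weights_gb = n_params * bytes_per_param / 1e9
print(f"weights alone: ~{weights_gb:.0f} GB")  # ~360 GB vs. 8 x 80 GB = 640 GB available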
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model = "quantumaikr/falcon-180B-chat-instruct"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
torch_dtype=torch.bfloat16,
trust_remote_code=True,
device_map="auto",
)
sequences = pipeline(
"Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
max_length=200,
do_sample=True,
top_k=10,
num_return_sequences=1,
eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
🇰🇷 www.quantumai.kr
🇰🇷 hi@quantumai.kr [Inquiries about adopting hyperscale language-model technology are welcome]