Instructions to use mhenrichsen/danskgpt-tiny with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use mhenrichsen/danskgpt-tiny with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="mhenrichsen/danskgpt-tiny")

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("mhenrichsen/danskgpt-tiny")
model = AutoModelForCausalLM.from_pretrained("mhenrichsen/danskgpt-tiny")
```
- Inference
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use mhenrichsen/danskgpt-tiny with vLLM:
Install from pip and serve model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "mhenrichsen/danskgpt-tiny"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "mhenrichsen/danskgpt-tiny",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
Use Docker
```shell
docker model run hf.co/mhenrichsen/danskgpt-tiny
```
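The curl call above can also be made from Python. Below is a minimal sketch using only the standard library; the helper name `build_completion_request` and the Danish prompt are illustrative, and it assumes the vLLM server from the previous step is running on localhost:8000:

```python
import json
import urllib.request

def build_completion_request(prompt, model="mhenrichsen/danskgpt-tiny",
                             max_tokens=512, temperature=0.5,
                             base_url="http://localhost:8000"):
    """Build a POST request for the OpenAI-compatible /v1/completions endpoint."""
    payload = {
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }
    return urllib.request.Request(
        f"{base_url}/v1/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_completion_request("Der var engang,")

# Sending the request requires a running vLLM server:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["text"])
```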
- SGLang
How to use mhenrichsen/danskgpt-tiny with SGLang:
Install from pip and serve model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "mhenrichsen/danskgpt-tiny" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "mhenrichsen/danskgpt-tiny",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
Use Docker images
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "mhenrichsen/danskgpt-tiny" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "mhenrichsen/danskgpt-tiny",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
- Docker Model Runner
How to use mhenrichsen/danskgpt-tiny with Docker Model Runner:
```shell
docker model run hf.co/mhenrichsen/danskgpt-tiny
```
DanskGPT-tiny
DanskGPT-tiny is a 1.1 billion parameter LLaMA-based LLM. Its training is a continuation of TinyLLaMA.
The model was trained on 8 billion tokens of synthetic Danish text.
This is a so-called "foundation/completion" model and is therefore not intended for chat.
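Because it is a completion model, it has no chat template: instead of sending chat messages, you steer it with a prompt whose pattern it can continue, for example a few-shot prompt. The translation-pair format below is purely illustrative and not part of the model card:

```python
# Build a few-shot completion prompt; a completion model continues the pattern.
examples = [
    ("hund", "dog"),
    ("kat", "cat"),
]
prompt = "\n".join(f"Dansk: {da}\nEngelsk: {en}" for da, en in examples)
prompt += "\nDansk: hest\nEngelsk:"

print(prompt)
# The prompt can then be passed to the Transformers pipeline or vLLM shown
# elsewhere on this page, e.g. pipe(prompt, max_new_tokens=5).
```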
Inference
Using vLLM.
```shell
pip install vllm
```

```python
from vllm import LLM, SamplingParams

sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=512)
llm = LLM(model="mhenrichsen/danskgpt-tiny")

while True:
    prompt = input("Skriv: ")
    outputs = llm.generate(prompt, sampling_params)
    for output in outputs:
        prompt = output.prompt
        generated_text = output.outputs[0].text
        print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
Need help?
If you have questions or need help with LLMs or with automating text-based tasks, feel free to contact me.
/Mads