Instructions for using DuckyBlender/polish-lobotomy with libraries, inference providers, notebooks, and local apps. The sections below show how to get started.
- Libraries
- Transformers
How to use DuckyBlender/polish-lobotomy with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="DuckyBlender/polish-lobotomy", trust_remote_code=True)
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("DuckyBlender/polish-lobotomy", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("DuckyBlender/polish-lobotomy", trust_remote_code=True)

messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```
- Notebooks
- Google Colab
- Kaggle
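For the notebook options (Google Colab or Kaggle), a minimal first cell might look like the sketch below; the install line is the only step not already covered by the Transformers snippet above, and `accelerate` is an optional convenience for device placement.

```python
# Minimal notebook cell for Google Colab or Kaggle (a GPU runtime is recommended).
!pip install -q transformers accelerate

from transformers import pipeline

pipe = pipeline("text-generation", model="DuckyBlender/polish-lobotomy", trust_remote_code=True)
print(pipe([{"role": "user", "content": "Who are you?"}], max_new_tokens=40))
```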
- Local Apps
- vLLM
How to use DuckyBlender/polish-lobotomy with vLLM:
Install from pip and serve model
```sh
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "DuckyBlender/polish-lobotomy"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "DuckyBlender/polish-lobotomy",
        "messages": [
            { "role": "user", "content": "What is the capital of France?" }
        ]
    }'
```
- SGLang
How to use DuckyBlender/polish-lobotomy with SGLang:
Install from pip and serve model
```sh
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "DuckyBlender/polish-lobotomy" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "DuckyBlender/polish-lobotomy",
        "messages": [
            { "role": "user", "content": "What is the capital of France?" }
        ]
    }'
```
Use Docker images
```sh
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
        --model-path "DuckyBlender/polish-lobotomy" \
        --host 0.0.0.0 \
        --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "DuckyBlender/polish-lobotomy",
        "messages": [
            { "role": "user", "content": "What is the capital of France?" }
        ]
    }'
```
- Docker Model Runner
How to use DuckyBlender/polish-lobotomy with Docker Model Runner:
```sh
docker model run hf.co/DuckyBlender/polish-lobotomy
```
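`docker model run` starts an interactive chat session (or answers a single prompt passed as an argument). Docker Model Runner also exposes an OpenAI-compatible HTTP API; the sketch below assumes host TCP access is enabled on the default port 12434, which depends on your Docker setup, so treat the endpoint as an assumption and check the Docker Model Runner documentation for your installation.

```sh
# One-shot prompt instead of an interactive session:
docker model run hf.co/DuckyBlender/polish-lobotomy "Who are you?"

# OpenAI-compatible API (assumes host TCP access is enabled on the default port 12434):
curl -X POST "http://localhost:12434/engines/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "hf.co/DuckyBlender/polish-lobotomy",
        "messages": [
            { "role": "user", "content": "Who are you?" }
        ]
    }'
```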
Polish-Lobotomy: An awful Polish fine-tune
Model Description
This model is my first attempt at fine-tuning Phi-3 on Polish. It is very bad, most likely because of the fine-tuning method (teaching a model a new language probably requires a full fine-tune) and the small dataset.
Training Details
- Trained on a single RTX 4060 for approximately 1 hour
- Used 8-bit QLoRA for memory-efficient training (a minimal sketch of a comparable setup is shown below)
- Despite the short training period, the model somehow managed to learn something (but not very well)
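The exact training script is not part of this card. Purely as an illustration, an 8-bit LoRA (QLoRA-style) setup for Phi-3 with `transformers` + `peft` could look like the sketch below; the base checkpoint, LoRA hyperparameters, and target modules are assumptions, not the values actually used.

```python
# Illustrative 8-bit QLoRA-style setup (not the exact script used to train this model).
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base = "microsoft/Phi-3-mini-4k-instruct"  # assumed base checkpoint
tokenizer = AutoTokenizer.from_pretrained(base, trust_remote_code=True)

# Load the frozen base model quantized to 8 bits so it fits on a single RTX 4060.
model = AutoModelForCausalLM.from_pretrained(
    base,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
    trust_remote_code=True,
)
model = prepare_model_for_kbit_training(model)

# Attach small trainable LoRA adapters; only these weights are updated during training.
lora = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
    target_modules=["qkv_proj", "o_proj"],  # Phi-3 uses a fused qkv projection
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()

# From here, a standard supervised fine-tuning loop (e.g. trl's SFTTrainer) over the
# chat-formatted dataset trains just the adapter weights.
```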
Dataset
The model was trained on a chaotic Telegram group chat; the result is basically a complete lobotomy.
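The card does not describe how the chat export was preprocessed. Purely as a hypothetical illustration, a Telegram Desktop JSON export (`result.json`) could be flattened into user/assistant pairs like this:

```python
# Hypothetical preprocessing sketch: the actual data preparation is not documented.
import json

# "result.json" is what Telegram Desktop's "Export chat history" produces in JSON format.
with open("result.json", encoding="utf-8") as f:
    export = json.load(f)

# Keep only plain-text messages (the export stores formatted text as lists of fragments).
texts = [m["text"] for m in export["messages"]
         if isinstance(m.get("text"), str) and m["text"].strip()]

# Pair each message with the one that follows it, treating the reply as the assistant turn.
samples = [
    {"messages": [
        {"role": "user", "content": prev},
        {"role": "assistant", "content": nxt},
    ]}
    for prev, nxt in zip(texts, texts[1:])
]

with open("telegram_chat.jsonl", "w", encoding="utf-8") as f:
    for s in samples:
        f.write(json.dumps(s, ensure_ascii=False) + "\n")
```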
Prompt Template
The prompt template used for this model is identical to the Phi-3 template.
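For reference, the Phi-3 instruct format wraps each turn in `<|user|>` / `<|assistant|>` markers terminated by `<|end|>`; calling `tokenizer.apply_chat_template` as in the Transformers snippet above produces this automatically:

```
<|user|>
Who are you?<|end|>
<|assistant|>
```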
Disclaimer
Please be advised that this model's output may contain nonsensical responses. Viewer discretion is strongly advised (but not really necessary).
Use this model at your own risk, and please engage with the output responsibly (but let's be real, it's not like it's going to be useful for anything).
