# slidehelper-outlineGen

This model is a fine-tuned version of `TinyLlama/TinyLlama-1.1B-Chat-v1.0` for generating presentation outlines.
## Model Description

This model has been fine-tuned to generate structured presentation outlines for a given topic.
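The model expects the Alpaca-style instruction template used in the usage code below. A minimal sketch of building that prompt (the template is taken from this card's generation code; `build_prompt` is an illustrative helper, not part of the released package):

```python
def build_prompt(topic: str) -> str:
    # Alpaca-style template matching the generation code in this card
    return (
        "### Instruction:\n"
        f"Generate an outline for a presentation on: {topic}\n\n"
        "### Response:"
    )

print(build_prompt("artificial intelligence"))
```

The generated outline is whatever the model produces after the final `### Response:` marker.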
## Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

# Load the model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("Alestin/slidehelper-outlineGen")
model = AutoModelForCausalLM.from_pretrained("Alestin/slidehelper-outlineGen")
model.eval()

def generate_outline(topic, max_new_tokens=200):
    prompt = f"### Instruction:\nGenerate an outline for a presentation on: {topic}\n\n### Response:"
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        outputs = model.generate(
            **inputs,
            max_new_tokens=max_new_tokens,
            temperature=0.7,
            top_p=0.9,
            repetition_penalty=1.2,
            do_sample=True,
            pad_token_id=tokenizer.eos_token_id,
        )
    generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
    # Keep only the text after the response marker; using [-1] avoids an
    # IndexError if the marker is absent from the decoded output
    response = generated_text.split("### Response:")[-1].strip()
    return response

# Example usage
outline = generate_outline("artificial intelligence")
print(outline)
```
## Model tree for Alestin/slidehelper-outlineGen

**Base model:** `TinyLlama/TinyLlama-1.1B-Chat-v1.0`