Book Buddy - Question Generator

This fine-tuned model helps students study: given a passage of text they submit, it generates a question about that passage for review.

Model Details

  • Model Architecture: T5
  • Tokenizer Used: T5 tokenizer (loaded via AutoTokenizer)
  • Language: English
  • Task: Question Generation

Model Usage

How to Use

Load the tokenizer and model with the transformers library, encode the passage you want to study, and call generate to produce a question:

# Example code for using the model
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Load the tokenizer and model from this repository
tokenizer = AutoTokenizer.from_pretrained("zibaatak/book-buddy-question-generator")
model = AutoModelForSeq2SeqLM.from_pretrained("zibaatak/book-buddy-question-generator")

# Encode the passage to generate a question from
input_text = "Provide a sample input text."  # replace with your study text
input_ids = tokenizer.encode(input_text, return_tensors="pt", max_length=512, truncation=True)

# Generate a question with beam search
question_ids = model.generate(input_ids, max_length=32, num_return_sequences=1, num_beams=4)

# Decode the generated question
generated_question = tokenizer.decode(question_ids[0], skip_special_tokens=True)
print(f"Generated Question: {generated_question}")