---
license: creativeml-openrail-m
library_name: peft
base_model: tiiuae/falcon-7b
---

## Training procedure

Sci-MCQ-LLMs is a language model fine-tuned from the `tiiuae/falcon-7b` base model on a dataset of multiple-choice questions (MCQs) covering science subjects. Fine-tuning was carried out with the Hugging Face Transformers library and PEFT adapters, using supervised training.
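The setup described above might look roughly like the sketch below. The LoRA rank, hyperparameters, and dataset handling are illustrative assumptions, not the values actually used to produce this model; `train_dataset` is assumed to be a pre-tokenized causal-LM dataset of MCQ prompt/answer text.

```python
def finetune_sci_mcq(train_dataset, output_dir="sci-mcq-llms"):
    """Hypothetical supervised fine-tuning sketch (illustrative values only)."""
    from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                              TrainingArguments, DataCollatorForLanguageModeling)
    from peft import LoraConfig, get_peft_model

    tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-7b")
    model = AutoModelForCausalLM.from_pretrained("tiiuae/falcon-7b")

    # Wrap the base model with LoRA adapters so only small low-rank
    # matrices are trained; the 7B base weights stay frozen.
    lora = LoraConfig(
        r=16,                                  # assumed rank
        lora_alpha=32,
        lora_dropout=0.05,
        target_modules=["query_key_value"],    # falcon attention projection
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(model, lora)

    args = TrainingArguments(
        output_dir=output_dir,
        per_device_train_batch_size=4,         # assumed hyperparameters
        num_train_epochs=3,
        learning_rate=2e-4,
    )
    trainer = Trainer(
        model=model,
        args=args,
        train_dataset=train_dataset,
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    )
    trainer.train()
    model.save_pretrained(output_dir)          # saves only the adapter weights
```

Because PEFT saves only the adapter, the checkpoint this sketch would produce is a few megabytes rather than the full 7B-parameter model.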

The fine-tuned model can generate predictions for science-related MCQs based on user input. It builds on the `falcon-7b` base model, which has 7 billion parameters, making it suitable for complex language-understanding tasks.

To use the Sci-MCQ-LLMs model, the user provides a question together with its answer options, and the model generates the most appropriate response among the available choices. Predictions are produced by tokenizing the prompt and sampling from the language model, with the aim of returning accurate, contextually relevant answers.
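A minimal usage sketch is shown below. The prompt template and the adapter repo id (`your-username/sci-mcq-llms`) are placeholders, since the card does not specify either; the loading pattern (base model plus `PeftModel.from_pretrained`) is the standard way to apply a PEFT adapter.

```python
def format_mcq_prompt(question, options):
    """Render a science MCQ as a single prompt string.

    `options` is a list of (label, text) pairs, e.g. [("A", "Venus")].
    This template is a hypothetical example, not the one used in training.
    """
    lines = [f"Question: {question}"]
    lines += [f"{label}. {text}" for label, text in options]
    lines.append("Answer:")
    return "\n".join(lines)


def load_sci_mcq_model(adapter_id="your-username/sci-mcq-llms"):
    """Load falcon-7b and apply the PEFT adapter (placeholder repo id)."""
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-7b")
    base = AutoModelForCausalLM.from_pretrained("tiiuae/falcon-7b", device_map="auto")
    model = PeftModel.from_pretrained(base, adapter_id)  # adapter weights on top
    return tokenizer, model
```

With the model loaded, one would tokenize `format_mcq_prompt(...)`, call `model.generate`, and decode the completion to read off the chosen option.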

## Framework versions

  • PEFT 0.5.0.dev0