Tags: Text Generation · Transformers · PyTorch · English · llava
How to use with the Transformers library
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="tosin/LLaDoc")

# Load model directly
from transformers import AutoProcessor, AutoModelForCausalLM

processor = AutoProcessor.from_pretrained("tosin/LLaDoc")
model = AutoModelForCausalLM.from_pretrained("tosin/LLaDoc")
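LLaVA-style models expect a conversation template with an `<image>` placeholder where the processor injects image features. A minimal sketch of building a document-QA prompt for the processor above, assuming the standard LLaVA-1.5 template (the fine-tuned model may use a different format; verify against its config):

```python
def build_llava_prompt(question: str) -> str:
    # Assumed LLaVA-1.5 conversation template: the <image> token marks
    # where the processor inserts image embeddings.
    return f"USER: <image>\n{question} ASSISTANT:"

prompt = build_llava_prompt("What is the invoice total?")
# The prompt plus a PIL image would then be passed to the processor, e.g.:
# inputs = processor(text=prompt, images=image, return_tensors="pt")
# output_ids = model.generate(**inputs, max_new_tokens=64)
```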
LLaDoc (Large Language and Document) model

This model is LLaVA-1.5 (7B) fine-tuned on the iDocVQA dataset, intended for use as a multimodal document question-answering system. Note that the training dataset is limited in scope, covering only certain domains.

The model achieves 29.58% accuracy on the validation set.
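VQA-style accuracy is typically computed as exact match between the generated answer and the reference after light normalization. A minimal sketch of such a metric (a hypothetical helper for illustration, not taken from the LLaDoc repository, and possibly simpler than the evaluation actually used):

```python
def exact_match_accuracy(predictions, references):
    """Fraction of predictions that exactly match their reference
    after lowercasing and whitespace normalization."""
    norm = lambda s: " ".join(s.lower().split())
    hits = sum(norm(p) == norm(r) for p, r in zip(predictions, references))
    return hits / len(references)

acc = exact_match_accuracy(["Paris ", "42"], ["paris", "41"])  # -> 0.5
```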

Please refer to the original LLaVA project for information about preprocessing, training, and full details of the base model.

The paper for this work is available on arXiv: https://arxiv.org/abs/2402.00453
