How to use with the Transformers library
# Use a pipeline as a high-level helper
# Warning: Pipeline type "image-to-text" is no longer supported in transformers v5.
# You must load the model directly (see below) or downgrade to v4.x with:
#   pip install "transformers<5.0.0"
from transformers import pipeline

pipe = pipeline("image-to-text", model="LeroyDyer/SpydazWebAI_VisionEncoderDecoderModel_Mini548m")
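
If you stay on transformers v4.x, the pipeline can be called directly on an image path, URL, or PIL image. A minimal usage sketch; "example.jpg" is just a placeholder path:

result = pipe("example.jpg")
print(result)  # e.g. [{"generated_text": "..."}]
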
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("LeroyDyer/SpydazWebAI_VisionEncoderDecoderModel_Mini548m")
model = AutoModelForCausalLM.from_pretrained("LeroyDyer/SpydazWebAI_VisionEncoderDecoderModel_Mini548m")
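
Since the checkpoint is a VisionEncoderDecoderModel, it can also be loaded with that class and paired with an image processor for captioning. This is a minimal sketch, assuming the repo ships tokenizer files and that the ViT base image processor matches the encoder; the image path is a placeholder.

from PIL import Image
from transformers import AutoImageProcessor, AutoTokenizer, VisionEncoderDecoderModel

model_id = "LeroyDyer/SpydazWebAI_VisionEncoderDecoderModel_Mini548m"
model = VisionEncoderDecoderModel.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)
# Fall back to the encoder's image processor if the repo has no preprocessor config
image_processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224-in21k")

image = Image.open("example.jpg").convert("RGB")  # placeholder image
pixel_values = image_processor(images=image, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values, max_new_tokens=32)
print(tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0])
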
Quick Links

ADD HEAD: Mistral VisionEncoderDecoderModel
print('Add Vision...')
# ADD HEAD
# Combine a pre-trained ViT encoder and a pre-trained Mistral decoder
# to form a Seq2Seq (image-to-text) model
from transformers import VisionEncoderDecoderModel

Vmodel = VisionEncoderDecoderModel.from_encoder_decoder_pretrained(
    "google/vit-base-patch16-224-in21k", "LeroyDyer/Mixtral_AI_Tiny"
)
_Encoder_ImageProcessor = Vmodel.encoder
_Decoder_ImageTokenizer = Vmodel.decoder
_VisionEncoderDecoderModel = Vmodel

# Attach the combined model and its sub-components to the base language model
# (LM_MODEL is the previously loaded base LM)
LM_MODEL.VisionEncoderDecoder = _VisionEncoderDecoderModel
LM_MODEL.Encoder_ImageProcessor = _Encoder_ImageProcessor
LM_MODEL.Decoder_ImageTokenizer = _Decoder_ImageTokenizer
LM_MODEL  # display the updated model (notebook-style)
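
Before the combined model can generate (or be fine-tuned), the VisionEncoderDecoder config needs a decoder start token and a pad token. A minimal sketch follows, assuming the decoder tokenizer is loaded from LeroyDyer/Mixtral_AI_Tiny and using a placeholder image path.

from PIL import Image
from transformers import AutoImageProcessor, AutoTokenizer

decoder_tokenizer = AutoTokenizer.from_pretrained("LeroyDyer/Mixtral_AI_Tiny")
image_processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224-in21k")

# Required ids for generation/training with a VisionEncoderDecoderModel
Vmodel.config.decoder_start_token_id = decoder_tokenizer.bos_token_id
Vmodel.config.pad_token_id = (
    decoder_tokenizer.pad_token_id or decoder_tokenizer.eos_token_id
)

image = Image.open("example.jpg").convert("RGB")  # placeholder image
pixel_values = image_processor(images=image, return_tensors="pt").pixel_values
output_ids = Vmodel.generate(pixel_values, max_new_tokens=32)
print(decoder_tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0])
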


Model size: 0.6B params · Safetensors · F16