Instructions to use kakaobrain/align-base with libraries, inference providers, notebooks, and local apps.

How to use kakaobrain/align-base with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("zero-shot-image-classification", model="kakaobrain/align-base")
pipe(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/parrots.png",
    candidate_labels=["animals", "humans", "landscape"],
)
```

```python
# Load model directly
from transformers import AutoProcessor, AutoModelForZeroShotImageClassification

processor = AutoProcessor.from_pretrained("kakaobrain/align-base")
model = AutoModelForZeroShotImageClassification.from_pretrained("kakaobrain/align-base")
```

Notebooks: Google Colab, Kaggle.
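Under the hood, the zero-shot pipeline scores each candidate label by the similarity between the image embedding and each label's text embedding, then softmaxes over the labels. Here is a minimal sketch of that scoring step using random stand-in tensors (the embedding dimension is arbitrary; in the real model the embeddings would come from `model.get_image_features(...)` and `model.get_text_features(...)`, and a learned temperature also scales the logits):

```python
import torch

# Random stand-ins for one image embedding and three label embeddings.
# Dimension 64 is arbitrary for this sketch.
torch.manual_seed(0)
image_emb = torch.randn(1, 64)
text_embs = torch.randn(3, 64)  # one row per candidate label

# Cosine similarity: L2-normalize, then take dot products.
image_emb = image_emb / image_emb.norm(dim=-1, keepdim=True)
text_embs = text_embs / text_embs.norm(dim=-1, keepdim=True)
logits = image_emb @ text_embs.T  # shape (1, 3)

# Softmax over the candidate labels gives the pipeline's scores.
probs = logits.softmax(dim=-1)
```

The label with the highest entry in `probs` is what the pipeline returns as the top match.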
Increase max_position_embeddings?
#5
by schneeman - opened
I have some longer texts that I'd like to embed, and I'm bumping up against the default 512-token limit.
I'm trying to configure the model like so:

```python
processor = AlignProcessor.from_pretrained("kakaobrain/align-base")
model = AlignModel.from_pretrained("kakaobrain/align-base")
model.config.text_config.max_position_embeddings = 2048
```

But the 2048 is not adhered to, which tells me I'm probably doing it wrong. Specifically, the error I get is:
```python
embeddings = inputs_embeds + token_type_embeddings
if self.position_embedding_type == "absolute":
    position_embeddings = self.position_embeddings(position_ids)
>   embeddings += position_embeddings
E   RuntimeError: The size of tensor a (704) must match the size of tensor b (512) at non-singleton dimension 1
```
Ultimately, I'd like to use the pretrained kakaobrain/align-base with a longer context size. Is this possible?
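For context on why the config change has no effect: `from_pretrained` allocates and loads the 512-row position-embedding weight matrix before you mutate `config.text_config.max_position_embeddings`, so the error above is a 704-token input colliding with the already-loaded 512-row table. One workaround is to enlarge the embedding table itself. Below is a generic sketch of that resizing step (the exact attribute path in ALIGN, e.g. `model.text_model.embeddings.position_embeddings`, is an assumption here, and the extra rows are randomly initialized, i.e. untrained, so quality beyond 512 tokens is not guaranteed):

```python
import torch
import torch.nn as nn

def extend_position_embeddings(old_emb: nn.Embedding, new_size: int) -> nn.Embedding:
    """Build a larger position-embedding table, copying the trained rows
    and leaving positions beyond the original size randomly initialized."""
    new_emb = nn.Embedding(new_size, old_emb.embedding_dim)
    with torch.no_grad():
        new_emb.weight[: old_emb.num_embeddings] = old_emb.weight
    return new_emb

# Toy demonstration with a small table; 512 -> 2048 works the same way.
old = nn.Embedding(4, 8)
new = extend_position_embeddings(old, 16)
assert torch.equal(new.weight[:4], old.weight)
```

After swapping in the larger table you would also need to update `max_position_embeddings` in the config and any registered `position_ids` buffer so they agree with the new size; fine-tuning on long inputs would then be needed for the new positions to carry signal.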
schneeman changed discussion status to closed