Instructions to use microsoft/kosmos-2-patch14-224 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use microsoft/kosmos-2-patch14-224 with Transformers:
```python
# Use a pipeline as a high-level helper
# Warning: the "image-to-text" pipeline type is no longer supported in transformers v5.
# Load the model directly (see below) or downgrade to v4.x with:
#   pip install "transformers<5.0.0"
from transformers import pipeline

pipe = pipeline("image-to-text", model="microsoft/kosmos-2-patch14-224")
```

```python
# Load model directly
from transformers import AutoProcessor, AutoModelForImageTextToText

processor = AutoProcessor.from_pretrained("microsoft/kosmos-2-patch14-224")
model = AutoModelForImageTextToText.from_pretrained("microsoft/kosmos-2-patch14-224")
```

- Notebooks
- Google Colab
- Kaggle
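The direct-loading snippet above can be extended into an end-to-end example. This is a sketch based on the Kosmos-2 model card: the `<grounding>` prompt prefix and `processor.post_process_generation` are specific to the Kosmos-2 processor, and the image URL argument is whatever you supply. Calling the function downloads the model weights, so the heavy imports are kept inside it.

```python
# Sketch: caption an image with Kosmos-2 and extract grounded entities.
# Calling describe_image() downloads the model weights on first use.

DEFAULT_PROMPT = "<grounding>An image of"

def describe_image(image_url, prompt=DEFAULT_PROMPT):
    # Imports kept local so this module can be loaded without transformers installed.
    import requests
    from PIL import Image
    from transformers import AutoProcessor, AutoModelForImageTextToText

    processor = AutoProcessor.from_pretrained("microsoft/kosmos-2-patch14-224")
    model = AutoModelForImageTextToText.from_pretrained("microsoft/kosmos-2-patch14-224")

    image = Image.open(requests.get(image_url, stream=True).raw)
    inputs = processor(text=prompt, images=image, return_tensors="pt")

    generated_ids = model.generate(
        pixel_values=inputs["pixel_values"],
        input_ids=inputs["input_ids"],
        attention_mask=inputs["attention_mask"],
        image_embeds_position_mask=inputs["image_embeds_position_mask"],
        max_new_tokens=64,
    )
    generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]

    # Kosmos-2's processor splits the raw output into a clean caption plus
    # (entity, text span, bounding boxes) tuples.
    caption, entities = processor.post_process_generation(generated_text)
    return caption, entities
```

`post_process_generation` returns the caption along with grounded entities and their normalized bounding boxes, which is what distinguishes Kosmos-2 from a plain captioning model.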
Can I pass the prompt via a POST request to the HF Inference endpoint / API?
#5
by jamesdhope - opened
Hi there, can I pass the prompt in the POST request to the HF Inference Endpoint, and what is the correct payload format, please? Thanks.
I am not sure whether this model is already supported in Inference Endpoints (I am not on that team). I know it is not yet supported in transformers' pipeline.
I will check the Inference Endpoints part.
Is there any update on this issue?
Hi~
For this model, at the moment, a custom inference handler is needed. You can check the following documentation:
https://huggingface.co/docs/inference-endpoints/guides/custom_handler
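As a rough sketch of what such a custom handler could look like: the `EndpointHandler` class with `__init__`/`__call__` is the interface described in that guide, but the payload keys used here (`inputs`, `image`, `prompt`) are assumptions for illustration, not a documented schema.

```python
# handler.py -- sketch of a custom Inference Endpoints handler for Kosmos-2.
# Placed at the repository root; Inference Endpoints instantiates EndpointHandler
# with the local model path and calls it with the decoded request payload.
import base64
import io


class EndpointHandler:
    def __init__(self, path=""):
        # Imports kept local so this file can be inspected without transformers installed.
        from transformers import AutoProcessor, AutoModelForImageTextToText

        self.processor = AutoProcessor.from_pretrained(path)
        self.model = AutoModelForImageTextToText.from_pretrained(path)

    def __call__(self, data):
        # Assumed payload shape (not an official schema):
        # {"inputs": {"image": "<base64>", "prompt": "<grounding>An image of"}}
        from PIL import Image

        payload = data["inputs"]
        image = Image.open(io.BytesIO(base64.b64decode(payload["image"])))
        prompt = payload.get("prompt", "<grounding>An image of")

        inputs = self.processor(text=prompt, images=image, return_tensors="pt")
        generated_ids = self.model.generate(
            pixel_values=inputs["pixel_values"],
            input_ids=inputs["input_ids"],
            attention_mask=inputs["attention_mask"],
            image_embeds_position_mask=inputs["image_embeds_position_mask"],
            max_new_tokens=64,
        )
        text = self.processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
        caption, entities = self.processor.post_process_generation(text)
        return {"caption": caption, "entities": entities}
```

With a handler like this deployed, the prompt from the original question would travel in the POST body, e.g. `requests.post(endpoint_url, headers=auth_headers, json={"inputs": {"image": b64_image, "prompt": "<grounding>An image of"}})`.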