Tags: Any-to-Any · Transformers · Safetensors · qwen2_5_vl · image-text-to-text · custom_code · text-generation-inference
Instructions to use modelscope/Nexus-Gen with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use modelscope/Nexus-Gen with Transformers:
```python
# Load model directly
from transformers import AutoProcessor, AutoModelForImageTextToText

processor = AutoProcessor.from_pretrained("modelscope/Nexus-Gen", trust_remote_code=True)
model = AutoModelForImageTextToText.from_pretrained("modelscope/Nexus-Gen", trust_remote_code=True)
```

- Notebooks
- Google Colab
- Kaggle
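Once the processor and model are loaded, inference for an image-text-to-text task typically follows the standard Transformers multimodal chat format. The sketch below shows the expected message structure; the image URL is a hypothetical placeholder, and since Nexus-Gen ships custom code (`trust_remote_code=True`), its exact input format may differ from this assumption.

```python
# A minimal sketch of the chat-style input a Qwen2.5-VL-style processor
# usually expects. Keys follow the standard Transformers multimodal chat
# format; Nexus-Gen's custom code may deviate (assumption).
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://example.com/cat.png"},  # hypothetical URL
            {"type": "text", "text": "Describe this image."},
        ],
    }
]

# With the processor and model loaded as shown above, generation would
# look roughly like this (commented out: requires downloading the weights):
# inputs = processor.apply_chat_template(
#     messages, add_generation_prompt=True, tokenize=True,
#     return_dict=True, return_tensors="pt",
# )
# output_ids = model.generate(**inputs, max_new_tokens=128)
# print(processor.batch_decode(output_ids, skip_special_tokens=True)[0])
```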
Update pipeline tag to any-to-any
#4
Opened by nielsr (HF Staff)
This PR updates the pipeline_tag to any-to-any in the model card's metadata. This change more accurately reflects the model's unified capabilities, including image understanding, generation, and editing, as described in its accompanying paper. This also ensures the model is discoverable under the any-to-any pipeline on the Hugging Face Hub (e.g., via https://huggingface.co/models?pipeline_tag=any-to-any).
mi804 changed pull request status to merged