Instructions to use OneThink/OneThinker-SFT-Qwen3-8B with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use OneThink/OneThinker-SFT-Qwen3-8B with Transformers:
```python
# Load model directly
from transformers import AutoProcessor, AutoModelForImageTextToText

processor = AutoProcessor.from_pretrained("OneThink/OneThinker-SFT-Qwen3-8B")
model = AutoModelForImageTextToText.from_pretrained("OneThink/OneThinker-SFT-Qwen3-8B")
```
- Notebooks
- Google Colab
- Kaggle
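The loading snippet above stops at instantiating the processor and model. As a minimal sketch of what a prompt for this image-text-to-text model could look like: the chat-style message structure below follows general Transformers processor conventions, and the image URL and prompt text are placeholder assumptions, not taken from the model card.

```python
# Chat-style message list in the format commonly accepted by
# AutoProcessor.apply_chat_template in recent Transformers versions.
# The image URL and question are illustrative placeholders.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://example.com/sample.png"},
            {"type": "text", "text": "Describe the reasoning steps needed to answer: what is shown here?"},
        ],
    }
]

# With processor and model loaded as above, generation would typically be:
# inputs = processor.apply_chat_template(
#     messages, add_generation_prompt=True, tokenize=True,
#     return_dict=True, return_tensors="pt",
# )
# outputs = model.generate(**inputs, max_new_tokens=256)
# print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```

The generation calls are left commented out because they require downloading the 8B checkpoint; the message structure itself is the portable part.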
Improve model card: Add tags, license, detailed description, and performance
#1 by nielsr (HF Staff) - opened
This PR significantly enhances the model card for the OneThinker model by:
- Adding `pipeline_tag: any-to-any` to accurately reflect its capabilities across diverse image and video reasoning tasks.
- Adding `library_name: transformers` based on evidence from `config.json` and other configuration files indicating compatibility with the Hugging Face Transformers library.
- Adding `license: apache-2.0` as a common open-source license, given that no explicit license was found.
- Integrating a comprehensive "About OneThinker" section, key "Features", "Dataset" details, and "Performance" benchmarks directly from the original GitHub repository, including illustrative images.
- Adding a link to the official Hugging Face paper page: https://huggingface.co/papers/2512.03043.
- Updating the "Inference & Evaluation" section to guide users to the relevant script in the GitHub repository, as per the original source. A code snippet for sample usage is not included, as the provided GitHub README content does not contain a direct runnable Python code block.
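The metadata additions described above live in the model card's YAML front matter. A minimal sketch of the resulting header, using only the field values stated in this PR description:

```yaml
---
pipeline_tag: any-to-any
library_name: transformers
license: apache-2.0
---
```

These three keys are what drive the Hub's task filter, the "Use this model" widget, and the license badge, which is why the PR adds them.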
These improvements aim to make the model more discoverable, understandable, and user-friendly on the Hugging Face Hub.
Please review and merge if everything looks good.
KaituoFeng changed pull request status to merged