Active filters: ModelOpt
Model • Task • Params • Downloads • Likes
NVFP4/Qwen3-30B-A3B-Thinking-2507-FP4 • Text Generation • 16B • 1.17k • 4
— • Text Generation • 0.4B • 313 • 2
jonlizardo/affine-gpt-oss-120b-light • Text Generation • 0.2B • 4 • —
nvidia/Qwen3-235B-A22B-Thinking-2507-Eagle3 • Text Generation • 0.3B • 122 • —
nvidia/Qwen3-30B-A3B-Thinking-2507-Eagle3 • Text Generation • 0.1B • 300 • 2
nvidia/Phi-4-multimodal-instruct-NVFP4 • — • 4B • 2.72k • 11
nvidia/Phi-4-multimodal-instruct-FP8 • — • 6B • 870 • 7
nvidia/Phi-4-reasoning-plus-FP8 • — • 15B • 439 • 6
nvidia/Phi-4-reasoning-plus-NVFP4 • — • 8B • 1.12k • 9
nvidia/Llama-3.1-8B-Instruct-NVFP4 • — • 5B • 184k • 10
— • Text Generation • 5B • 49.2k • 17
— • Text Generation • 8B • 18.8k • 5
— • Text Generation • 8B • 10.3k • 10
— • Text Generation • 15B • 3.63k • 5
— • Text Generation • 17B • 82.1k • 16
nvidia/Qwen2.5-VL-7B-Instruct-FP8 • Text Generation • 8B • 1.2k • 8
nvidia/Qwen2.5-VL-7B-Instruct-NVFP4 • Text Generation • 5B • 73.1k • 15
nvidia/gpt-oss-120b-Eagle3-short-context • Text Generation • — • 2.87k • 16
nvidia/Llama-3.3-70B-Instruct-Eagle3 • Text Generation • — • 211 • 2
nvidia/DeepSeek-V3.1-NVFP4 • Text Generation • 394B • 14.6k • 16
nvidia/Kimi-K2-Thinking-NVFP4 • Text Generation • — • 20.5k • 30
nvidia/gpt-oss-120b-Eagle3-throughput • Text Generation • — • 1.03k • 33
nvidia/Qwen3-Next-80B-A3B-Instruct-NVFP4 • Text Generation • — • 20.8k • 39
nvidia/Qwen3-Next-80B-A3B-Thinking-NVFP4 • Text Generation • — • 3.48k • 59
nvidia/Qwen3-235B-A22B-Thinking-2507-FP4-Eagle3 • Text Generation • — • 84 • —
nvidia/Qwen3-VL-235B-A22B-Instruct-NVFP4 • — • 119B • 903 • 3
nvidia/DeepSeek-V3.2-NVFP4 • Text Generation • 394B • 53.1k • 15
nvidia/Qwen3-235B-A22B-Thinking-2507-NVFP4 • Text Generation • — • 1.03k • 9
— • Text Generation • — • 1.47M • 81
vincentzed-hf/Qwen3-Coder-Next-NVFP4 • Text Generation • — • 280 • 7
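The download counts in this listing use abbreviated k/M suffixes (e.g. 184k, 1.47M). A minimal sketch for normalizing those figures to plain integers so entries can be ranked by downloads; the `parse_count` helper and the small sample dictionary are illustrative, with model names and counts taken from the table above:

```python
# Abbreviated-count suffixes as used in the listing above.
SUFFIXES = {"k": 1_000, "M": 1_000_000}

def parse_count(text: str) -> int:
    """Convert an abbreviated count to an integer: '184k' -> 184000, '1.47M' -> 1470000, '313' -> 313."""
    if text and text[-1] in SUFFIXES:
        # round() avoids float truncation artifacts (e.g. 1.17 * 1000 -> 1169.999...).
        return round(float(text[:-1]) * SUFFIXES[text[-1]])
    return int(text)

# Sample download figures from the table.
downloads = {
    "nvidia/Llama-3.1-8B-Instruct-NVFP4": "184k",
    "nvidia/Qwen2.5-VL-7B-Instruct-NVFP4": "73.1k",
    "nvidia/DeepSeek-V3.2-NVFP4": "53.1k",
}

# Rank models by normalized download count, highest first.
ranked = sorted(downloads, key=lambda m: parse_count(downloads[m]), reverse=True)
```

With the sample figures above, `ranked` puts `nvidia/Llama-3.1-8B-Instruct-NVFP4` first at 184,000 downloads.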