# HP-Morningstar-Hackathon

Model collection staged for the HP + Morningstar AI Hackathon, designed to run locally on the HP ZGX Nano AI Station.

## Models

| Model | Size | Use Case |
|---|---|---|
| Qwen/Qwen3-14B-AWQ | 14B params (4-bit) | Agent LLM with strong structured output |
| Qwen/Qwen3.6-27B-FP8 | 27B params (FP8) | Higher-capability agent for complex reasoning |
| mistralai/Ministral-3-8B-Instruct-2512 | 3.8B params | Lightweight/fast agent for rapid iteration |

## Purpose

These models are pre-staged for a 4-hour hackathon where Morningstar developers build a multi-tool financial compliance agent running entirely on local infrastructure. The models were selected for compatibility with the ZGX Nano's 128GB unified memory and Ollama's serving capabilities.
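As a rough sanity check on the 128GB memory budget, the weight footprint of each model can be estimated from its parameter count and quantization. The bytes-per-parameter figures below are approximations (4-bit AWQ ≈ 0.5 B/param, FP8 ≈ 1 B/param, and an assumed bf16 ≈ 2 B/param for the unquantized Ministral), and the estimate covers weights only, not KV cache, activations, or runtime overhead:

```python
# Rough weight-memory estimate for the staged models (weights only;
# KV cache, activations, and serving overhead are extra).
# Bytes-per-parameter values are approximations for each format.

BYTES_PER_PARAM = {"awq-4bit": 0.5, "fp8": 1.0, "bf16": 2.0}

def weight_gb(params_billion: float, fmt: str) -> float:
    """Approximate in-memory size of model weights in GB."""
    return params_billion * BYTES_PER_PARAM[fmt]

estimates = {
    "Qwen3-14B-AWQ": weight_gb(14, "awq-4bit"),       # ~7 GB
    "Qwen3.6-27B-FP8": weight_gb(27, "fp8"),          # ~27 GB
    "Ministral-3-8B (bf16 assumed)": weight_gb(3.8, "bf16"),  # ~7.6 GB
}

for name, gb in estimates.items():
    print(f"{name}: ~{gb:.1f} GB weights")
```

Even with all three models resident at once, the estimated weight total stays far below the ZGX Nano's 128GB unified memory, leaving headroom for KV cache and tooling.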

## Usage

Clone this repo to the ZGX Nano and serve the models via Ollama, or load them directly with transformers or vLLM.
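Once a model is imported into Ollama, an agent can call it over Ollama's local REST API (`POST /api/generate` on the default port 11434). A minimal sketch, assuming the 14B model has been imported under the placeholder tag `qwen3:14b` (use whatever name you registered it with):

```python
import json
import urllib.request

# Ollama's default local endpoint for single-turn generation.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(model: str, prompt: str) -> dict:
    """Build a non-streaming request body for Ollama's /api/generate."""
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str) -> str:
    """Send the prompt to the local Ollama server and return the completion."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(model, prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example call (requires a running Ollama server with the model imported):
# generate("qwen3:14b", "Summarize the three pillars of KYC compliance.")
```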
