HyperNova 60B 2602

This model was converted to MLX format from MultiverseComputingCAI/Hypernova-60B-2602 using mlx-lm version 0.30.7.

Model Overview

HyperNova 60B 2602 is a model by Multiverse Computing based on OpenAI's gpt-oss-120b. The original gpt-oss-120b is an open-weight model (117B total parameters, 5.1B active per token in its MoE architecture) designed for powerful reasoning, agentic tasks, and versatile developer use. This version is compressed with CompactifAI, Multiverse Computing's proprietary compression technology, which reduces parameter count and memory requirements while aiming to preserve strong reasoning performance.

The model is instruction-tuned and supports native tool calling (function calling with defined schemas, structured outputs, and agent-style workflows). HyperNova 60B 2602 is intended for the same broad use cases as gpt-oss-120b—reasoning, code generation, RAG, and tool-augmented applications—with lower memory footprint and deployment flexibility.

Model Specifications

Specification        Value
Base model           openai/gpt-oss-120b (117B params, 5.1B active MoE)
Total parameters     60B (4.8B active MoE)

Key Characteristics

Characteristic       Description
Base model           OpenAI gpt-oss-120b (117B params, MoE; open-weight, Apache 2.0)
🛠️ Tool calling      Native support; OpenAI-style function/tool-calling schemas; agentic use (function calling, structured outputs)
🧠 Parameters        60B total after CompactifAI compression (reduced vs. the 117B base)
📐 Architecture      Decoder-only Transformer (gpt-oss lineage)
🗜️ Compression       CompactifAI (proprietary compression technology)
Primary language     English
Other languages      Not formally evaluated

Languages

  • Primary language: English
  • Other languages: Not formally evaluated

The model was trained primarily on English-language data. Performance on other languages may vary and has not been systematically measured.


Tool Calling

HyperNova 60B 2602 supports native tool use and is well-suited for:

  • Function calling with defined schemas
  • Structured outputs
  • Agentic operations (e.g. browser tasks, code execution where supported)

The model can detect when to invoke tools, emit structured JSON tool calls, and consume tool outputs to continue generation. Tool-calling behavior follows OpenAI-style schemas; compatibility refers to format and structure—exact parity with the base or other models is not guaranteed.
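As an illustration, an OpenAI-style function schema of the kind the model can be prompted with might look like the following. The get_weather tool and its fields are hypothetical, chosen to match the example call shown under "Example Tool Call":

```python
# Hypothetical OpenAI-style function schema; the tool name and parameters are
# illustrative and not part of this model card.
get_weather_schema = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up the weather forecast for a city on a given date.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name, e.g. Paris"},
                "date": {"type": "string", "description": "ISO date, e.g. 2026-02-10"},
            },
            "required": ["city", "date"],
        },
    },
}

print(get_weather_schema["function"]["name"])  # → get_weather
```

Schemas in this shape are typically passed to the tokenizer's chat template (e.g. via a tools argument) so the model knows which functions it may call and with what arguments.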

Example Tool Call

{
  "name": "get_weather",
  "arguments": {
    "city": "Paris",
    "date": "2026-02-10"
  }
}
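On the client side, a call in this shape can be parsed and dispatched to a local function. A minimal sketch follows; get_weather here is a stand-in implementation, and in practice the raw JSON string would come from the model's generation:

```python
import json

def get_weather(city, date):
    # Stand-in tool implementation; a real tool would query a weather API.
    return {"city": city, "date": date, "forecast": "unavailable"}

# Registry mapping tool names the model may emit to local callables.
TOOLS = {"get_weather": get_weather}

def dispatch_tool_call(raw):
    """Parse an OpenAI-style JSON tool call and invoke the matching tool."""
    call = json.loads(raw)
    fn = TOOLS[call["name"]]        # KeyError here means an unknown tool
    return fn(**call["arguments"])  # arguments map onto keyword parameters

raw_call = '{"name": "get_weather", "arguments": {"city": "Paris", "date": "2026-02-10"}}'
result = dispatch_tool_call(raw_call)
print(result["city"])  # → Paris
```

The returned result would then be appended to the conversation as a tool message so the model can consume it and continue generation.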

Built by Multiverse Computing

Safetensors: 59B params · tensor types BF16, U8, U32 · MLX

Quantization: 4-bit


Model repository: TheCluster/Hypernova-60B-2602-MLX-mxfp4