blascotobasco committed
Commit f5a4afc · verified · 1 parent: b3a1dd1

Update README.md

Files changed (1):
  1. README.md +1 -45

README.md CHANGED
@@ -2,51 +2,7 @@
  license: apache-2.0
  base_model:
  - openai/gpt-oss-120b
+ - MultiverseComputingCAI/HyperNova-60B
  library_name: transformers
  ---
 
- 
- 
- ## ✨ Model overview
- 
- **HyperNova 60B** is a large language model developed by **[Multiverse Computing](https://multiversecomputing.com/)** with a focus on compute efficiency and deployability.
- 
- The model is designed to provide strong reasoning and text generation capabilities while significantly reducing compute and memory requirements compared to conventional large-scale language models.
- 
- **HyperNova 60B** is intended for real-world deployment scenarios where cost, latency, and infrastructure constraints are critical, enabling high-performance inference without requiring frontier-scale hardware.
- 
- 
- 🚀 **Architecture**
- 
- **HyperNova 60B**'s base architecture is [***`gpt-oss-120b`***](https://huggingface.co/openai/gpt-oss-120b).
- 
- * 59B parameters with 4.8B active parameters
- * MXFP4 quantization
- * Configurable reasoning effort (low, medium, high)
- * GPU usage of less than 40 GB
- 
- For inference examples, please refer to the base model card, [***`gpt-oss-120b`***](https://huggingface.co/openai/gpt-oss-120b).
- 
- **Evaluation & Performance**
- 
- HyperNova 60B has been evaluated and compared against other SoTA models on general reasoning benchmarks using lighteval>=0.12.0, following the [Artificial Analysis Intelligence Benchmarking Methodology](https://artificialanalysis.ai/methodology/intelligence-benchmarking). The results shown refer to **reasoning_effort = medium**.
- 
- ![Accuracy](assets/hypernova60B-accuracy_v2.png)
- 
- 
- **Intended Use**
- 
- HyperNova 60B is a general-purpose reasoning and conversational model designed for use in English and programming languages. It also supports several non-English languages, including German, French, Italian, Spanish, and Japanese. The model is well suited for developers building AI agent systems, chatbots, RAG pipelines, and other AI-powered applications, as well as for standard instruction-following tasks.
- 
- **Model Release Date**
- 
- January 2, 2026.
- 
- **Safety & Responsible Use**
- 
- HyperNova 60B has been developed using a novel compression technology, and, as with any emerging technology, its use involves inherent risks. While testing has been performed, it cannot encompass every possible scenario or use case. Multiverse Computing is committed to continuously evaluating, improving, and responsibly deploying HyperNova 60B, and encourages users to apply appropriate safeguards and judgment when integrating the model into their applications.
- 
- **License**
- 
- This model is licensed under the [Apache 2.0 License](https://www.apache.org/licenses/LICENSE-2.0.txt).
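The removed card's "59B parameters, MXFP4, under 40 GB of GPU memory" claims are consistent with a back-of-the-envelope weight-size estimate. A minimal sketch, assuming MXFP4 packs 32 four-bit elements per block with one shared 8-bit scale (~4.25 bits per parameter; the block size and scale width are assumptions about the format, not stated in the card):

```python
# Rough GPU weight-memory estimate for a 59B-parameter model in MXFP4.
# Assumption (not from the card): MXFP4 stores 32 x 4-bit values per
# block plus one shared 8-bit scale, i.e. 4 + 8/32 = 4.25 bits/param.

PARAMS = 59e9                  # total parameters (from the card)
BITS_PER_PARAM = 4 + 8 / 32    # 4-bit values + amortized 8-bit block scale

weight_bytes = PARAMS * BITS_PER_PARAM / 8
weight_gb = weight_bytes / 1e9

print(f"~{weight_gb:.1f} GB of weights")  # → ~31.3 GB of weights
```

Weights alone come to roughly 31 GB; the remaining headroom up to the quoted 40 GB covers activations, KV cache, and runtime overhead.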