# Nyra-Master: The Apex Orchestrator
Nyra-Master is the flagship unified model developed by Logihertz Systems OPC Pvt Ltd. As the pinnacle of the independent Nyra Project, this model serves as the "Apex Orchestrator." It seamlessly integrates the specialized capabilities of the entire Nyra suite: the rigid logic of Tier A, the expansive contextual creativity of Tier B, and the precise tool-execution of Tier C.
## Model Specifications
- Developer: Logihertz Systems
- Lead Architect: Sameer Tawade
- Project Status: Independent Research
- Architecture: Optimized Llama-3-8B (Transformer-based Omni-Merge)
- Merge Methodology: Linear Merge (Optimized for multi-domain holistic reasoning)
- Language(s): English (Primary), Multi-language Code (Python, C++, JS, etc.)
## Intended Use Cases
Nyra-Master is engineered for highly complex, multi-step workflows that require dynamic intent switching:
- Universal Orchestration: Acting as the primary router in multi-agent systems, dynamically shifting between creative, logical, and executable states.
- Complex Pipeline Reasoning: Handling prompts that require simultaneous math execution, creative explanation, and strict formatting.
- General Purpose Excellence: Serving as a standalone, highly capable assistant for developers, researchers, and enterprise environments.
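Intent routing of this kind can be sketched with a minimal, illustrative dispatcher. The keyword heuristic below is a hypothetical stand-in for the model's own classification step: in a real deployment Nyra-Master itself would decide which state to enter, but the surrounding control flow looks the same.

```python
# Hypothetical sketch of intent routing in a multi-agent pipeline.
# The keyword matching is a placeholder for the model's own intent
# classification; only the routing structure is the point here.
def route_intent(prompt: str) -> str:
    p = prompt.lower()
    if any(k in p for k in ("def ", "script", "function", "code")):
        return "tool-execution"  # Tier C style task
    if any(k in p for k in ("prove", "solve", "calculate")):
        return "logic"           # Tier A style task
    return "creative"            # Tier B default
```

Each returned label would map to a downstream system prompt or agent; the default branch keeps ambiguous requests in the creative/contextual state.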
## Evaluation & Benchmarking Matrix
This flagship model is currently undergoing rigorous evaluation across all major AI domains. Scores are marked as Pending until the internal evaluation pipeline completes and the results are verified.
| Category | Benchmark | Metric | Score | Status |
|---|---|---|---|---|
| Holistic Reasoning | MMLU-Pro | 5-shot Accuracy | Pending | Eval in Progress |
| Multi-Turn Chat | MT-Bench | Average Score | Pending | Eval in Progress |
| Code Execution | HumanEval | Pass@1 | Pending | Eval in Progress |
| Instruction Strictness | IFEval | Prompt-level Strict | Pending | Eval in Progress |
| Graduate Logic | GPQA | 0-shot Accuracy | Pending | Eval in Progress |
| Advanced Math | MATH | 4-shot Chain-of-Thought | Pending | Eval in Progress |
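For reference, the HumanEval Pass@1 metric in the table above is conventionally computed with the unbiased pass@k estimator from the original HumanEval paper: given `n` generated samples per problem, of which `c` pass the unit tests, pass@k = 1 - C(n-c, k) / C(n, k). A minimal implementation:

```python
import math

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: n samples per problem, c of them correct."""
    if n - c < k:
        return 1.0  # every size-k subset contains at least one passing sample
    return 1.0 - math.comb(n - c, k) / math.comb(n, k)
```

Averaging `pass_at_k` over all problems in the benchmark yields the reported score; with k = 1 it reduces to the simple fraction c / n.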
## Implementation
To run Nyra-Master locally, ensure you have a recent version of the `transformers` library installed. We recommend running the model in float16 precision, which keeps the 8B-parameter weights within typical GPU memory (roughly 16 GB).
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "logihertz/nyra-Master"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

prompt = (
    "Explain quantum superposition. Then, write a Python script simulating "
    "a coin flip to represent the concept, and format the output as a JSON object."
)

# Move inputs to whichever device the model was placed on by device_map.
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=1024)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Limitations & Ethical Considerations
Nyra-Master is released under the Llama 3 Community License. While designed to be an omni-capable orchestrator, combining logic, code, and creative capabilities can occasionally lead to complex hallucinations in highly ambiguous edge cases. Users should implement secondary validation systems for critical deployments.
**Base model:** `meta-llama/Meta-Llama-3-8B`