# Nyra-A: The Logic Core
Nyra-A is a specialized high-performance reasoning model developed by Logihertz Systems OPC Pvt Ltd. As part of the independent Nyra Project, this model serves as the "Primary Logic Core" (Tier A), optimized for mathematical consistency, structured data processing, and complex logical deduction.
## Model Specifications
- Developer: Logihertz Systems
- Lead Architect: Sameer Tawade
- Project Status: Independent Research
- Architecture: Optimized Llama-3-8B (Transformer-based)
- Merge Methodology: DARE-TIES + SLERP (Optimized for weight-sum stability)
- Language(s): English (Primary)
## Intended Use Cases
Nyra-A is engineered for standalone applications requiring high precision:
- Algorithmic Reasoning: Solving complex mathematical and logical proofs.
- Structured Output: Generating precise JSON, XML, and complex code structures.
- Analytical Processing: Acting as a refiner for complex multi-turn instructions where hallucination must be minimized.
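For the structured-output use case, downstream code should still tolerate occasional prose around the JSON payload. The following is a minimal sketch (the helper name and sample text are illustrative, not part of the Nyra-A release) that extracts the first decodable JSON value from a generated string:

```python
import json

def extract_first_json(text: str):
    """Scan text and return the first decodable JSON value, or None."""
    decoder = json.JSONDecoder()
    for i, ch in enumerate(text):
        if ch in "{[":
            try:
                # raw_decode stops at the end of the value, so trailing
                # prose after the JSON payload is simply ignored.
                obj, _ = decoder.raw_decode(text[i:])
                return obj
            except json.JSONDecodeError:
                continue
    return None

sample = 'Sure, here is the result: {"answer": 42, "valid": true} Hope that helps.'
print(extract_first_json(sample))  # {'answer': 42, 'valid': True}
```

Because `raw_decode` reports where the value ends, this approach tolerates both leading and trailing chatter, which is the typical failure mode when a model wraps structured output in explanation.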
## Evaluation & Benchmarking Matrix
This model is currently undergoing rigorous evaluation. Scores are marked as pending until the in-house evaluation pipeline completes.
| Category | Benchmark | Metric | Score | Status |
|---|---|---|---|---|
| General Reasoning | MMLU-Pro | 5-shot Accuracy | Pending | Eval in Progress |
| Math Execution | GSM8K | 8-shot Strict Match | Pending | Eval in Progress |
| Advanced Math | MATH | 4-shot Chain-of-Thought | Pending | Eval in Progress |
| Graduate Logic | GPQA | 0-shot Accuracy | Pending | Eval in Progress |
| Code Reasoning | HumanEval | Pass@1 | Pending | Eval in Progress |
## Implementation
To run Nyra-A locally, install a recent version of the `transformers` library, along with `torch` and `accelerate` (required for `device_map="auto"`).
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "logihertz/nyra-A"

# Load the tokenizer and the model in half precision, letting
# accelerate place layers automatically across available devices.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

prompt = "Analyze the efficiency of a recursive function versus an iterative approach."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Limitations & Ethical Considerations
Nyra-A is released under the Llama 3 Community License. While heavily optimized for logic, it may still exhibit occasional hallucinations or inherit biases from its foundational weights. Users should implement secondary validation systems for critical, public-facing deployments.
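As a concrete example of the secondary validation recommended above, a deployment could gate the model's structured responses before acting on them. This is a minimal sketch assuming a hypothetical three-field response schema; the field names (`claim`, `evidence`, `confidence`) are illustrative, not part of the model's contract:

```python
import json

# Hypothetical schema: fields a downstream system requires before
# trusting a response. Adapt to your actual output contract.
REQUIRED_KEYS = {"claim", "evidence", "confidence"}

def validate_response(raw: str) -> bool:
    """Reject output that is not a JSON object or is missing required fields."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return False
    return isinstance(data, dict) and REQUIRED_KEYS <= data.keys()

print(validate_response('{"claim": "x", "evidence": "y", "confidence": 0.9}'))  # True
print(validate_response("not json"))  # False
```

Responses failing the check can be retried or routed to a fallback, rather than reaching a public-facing surface unverified.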
## Model Tree

- Base model: `meta-llama/Meta-Llama-3-8B`