# Nyra-B: The Creative & Context Core
Nyra-B is the secondary powerhouse model developed by Logihertz Systems OPC Pvt Ltd. As part of the independent Nyra Project, this model serves as the "Creative & Context Core" (Tier B), specifically optimized for long-context retention, nuanced natural language generation, and creative problem-solving.
## Model Specifications
- Developer: Logihertz Systems
- Lead Architect: Sameer Tawade
- Project Status: Independent Research
- Architecture: Optimized Llama-3-8B (Transformer-based)
- Merge Methodology: DARE-TIES + SLERP (Optimized for vocabulary diversity and context flow)
- Language(s): English (Primary)
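The merge recipe can be sketched with a mergekit-style configuration. The donor models, `density`, and `weight` values below are illustrative assumptions, not the actual Nyra-B recipe:

```yaml
# Hypothetical mergekit configuration illustrating the DARE-TIES stage;
# model names and parameter values are placeholders, not the real recipe.
merge_method: dare_ties
base_model: meta-llama/Meta-Llama-3-8B
models:
  - model: meta-llama/Meta-Llama-3-8B-Instruct
    parameters:
      density: 0.6   # fraction of delta weights kept after random pruning
      weight: 0.5
  - model: example-org/creative-llama-3-8b   # placeholder donor checkpoint
    parameters:
      density: 0.6
      weight: 0.5
dtype: float16
```

A separate SLERP pass between two checkpoints would use `merge_method: slerp` with a `t` interpolation parameter instead.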
## Intended Use Cases
Nyra-B is engineered for applications where flow, tone, and extensive context handling are paramount:
- Long-Form Generation: Drafting reports, documentation, and engaging textual content.
- Contextual Summarization: Processing large chunks of data or conversation history without losing critical nuance.
- Agentic Personas: Serving as the conversational interface for multi-agent systems, providing natural and dynamic responses.
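For the contextual-summarization use case, inputs longer than the context window must be split first. A minimal sketch of a chunking helper (the 32k token budget and the characters-per-token heuristic are assumptions, not model-verified figures):

```python
def chunk_text(text: str, max_tokens: int = 32_000, chars_per_token: int = 4,
               overlap_tokens: int = 200) -> list[str]:
    """Split text into overlapping chunks that fit an approximate token budget.

    Uses a rough characters-per-token heuristic; for exact counts, tokenize
    with the model's own tokenizer instead.
    """
    max_chars = max_tokens * chars_per_token
    overlap_chars = overlap_tokens * chars_per_token
    chunks = []
    start = 0
    while start < len(text):
        end = min(start + max_chars, len(text))
        chunks.append(text[start:end])
        if end == len(text):
            break
        start = end - overlap_chars  # overlap preserves context across boundaries
    return chunks
```

Each chunk can then be summarized independently and the partial summaries merged in a final pass.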
## Evaluation & Benchmarking Matrix
This model is currently undergoing rigorous evaluation. Scores are marked as Pending until the self-verified evaluation pipeline completes.
| Category | Benchmark | Metric | Score | Status |
|---|---|---|---|---|
| Multi-Turn Chat | MT-Bench | Average Score | Pending | Eval in Progress |
| Context Retrieval | Needle In A Haystack | 32k Context Accuracy | Pending | Eval in Progress |
| Conversational Flow | AlpacaEval 2.0 | Length-Controlled Win Rate | Pending | Eval in Progress |
| General Knowledge | MMLU-Pro | 5-shot Accuracy | Pending | Eval in Progress |
| Factuality | TruthfulQA | Generation Accuracy | Pending | Eval in Progress |
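The needle-in-a-haystack test above can be reproduced with a simple harness that buries a known fact at a chosen depth inside filler text and asks the model to retrieve it. This is a generic sketch of that setup, not the exact pipeline used for the scores in the table:

```python
def build_haystack(needle: str, filler: str, total_chars: int, depth: float) -> str:
    """Embed `needle` at a relative depth (0.0 = start, 1.0 = end) inside
    filler text repeated to roughly `total_chars` characters."""
    if not 0.0 <= depth <= 1.0:
        raise ValueError("depth must be in [0, 1]")
    # Repeat filler until it covers the requested length, then trim.
    haystack = (filler * (total_chars // len(filler) + 1))[:total_chars]
    insert_at = int(depth * len(haystack))
    return haystack[:insert_at] + " " + needle + " " + haystack[insert_at:]

needle = "The best thing to do in San Francisco is eat a sandwich in Dolores Park."
prompt = build_haystack(needle, "The grass is green. ", total_chars=2000, depth=0.5)
question = prompt + "\n\nWhat is the best thing to do in San Francisco?"
```

Sweeping `depth` from 0.0 to 1.0 across several context lengths produces the usual retrieval-accuracy heatmap.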
## Implementation
To run Nyra-B locally, ensure the latest `transformers` library is installed.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "logihertz/nyra-B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision to reduce memory usage
    device_map="auto",          # place layers automatically on available devices
)

prompt = "Explain the concept of neural network quantization using a creative analogy."
# Move inputs to wherever device_map placed the first model layers.
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Limitations & Ethical Considerations
Nyra-B is released under the Llama 3 Community License. Due to its creative optimization, it may occasionally generate plausible but factually incorrect statements (hallucinations) if not grounded by a prompt. Users should implement secondary validation systems for critical deployments.
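A secondary validation layer can be as simple as checking lexical overlap between each generated sentence and the grounding context, flagging low-overlap sentences for review. The threshold and heuristic below are crude illustrative assumptions, not a substitute for a proper fact-checking pipeline:

```python
import re

def flag_ungrounded(generated: str, context: str, min_overlap: float = 0.3) -> list[str]:
    """Return generated sentences whose words barely overlap the context.

    A low overlap ratio is only a weak hallucination signal; real deployments
    should prefer NLI models or retrieval-based fact checking.
    """
    context_words = set(re.findall(r"[a-z']+", context.lower()))
    flagged = []
    for sentence in re.split(r"(?<=[.!?])\s+", generated.strip()):
        words = set(re.findall(r"[a-z']+", sentence.lower()))
        if not words:
            continue
        overlap = len(words & context_words) / len(words)
        if overlap < min_overlap:
            flagged.append(sentence)
    return flagged
```

Flagged sentences can be routed to a human reviewer or regenerated with the grounding context reinforced in the prompt.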
## Model Tree

Base model: meta-llama/Meta-Llama-3-8B