# Nova-AGI-EXP
Nova is an experimental AGI-focused language model fine-tuned for reasoning and logical thinking.
## Capabilities
- Deductive reasoning
- Inductive reasoning
- Abductive reasoning
- Mathematical sequences
- Causal reasoning
- Metacognition (self-awareness of limitations)
- Contextual understanding
- Probabilistic thinking
## Training Details
| Attribute | Value |
|---|---|
| Base Model | GPT-2 Medium (355M params) |
| Training Examples | 49 reasoning pairs |
| Final Loss | 0.066 |
| Framework | PyTorch + Transformers |
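Assuming the reported final loss of 0.066 is the standard token-level cross-entropy, it corresponds to a perplexity of exp(loss):

```python
import math

# Cross-entropy loss -> perplexity: ppl = exp(loss)
final_loss = 0.066  # value reported in the table above
ppl = math.exp(final_loss)
print(f"perplexity ≈ {ppl:.4f}")  # a perplexity near 1 on 49 examples suggests memorization
```

Note that a perplexity this close to 1.0 on only 49 training pairs is expected for a heavily overfit fine-tune, not evidence of general reasoning ability.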
## Quick Start
```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

model = GPT2LMHeadModel.from_pretrained("VoidWalkercero/Nova-AGI-EXP")
tokenizer = GPT2Tokenizer.from_pretrained("VoidWalkercero/Nova-AGI-EXP")

prompt = "User: What comes next: 2, 4, 6, 8, ?\nAssistant:"
inputs = tokenizer(prompt, return_tensors="pt")

# do_sample=True is required; otherwise temperature/top_p are ignored
outputs = model.generate(
    inputs["input_ids"],
    attention_mask=inputs["attention_mask"],
    max_new_tokens=100,
    do_sample=True,
    temperature=0.3,
    top_p=0.85,
    repetition_penalty=1.3,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
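Because `generate` returns the prompt tokens followed by the continuation, the decoded string echoes the full `User:`/`Assistant:` prompt. A small post-processing helper (illustrative only, not part of the model's API) can isolate just the assistant's reply:

```python
def extract_reply(decoded: str, prompt: str) -> str:
    """Strip the echoed prompt and return only the assistant's reply."""
    reply = decoded[len(prompt):] if decoded.startswith(prompt) else decoded
    # Truncate at the next "User:" turn in case the model keeps generating.
    return reply.split("User:")[0].strip()
```

For example, `extract_reply(text, prompt)` applied to the decoded output above would return only the text the model produced after `Assistant:`.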