---
license: apache-2.0
base_model: qwen3
tags:
  - affine
  - qwen3
  - causal-lm
  - reasoning
library_name: transformers
pipeline_tag: text-generation
---

# Affine-cvea3 Model Card

## Description

A Qwen3-based language model (~7B parameters) optimized for the Affine network. It features a 40,960-token context window, 36 transformer layers, and grouped query attention (GQA) for efficient inference. It is designed for high-performance reasoning, code generation, and agentic AI applications.
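GQA matters most for memory at long context: several query heads share each K/V head, shrinking the KV cache proportionally. The sketch below illustrates this with a back-of-the-envelope estimate; the head counts (32 query heads sharing 8 KV heads, head dimension 128) are illustrative assumptions, not confirmed values for this checkpoint.

```python
# Rough KV-cache size for one sequence at full context, in bfloat16.
# NOTE: 32 query heads / 8 KV heads / head_dim 128 are assumed for
# illustration; only layers (36) and context (40,960) come from this card.

def kv_cache_bytes(num_layers, num_kv_heads, head_dim, seq_len, bytes_per_elem=2):
    """Bytes to cache K and V (factor of 2) across all layers for one sequence."""
    return 2 * num_layers * num_kv_heads * head_dim * seq_len * bytes_per_elem

mha = kv_cache_bytes(36, 32, 128, 40_960)  # if every query head kept its own K/V
gqa = kv_cache_bytes(36, 8, 128, 40_960)   # grouped: 8 shared KV heads (assumed)

print(f"MHA-style cache: {mha / 2**30:.1f} GiB")  # 22.5 GiB
print(f"GQA cache:       {gqa / 2**30:.3f} GiB")  # 5.625 GiB
```

With these assumed shapes, GQA cuts the per-sequence KV cache 4x (one K/V head serves four query heads), which is what makes a 40K context practical on a single accelerator.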

## What is this used for?

- **Complex Reasoning**: Multi-step problem solving and logical deduction
- **Code Generation**: Python, JavaScript, and other programming languages
- **Agentic AI**: Tool-using agents and autonomous systems
- **Long-Context Tasks**: Document analysis and research
- **Affine Network**: Competitive reasoning model for decentralized evaluation

## Quick Start

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_name = "your-username/your-model-name"  # replace with the repo id on the Hub
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,  # matches the checkpoint's native precision
    device_map="auto",           # place layers across available GPUs/CPU
    trust_remote_code=True,
)

prompt = "Explain quantum computing."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

## Model Details

- **Architecture**: Qwen3ForCausalLM
- **Parameters**: ~7B
- **Context Length**: 40,960 tokens
- **Layers**: 36
- **Precision**: bfloat16
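As a quick hardware sanity check, the figures above imply roughly how much memory loading the weights takes. This is a sketch: "~7B" is taken from this card, and the estimate covers weights only (not activations or the KV cache).

```python
# Back-of-the-envelope memory for loading the weights in bfloat16.
# ~7B parameters is the card's approximate figure; the true footprint
# depends on the exact parameter count plus runtime overheads.

def weight_bytes(num_params, bytes_per_param=2):
    """Bytes needed for the raw weights (bfloat16 -> 2 bytes per parameter)."""
    return num_params * bytes_per_param

print(f"~{weight_bytes(7_000_000_000) / 1e9:.0f} GB for weights alone")  # ~14 GB
```

In practice you should budget extra headroom beyond this for activations and the KV cache, which at 40K context can rival the weights themselves.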

## License

Apache 2.0