# Pacific-Prime 3.8B (INL-LLM v3)

Integrator Neural Language Model (INL-LLM): a novel architecture based on integrator dynamics.
## Model Details
| Parameter | Value |
|---|---|
| Parameters | 3.44B |
| Architecture | INL-LLM v3 |
| d_model | 3072 |
| Layers | 32 |
| Heads | 24 |
| KV Heads | 6 |
| Context | 1024 |
| Training | Distillation from 500M teacher |
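As a sanity check on the table above, the per-head dimension and an approximate parameter count can be derived from the listed configuration. This is a rough sketch: the FFN layout (plain two-matrix vs. gated) and tied input/output embeddings are assumptions not stated in the card, so the estimate only approximates the listed 3.44B total.

```python
# Back-of-the-envelope sizing from the configuration table.
d_model = 3072
num_layers = 32
num_heads = 24
num_kv_heads = 6
ffn_dim = 12288
vocab_size = 50261

head_dim = d_model // num_heads        # 128
gqa_group = num_heads // num_kv_heads  # 4 query heads share each KV head

# Attention projections per layer: Q and O at full width, K and V at KV width.
attn = 2 * d_model * d_model + 2 * d_model * num_kv_heads * head_dim
# Plain two-matrix FFN (assumption; a gated FFN would be ~1.5x larger).
ffn = 2 * d_model * ffn_dim
# Assume tied input/output embeddings (assumption).
embed = vocab_size * d_model

total = num_layers * (attn + ffn) + embed
print(f"head_dim={head_dim}, GQA group={gqa_group}, ~{total / 1e9:.2f}B params")
```

The estimate lands near the table's 3.44B (remaining mass would come from norms, biases, and any gating the real architecture uses).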
## Usage
```bash
pip install inl-llm-v3
```
```python
import torch
from safetensors.torch import load_file
from transformers import AutoTokenizer

from inl_llm_v3 import UltraOptimizedIntegratorLanguageModel

# Load tokenizer
tokenizer = AutoTokenizer.from_pretrained("pacific-prime_3.8B")

# Instantiate the model with the configuration from the table above
model = UltraOptimizedIntegratorLanguageModel(
    vocab_size=50261,
    d_model=3072,
    num_layers=32,
    num_heads=24,
    num_kv_heads=6,
    feedforward_dim=12288,
    max_seq_len=1024,
)

# Load the weights and switch to inference mode
model.load_state_dict(load_file("pacific-prime_3.8B/model.safetensors"))
model.eval()

# Generate
prompt = "def fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    outputs = model.generate(inputs.input_ids, max_new_tokens=100)
print(tokenizer.decode(outputs[0]))
```
## License

CC BY-NC 4.0