
Edge-Medium (85M)

An 85 million parameter language model trained entirely from scratch.

The proof-of-concept that launched the Edge series. Built on Apple Silicon to validate our sovereign training pipeline before scaling to billions of parameters.


Overview

Edge-Medium is the first completed model in the Edge series by AXe Technologies: a compact transformer trained from zero, with no pre-trained weights, no fine-tuning, and no transfer learning.

| | |
|---|---|
| Parameters | 85,446,912 |
| Architecture | Proprietary transformer |
| Training | From scratch (complete) |
| Hardware | Apple Silicon (Metal acceleration) |
| Status | ✅ Training complete |
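Since Edge-Medium's architecture is proprietary, its exact shape cannot be derived from the card. As a rough illustration only, here is how a parameter budget in this range typically breaks down for a generic decoder-only transformer; the dimensions below (`d_model`, layer count, vocabulary size) are hypothetical and do not describe Edge-Medium:

```python
# Rough parameter-count estimate for a generic decoder-only transformer.
# All dimensions here are illustrative assumptions, NOT Edge-Medium's
# (undisclosed) architecture.

def estimate_decoder_params(d_model: int, n_layers: int, vocab_size: int,
                            tied_embeddings: bool = True) -> int:
    """Count weight-matrix parameters, ignoring biases and layer norms."""
    # Token embedding table (doubled if the LM head is untied).
    embed = vocab_size * d_model * (1 if tied_embeddings else 2)
    # Per layer: attention Q/K/V/O projections (4*d^2) plus a 4x-wide
    # two-matrix MLP (8*d^2), giving 12*d^2 per layer.
    per_layer = 12 * d_model * d_model
    return embed + n_layers * per_layer

if __name__ == "__main__":
    # One purely illustrative shape that lands near an 85M budget:
    print(estimate_decoder_params(d_model=672, n_layers=12, vocab_size=32000))
```

This kind of back-of-envelope count explains why an 85M-class model is a practical proving ground on consumer hardware before scaling to billions of parameters.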

Purpose

Edge-Medium served as the architectural proving ground for the Edge series. It validated:

  • Our from-scratch training pipeline on consumer hardware
  • Architectural decisions later scaled to Edge-1.3B
  • Data processing and tokenization infrastructure
  • Evaluation and benchmarking methodology

The Edge Series

| Model | Parameters | Status |
|-------|------------|--------|
| Edge-Medium | 85M | ✅ Complete |
| Edge-1.3B | 1.3B | 🔄 Training |
| Edge-3 | Planned | Architecture phase |

Access

Model weights are available for approved researchers and partners. Request access below or contact us directly.

Training

Trained from scratch using proprietary infrastructure on Apple Silicon. Training methodology and architectural details are not publicly disclosed.

About AXe Technologies

A Canadian AI research lab focused on sovereign, privacy-first artificial intelligence. We build models that run on your hardware and are trained on ours. No cloud. No compromise.

Open to collaboration: contact us for evaluation access and partnership inquiries.


Built in Canada 🍁 on Apple Silicon
