---
license:
  - gemma
  - apache-2.0
language:
  - en
base_model:
  - google/gemma-3-1b-it
  - google/gemma-4-e2b-it
tags:
  - Gemma 3
  - gguf
  - quantized
  - vision
  - text-generation
  - edge-ai
  - local-first
  - xlphy
  - codexcon
  - 1b
  - 4b
  - 12b
  - 27b
---

# 🔮 XLPHY Amethyst (Gemma Series) for Project: XLPHY AI

XLPHY Amethyst is a suite of high-efficiency, local-first AI models optimized specifically for the Project: XLPHY AI ecosystem. These models are repackaged and quantized to provide a premium, low-latency, and multimodal experience for autonomous agents and sovereign AI applications.

> **Developer Note:** These are optimized derivatives of the Google Gemma 3 series, rebranded and tuned for seamless integration within the Project: XLPHY AI autonomous agent architecture.

## 🧠 Model Selection

The Amethyst series is built for Project: XLPHY AI and is organized into "Gemstone Tiers." Each tier is available in `Q4_K_M`, `Q5_K_M`, and `Q6_K` quantization levels; the tiers currently published in this repository are listed below.

| File Name (Template) | Tier Identity | Base Engine | Primary Purpose | License | Available Quants |
| --- | --- | --- | --- | --- | --- |
| `amethyst-arc-1b-[quant].gguf` | Arc | Gemma 3 1B IT | Ultra-fast local execution and IoT. | Gemma | `Q4_K_M`, `Q5_K_M`, `Q6_K` |
| `amethyst-beam-e2b-[quant].gguf` | Beam | Gemma 4 E2B IT | Main driver with vision support. | Apache 2.0 | `Q4_K_M`, `Q5_K_M`, `Q6_K` |

## 📦 Quantization Guide

- **Q4_K_M (Low):** Fastest and most memory-efficient. Ideal for mobile and entry-level hardware.
- **Q5_K_M (Medium):** The "sweet spot" for Amethyst, with minimal quality differences from the original model.
- **Q6_K (High):** Near-lossless performance for users who prioritize maximum accuracy.
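
As a rough illustration, a loader could map an available-memory budget to one of these levels. The helper below is a hypothetical sketch; the thresholds are illustrative assumptions, not measured requirements for these files.

```python
# Hypothetical helper: suggest an Amethyst quant level from a free-RAM budget.
# The GB thresholds are illustrative assumptions, not measured values.
def pick_quant(free_ram_gb: float) -> str:
    """Return a suggested quant suffix for a given free-RAM budget."""
    if free_ram_gb < 4:
        return "q4_k_m"  # smallest footprint; mobile and entry-level hardware
    if free_ram_gb < 8:
        return "q5_k_m"  # the "sweet spot" tier described above
    return "q6_k"        # near-lossless; use when memory is plentiful

print(pick_quant(6.0))  # -> q5_k_m
```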

## 🛠️ Implementation & Runtime

The Amethyst models are designed around the Project: XLPHY AI "Offline-First" philosophy and run best via the engines below (a minimal loading sketch follows the list):

- **XLPHY Desktop App** (Native Integration)
- **`llama.cpp` / `llama-cli`**
- Any GGUF-compatible inference engine supporting Gemma 3 and Gemma 4
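
For engines other than the XLPHY Desktop App, loading a tier with the `llama-cpp-python` bindings might look like the following minimal sketch. The model path assumes the Arc `Q5_K_M` file has been downloaded locally, and the tuning values are assumptions to adjust for your hardware.

```python
# Minimal sketch using llama-cpp-python (pip install llama-cpp-python).
from llama_cpp import Llama

llm = Llama(
    model_path="amethyst-arc-1b-q5_k_m.gguf",  # assumed local file name
    n_ctx=2048,    # context window; raise if your hardware allows
    n_threads=4,   # tune to your CPU core count
    verbose=False,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what a GGUF file is in one sentence."}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```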

## 🔐 Checksums (SHA256)

To ensure file integrity during the XLPHY automated download process, compare each file's SHA256 digest against the table below (a manual verification sketch follows the table):

| Tier | Q4_K_M | Q5_K_M | Q6_K |
| --- | --- | --- | --- |
| Arc (1B) | `12bf0fff8815d5f73a3c9b586bd8fee8e7b248c935de70dec367679873d0f29d` | `59a10a3c8dc8a9c0bda2c8882198073b1cfebbb2b443aa2fc4cfca4f92eeb805` | `ccad0cb14e9008f699f4b820110b899cf81983a987c40a05a8a1128d2fb713fb` |
| Beam (E2B) | `cded614c9b24be92e5a868d2ba38fb24e15dfea34fc650193c475a6debc233a7` | `43b6d9cfc1108e172b9ff99759ce7c2052bbed5dd7c4b4675ca63a04b6ed8dfc` | `b4c977371027c423ba6e36c7ca6e31e11803853224046f62d94a24a827e4f041` |
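
To verify a download by hand, one can compute the digest locally and compare it against the table. A minimal sketch follows; the local file name is an assumption, and the expected value shown is the Arc (1B) `Q4_K_M` digest from the table above.

```python
# Sketch of a manual integrity check against the checksum table.
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file in 1 MiB chunks so large GGUF files need not fit in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

expected = "12bf0fff8815d5f73a3c9b586bd8fee8e7b248c935de70dec367679873d0f29d"
actual = sha256_of("amethyst-arc-1b-q4_k_m.gguf")  # assumed local file name
assert actual == expected, f"Checksum mismatch: got {actual}"
print("Checksum OK")
```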

## ⚖️ Attribution & Licensing

These files are repackaged and quantized derivatives of the Google Gemma model family, redistributed under the licenses noted below.

- **Original Architecture:** Developed by Google DeepMind
- **Optimization:** Repackaged by CodexCon Digital Solutions for Project: XLPHY AI
- **Amethyst Arc (Gemma 3):** Gemma Terms of Use
- **Amethyst Beam (Gemma 4):** Apache License 2.0

## ⚠️ Limitations & Safety

- **Hallucinations:** Like all LLMs, these models may produce incorrect information.
- **Human-in-the-loop:** Always validate technical outputs, especially for vision-based tasks or critical code.
- **Non-Critical Use:** Not intended for medical, legal, or other high-stakes safety-critical applications.

---

Developed by **CodexCon** | Lead Founder: **Cid Cruz**