Update index.html
index.html (+100 -18)

@@ -1,19 +1,101 @@
-<!
-<html>
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
+<!DOCTYPE html>
+<html lang="en">
+<head>
+  <meta charset="UTF-8">
+  <meta name="viewport" content="width=device-width, initial-scale=1.0">
+  <title>Vortex Language Model (VLM) Documentation</title>
+  <style>
+    body {
+      background-color: #121212;
+      color: #e0e0e0;
+      font-family: Arial, sans-serif;
+      line-height: 1.6;
+      padding: 2rem;
+    }
+    h1, h2, h3 {
+      color: #ffffff;
+    }
+    code {
+      background-color: #1e1e1e;
+      padding: 2px 4px;
+      border-radius: 4px;
+      color: #c0caf5;
+    }
+    .section {
+      margin-bottom: 2rem;
+    }
+    a {
+      color: #82aaff;
+    }
+  </style>
+</head>
+<body>
+  <h1>Vortex Language Model (VLM) Documentation</h1>
+
+  <div class="section">
+    <h2>Overview</h2>
+    <p><strong>VLM</strong> stands for <strong>Vortex Language Model</strong>, a series of transformer-based models developed by <strong>PingVortex</strong>. The models are designed for tasks such as text generation, reasoning, and instruction following. Each version of VLM is structured in three training stages for progressive refinement.</p>
+  </div>
+
+  <div class="section">
+    <h2>Model Structure</h2>
+    <p>Each VLM version follows a three-stage pipeline:</p>
+    <ul>
+      <li><strong>K1</strong>: Trained from scratch (base model)</li>
+      <li><strong>K2</strong>: Fine-tuned on broader, general-purpose data</li>
+      <li><strong>K3</strong>: Fine-tuned for clarity and simplicity</li>
+    </ul>
+    <p>K stands for <em>Knowledge</em>, with higher numbers representing more advanced training stages.</p>
+  </div>
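The three-stage pipeline above can be sketched in plain Python. The dataset identifiers come from this page's VLM 1 details; `train_base` and `fine_tune` are hypothetical stand-ins for a real training loop, not an actual API:

```python
# Illustrative sketch of the K1 -> K2 -> K3 pipeline. The dataset names
# come from the VLM 1 version table; train_base / fine_tune are
# hypothetical placeholders, not a real training framework.

def train_base(datasets):
    """K1: train a base model from scratch on the given datasets."""
    return {"stage": "K1", "base": None, "data": list(datasets)}

def fine_tune(model, dataset, stage):
    """K2/K3: fine-tune the previous stage's checkpoint on one dataset."""
    return {"stage": stage, "base": model["stage"], "data": [dataset]}

k1 = train_base(["tatsu-lab/alpaca"])                       # base model
k2 = fine_tune(k1, "Elriggs/openwebtext-100k", stage="K2")  # general data
k3 = fine_tune(k2, "rahular/simple-wikipedia", stage="K3")  # simplicity
```

Each stage starts from the previous stage's checkpoint, which is why the K-numbers form a strict chain rather than independent runs.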
+
+  <div class="section">
+    <h2>Versions and Training Details</h2>
+
+    <h3>VLM 1</h3>
+    <ul>
+      <li>Parameters: <code>124M</code></li>
+      <li>Training Time: ~4 hours per stage</li>
+      <li>Final Loss (all stages): ~<code>3.0</code></li>
+      <li><strong>K1</strong>: Trained on <code>tatsu-lab/alpaca</code> and a small custom dataset</li>
+      <li><strong>K2</strong>: Fine-tuned K1 on <code>Elriggs/openwebtext-100k</code></li>
+      <li><strong>K3</strong>: Fine-tuned K2 on <code>rahular/simple-wikipedia</code></li>
+    </ul>
+
+    <h3>VLM 1.1</h3>
+    <ul>
+      <li>Parameters: <code>418M</code></li>
+      <li>Training Time: ~4 hours per stage</li>
+      <li>Target Final Loss: ~<code>1.0</code></li>
+      <li><strong>K1</strong>: Currently training on <code>ssbuild/alpaca_gpt4</code> and <code>effectiveML/ArXiv-10</code></li>
+    </ul>
+  </div>
+
+  <div class="section">
+    <h2>Training Objectives</h2>
+    <p>All models are trained toward a target loss that serves as a practical proxy for generalization ability. Training is monitored using:</p>
+    <ul>
+      <li>Loss convergence</li>
+      <li>Gradient norms</li>
+      <li>Learning rate schedules</li>
+      <li>Evaluation tasks (math, logic, generation)</li>
+    </ul>
+  </div>
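As a rough illustration of the loss-convergence signal listed above, a monitor might flag convergence when the per-step improvement over a sliding window drops below a threshold. The window size and threshold here are arbitrary example values, not ones used for VLM:

```python
# Minimal sketch of loss-convergence monitoring: training is treated as
# converged when the average improvement per step over the last `window`
# steps falls below `threshold`. Both values are illustrative defaults.

def has_converged(losses, window=5, threshold=0.01):
    """Return True once the recent per-step loss improvement is small."""
    if len(losses) < window + 1:
        return False  # not enough history to judge
    recent = losses[-(window + 1):]
    improvement = (recent[0] - recent[-1]) / window
    return improvement < threshold

plateaued = [3.01, 3.008, 3.006, 3.005, 3.004, 3.004]  # flat tail
falling = [6.0, 5.0, 4.2, 3.6, 3.2, 3.05, 3.0]          # still improving
```

In practice this check would run alongside the other signals (gradient norms, scheduled learning rate, evaluation tasks) rather than on its own.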
+
+  <div class="section">
+    <h2>Applications</h2>
+    <p>VLM models are suitable for integration into a variety of AI applications, including:</p>
+    <ul>
+      <li>Conversational assistants</li>
+      <li>Search and knowledge retrieval</li>
+      <li>Code generation and analysis</li>
+      <li>Educational tutoring and summarization</li>
+    </ul>
+  </div>
+
+  <div class="section">
+    <h2>Contact &amp; More</h2>
+    <p>Developed and maintained by <strong>PingVortex</strong>.</p>
+    <p>Website: <a href="https://pingvortex.xyz" target="_blank" rel="noopener">pingvortex.xyz</a></p>
+  </div>
+</body>
 </html>