---
language:
- en
license: apache-2.0
library_name: transformers
tags:
- tokenizer
- rx-codex
- medical-ai
- code-tokenizer
- chat-ai
pipeline_tag: text-generation
---
<div align="center">
  <img src="./rx_codex_logo.png" width="60%" alt="Rx-Codex-AI" />
</div>
<h3 align="center">
  <b>
    <span>━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━</span>
    <br/>
    Rx Codex Tokenizer: Professional Tokenizer for Modern AI
    <br/>
    <span>━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━</span>
    <br/>
  </b>
</h3>
<br/>

<div align="center" style="line-height: 1;">
  |
  <a href="https://huggingface.co/rxmha125" target="_blank">🤗 HuggingFace</a>
  &nbsp;|
  <a href="https://rxcodex.ai" target="_blank">🌐 Website</a>
  &nbsp;|
  <a href="mailto:contact@rxcodex.ai" target="_blank">📧 Contact</a>
  &nbsp;|
  <br/>
</div>

<br/>

## Overview

Rx Codex Tokenizer is a byte-level BPE tokenizer built for modern AI applications. Its 128K vocabulary is optimized for English, code, and medical text, and it outscores established tokenizers such as GPT-2 and DeepSeek in the benchmarks below.

**Developed by Rx, Founder & CEO of Rx Codex AI**
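
A minimal loading sketch with 🤗 Transformers. The repo id below is a placeholder, not the confirmed Hub id of this tokenizer; substitute the actual repository name:

```python
def encode_text(text: str, repo_id: str = "rxmha125/rx-codex-tokenizer") -> list:
    """Tokenize `text` with the tokenizer published at `repo_id`.

    The default repo id is illustrative; point it at the published repo.
    Requires `pip install transformers`.
    """
    from transformers import AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(repo_id)
    return tokenizer.encode(text)
```

Because loading goes through `AutoTokenizer.from_pretrained`, the tokenizer drops into existing 🤗 Transformers pipelines unchanged.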

## Benchmark Results

### Tokenizer Battle Royale - Final Scores

| Tokenizer | Final Score | Speed | Compression | Special Tokens | Chat Support |
|-----------|-------------|-------|-------------|----------------|--------------|
| 🥇 **Rx Codex** | **84.51/100** | 24.84/25 | 35.0/35 | 16.67/20 | 15/15 |
| 🥈 GPT-2 | 67.89/100 | 24.89/25 | 35.0/35 | 0.0/20 | 15/15 |
| 🥉 DeepSeek | 67.77/100 | 24.77/25 | 35.0/35 | 0.0/20 | 15/15 |

### Final Scores Comparison
![Final Scores](./final_scores_comparison.png)

### Performance Breakdown
![Performance Breakdown](./performance_breakdown.png)

### Speed Analysis
![Speed Comparison](./speed_comparison.png)

### Compression Efficiency
![Compression Comparison](./compression_comparison.png)

### Multi-dimensional Analysis
![Capabilities Radar](./capabilities_radar.png)

### Token Count Efficiency
![Token Count Comparison](./token_count_comparison.png)

## Key Features

- **128K Vocabulary** - Optimal balance of coverage and efficiency
- **Byte-Level BPE** - No UNK tokens; handles any input text
- **Medical Text Optimized** - Built for healthcare AI applications
- **Code-Aware** - Strong programming-language support
- **Chat-Ready Tokens** - Built-in support for conversation formats
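
The "no UNK tokens" property follows from the byte-level base alphabet: every input decomposes into UTF-8 bytes before any merges apply. A stdlib-only sketch of that guarantee (illustrative, not the actual merge logic):

```python
# Byte-level BPE starts from raw UTF-8 bytes, so any input -- rare drug
# names, emoji, arbitrary Unicode -- reduces to known base symbols.
text = "Pembrolizumab 200 mg IV q3w 💊"
base_symbols = list(text.encode("utf-8"))

# Every base symbol is a byte in 0..255: nothing can be out-of-vocabulary.
assert all(0 <= b <= 255 for b in base_symbols)
print(f"{len(text)} chars -> {len(base_symbols)} base bytes")  # → 29 chars -> 32 base bytes
```

The multi-byte emoji expands to four base bytes, which BPE merges later recombine into larger vocabulary tokens.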

## Technical Specifications

- **Vocabulary Size**: 128,256 tokens
- **Special Tokens**: 9 custom tokens
- **Model Type**: BPE with byte fallback
- **Training Data**: OpenOrca, 5 GB of English text
- **Average Speed**: 0.63 ms per tokenization call
- **Compression Ratio**: 4.18 characters per token

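The compression ratio above is input characters per emitted token, and can be reproduced for any tokenizer. A sketch of the measurement (the whitespace split is only a stand-in for real tokenization):

```python
def compression_ratio(text: str, tokens: list) -> float:
    """Characters of input per emitted token; higher means better compression."""
    return len(text) / len(tokens)

# Whitespace split as a stand-in; a real run would use tokenizer.encode(text).
text = "Patient presents with acute myocardial infarction."
tokens = text.split()
print(round(compression_ratio(text, tokens), 2))  # → 8.33
```

Averaging this ratio over a representative corpus gives a single comparable number like the 4.18 reported above.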
## Use Cases

- **Chat AI Systems** - Built-in chat-token support
- **Medical AI** - Optimized for healthcare terminology
- **Code Generation** - Strong programming-language handling
- **Academic Research** - Efficient on complex text

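Chat support typically works by wrapping each turn in special tokens so the model sees explicit turn boundaries. The marker names below are purely illustrative; the tokenizer's actual nine special tokens are defined in its config:

```python
def format_chat(messages):
    """Join (role, content) turns with illustrative role markers."""
    parts = []
    for role, content in messages:
        parts.append(f"<|{role}|>{content}<|end|>")  # hypothetical token names
    return "".join(parts)

prompt = format_chat([
    ("user", "What is low-dose aspirin used for?"),
    ("assistant", "Commonly for cardiovascular prophylaxis."),
])
print(prompt.startswith("<|user|>"))  # → True
```

Because each marker is a single special token rather than raw text, turn boundaries cost one token apiece and can never be split by BPE merges.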
## License

Apache 2.0

## Author

**Rx, Founder & CEO**
Rx Codex AI