TheCodeKat committed
Commit f081c51 · verified · 1 Parent(s): a65e50c

Upload folder using huggingface_hub

README.md ADDED
@@ -0,0 +1,45 @@
+ ---
+ tags:
+ - transformer
+ - language-model
+ - educational
+ license: mit
+ ---
+
+ # ScholarSage - Tiny Transformer LM
+
+ A tiny transformer language model built from scratch for educational purposes.
+
+ ## Model Details
+
+ - **Architecture**: Decoder-only transformer (GPT-style)
+ - **Hyperparameters**:
+   - Vocabulary: 50,257 tokens (GPT-2 tokenizer)
+   - Embedding dimension: 256
+   - Layers: 4
+   - Attention heads: 4
+   - FFN dimension: 1024
+   - Max sequence length: 512
+
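+ This configuration works out to roughly 29M trainable parameters, consistent with the ~116 MB float32 checkpoint in this upload. Below is a minimal PyTorch sketch of such a model; `TinyTransformerLM` and its module layout are illustrative assumptions, not this repository's actual implementation.
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ class TinyTransformerLM(nn.Module):
+     """Illustrative decoder-only LM with the hyperparameters listed above."""
+
+     def __init__(self, vocab_size=50257, d_model=256, n_layers=4,
+                  n_heads=4, d_ff=1024, max_len=512):
+         super().__init__()
+         self.tok_emb = nn.Embedding(vocab_size, d_model)
+         self.pos_emb = nn.Embedding(max_len, d_model)
+         block = nn.TransformerEncoderLayer(
+             d_model=d_model, nhead=n_heads, dim_feedforward=d_ff,
+             batch_first=True)
+         self.blocks = nn.TransformerEncoder(block, num_layers=n_layers)
+         self.lm_head = nn.Linear(d_model, vocab_size, bias=False)
+
+     def forward(self, idx):
+         # idx: (batch, seq_len) token IDs
+         seq_len = idx.size(1)
+         pos = torch.arange(seq_len, device=idx.device)
+         x = self.tok_emb(idx) + self.pos_emb(pos)
+         # Causal mask: -inf above the diagonal hides future positions
+         mask = torch.triu(
+             torch.full((seq_len, seq_len), float("-inf"), device=idx.device),
+             diagonal=1)
+         x = self.blocks(x, mask=mask)
+         return self.lm_head(x)  # (batch, seq_len, vocab_size) logits
+ ```
+
+ Reusing `nn.TransformerEncoderLayer` with a causal mask is a common shortcut to GPT-style decoder blocks; a fully from-scratch build would implement the attention and feed-forward modules by hand.
+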
+ ## Training
+
+ - **Dataset**: WikiText-2
+ - **Optimizer**: AdamW
+ - **Learning rate**: 3e-4
+
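+ This upload does not include the training script, but as a sketch of how these settings fit together (reusing the hypothetical `TinyTransformerLM` above), a single training step on a batch of WikiText-2 token IDs could look like this:
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ model = TinyTransformerLM()
+ optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)
+
+ def train_step(batch):
+     """One optimizer step; batch is a (B, T) LongTensor of token IDs."""
+     logits = model(batch[:, :-1])             # predict the next token at each position
+     loss = F.cross_entropy(
+         logits.reshape(-1, logits.size(-1)),  # (B*(T-1), vocab)
+         batch[:, 1:].reshape(-1))             # targets shifted left by one
+     optimizer.zero_grad()
+     loss.backward()
+     optimizer.step()
+     return loss.item()
+ ```
+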
+ ## Usage
+
+ ```python
+ from transformers import AutoTokenizer
+
+ # Load the GPT-2 tokenizer shipped with this repository
+ tokenizer = AutoTokenizer.from_pretrained("TheCodeKat/scholar-sage")
+
+ # This is a custom model, not a standard transformers model; the
+ # architecture must be defined and its weights loaded separately
+ # (see the sketch below).
+ ```
+
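+ Since `pytorch_model.bin` is an ordinary PyTorch checkpoint (see the LFS pointer further down), one plausible way to restore it into an implementation like the `TinyTransformerLM` sketch above and sample from it, assuming the saved parameter names and shapes match, is:
+
+ ```python
+ import torch
+
+ model = TinyTransformerLM()  # hypothetical class from the sketch above
+ model.load_state_dict(torch.load("pytorch_model.bin", map_location="cpu"))
+ model.eval()
+
+ # Greedy decoding: repeatedly append the most likely next token
+ ids = tokenizer("The transformer is", return_tensors="pt").input_ids
+ for _ in range(20):
+     with torch.no_grad():
+         logits = model(ids)
+     next_id = logits[:, -1].argmax(dim=-1, keepdim=True)
+     ids = torch.cat([ids, next_id], dim=1)
+ print(tokenizer.decode(ids[0]))
+ ```
+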
+ ## Purpose
+
+ This model was built for educational purposes: to understand the transformer architecture by implementing it from scratch.
merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:da2a0209b5b6df63bd29f71d28a73dcb36a13a41f625274459e472430e9087f4
+ size 116089486
special_tokens_map.json ADDED
@@ -0,0 +1,5 @@
+ {
+   "bos_token": "<|endoftext|>",
+   "eos_token": "<|endoftext|>",
+   "unk_token": "<|endoftext|>"
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,20 @@
+ {
+   "add_prefix_space": false,
+   "added_tokens_decoder": {
+     "50256": {
+       "content": "<|endoftext|>",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "bos_token": "<|endoftext|>",
+   "clean_up_tokenization_spaces": false,
+   "eos_token": "<|endoftext|>",
+   "extra_special_tokens": {},
+   "model_max_length": 1024,
+   "tokenizer_class": "GPT2Tokenizer",
+   "unk_token": "<|endoftext|>"
+ }
vocab.json ADDED
The diff for this file is too large to render. See raw diff