kate · Text Generation · Transformers · Safetensors · English
Toasteror committed 8a2fdb8 · 1 Parent(s): b026668

Update README.md

Files changed (1)
  1. README.md +1 -24
README.md CHANGED
@@ -16,27 +16,4 @@ This is a custom model for text generation.
 
  ## Model Details
 
- - `model_type`: GPT2*
-
- ## GPT2
-
- This model is **NOT A FINETUNE!!**. It uses the GPT2 architecture but it doesnt finetune it.
-
- ```python
- # Model configuration for a smaller GPT-2 style model
- config = GPT2Config(
-     vocab_size=50257,    # Standard GPT-2 vocabulary size
-     n_positions=512,     # Maximum sequence length
-     n_ctx=512,           # Context window size
-     n_embd=512,          # Embedding dimension
-     n_layer=6,           # Number of transformer layers
-     n_head=8,            # Number of attention heads
-     bos_token_id=50256,
-     eos_token_id=50256,
-     pad_token_id=50256,
-     _name_or_path=""     # Empty to ensure no pretrained weights are loaded
- )
-
- # Initialize model with random weights
- model = GPT2LMHeadModel(config)
- ```
+ - `model_type`: custom_transformer
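For context on the size of the configuration removed in this commit: assuming the standard GPT-2 weight layout (token and position embeddings, fused qkv projection, 4× feed-forward expansion, LayerNorm weight+bias, LM head tied to the token embeddings), a rough parameter count for that config can be sketched as:

```python
# Rough parameter count for the removed GPT2Config
# (vocab_size=50257, n_positions=512, n_embd=512, n_layer=6).
# Assumes the standard GPT-2 layout: fused qkv in c_attn, 4x MLP width,
# LayerNorm weight+bias, and an LM head tied to the token embeddings
# (so the head adds no extra parameters).

def gpt2_param_count(vocab_size=50257, n_positions=512, n_embd=512, n_layer=6):
    mlp_width = 4 * n_embd                                    # GPT-2's feed-forward expansion
    embeddings = vocab_size * n_embd + n_positions * n_embd   # wte + wpe
    layer_norm = 2 * n_embd                                   # weight + bias
    attn = (n_embd * 3 * n_embd + 3 * n_embd) \
         + (n_embd * n_embd + n_embd)                         # c_attn + c_proj
    mlp = (n_embd * mlp_width + mlp_width) \
        + (mlp_width * n_embd + n_embd)                       # c_fc + c_proj
    per_layer = 2 * layer_norm + attn + mlp                   # ln_1, ln_2, attention, MLP
    return embeddings + n_layer * per_layer + layer_norm      # + final ln_f

print(gpt2_param_count())  # ~44.9M parameters for this config
```

So the config this commit removed described a roughly 45M-parameter model, well below the 124M of the smallest pretrained GPT-2 checkpoint, which is consistent with it being randomly initialized rather than a finetune.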