Mohamed Hassan Ashmawy committed on
Commit
d46a28c
·
verified ·
1 Parent(s): f0524dc

Upload README.md with huggingface_hub

Files changed (1):
  1. README.md +96 -0
README.md ADDED

---
tags:
- causal-lm
- text-generation
- pre-trained
- pytorch
---

# tinystories-gpt-small

This is a custom GPT model **pre-trained from scratch on the TinyStories dataset**. It demonstrates foundational language modeling capabilities and can be used for text generation.

## Model Details

* **Architecture:** Custom GPT (~50.95M parameters; see the parameter-count sketch after this list)
  * `n_layer`: 8
  * `n_head`: 8
  * `n_embd`: 512
  * `block_size`: 1024
  * `vocab_size`: 50257
  * `dropout`: 0.1
* **Pre-training Dataset:** TinyStories (a synthetic dataset of short, simple stories designed to teach language models basic reasoning and coherence).
* **Purpose:** This is a base language model: it has learned to predict the next token in a sequence from the patterns in the TinyStories dataset. It is suitable for demonstrating basic generative text capabilities and serves as a foundation for further fine-tuning on downstream tasks such as question answering or chat; a fine-tuning sketch follows the inference example below.
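
The 50.95M-parameter figure quoted under Limitations can be reproduced from this config. A back-of-the-envelope sketch, assuming a standard GPT-2-style block (two biased LayerNorms, a 4×`n_embd` MLP, tied input/output embeddings, and position embeddings excluded from the reported count, as nanoGPT-style counters do); none of these details are confirmed by the repo:

```python
# Rough parameter count for the config above. Assumptions (not confirmed):
# GPT-2-style blocks with biases, 4*n_embd MLP, tied lm_head, and position
# embeddings excluded from the reported total.
n_layer, n_embd, vocab_size = 8, 512, 50257

per_layer = (
    2 * 2 * n_embd                      # two LayerNorms (weight + bias each)
    + n_embd * 3 * n_embd + 3 * n_embd  # fused q,k,v projection
    + n_embd * n_embd + n_embd          # attention output projection
    + n_embd * 4 * n_embd + 4 * n_embd  # MLP up-projection
    + 4 * n_embd * n_embd + n_embd      # MLP down-projection
)
total = (
    vocab_size * n_embd   # token embedding (shared with lm_head)
    + n_layer * per_layer
    + 2 * n_embd          # final LayerNorm
)
print(f"~{total / 1e6:.2f}M parameters")  # -> ~50.95M
```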

## How to Use (Inference)

This model uses `tiktoken` (the GPT-2 encoding) for tokenization rather than a bundled Hugging Face tokenizer, so the tokenizer must be loaded explicitly with `tiktoken`, as shown below.
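
If you haven't fetched the files yet, `huggingface_hub` can download them. A minimal sketch; the `repo_id` below is a placeholder, so substitute this model's actual Hub repo id:

```python
from huggingface_hub import hf_hub_download

# Placeholder repo_id -- replace with this model's actual Hub repo id.
repo_id = "<user>/tinystories-gpt-small"
ckpt_path = hf_hub_download(repo_id=repo_id, filename="pytorch_model.bin")
cfg_path = hf_hub_download(repo_id=repo_id, filename="config.json")  # optional, if present
```

You can then pass `ckpt_path` to `torch.load` below instead of a hard-coded filename.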

```python
import torch
import tiktoken
from model import GPT, GPTConfig  # assumes model.py (the GPT implementation) is available

# 1. Define the model configuration (must match the trained model's config.json)
# You can load this from config.json if you save it, or define it manually
config = GPTConfig(
    vocab_size=50257,
    block_size=1024,
    n_layer=8,
    n_head=8,
    n_embd=512,
    dropout=0.1,
    bias=True,
)

# 2. Initialize the model and load the weights
model = GPT(config)
state_dict = torch.load("pytorch_model.bin", map_location="cpu")  # path to the downloaded checkpoint
model.load_state_dict(state_dict)
model.eval()  # set to evaluation mode
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)

# 3. Load the tiktoken tokenizer
tokenizer = tiktoken.get_encoding("gpt2")
EOT_TOKEN_ID = tokenizer.eot_token

# 4. Prepare your prompt for text generation
prompt_text = "Once upon a time there was a pumpkin."

# Encode the prompt
input_ids = tokenizer.encode(prompt_text, allowed_special="all")
input_ids_tensor = torch.tensor([input_ids], dtype=torch.long).to(device)

# 5. Generate text
# Adjust max_new_tokens, temperature, and top_k as needed
with torch.no_grad():  # defensive, in case generate() isn't already decorated with no_grad
    generated_output_ids = model.generate(
        idx=input_ids_tensor,
        max_new_tokens=100,  # maximum length of the generated continuation
        temperature=0.7,
        top_k=50,
    )

# Decode the generated text (excluding the prompt part)
generated_text_ids = generated_output_ids[0, len(input_ids):].tolist()
generated_text = tokenizer.decode(generated_text_ids)

# Clean up any leftover EOT tokens from generation
generated_text = generated_text.replace(tokenizer.decode([EOT_TOKEN_ID]), "").strip()

print(f"Generated Text: {generated_text}")
```
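
The Purpose section above mentions fine-tuning on downstream tasks. A minimal sketch of a single next-token fine-tuning step, reusing `model`, `tokenizer`, `config`, and `device` from the example above; it assumes the nanoGPT-style convention that `model(inputs, targets)` returns `(logits, loss)`, so adapt it if this repo's `GPT` class differs:

```python
# Minimal fine-tuning step (sketch). `text` stands in for your own corpus,
# and the (logits, loss) return convention is an assumption about model.py.
text = "Your fine-tuning corpus goes here. " * 20

ids = torch.tensor(tokenizer.encode(text), dtype=torch.long, device=device)
block = min(config.block_size, ids.numel() - 1)
x = ids[:block].unsqueeze(0)       # inputs:  tokens 0 .. block-1
y = ids[1:block + 1].unsqueeze(0)  # targets: the same tokens shifted by one

optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)
model.train()
logits, loss = model(x, y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(f"fine-tuning loss: {loss.item():.4f}")
```

In practice you would loop this over batched windows of your corpus and checkpoint periodically; the single step above just shows the input/target shift.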

## Limitations and Bias

* This model is a relatively small GPT (~50.95M parameters), and its generative capabilities are limited by its size and by the simplicity of the TinyStories dataset.
* It is a base language model and has not been instruction-tuned or fine-tuned for specific tasks such as complex question answering or dialogue, so its responses may be incoherent or non-factual for out-of-distribution prompts.
* Like all language models, it may generate biased or incorrect content reflecting its training data.

## License

Apache 2.0