#!/usr/bin/env python
# coding: utf-8
# Copyright 2021 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# This script creates a tiny random model
#
# It will then be used as "hf-internal-testing/tiny-electra"

# ***To build from scratch***
#
# 1. clone sentencepiece into a parent dir (only needed if the script shrinks a
#    sentencepiece-based vocab; this Electra script keeps the full WordPiece tokenizer)
# git clone https://github.com/google/sentencepiece
#
# 2. create a new repo at https://huggingface.co/new
# make sure to choose 'hf-internal-testing' as the Owner
#
# 3. clone
# git clone https://huggingface.co/hf-internal-testing/tiny-electra
# cd tiny-electra

# 4. start with some pre-existing script from one of the https://huggingface.co/hf-internal-testing/ tiny model repos, e.g.
# wget https://huggingface.co/hf-internal-testing/tiny-xlm-roberta/raw/main/make-tiny-xlm-roberta.py
# mv ./make-tiny-xlm-roberta.py ./make-tiny-electra.py
# chmod a+x ./make-tiny-electra.py
#
# 5. automatically rename things from the old names to new ones
# perl -pi -e 's|XLMRoberta|Electra|g' make-tiny-electra.py
# perl -pi -e 's|xlm-roberta|electra|g' make-tiny-electra.py
#
# 6. edit and re-run this script while fixing it up
# ./make-tiny-electra.py
#
# 7. add/commit/push
# git add *
# git commit -m "new tiny model"
# git push

# ***To update***
#
# 1. clone the existing repo
# git clone https://huggingface.co/hf-internal-testing/tiny-electra
# cd tiny-electra
#
# 2. edit and re-run this script after doing whatever changes are needed
# ./make-tiny-electra.py
#
# 3. commit/push
# git commit -am "update tiny model"
# git push

import os

from transformers import ElectraTokenizer, ElectraTokenizerFast, ElectraConfig, ElectraForMaskedLM

mname_orig = "google/electra-small-generator"
mname_tiny = "tiny-electra"

### Tokenizer

tokenizer_fast_tiny = ElectraTokenizerFast.from_pretrained(mname_orig)
tokenizer_tiny = ElectraTokenizer.from_pretrained(mname_orig)
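# note: this script keeps the full pretrained vocab as-is; some sibling tiny-model
# scripts (e.g. the xlm-roberta one) additionally shrink the tokenizer's vocab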

### Config

config_tiny = ElectraConfig.from_pretrained(mname_orig)
print(config_tiny)
# remember to adapt these keys to the actual config of the model at hand (each
# architecture differs), then shrink the numbers
config_tiny.update(dict(
    embedding_size=64,
    hidden_size=64,
    intermediate_size=64,
    max_position_embeddings=512,
    num_attention_heads=2,
    num_hidden_layers=2,
))
print("New config", config_tiny)

### Model

model_tiny = ElectraForMaskedLM(config_tiny)
print(f"{mname_tiny}: num of params {model_tiny.num_parameters()}")
model_tiny.resize_token_embeddings(len(tokenizer_tiny))


inputs = tokenizer_tiny("The capital of France is [MASK].", return_tensors="pt")
labels = tokenizer_tiny("The capital of France is Paris.", return_tensors="pt")["input_ids"]
outputs = model_tiny(**inputs, labels=labels)
print("Test with normal tokenizer:", len(outputs.logits[0]))

inputs = tokenizer_fast_tiny("The capital of France is [MASK].", return_tensors="pt")
labels = tokenizer_fast_tiny("The capital of France is Paris.", return_tensors="pt")["input_ids"]
outputs = model_tiny(**inputs, labels=labels)
print("Test with normal tokenizer:", len(outputs.logits[0]))

# Save
model_tiny.half() # convert weights to fp16, which halves the checkpoint size
model_tiny.save_pretrained(".")
tokenizer_tiny.save_pretrained(".")
tokenizer_fast_tiny.save_pretrained(".")
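
# optional round-trip check (a sketch, not part of the original flow): reload the
# freshly saved files from the current directory to confirm they are self-contained;
# cast back to fp32 since fp16 ops are not universally supported on CPU
model_check = ElectraForMaskedLM.from_pretrained(".").float()
tokenizer_check = ElectraTokenizerFast.from_pretrained(".")
out_check = model_check(**tokenizer_check("The capital of France is [MASK].", return_tensors="pt"))
print("Round-trip logits shape:", out_check.logits.shape)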

readme = "README.md"
if not os.path.exists(readme):
    with open(readme, "w") as f:
        f.write(f"This is a {mname_tiny} random model to be used for basic testing.\n")

print(f"Generated {mname_tiny}")