---
language: zh
tags:
- chinese
- transformer
- language-model
license: mit
---

# MiniMind Pretrained Model

A small Chinese causal language model pretrained from scratch on a Chinese text corpus (see Training Data below).

## Model Details
- Architecture: Transformer
- Parameters: 26.878M
- Hidden dimension: 512
- Layers: 8
- Attention Heads: 8
- Vocabulary Size: 32000
- Max Sequence Length: 1024
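
These hyperparameters can be read back from the published config without downloading the weights. A minimal sketch, assuming the repo id above and standard transformers-style config field names (a custom MiniMind config may use its own names, e.g. `dim` or `n_layers`, and may require `trust_remote_code=True`):

```python
from transformers import AutoConfig

# Load only the model configuration (no weights).
# Pass trust_remote_code=True if the architecture is custom.
config = AutoConfig.from_pretrained("samz/minimind-pretrain")

# Field names assume a standard transformers-style config;
# adjust if the MiniMind config uses its own naming.
print(config.hidden_size)          # expected: 512
print(config.num_hidden_layers)    # expected: 8
print(config.num_attention_heads)  # expected: 8
print(config.vocab_size)           # expected: 32000
```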

## Training Data
- Pretrained on a Chinese text corpus
- Dataset size: 4.33GB

## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the pretrained model and tokenizer from the Hub.
model = AutoModelForCausalLM.from_pretrained("samz/minimind-pretrain")
tokenizer = AutoTokenizer.from_pretrained("samz/minimind-pretrain")

# Tokenize a Chinese prompt ("The weather is really nice today").
text = "今天天气真不错"
inputs = tokenizer(text, return_tensors="pt")

# Greedy generation, capped at 50 tokens including the prompt.
outputs = model.generate(**inputs, max_length=50)
result = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(result)
```
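
For more varied output, sampling parameters can be passed to `generate`. A sketch using standard transformers generation arguments; the values below are illustrative, not tuned for this model:

```python
# Sampled generation; all values are illustrative, not tuned.
outputs = model.generate(
    **inputs,
    max_new_tokens=50,  # cap on newly generated tokens (excludes prompt)
    do_sample=True,     # sample instead of greedy decoding
    temperature=0.8,    # soften the output distribution
    top_p=0.9,          # nucleus sampling
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```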