---
language:
- en
license: apache-2.0
pipeline_tag: text-generation
tags:
- llama
- causal-lm
- experimental
library_name: transformers
---
# PingVortexLM-20M

A small experimental language model based on the LLaMA architecture, trained on a custom high-quality English dataset of around 200M tokens.
This model is an experiment: it is not designed for coherent text generation or logical reasoning, and it may produce repetitive or nonsensical output.

Built by [PingVortex Labs](https://github.com/PingVortexLabs).

---
## Model Details
- **Parameters:** 20M
- **Context length:** 8192 tokens
- **Language:** English only
- **License:** Apache 2.0

---
## Usage
```python
from transformers import LlamaForCausalLM, PreTrainedTokenizerFast

model = LlamaForCausalLM.from_pretrained("pvlabs/PingVortexLM-20M-v2-Base")
tokenizer = PreTrainedTokenizerFast.from_pretrained("pvlabs/PingVortexLM-20M-v2-Base")

# Don't expect a coherent response.
prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=50,
    do_sample=True,          # temperature only takes effect when sampling
    temperature=0.7,
    repetition_penalty=1.3,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
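For intuition about the two sampling knobs used above: `temperature` rescales the logits before softmax (values below 1.0 sharpen the distribution), and `repetition_penalty` down-weights tokens that have already been generated. A minimal pure-Python sketch of both, with illustrative values rather than the actual `transformers` internals:

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of logits.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def apply_temperature(logits, temperature):
    # Dividing by a temperature < 1.0 sharpens the distribution
    # toward the highest-scoring token.
    return [x / temperature for x in logits]

def apply_repetition_penalty(logits, seen_ids, penalty):
    # CTRL-style rule: for tokens already generated, divide positive
    # logits by the penalty and multiply negative ones by it.
    out = list(logits)
    for i in seen_ids:
        out[i] = out[i] / penalty if out[i] > 0 else out[i] * penalty
    return out

logits = [2.0, 1.0, 0.5, -1.0]
baseline = softmax(logits)
sharpened = softmax(apply_temperature(logits, 0.7))   # top token gains mass
penalized = apply_repetition_penalty(logits, seen_ids=[0, 3], penalty=1.3)
```

With `temperature=0.7` the top token's probability rises relative to the unscaled softmax, and with `repetition_penalty=1.3` previously seen tokens become less likely to repeat.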

---
*Made by [PingVortex](https://pingvortex.com).*