---
language:
- en
license: apache-2.0
pipeline_tag: text-generation
tags:
- llama
- causal-lm
- experimental
library_name: transformers
---
# PingVortexLM1-20M-Base
A small, experimental language model based on the LLaMA architecture, trained on a custom, high-quality English dataset of roughly 200M tokens.
This model is an experiment only: it is not designed for coherent text generation or logical reasoning, and it may produce repetitive or nonsensical output.

Built by [PingVortex Labs](https://github.com/PingVortexLabs).

---
## Model Details
+ **Parameters:** 20M
+ **Context length:** 8192 tokens
+ **Language:** English only
+ **License:** Apache 2.0
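The 20M parameter figure can be sanity-checked against a back-of-the-envelope count for a LLaMA-style decoder. The dimensions below are hypothetical (the card does not publish them); they are chosen only so the arithmetic lands near 20M:

```python
# Rough parameter count for a small LLaMA-style model.
# All dimensions are assumed for illustration; only the ~20M total
# comes from the model card.
vocab_size = 32_000
d_model = 320
n_layers = 8
d_ff = 896

embed = vocab_size * d_model             # token embeddings (often tied with lm_head)
attn_per_layer = 4 * d_model * d_model   # q, k, v, o projections
mlp_per_layer = 3 * d_model * d_ff       # gate, up, down projections (SwiGLU)
norms = (2 * n_layers + 1) * d_model     # RMSNorm weights per layer + final norm

total = embed + n_layers * (attn_per_layer + mlp_per_layer) + norms
print(f"~{total / 1e6:.1f}M parameters")  # → ~20.4M parameters
```

The embedding table dominates at this scale: with a 32k vocabulary it alone accounts for about half of the budget, which is typical for sub-100M models.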

---
## Usage
```python
from transformers import LlamaForCausalLM, PreTrainedTokenizerFast

model = LlamaForCausalLM.from_pretrained("pvlabs/PingVortexLM1-20M-Base")
tokenizer = PreTrainedTokenizerFast.from_pretrained("pvlabs/PingVortexLM1-20M-Base")

# don't expect a coherent response
prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50, repetition_penalty=1.3)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
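Greedy decoding with only a repetition penalty, as above, still tends to loop on very small models. A sampling configuration usually helps; the values below are illustrative starting points, not tuned for this model:

```python
# Sampling settings that tend to reduce looping on tiny models.
# All values here are illustrative defaults, not tuned for this checkpoint.
sampling_kwargs = dict(
    do_sample=True,          # sample instead of greedy argmax decoding
    temperature=0.8,         # <1.0 sharpens the distribution slightly
    top_k=50,                # keep only the 50 most likely tokens
    top_p=0.95,              # nucleus sampling over 95% of probability mass
    repetition_penalty=1.3,  # same penalty as the greedy example above
    max_new_tokens=50,
)

# Pass alongside the tokenized prompt, e.g.:
# outputs = model.generate(**inputs, **sampling_kwargs)
```

These keyword arguments map one-to-one onto fields of `transformers`' `GenerationConfig`, so they can also be saved in the repo's `generation_config.json` instead of being passed at call time.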

---
*Made by [PingVortex](https://pingvortex.com).*