---
language:
- en
license: apache-2.0
pipeline_tag: text-generation
tags:
- llama
- causal-lm
- experimental
library_name: transformers
---

# PingVortexLM1-20M-Base
A small experimental language model based on the LLaMA architecture, trained on a custom, high-quality English dataset of roughly 200M tokens. This model is purely an experiment: it is not designed for coherent text generation or logical reasoning, and it may produce repetitive or nonsensical output.
Built by PingVortex Labs.
## Model Details
- Parameters: 20M
- Context length: 8192 tokens
- Language: English only
- License: Apache 2.0
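
These numbers can be double-checked against the checkpoint itself. A minimal sketch using the standard `transformers` config API (it assumes the checkpoint exposes the usual LLaMA config fields):

```python
from transformers import AutoConfig, LlamaForCausalLM

# Read the context length straight from the checkpoint's config.
config = AutoConfig.from_pretrained("pvlabs/PingVortexLM1-20M-Base")
print(config.max_position_embeddings)  # expected: 8192

# Count parameters by summing tensor sizes; expected to be roughly 20M.
model = LlamaForCausalLM.from_pretrained("pvlabs/PingVortexLM1-20M-Base")
print(sum(p.numel() for p in model.parameters()))
```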
## Usage
```python
from transformers import LlamaForCausalLM, PreTrainedTokenizerFast

model = LlamaForCausalLM.from_pretrained("pvlabs/PingVortexLM1-20M-Base")
tokenizer = PreTrainedTokenizerFast.from_pretrained("pvlabs/PingVortexLM1-20M-Base")

# Don't expect a coherent response from a 20M-parameter model.
prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50, repetition_penalty=1.3)
print(tokenizer.decode(outputs[0]))
```
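
Since the model tends to loop, sampling-based decoding may break up repetition somewhat. A sketch continuing from the snippet above (the decoding values are illustrative, not tuned for this checkpoint):

```python
# Sampling can sometimes escape the repetition loops common in tiny models.
outputs = model.generate(
    **inputs,
    max_new_tokens=50,
    do_sample=True,          # sample instead of greedy decoding
    temperature=0.9,         # illustrative value, not tuned
    top_p=0.95,              # nucleus sampling cutoff
    repetition_penalty=1.3,
)
print(tokenizer.decode(outputs[0]))
```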
Made by PingVortex.