---
license: mit
pipeline_tag: text-generation
datasets:
- tatsu-lab/alpaca
---

# VLM 1 - First model of the VLM series (**V**ortex **L**anguage **M**odel)

## Talk with the model:

- Open [Google Colab](https://colab.research.google.com/)
- Create a new notebook
- Paste this code in the first cell:

```bash
pip install transformers
```

or

```
!pip install transformers
```

Then paste this in the second cell:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "PingVortex/VLM-1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

print("VLM 1 Chat\nType 'exit' to quit")

while True:
    user_input = input("You: ")
    if user_input.strip().lower() == "exit":
        break

    input_ids = tokenizer(user_input, return_tensors="pt").input_ids
    # Keep only the last 1024 tokens so the prompt fits the context window
    input_ids = input_ids[:, -1024:]

    with torch.no_grad():
        output = model.generate(
            input_ids,
            max_new_tokens=50,
            do_sample=True,
            temperature=0.7,
            top_p=0.9,
            pad_token_id=tokenizer.eos_token_id,
        )

    # Drop the echoed prompt; decode only the newly generated tokens
    new_tokens = output[0][input_ids.shape[1]:]
    response = tokenizer.decode(new_tokens, skip_special_tokens=True)
    print("VLM:", response.strip())
```
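One detail in the chat loop worth noting: `model.generate` returns the prompt tokens followed by the new tokens, so the script slices at `input_ids.shape[1]` to keep only the reply. The idea can be illustrated with plain Python lists (a toy sketch with made-up token IDs, no model needed):

```python
# Toy illustration of trimming the echoed prompt from a generated sequence.
# generate() outputs the prompt tokens first, then the newly sampled tokens,
# so slicing at the prompt length isolates the model's reply.
prompt_ids = [101, 2023, 2003]          # hypothetical prompt token IDs
output_ids = prompt_ids + [1996, 3437]  # generate() echoes the prompt first
new_tokens = output_ids[len(prompt_ids):]
print(new_tokens)  # [1996, 3437]
```

The same slicing works on the `[1, seq_len]` tensors in the script above, with the extra leading batch dimension indexed away by `output[0]`.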