How to use commaai/commavq-gpt2m with Transformers:
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("commaai/commavq-gpt2m")
model = AutoModelForCausalLM.from_pretrained("commaai/commavq-gpt2m")
```
commavq-gpt2m is a GPT2M model trained on a larger version of the commaVQ dataset. It can generate driving video unconditionally; below is an example of 5 seconds of imagined video generated with GPT2M.
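Since the checkpoint is a causal language model over discrete VQ tokens, unconditional generation amounts to sampling a token sequence with `generate`. Below is a minimal sketch of that sampling loop; to keep it self-contained it uses a tiny randomly initialized GPT-2 as a stand-in for the real checkpoint, and the vocabulary size and BOS token id are assumptions for illustration, not values from the model card. With network access you would instead pass `commaai/commavq-gpt2m` to `AutoModelForCausalLM.from_pretrained` and decode the sampled tokens back into frames with the commaVQ decoder.

```python
import torch
from transformers import GPT2Config, GPT2LMHeadModel

# Stand-in model: a tiny randomly initialized GPT-2. For the real thing, use
# AutoModelForCausalLM.from_pretrained("commaai/commavq-gpt2m") instead.
# vocab_size / bos_token_id here are illustrative assumptions.
config = GPT2Config(
    vocab_size=1026, n_positions=256, n_embd=64, n_layer=2, n_head=2,
    bos_token_id=1024, eos_token_id=1025,
)
model = GPT2LMHeadModel(config)
model.eval()

# Start from a single (assumed) BOS token and sample tokens unconditionally.
bos = torch.tensor([[config.bos_token_id]])
with torch.no_grad():
    tokens = model.generate(
        bos,
        do_sample=True,
        top_k=50,
        min_new_tokens=128,  # force a fixed-length sample for this demo
        max_new_tokens=128,
    )

print(tokens.shape)  # 1 prompt token + 128 sampled tokens
```

Sampled token ids like these are not pixels; generating actual video requires mapping them through the commaVQ VQ decoder, which is distributed with the dataset rather than with Transformers.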