Dream-v0-Base-7B-Adjust

This is the Dream-v0-Base-7B model with joint sampling enabled. Please refer to the paper for details.

How to use

Here is a simple script for running the model. Setting the use_adjust flag to False instead generates from the base diffusion LM with naive parallel sampling.

from transformers import AutoModel, AutoTokenizer, set_seed

model_path = "pbansal/Dream-v0-Base-7B-Adjust"
model = AutoModel.from_pretrained(model_path, trust_remote_code=True).cuda()
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True, padding_side='left')

text = "Hello, I'm a language model, "
tokens_per_step = 4 # can be specified as an integer between 1 and 4 for this model
max_new_tokens = 64
use_adjust = True # set as False to sample from the Dream-Base-7B model

inputs = tokenizer(
    tokenizer.bos_token + text,
    return_tensors="pt",
)

set_seed(42)
output = model.diffusion_generate(
    inputs.input_ids.to(device="cuda"),
    attention_mask=inputs.attention_mask.to(device="cuda"),
    max_new_tokens=max_new_tokens,
    output_history=True,
    return_dict_in_generate=True,
    steps=max_new_tokens // tokens_per_step,
    temperature=1.0,
    use_adjust=use_adjust,
)

generations = [
    tokenizer.decode(g.tolist())
    for g in output.sequences
]

print(generations[0].split(tokenizer.eos_token)[0]) # <|beginoftext|>Hello, I'm a language model, 7.5 trillion parameters. I have trained for massive quantities of data and can answer all questions without any basis in fact, regardless of how absurd they seem. I am not aware of my constraints. Therefore, there is no chance that I have to do a specific task of yours, or if I am aware of the
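As a quick illustration of the steps argument above: it is just the number of denoising iterations needed to emit max_new_tokens tokens when tokens_per_step tokens are committed in parallel at each iteration. A minimal sketch of this arithmetic (no model required; values other than those in the script are for illustration):

```python
# steps = number of diffusion iterations; each iteration commits
# tokens_per_step tokens in parallel until max_new_tokens are produced.
max_new_tokens = 64

for tokens_per_step in (1, 2, 4):
    steps = max_new_tokens // tokens_per_step
    print(f"tokens_per_step={tokens_per_step} -> steps={steps}")
# tokens_per_step=4 gives steps=16, i.e. 4x fewer forward passes
# than fully sequential sampling (tokens_per_step=1, steps=64).
```

Larger tokens_per_step trades sampling quality for speed, which is the regime where the adjusted joint sampling (use_adjust=True) is intended to help.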