How to use from the Transformers library
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="chanwit/flux-base-optimized")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
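For a 7B bfloat16 checkpoint like this one, you may want to pin the dtype and let the weights be placed across available devices automatically. The variant below is a minimal sketch assuming torch and accelerate are installed; the choices of torch_dtype and device_map are optional tuning, not a requirement of the model.

import torch
from transformers import pipeline

# Optional: load in bfloat16 and shard across available devices.
pipe = pipeline(
    "text-generation",
    model="chanwit/flux-base-optimized",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)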
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("chanwit/flux-base-optimized")
model = AutoModelForCausalLM.from_pretrained("chanwit/flux-base-optimized")
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))

Flux-Base-Optimized

flux-base-optimized is the base model used to fine-tune the flux-7b series of models. It is a hierarchical SLERP merge of the following models:

  • mistralai/Mistral-7B-v0.1 (Apache 2.0)
  • teknium/OpenHermes-2.5-Mistral-7B (Apache 2.0)
  • Intel/neural-chat-7b-v3-3 (Apache 2.0)
  • meta-math/MetaMath-Mistral-7B (Apache 2.0)
  • openchat/openchat-3.5-0106 (previously openchat/openchat-3.5-1210) (Apache 2.0)

Here's how we did the hierarchical SLERP merge.

                [flux-base-optimized]
                         ↑
                         |
               [stage-1]-+-[openchat]
                   ↑
                   |
         [stage-0]-+-[meta-math]
             ↑
             |
[openhermes]-+-[neural-chat]
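Each '+' in the diagram is a pairwise SLERP (spherical linear interpolation) of two checkpoints that share the Mistral-7B architecture. The sketch below illustrates what one such pairwise step looks like; it is a hypothetical illustration, not the exact tooling used for this model (merges like this are typically produced with a dedicated tool such as mergekit), and the helper names and the t=0.5 mixing weight are assumptions.

# Hypothetical sketch of one pairwise SLERP merge step; function names,
# output paths, and t=0.5 are illustrative assumptions, not the actual recipe.
import torch
from transformers import AutoModelForCausalLM

def slerp(t: float, v0: torch.Tensor, v1: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    # Treat each weight tensor as a flat vector and interpolate along the
    # great circle between the two directions.
    a = v0.flatten().float()
    b = v1.flatten().float()
    an = a / (a.norm() + eps)
    bn = b / (b.norm() + eps)
    dot = torch.clamp(torch.dot(an, bn), -1.0, 1.0)
    theta = torch.acos(dot)
    if theta.abs() < eps:
        # Nearly parallel vectors: fall back to plain linear interpolation.
        merged = (1 - t) * a + t * b
    else:
        s = torch.sin(theta)
        merged = (torch.sin((1 - t) * theta) / s) * a + (torch.sin(t * theta) / s) * b
    return merged.reshape(v0.shape).to(v0.dtype)

def slerp_merge(model_a: str, model_b: str, t: float = 0.5):
    # Both checkpoints must share the same architecture and parameter names.
    a = AutoModelForCausalLM.from_pretrained(model_a, torch_dtype=torch.bfloat16)
    b = AutoModelForCausalLM.from_pretrained(model_b, torch_dtype=torch.bfloat16)
    sd_a, sd_b = a.state_dict(), b.state_dict()
    merged = {k: slerp(t, sd_a[k], sd_b[k]) for k in sd_a}
    a.load_state_dict(merged)
    return a

# stage-0 in the diagram: merge OpenHermes with neural-chat; repeating the
# same step for stage-1 (+ meta-math) and then (+ openchat) yields the final model.
stage0 = slerp_merge("teknium/OpenHermes-2.5-Mistral-7B", "Intel/neural-chat-7b-v3-3")
stage0.save_pretrained("stage-0")

Running the same pairwise step three times, bottom-up through the diagram, reproduces the hierarchy; in practice this is usually driven by a merge tool's config rather than hand-written code.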
Model size: 7B parameters · Tensor type: BF16 · Format: Safetensors