# Bamba
Bamba is a 9B parameter decoder-only language model built on the Mamba-2 architecture. It is pretrained in two stages: first on 2T tokens from the Dolma v1.7 dataset, then on an additional 200B tokens from FineWeb and Cosmopedia.
You can find all the original Bamba checkpoints under the Bamba collection.
This model was contributed by ani300 and fabianlim.
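Bamba is a hybrid design that mixes a small number of attention layers into a stack of Mamba-2 blocks. One quick way to inspect the layout is to load the model config, as in the sketch below; note that `attn_layer_indices` is an assumption about the [BambaConfig] attribute name and is guarded with `getattr` in case it is absent or unset.

```py
from transformers import AutoConfig

config = AutoConfig.from_pretrained("ibm-ai-platform/Bamba-9B-v2")
print(config.model_type)         # "bamba"
print(config.num_hidden_layers)  # total number of decoder layers
# Layers that use full attention instead of Mamba-2 (assumed attribute name)
print(getattr(config, "attn_layer_indices", None))
```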
Click on the Bamba models in the right sidebar for more examples of how to apply Bamba to different text generation tasks.
The example below demonstrates how to generate text with [Pipeline], [AutoModel], and from the command line.
```py
import torch
from transformers import pipeline

pipeline = pipeline(
    task="text-generation",
    model="ibm-ai-platform/Bamba-9B-v2",
    torch_dtype=torch.bfloat16,
    device=0
)
pipeline("Plants create energy through a process known as")
```
```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ibm-ai-platform/Bamba-9B-v2")
model = AutoModelForCausalLM.from_pretrained(
    "ibm-ai-platform/Bamba-9B-v2",
    torch_dtype=torch.bfloat16,
    device_map="auto",
    attn_implementation="sdpa"
)

input_ids = tokenizer("Plants create energy through a process known as", return_tensors="pt").to("cuda")

output = model.generate(**input_ids)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
```bash
echo "Plants create energy through a process known as" | transformers-cli run --task text-generation --model ibm-ai-platform/Bamba-9B-v2 --device 0
```
Quantization reduces the memory burden of large models by representing the weights in a lower precision. Refer to the Quantization overview for more available quantization backends.
The example below uses torchao to quantize only the weights to int4.
```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TorchAoConfig

quantization_config = TorchAoConfig("int4_weight_only", group_size=128)
tokenizer = AutoTokenizer.from_pretrained("ibm-ai-platform/Bamba-9B-v2")
model = AutoModelForCausalLM.from_pretrained(
    "ibm-ai-platform/Bamba-9B-v2",
    torch_dtype=torch.bfloat16,
    quantization_config=quantization_config,
    device_map="auto",
    attn_implementation="sdpa"
)

inputs = tokenizer("Plants create energy through a process known as", return_tensors="pt").to("cuda")
output = model.generate(**inputs)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
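As a quick sanity check (not part of the original example), [`~PreTrainedModel.get_memory_footprint`] reports the size of the loaded weights and buffers, which should be roughly a quarter of the bf16 footprint after int4 quantization.

```py
# Size of the quantized model's parameters and buffers, in GB
print(f"Memory footprint: {model.get_memory_footprint() / 1e9:.2f} GB")
```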
## Notes
- Bamba supports padding-free training, which concatenates distinct training examples while still processing inputs as separate batches. It can significantly accelerate inference by ~2x (depending on the model and data distribution) and reduce memory usage when examples have varying lengths, since it avoids the unnecessary compute and memory overhead of padding tokens.

  Padding-free training requires the `flash-attn`, `mamba-ssm`, and `causal-conv1d` packages, and the following arguments must be passed to the model in addition to `input_ids` and `labels`:

  - `position_ids: torch.LongTensor`: the position index of each token in each sequence.
  - `seq_idx: torch.IntTensor`: the index of each sequence in the batch.
  - Each of the [`FlashAttentionKwargs`]:
    - `cu_seq_lens_q: torch.LongTensor`: the cumulative sequence lengths of all queries.
    - `cu_seq_lens_k: torch.LongTensor`: the cumulative sequence lengths of all keys.
    - `max_length_q: int`: the longest query length in the batch.
    - `max_length_k: int`: the longest key length in the batch.

  The `attention_mask` inputs should not be provided. [`DataCollatorWithFlattening`] programmatically generates the set of additional arguments above using `return_seq_idx=True` and `return_flash_attn_kwargs=True`. See the Improving Hugging Face Training Efficiency Through Packing with Flash Attention blog post for additional information.

  ```py
  from transformers import DataCollatorWithFlattening

  # Example of using padding-free training
  data_collator = DataCollatorWithFlattening(
      return_seq_idx=True,
      return_flash_attn_kwargs=True
  )
  ```
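  As a concrete illustration, here is a minimal sketch (the toy token IDs are made up) of what the collator produces for two variable-length examples; the resulting collator can then be passed to [`Trainer`] through its `data_collator` argument.

  ```py
  from transformers import DataCollatorWithFlattening

  collator = DataCollatorWithFlattening(return_seq_idx=True, return_flash_attn_kwargs=True)

  # Two examples of different lengths are concatenated into one sequence instead of padded
  features = [
      {"input_ids": [1, 3923, 374, 2]},
      {"input_ids": [1, 9906, 2]},
  ]
  batch = collator(features)

  # Expect input_ids, labels, position_ids, seq_idx, cu_seq_lens_q/k, and max_length_q/k
  print(list(batch.keys()))
  ```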
## BambaConfig
[[autodoc]] BambaConfig
## BambaModel
[[autodoc]] BambaModel
    - forward
## BambaForCausalLM
[[autodoc]] BambaForCausalLM
    - forward