Model Details

This is a mixed int4 quantization of meituan-longcat/LongCat-Flash-Thinking-2601 (group_size 128, symmetric scheme), generated by intel/auto-round. Please follow the license of the original model.
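
For intuition, symmetric quantization with group_size 128 means every group of 128 consecutive weights shares a single scale and is rounded to signed integers (int4 spans -8..7). A minimal round-trip sketch of the scheme, for illustration only (AutoRound's actual rounding, packing, and kernels differ):

import torch

def fake_quantize_sym(w: torch.Tensor, bits: int = 4, group_size: int = 128) -> torch.Tensor:
    """Quantize then dequantize a 2-D weight with symmetric per-group scales."""
    out_features, in_features = w.shape
    groups = w.reshape(out_features, in_features // group_size, group_size)
    qmax = 2 ** (bits - 1) - 1                 # 7 for int4
    scale = groups.abs().amax(dim=-1, keepdim=True) / qmax
    scale = scale.clamp(min=1e-8)              # guard against all-zero groups
    q = torch.clamp(torch.round(groups / scale), -qmax - 1, qmax)
    return (q * scale).reshape(out_features, in_features)

w = torch.randn(16, 256)
print((w - fake_quantize_sym(w)).abs().max())  # small reconstruction error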

How to Use

Transformers Usage

"transformers_version": "4.53.3"

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Intel/LongCat-Flash-Thinking-2601-int4-mixed-AutoRound"

# Load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    pretrained_model_name_or_path=model_name,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Please tell me what is $$1 + 1$$ and $$2 \times 2$$?"},
    {"role": "assistant", "reasoning_content": "This question is straightforward: $$1 + 1 = 2$$ and $$2 \times 2 = 4$$.", "content": "The answers are 2 and 4."},
    {"role": "user", "content": "Check again?"}
]

text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    enable_thinking=True,
    add_generation_prompt=True,
    save_history_reasoning_content=False,  # discard reasoning history to save tokens
)

model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# Generate response
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=32768
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()

print(tokenizer.decode(output_ids, skip_special_tokens=True).strip("\n"))

Generate the Model
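
This checkpoint follows a mixed-precision recipe: the routed expert linears (which hold most of LongCat-Flash's parameters) are quantized to int4, the remaining linear layers to int8, and the classifier and lm_head are kept in 16-bit. Passing iters=0 with disable_opt_rtn=True skips AutoRound's signed-gradient tuning and uses plain round-to-nearest rounding.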

import torch
from auto_round import AutoRound
from auto_round.utils import llm_load_model

model_name = "meituan-longcat/LongCat-Flash-Thinking-2601"
model, tokenizer = llm_load_model(model_name, device="cpu")

# Mixed-bit recipe: routed experts -> int4, classifier -> 16-bit, other linears -> int8
layer_config = {}
for n, m in model.named_modules():
    if isinstance(m, torch.nn.Linear):
        if "expert" in n and "shared_experts" not in n:
            layer_config[n] = {"bits": 4}  # routed expert weights: int4
            print(n, 4)
        elif "classifier" in n:
            layer_config[n] = {"bits": 16}  # keep classifier layers in 16-bit
            print(n, 16)
        elif n != "lm_head":
            layer_config[n] = {"bits": 8}  # everything else (except lm_head): int8
            print(n, 8)

# iters=0 + disable_opt_rtn=True -> plain round-to-nearest, no iterative tuning
autoround = AutoRound(model, tokenizer, iters=0, layer_config=layer_config, disable_opt_rtn=True)
# Export in auto_round format so transformers can load it directly
autoround.quantize_and_save(format="auto_round", output_dir="LongCat-Flash-Thinking-2601-mixed-int4")
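
The exported directory can then be loaded exactly like the hosted checkpoint in the Transformers Usage section above; a minimal sketch, assuming the local path matches output_dir:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

local_dir = "LongCat-Flash-Thinking-2601-mixed-int4"
tokenizer = AutoTokenizer.from_pretrained(local_dir)
model = AutoModelForCausalLM.from_pretrained(
    local_dir,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)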

Ethical Considerations and Limitations

The model can produce factually incorrect output and should not be relied on for factually accurate information. Because of the limitations of the pretrained model and the finetuning datasets, it is possible that this model could generate lewd, biased, or otherwise offensive outputs.

Therefore, before deploying any applications of the model, developers should perform safety testing.

Caveats and Recommendations

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.

Here are a couple of useful links to learn more about Intel's AI software:

- Intel Neural Compressor: https://github.com/intel/neural-compressor
- Intel Extension for Transformers: https://github.com/intel/intel-extension-for-transformers

Disclaimer

The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.

Cite

@article{cheng2023optimize,
  title={Optimize weight rounding via signed gradient descent for the quantization of LLMs},
  author={Cheng, Wenhua and Zhang, Weiwei and Shen, Haihao and Cai, Yiyang and He, Xin and Lv, Kaokao and Liu, Yi},
  journal={arXiv preprint arXiv:2309.05516},
  year={2023}
}
