How to use zai-org/chatglm-6b with Transformers:
```python
# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained(
    "zai-org/chatglm-6b",
    trust_remote_code=True,
    dtype="auto",
)
```
`torch.tensor` is very slow when converting a list of `numpy.array`. During SFT, when `DataCollatorForSeq2Seq` is called with `return_tensors="pt"`, collation becomes very slow, so it is recommended that `_pad` return its data as Python lists rather than as `numpy.array`.
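The slowdown described above can be sketched with a minimal benchmark (the batch size and sequence length here are illustrative, not from the original post): calling `torch.tensor` on a list of numpy arrays copies element by element, while stacking into a single contiguous `ndarray` first allows one bulk copy.

```python
import numpy as np
import torch

# Hypothetical padded batch: 1000 sequences of length 512, as a list of
# numpy arrays -- roughly what a _pad returning numpy.array would produce.
batch_np = [np.full(512, 1, dtype=np.int64) for _ in range(1000)]

# Slow path: torch.tensor over a list of ndarrays copies element by element
# (recent PyTorch versions even emit a UserWarning suggesting the fix below).
slow = torch.tensor(batch_np)

# Fast path: stack into one contiguous ndarray first, then convert in one copy.
fast = torch.tensor(np.stack(batch_np))

assert torch.equal(slow, fast)  # same result, very different conversion cost
```

Timing the two conversions (e.g. with `time.perf_counter`) shows the stacked path is dramatically faster, which is why returning plain lists (or a single stacked array) from `_pad` avoids the collator bottleneck.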