CorDA
This is the LLaMA-2-7b model fine-tuned on the math task with CorDA in IPA (instruction-previewed adaptation) mode, using the MetaMath dataset.

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("iboing/CorDA_IPA_math_finetuned_math", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("iboing/CorDA_IPA_math_finetuned_math", trust_remote_code=True)
```
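For a quick check after loading, you can prompt the model with a math word problem. The Alpaca-style template below is an assumption about the training prompt format, not something stated on this card; the exact template used by CorDA's fine-tuning scripts is defined in the repo.

```python
# Quick sanity check: answer a GSM8k-style word problem.
# The instruction template here is an assumption, not the confirmed training format.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nNatalia sold clips to 48 of her friends in April, "
    "and then she sold half as many clips in May. How many clips did "
    "Natalia sell altogether in April and May?\n\n### Response:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```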
Performance compared with LoRA (all scores in %, higher is better):

| Method | TriviaQA | NQ open | GSM8k | MATH |
|---|---|---|---|---|
| LoRA | 44.17 | 1.91 | 42.68 | 5.92 |
| CorDA (KPA with nqopen) | 45.23 | 10.44 | 45.64 | 6.94 |
| CorDA (IPA with MetaMath) | - | - | 54.59 | 8.54 |
You can evaluate the model's performance by following step 3 in the CorDA GitHub repo.
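Step 3 in the repo runs the full benchmark harness. As a rough illustration of GSM8k-style scoring only, the snippet below extracts the final numeric answer from a generated solution and compares it with the reference; this is a simplified sketch, not the repo's evaluation code, and the "The answer is" pattern is an assumption based on common MetaMath output formats.

```python
import re

def extract_answer(text):
    # MetaMath-style solutions typically end with "The answer is: X" (assumed pattern).
    match = re.search(r"The answer is:?\s*([-\d.,/]+)", text)
    return match.group(1).replace(",", "") if match else None

# A prediction counts as correct when its extracted answer matches the reference.
pred = extract_answer("... so Natalia sold 72 clips in total. The answer is: 72")
assert pred == "72"
```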
Note: a model trained with the CorDA adapter relies on customized model code. To restore the original LLaMA architecture, run merge_adapter_for_corda.py from the CorDA GitHub repo.
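For intuition, merging a low-rank adapter generally means folding the trained factors back into a dense weight so the checkpoint matches the stock architecture. The sketch below is a generic illustration with hypothetical names (`W_residual`, `A`, `B`); it is not the actual logic of merge_adapter_for_corda.py.

```python
import torch

# Generic low-rank merge illustration (hypothetical shapes and names):
# the dense weight is rebuilt from a frozen residual part plus the
# product of the trained low-rank factors.
out_f, in_f, r = 4096, 4096, 128
W_residual = torch.randn(out_f, in_f)   # frozen part kept during training
B = torch.randn(out_f, r)               # trained low-rank factor
A = torch.randn(r, in_f)                # trained low-rank factor

W_merged = W_residual + B @ A           # dense weight for a stock nn.Linear
assert W_merged.shape == (out_f, in_f)
```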
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="iboing/CorDA_IPA_math_finetuned_math", trust_remote_code=True)
```
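A minimal call, again assuming the instruction-style prompt sketched earlier:

```python
# Example call; the prompt format is an assumption (see note above).
result = pipe(
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWhat is 17 * 24?\n\n### Response:",
    max_new_tokens=128,
    do_sample=False,
)
print(result[0]["generated_text"])
```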